Transitional Pathways towards Achieving a Circular Economy in the Water, Energy, and Food Sectors
Abstract: Achieving sustainable socio-economic development requires approaches that enhance resource use efficiencies and can address current cross-sectoral challenges in an integrated manner. Existing evidence suggests an urgent need for polycentric and transformative approaches, as global and local systems have come under strain. This study conducted a systematic literature review at the global level to assess the progress made towards achieving a circular economy between 2010 and 2020, a period covering the formulation of the Sustainable Development Goals (SDGs) and the initial five years of their implementation. The focus was on the potential of improved water and energy use efficiency, linking them to food production within the context of a circular economy. Identifying successes, opportunities, challenges, and pathways towards a circular economy from the literature review facilitated the development of a conceptual framework to guide strategic policy formulations towards a more sustainable economy. A combination of transformative approaches is analysed in an integrated way in response to the 2030 global agenda on sustainable development. Thus, the study is informed by initiatives to attain the SDGs and mitigate negative environmental impacts due to waste and pollution. The premise is to enhance transformational change as a catalyst for employment creation and the attainment of a green economy while reducing waste. Transformative approaches have been identified to provide pathways towards global climate targets and protection of the environment from further degradation. They are a catalyst for achieving SDG 12 on ensuring sustainable consumption and production patterns.
Introduction
The urgent need to balance industrial development, the environment, social wellbeing, and economic growth is critical to achieving sustainability by 2030 [1][2][3]. As a result, efficient resource use and low-carbon emission strategies are crucial for promoting sustainable development and enhancing countries' overall economic growth [4,5]. Recent developments have shown that contemporary challenges require integrated and transformative approaches such as nexus planning, the circular economy, sustainable food systems, and scenario planning that consider cross-sectoral interventions [2,6,7]. Of note is that these transformative approaches complement each other in their application, as one informs or enhances the other and vice-versa [3,7,8]. In particular, the circular economy concept supports balancing demand and supply [45,48,49]. On the other hand, climate extremes continue to degrade water, energy, and food resources, particularly in semi-arid and arid regions across the globe [43,48]. Water reuse is a viable option but has not been fully explored as an alternative route to a sufficient water supply [50,51]. As already alluded to, the food sector contributes significantly to environmental degradation and GHG emissions [39,52]. The transition to a circular economy creates a unique opportunity to build synergies around water reuse technologies, renewable energy sources, and sustainable food systems as alternatives for securing water, energy, and food. Critical to this is the role of transformative approaches in weighing the risks, opportunities, synergies, and trade-offs before adopting waste recycling technologies and environmentally friendly sources of energy [13,53]. However, transitioning from linear approaches to circular and integrated models requires policy shifts coupled with sound financial backing [3,45,54,55].
Due to the intricate interlinkages between the water, energy, and food sectors, transformational change is possible by applying transformative approaches like nexus planning, particularly the water-energy-food (WEF) nexus, to enhance synergies, overall resource use efficiency, and sustainable management and development [3,44]. Rapid urbanisation and a growing middle class have increased demand for water, energy, and food resources [56,57]. Currently, two-thirds of global energy consumption contributes 80% of greenhouse gas emissions [58]. All this is happening while the urban population continues to grow and is projected to reach 6.9 billion people by 2025, which would account for 70% of the global population [59]. Thus, adopting the circular economy would enhance resource use efficiency, reduce waste, and promote renewable energy sources by recovering materials and recycling waste into clean energy, minimising the trade-offs between economic growth, society, and the environment [60].
Therefore, the essence of the circular economy approach is to enhance resource use efficiency and ultimately resource security. This is vital for augmenting sustainability in the water, energy, and food sectors and facilitating coherent policy formulations that lead to adaptation and resilience. However, most studies discuss the circular economy in the context of cleaner production, reducing the environmental impact and waste production along the life cycle of a product [61][62][63]. Little has been done on the opportunities and challenges within the water, energy, and food sectors, particularly their role in transitioning into the circular economy. A systematic review of the literature was conducted to assess progress on achieving the circular economy between 2010 and 2020, covering the planning phase in formulating the SDGs in 2015 and the first assessment phase of implementation (2015 to 2020). The premise was to assess progress and enhance the initiatives aimed at achieving the SDGs [6,64]. This culminated in the development of a conceptual framework to guide strategic policy formulations towards a more sustainable economy and greater environmental and human health. The literature search plan (Figure 1) was designed around three different search strategies: (a) grey literature databases, (b) customised search engines, and (c) targeted websites. This strategy reduced the risk of omitting relevant literature and other sources. The search also considered literature without abstracts, summaries, or executive summaries, so that such sources were not excluded at the screening stage. Data extraction considered the organisation, year of publication (2010 to 2020), the intended audience, and the document's objectives.
Literature Review
The initial search across databases generated 1232 potential studies. These were further refined using the inclusion and exclusion criteria by screening titles and abstracts and eliminating duplicate studies (Figure 1). The search was limited to peer-reviewed articles published in English between 2010 and 2020. After the screening process was complete, 103 studies were identified as relevant for this systematic review.
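The screening workflow described above can be sketched as a simple filter pipeline. This is a hypothetical illustration only; the record fields, the encoding of the criteria, and the sample entries are assumptions for demonstration, not the authors' actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    year: int
    language: str
    peer_reviewed: bool
    relevant: bool  # judged during title/abstract screening

def screen(studies):
    """Apply the inclusion/exclusion criteria described in the text:
    eliminate duplicates, keep peer-reviewed English articles
    published 2010-2020, then keep only those judged relevant."""
    seen, kept = set(), []
    for s in studies:
        key = s.title.strip().lower()
        if key in seen:
            continue  # eliminate duplicate studies
        seen.add(key)
        if (s.peer_reviewed and s.language == "English"
                and 2010 <= s.year <= 2020 and s.relevant):
            kept.append(s)
    return kept

# Hypothetical sample pool illustrating each exclusion path
pool = [
    Study("Circular economy in water reuse", 2018, "English", True, True),
    Study("Circular economy in water reuse", 2018, "English", True, True),      # duplicate
    Study("Linear production models", 2008, "English", True, True),             # outside 2010-2020
    Study("Kreislaufwirtschaft im Energiesektor", 2019, "German", True, True),  # not in English
    Study("Nexus planning overview", 2020, "English", True, False),             # screened out
]
print(len(screen(pool)))  # prints: 1
```

In the study itself this funnel narrowed 1232 candidate records down to the 103 reviewed studies.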
Development of a Transitional Framework towards the Circular Economy
The literature review facilitated the identification of the successes and bottlenecks associated with transitioning towards a circular economy. Identified challenges include changing established norms and adopting novel business models, as well as the need for focused, integrated, and cross-sectoral policy frameworks and standards, adequate financial investment, technological innovation, behavioural change in waste management, and human and administrative capacity development, among others. Comprehension of these pertinent issues enabled the development of a transitional conceptual framework formulated as a guiding strategic policy pathway towards achieving the circular economy. The framework integrates transformative approaches that include the circular economy itself, nexus planning, scenario planning, and sustainable agriculture systems as pathways towards the 2030 global agenda on sustainable development.
Figure 2 shows the countries that dominate circular economy research and implementation. Germany and China were the leading countries, each with more than 11 circular economy-related studies, followed by Australia, the United States of America, the United Kingdom, and Italy, with between six and ten studies each. South Africa, Mexico, Canada, Spain, and Finland have one to five circular economy-related studies.
Research on the circular economy in Africa remains minimal, with only South Africa having done more. South America and South-East Asia did not show any research related to the circular economy. The global distribution of circular economy research thus still has a long way to go, despite the concept being envisaged to drive economies towards sustainable development, reduce waste and global warming, and meet the targets of the Paris Agreement. Figure 2 demonstrates that more needs to be done in adopting the transformative models that inform the circular economy. Current linear models have reached their limits, and there is now an urgent need for a transformational change from the norm towards circular and cross-sectoral planning [2,67,68]. The limited uptake of the circular economy at the global scale is a potential risk to achieving the SDGs by 2030 and could derail the progress being made in enhancing water, energy, and food security. Even in countries that have made some effort to drive the circular economy, the pace has been slow [2,69]. The absence of relevant policies to drive the model's implementation process compounds the challenge of this slow uptake [70,71].
Circular Economy Research by Sector
The bar graph in Figure 3 illustrates circular economy research by sector. The energy sector has the most circular economy-related research, followed by water, waste, and agriculture. Interestingly, the concept is also gaining prominence in other sectors such as the food systems, mining, hospitality, and construction industries, as evidenced by previous studies [10,19].
The Fourth Industrial Revolution (4IR) brings advanced technologies with the potential to accelerate the implementation of circular economy practices, which can promote the transition to a green economy and sustainable development [72]. The innovations include communication networks based on the Internet of Things (IoT), the Internet of Services (IoS), cloud manufacturing and computing, and cyber-physical systems [73,74]. The IoT is a digital transformation that facilitates the storage, processing, and access of big data generated by various means and systems [75,76]. It thereby embeds a circular economy culture in novel business models by facilitating the application of operational analytical models that integrate and support sustainable supply chains [77,78]. Thus, the 4IR and the IoT accelerate smart productivity by enhancing product tracking and access in near real-time and promoting smart production systems [78]. This allows easy access to information on production chains and tracking of environmental impacts while improving resource use efficiency [77,78]. Furthermore, the IoT has become critical for informing the strategic policy formulations that drive the circular economy [79]. As the IoT embraces all sectors, it is envisaged to accelerate the adoption of the circular economy across them.
Circular Economy Research Progression (2010-2020)
The line graph in Figure 4 illustrates the progression of circular economy research between 2010 and 2020. Research on the circular economy was generally static between 2010 and 2016, with an average of about five publications per year. However, interest in the concept increased between 2018 and 2020, as indicated by the steep upward trend in research outputs related to the circular economy. The recent interest could be motivated by the global drive towards waste minimisation in an attempt to reduce resource insecurity, worsening pollution, and global warming [50]. The sudden increase in circular economy research from 2016 to date is also motivated by the amplified interest from policy and decision-makers in adopting evidence-based research that advances the SDGs [62,79].
Circular Economy Trends in the Context of Water and Energy
The challenges associated with resource insecurity, increasing inequality and poverty, environmental degradation, and climate change resulted in the formulation of the SDGs to achieve sustainability by 2030 [29]. These challenges and the SDGs' ultimate formulation have increased interest in circular economy research as applied to the energy and water sectors (Figure 3). Water and energy have dedicated goals: Goal 6 (ensure availability and sustainable management of water and sanitation for all) and Goal 7 (ensure access to affordable, reliable, sustainable, and modern energy). The attainment of these two intricately connected goals depends on contributing to, and benefiting from, the achievement of other SDGs, particularly the circular economy-related Goal 12 (ensure sustainable consumption and production patterns) [20]. This is an indicator of the interdependencies across the SDGs, highlighting the need for greater cooperation amongst sectors, which is at the core of sustainable development [1]. In most countries, water and energy can be instrumental in achieving sustainability. Whether scarce, shared, or abundant, these resources can positively contribute to environmental, social, and economic sustainability [80].
Therefore, adopting circular economy principles in water and energy is fundamental to achieving the SDG agenda, driven by sanitation, renewable and clean energy, and water reuse. However, limited progress in changing existing legal frameworks has been a drawback in understanding the resources' interlinkages, and there is still a need for guiding policies to direct the transition to the circular economy [9,55]. This is because existing legal frameworks were developed around linear production and consumption patterns [55]. Meanwhile, the two sectors can still adopt innovative technologies and practices and move towards the circular economy. It is important to note, however, that the transition is driven by both external and internal factors, which stakeholders need to anticipate, respond to, and influence to ensure clear progress towards a circular economy and sustainable development [81]. The transition from linear to circular requires a shift from conventional models designed around linear production and consumption patterns to models that support the circular economy [55]. There are critical intersections where water and energy converge throughout the transition to a circular economy, and opportunities arise at these points to facilitate the transition [82]. These intersections serve as points of analysis and action, where stakeholders can gain insight and create partnerships for an informed transition to the circular economy.
Energy consumption for water use is greatest at the household level as it is used for heating and other domestic uses [83,84]. Currently, water networks and treatment plants consume an average of between 10% and 15% of national power production globally [85].
Untreated sewage also contributes to GHG emissions. Thus, strategies related to energy and carbon should revolve around reducing costs for customers and minimising the impact on the environment. The energy sector should minimise consumption of non-renewable energy and make a positive contribution to zero-carbon cities [86].
Challenges and Opportunities in Achieving a Circular Economy in the Water, Food, and Energy Sectors
The insecurity of water, food, and energy resources is worsening in many regions of the world, particularly in urban areas of developing countries, where households have resorted to using groundwater and biomass for domestic purposes [80,87]. However, the spatial extent of groundwater and its availability are unknown [88,89]. At the same time, the rate of forest depletion is alarming [89], a scenario that has brought uncertainty into the supply of resources. As a result, many urban areas are rapidly running out of options and now recognise the value of high-grade urban water treatment and the use of renewable sources of energy as cheaper and environmentally friendly alternatives [45,90,91]. This has become a reality worldwide, appearing more pronounced in water-scarce regions. In southern Africa, for example, there are urban areas that rely entirely on groundwater for their water supply [88]. Resource reuse, particularly recycling wastewater, reduces the environmental concerns of ever-increasing nutrient discharges into coastal waters [50]. The elimination of effluent discharges through treated wastewater application within compatible uses reduces the need for expensive nutrient removal treatment processes [50]. Wastewater treated to acceptable quality is useful for replenishing water supplies and reducing the demand/supply gap [50]. However, wastewater recycling and reuse in agricultural systems has faced criticism, as there are concerns that it could pose significant human and environmental health risks [92,93]. The risks include nutrient and sodium concentrations in croplands, as well as the presence of heavy metals and other contaminants such as human and animal pathogens, pharmaceuticals, and endocrine disruptors in the environment when wastewater is used for irrigation [50,93].
As the world population is projected to reach 9 billion people by 2050, the demand for resources is also expected to increase [94]. Production is estimated to grow to three times current consumption, using about 140 billion tons of minerals, fossil fuels, and biomass per annum [95]. Global food production will have to increase by 50% during the same period [96]. To reduce waste during this period of unprecedented demand and achieve sustainability, resources must be used efficiently; otherwise, the challenges will be insurmountable. Owing to current linear economic models, projections indicate that by 2030 food losses and waste will reach about 2.1 billion tons, representing a third of food intended for human consumption [97]. This is happening while about 900 million people are food insecure worldwide [97]. These developments call for a transformational change in food systems through technological developments such as the IoT that enhance circular economy initiatives, resource use efficiency, and food security [98]. The 4IR and the adoption of its smart technologies is the first step in transitioning towards sustainable food systems [99] and the provision of healthy and nutritious diets for all at all times without compromising the environment [100]. This is the main reason why food systems are at the heart of the Sustainable Development Goals (SDGs) and are linked to at least 12 of the 17 goals [29,101].
On the other hand, most electronic goods are disposed of in the environment. In 2016 there were about 45 million tons of electronic waste worldwide [102]. Nearly nine million tons of plastic waste end up in the ocean annually, as only ~20% is recycled [103]. These challenges require an urgent shift from current linear models to circular ones that optimise the use of waste as a resource and extend the lifespan of products, parts, and components while reducing water and energy consumption. Although the circular economy concept has gained prominence in recent years, many challenges still need to be addressed to operationalise it [104]. As already alluded to, the pathway towards the circular economy has controlled intersections where water, energy, or materials meet to provide opportunities that facilitate the transition [82]. These intersections are an opportunity to analyse and opt for the best and most informed options, guided by transformative approaches that lead to the circular economy. These other transformative approaches include nexus planning, sustainable agricultural systems, and scenario planning.
Strengths and Limitations of the Evidence
The authors adopted a systematic review approach and tried to be exhaustive during the selection of papers to be reviewed. This work's main contribution is to provide a 'bird's eye view' of research and trends related to the circular economy. However, there are still gaps in the literature regarding sustainability and green economy-related aspects and their association with circular economy indicators. The study does not include conference papers; given the novelty of the circular economy concept, these could provide important insights into emerging areas of interest. Furthermore, a ten-year time scale was applied for this review; this could be a limitation, as articles published before this period were not part of the search process. Lastly, three search engines were utilised to find relevant literature on the circular economy: Google Scholar, EBSCOhost, and Scopus. There are many additional search engines, and certain articles could have been missed during the selection process. Despite these limitations, it remains critical to provide policy- and decision-making with some guidelines to achieve the circular economy. This has been the missing link between the circular economy and its adoption, particularly in the global South [105].
The results from the literature review facilitated the development of a conceptual framework to guide strategic policy formulations that drive economies towards a more sustainable circular economy. Transitioning towards circularity is urgent, given the speed at which the natural resource base is degrading and depleting [2]. The following section describes the themes of the proposed circular economy conceptual framework.
Pathways to Achieve the Circular Economy
Importantly, adopting the circular economy is envisaged to enhance resource-use efficiency in the face of resource depletion, degradation, and insecurity, integrating related concepts such as cleaner production and industrial ecology that also cover the food sector. The identified factors formed the basis for developing the conceptual framework. The framework is built in a way that allows the integration of related sectors in a nonlinear format. It includes industry, production chains, eco-industrial parks, and built and ecological infrastructures to support resource optimisation across private and public sectors [81]. Identified action levels that form the foundation of a circular economy include (a) seeking much higher resource-use efficiency through the three Rs of cleaner production (reduce consumption of resources and emission of pollutants and waste, reuse resources, and recycle by-products), (b) the reuse and recycling of resources to facilitate the full circulation of resources in the local production system, and (c) the integration of different production and consumption systems to facilitate resource circulation among industries and urban systems [104,105]. The three levels (incorporated into the conceptual framework) require the development of smart technologies to facilitate the collection, storage, processing, and distribution of by-products, as indicated in Figure 5. Efforts to achieve all three levels include the development of resource recovery and cleaner production facilities. This requires investments in new ventures that would translate into job creation, opening opportunities for domestic and foreign enterprises, and economic development. An identified interactive approach to drive economies towards the new paradigm of the circular economy is the adoption of the theory of change to inform the operationalisation of a circularity-driven economy [3].
Six identified thematic areas facilitate the transition to a circular economy: (i) linear economic models that increase waste, (ii) resource insecurity and degradation, (iii) development of smart technologies that guide the transition to the circular economy, (iv) resource use efficiency, (v) novel circular models that reduce waste and promote cleaner production, and (vi) sustainable development (Figure 5). Transitioning to cleaner and renewable energy sources is one of the most critical environmental issues that humankind needs to address urgently. The planet has been warming since the First Industrial Revolution, with the consequences of rising sea levels, increasing natural disasters, heat waves, and other extreme weather events [29]. Studies have shown a close link between human activity and global warming, the main reason being the over-consumption of fossil fuels, which has increased GHG emissions in the atmosphere [106,107]. The circular economy is envisaged to enable a transition to cleaner energy systems and water reuse. The two processes that lead to a circular economy are (a) producing differently and (b) consuming differently [9]. This requires a shift in production systems, transitioning from carbon-based energy (oil, gas, coal) to clean energy (solar, wind, and hydro). The aim is to achieve efficiency in energy use, which is the difference between the energy used and the total energy consumed (often higher due to losses), interpreted as energy productivity [108]. The circular economy is a pathway that guides the development of innovative and efficient energy solutions (Figure 5).
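The relationship between energy used, total energy consumed, and energy productivity described above can be illustrated with a small calculation. The figures and function names below are hypothetical and for illustration only; energy productivity is expressed here as a simple output/input ratio.

```python
def energy_losses(energy_used, total_energy_consumed):
    """Difference between the energy actually used and the total consumed."""
    return total_energy_consumed - energy_used

def energy_productivity(useful_output, total_energy_consumed):
    """Ratio of useful output to total energy consumed (losses included)."""
    if total_energy_consumed <= 0:
        raise ValueError("total energy consumed must be positive")
    return useful_output / total_energy_consumed

# Hypothetical figures: 80 GWh of useful energy delivered from 100 GWh consumed.
losses = energy_losses(80.0, 100.0)            # 20.0 GWh lost in conversion/transmission
productivity = energy_productivity(80.0, 100.0)  # 0.8
```

Raising this ratio — delivering the same output from less total consumption — is one concrete way the efficiency goal described in the text can be tracked.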
As water is at the centre of sustainable development, it is critical for socio-economic development, energy, and food production. It is fundamental for ensuring healthy ecosystems and their services to humankind and the rest of the environment [50]. Water and sanitation are essential for minimising the burden of disease outbreaks and improving human health and economic productivity. Therefore, water reuse closes that loop between water supply and sanitation, and besides, it provides an alternative source of water [109]. In a circular economy, wastewater is treated to the right quality. It can be used to replenish degrading water sources, ensure sufficient supplies, and reduce the demand and availability gap ( Figure 5).
Policy Implications
An important component required in transitioning economies to a circular economy is political will, which is reflected through appropriate policy frameworks. Circular economy-related policies form an integral part of the pathways needed in transitioning to a circular economy, as they are a catalyst for transformational change [9,55]. This is particularly relevant because the current linear models driving industrial production are rooted in a practice where inputs are extracted, combined, processed, consumed, and discarded. There is an urgent need to formulate new policy frameworks that promote the new norm of "reduce, reuse, and recycle" in place of the current "take-make-dispose" principle. Current challenges of resource degradation, depletion, and insecurity have highlighted the need to formulate policies that stimulate circularity in resource extraction, production, and consumption [2,3].
The need to formulate policies that promote the circular economy is urgent, as the current global economy is only 9% circular [55], an indication of the linearity of the global economy. This calls for more robust research that focuses on the policy mix in integrated and cross-sectoral pathways towards the circular economy and stimulates resource efficiency, stressing primary and supplementary policy frameworks such as material taxes, extended producer responsibility, and technical requirements [55]. The integration of circular economy-related policies is based on the fact that no single policy can advance the cross-sectoral interlinkages between industries and policy domains needed for the circular economy transition.
Conclusions
This study assessed the progress made towards achieving the circular economy over the period 2010 to 2020. The focus was on the potential for improved water, energy, and food resource-use efficiencies through adopting a circular economy. The review showed that the circular economy gained momentum between 2018 and 2020 as a viable and practical alternative to current linear economic models. However, much of this progression has been driven by the Global North. Research on the circular economy from the Global South, especially in Africa, is still in its infancy, with only South Africa making real strides towards the model. This suggests that the Global South may still be lagging in transitioning towards sustainable natural resource management. Despite the circular economy being an integrated approach, this study showed a bias towards an energy focus. There is a need to establish stronger linkages to water and food, motivating its adoption by policymakers. The literature review results nevertheless facilitated the development of a proposed conceptual framework to guide strategic policy formulations that lead to sustainable economies. Embedding the circular economy within a more polycentric approach such as the water-energy-food nexus might help counter this bias, as this integration represents fundamental opportunities for transitioning. As a transformative approach, the circular economy can provide pathways towards sustainable development by integrating economic, social, and sustainable natural resource management outcomes. Thus, it is fundamental in achieving several interlinked SDGs.
The C24:0 Sulfatide Isoform as an Important Molecule in Type 1 Diabetes
Particular molecules play pivotal roles in the pathogenesis of many autoimmune diseases. We suggest that the C24:0 sulfatide isoform may influence the development of type 1 diabetes (T1D). C24:0 sulfatide is a sphingolipid with a long carbon-atom chain. A C16:0 sulfatide isoform is also present in the insulin-producing beta cells of the islets of Langerhans. The C16:0 isoform exhibits chaperone activity and plays an important role in insulin production. In contrast, the C24:0 isoform may suppress the autoimmune attacks on beta cells that lead to T1D. Sphingolipid levels are reduced in individuals who later develop T1D but could be increased via dietary supplements or medication.
The question has been raised whether, during the development of type 1 diabetes (T1D), a certain molecule exists that drives the disease. Some molecules play key roles in the development of other autoimmune diseases, such as gluten in coeliac disease, desmosomes in pemphigus, and thyroglobulin in Graves' disease. These molecules are not viral initiators. Common to autoimmune diseases is the involvement of the adaptive immune system and autoreactive T lymphocytes. Therefore, a disease-related molecule that drives the development of T1D may have an impact on immune responses but may not itself be directly attacked.
T1D is widely supposed to be initiated by an enterovirus [1,2], and antibodies and T cells are directed against beta-cell antigens such as insulin, Glutamic Acid Decarboxylase (GAD), IA-2, and the zinc-transporter protein [3]. At the very end, nearly all beta cells are destroyed by the adaptive immune system, an outcome that becomes unavoidable after a "point of no return" has been passed, which occurs approximately when the disease is diagnosed. Apparently, the adaptive immune system has no role in the development of type 2 diabetes (T2D); this disease is driven by the inability of beta cells to produce sufficient quantities of insulin to meet demand.
Inherent physiology suggests that the production of insulin by the beta cells is fraught with danger. One active beta cell in the islets of Langerhans can release one million insulin molecules per minute directly into the blood [4]. The beta cells are surrounded by cells from both the innate and the adaptive immune systems that are ready to react against any misfolded insulin molecules. Beta cells are the only endocrine cells in the human body that secrete highly immunogenic peptides; the adrenal and thyroid glands, as well as the cells that produce the sex hormones, deliver smaller molecules that are less immunogenic. The only other endocrine organ that produces peptide hormones is the pituitary gland, which is situated behind the blood-brain barrier and thus enjoys a certain immunological protection. Furthermore, the pancreatic beta cells may be particularly vulnerable to cytokine-mediated toxicity and excessive insulin biosynthesis, due to their inability to counteract inflammation and oxidative stress. Plasma levels of oxidative stress markers increase during early-onset T1D and are even higher by early adulthood. Oxidative stress occurs when reactive oxygen species levels overwhelm the various antioxidants, and beta cells are more sensitive to oxidative stress than other cells in the islets of Langerhans because they have low levels of antioxidant enzymes such as glutathione peroxidase and catalase [5]. So, how do the beta cells protect themselves? For many years, we studied sphingolipids and found that a particular compound, sulfatide, acts as a chaperone for insulin [6]. Sulfatide facilitates folding of the proinsulin molecule in the Golgi apparatus and preserves insulin crystals in secretory granules at pH 5.5 [7]. When the pH increases to 7.4 during insulin secretion, sulfatide mediates monomerization of the insulin molecules and facilitates exocytosis of insulin granules outside the cell membrane [6,8]. In addition, sulfatide has anti-inflammatory properties [9], and it protects the surface of the beta cells against the immune system. The anti-inflammatory properties of sulfatide include decreasing the secretion of proinflammatory cytokines and chemokines [9], suppressing lipopolysaccharide-stimulated inflammation (via toll-like receptor 4) [10,11], and inducing natural killer T cells [12].
When human islets of Langerhans were analysed, sulfatide was only found in the beta cells [13]. Two isoforms of sulfatide occur in the islets of Langerhans, distinguished by the lengths of their carbon-atom chains: C16:0 and C24:0 [14]. Studies have shown that the short-chain isoform (C16:0) is responsible for all the physiological aspects of chaperone activity in insulin production [15]. In contrast, the long-chain isoform (C24:0) is responsible for immunogenic suppression [16]. We have shown that the drug fenofibrate increases the level of the C24:0 but not the C16:0 sulfatide isoform [17]. Furthermore, in the non-obese diabetic mouse model, fenofibrate prevented the development of diabetes entirely [13]. Fenofibrate has a good safety profile and has been used for many years to lower cholesterol. Interestingly, sulfatide can bind to angiotensin-converting enzyme 2 (ACE2) [18], and patients with T2D exhibit less severe hypertension when sulfatide levels are high [19]. Fenofibrate and sulfatide can counteract severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which enters cells via the ACE2 molecule [20]. SARS-CoV-2 infection increased the incidence of T1D by approximately 2-fold [21], and the mechanism might be that low sulfatide levels increase susceptibility to both T1D and coronavirus disease.
In T2D, the body experiences an increased demand for insulin due to excessive weight gain and intake of refined sugars and fatty acids, as well as insulin resistance. Initially, the beta cells can produce nearly enough insulin; however, they cannot cope with the increased demand indefinitely. Interestingly, selective depletion of C16:0 but not C24:0 was observed in T2D animal models, such as db/db and ob/ob mice [22]. As described above, the chaperone activity of sulfatide in insulin production is associated with the C16:0, not the C24:0, isoform; in particular, the C16:0 sulfatide isoform preserves insulin crystals as effectively as raw sulfatide, whereas the C24:0 isoform has no effect [15].
Among the various human T1D prevention trials conducted to date, only those involving immunosuppression have shown proven efficacy. Such trials have included therapy with antibodies against CD3 [23] and immunoglobulins or medications that reduce the activity of B lymphocytes [24]. These are all anti-inflammatory effects, and the C24:0 sulfatide isoform may have similar effects in the islets of Langerhans [9,16].
Consequently, we suggest that the presence of a critical level of active C24:0 sulfatide plays a key role in preventing the development of T1D. The onset of T1D may be suppressed by the presence of the anti-inflammatory C24:0 sulfatide isoform. But why might levels of this C24:0 isoform become depleted?
In a previous study, we found that the level of sulfatide in the islets of Langerhans of recently diagnosed patients with T1D was only 23% of that found in control participants [13]. This reduction in sulfatide levels was associated with reduced expression of enzymes involved in the metabolism of sphingolipids. We also identified polymorphisms in eight genes (ORMDL3, SPHK2, B4GALNT1, SLC1A5, GALC, PPARD, PPARG and B4GALT1) involved in sphingolipid metabolism that may contribute to a predisposition towards T1D [13]. Interestingly, sphingolipid levels are reduced in individuals who later develop T1D, even before beta-cell autoantibodies are present [25].
The relative proportions of the different sulfatide isoforms generated probably depend on the substrates that are available; therefore, the synthesis of C24:0 sulfatide will depend on the presence of adequate long-chain fatty-acid substrates (Fig. 1). Dietary constituents that are relatively rich in long-chain fatty acids include peanut oil, oats, fish, and bitter chocolate; in addition, some intestinal bacteria can produce long-chain fatty acids [26]. On the other hand, butter and meat are rich in C16:0 fatty acids. Human breast milk is rich in long-chain fatty acids; compared with infant formulas, it has a significantly higher proportion of long-chain polyunsaturated fatty acids [27]. Furthermore, breastfeeding is associated with a lower incidence of T1D. For these reasons, we suggest that the C24:0 sulfatide isoform may play a pivotal role in suppressing the autoimmune reactions that lead to T1D.
109 years of forest growth measurements from individual Norway spruce trees
In 1892 a forest spacing experiment with four different spacing patterns was established with Norway spruce (Picea abies). From 1923 until 1997, when the stand was harvested, the diameter, height and height to crown base of a total of 4507 trees were measured up to 23 times. The original aim of the experiment at establishment was to analyse short-term effects of different spacing patterns. The thinning regime followed state-of-the-art forestry practices. Over the observation period of more than 100 years, the individual observers and the measurement technology changed several times. Thus, the raw measurement data contain systematic and unsystematic measurement errors as well as missing data. We developed methods to complete missing data, smooth implausible developments, and correct measurement errors. The data provided in the present study include spatially explicit individual-tree growth data which can be used to analyse the development of forest stands and their individual trees during one rotation period. The data can be used, e.g., to parameterize and validate forest growth and competition models.
Background & Summary
In 1892, the Austrian Research Center for Forests established a spacing experiment with Norway spruce (Picea abies [L.] Karst.) in the Vienna Woods. The so-called 'Hauersteig' experiment (Lat = 48.2254°, Lon = 16.1353°) comprised four plots with the following planting distances: 1 × 1 m, 1.5 × 1.5 m, 1 × 2 m and 2 × 2 m. The measurements of individual-tree growth started in 1923 at a stand age of 35 years. In 1997, at the age of 109 years, a final survey was conducted before harvesting the remaining trees. The first measurement included 4507 trees, 456 of which survived until 1997. Density reductions were implemented following common management practices. Overall, plots and individual trees were measured 23 times at intervals of 1 to 6 years. Typical measures include diameter at 1.3 m height, tree height and height of the living crown. Additional stem disc analyses of harvested trees allowed analysing diameter growth along the tree trunk. During this time period, the measurement equipment changed. In the early years, the diameter was measured crosswise with a caliper. Later, the girth was measured with a tapeline. During the early phase of the experiment, the height was measured with bamboo poles, then with ladder and tapeline. Later, clinometers from different manufacturers and with different measurement principles were used, which required evaluating the distance to the tree. Discrepancies due to different observers and measuring equipment were reduced by comparing the repeated diameter and height measurements with the diameters and heights from the stem analyses. Observation gaps were filled by interpolation between earlier and later observations of the tree. Missing values were complemented by calculating averages of comparable measured trees. Finally, diameter and height were verified to increase monotonically, except where a crown breakage was recorded.
The provided spatially explicit individual-tree growth data can be used to analyse the development of individual trees and forest stands during one rotation period. It can be used to parameterize and validate forest growth models or compare different competition indices.
Methods
The experiment was established in spring 1892 using three-year-old seedlings of Norway spruce and applying the pit-planting method. The provenance was not recorded. To avert game damage, the trial was fenced in the year of establishment. In total, the trial had a size of 1.9 ha (160.4 m × 118.4 m) and was split up into four plots (Table 1). The density within each of these four plots ranged from 2500 trees/ha up to 10000 trees/ha (Table 1).
The very low juvenile mortality was compensated by reinforcement planting over the first years to maintain the original spacing pattern. Until 1923 the plots had undisturbed growth and only a few plot observations had been made. Canopy closure (the crowns of neighbouring trees coming into contact) was reached in 1898 on plot 1, between 1901 and 1902 on plot 2, on plot 3 in 1899 in the 1 m direction and in 1903 in the 2 m direction, and in 1905 on plot 4. Besides the planted Norway spruce, a few trees of European larch (Larix decidua), Scots pine (Pinus sylvestris), European beech (Fagus sylvatica), sessile oak (Quercus petraea) and silver birch (Betula pendula) regenerated naturally on the trial.
In 1923 an observation area with a size of approximately 0.25 hectare was permanently established on each of the four plots. These core zones were surrounded by a buffer zone. The trees in these core zones were permanently marked with an individual number in 1923. On plot 1, only the elite trees received an individual tree number in 1923; the remaining trees on plot 1 were marked in 1925. The trees on plot 1 which had been removed between 1923 and 1925 were measured but had not been given individual tree numbers.
Measurements
Throughout the observation time, different measurement techniques were used for assessing tree height and diameter of standing trees (Table 2). In addition, stem analyses were made from selected felled trees. For such stem analyses, stem discs were cut at specified tree heights and the number and width of the tree rings were measured from these discs. Thus, the height and diameter growth inside bark could be reconstructed. In winter 1988, the coordinates of the tree base positions were measured and used to draw a map of tree positions and the corresponding crown shapes. Using this map, the crown radii in 8 directions were measured and recorded (see Fig. 1). In other years, only one crown diameter per tree was measured.
Diameter adjustments for measuring error
Throughout the lifetime of the trial, management and measurement were carried out by different observers using different measurement techniques (Table 2). These factors might have caused a bias in the obtained tree parameters. For a sample of trees, stem discs at a tree height of 1.3 m were harvested. We utilized these stem disc analyses to compare the various diameter measurements from different observation years and calculated adjusted diameter values. This comparison was based on the following assumptions: 1) when diameter was measured during the vegetation period, we assumed that diameter increment followed a linear increase between May 1 and August 31, which is close to the observations of ref. 2; 2) diameter assessment was done either by one or two cross-sectional measurements or by measuring the girth, whereas during the stem analysis four radii from the centre in the directions North, East, South and West were measured. The arithmetic mean of diameters was used to average cross-sectional measurements, as it revealed results comparable to the theoretically better geometric mean (ref. 3). Orthogonally measured stem disc radii were averaged according to ref. 4, who showed that the arithmetic mean of the basal areas of the individual radii comes close to the real basal area of the stem.
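The radius-averaging rule attributed to ref. 4 (arithmetic mean of the basal areas of the individual radii, back-transformed to a diameter) can be sketched as follows; the radius values are hypothetical:

```python
import math

def diameter_from_radii(radii_cm):
    """Equivalent stem diameter from several radius measurements.

    Following the approach described in the text, each radius is converted
    to a (circular) basal area, the areas are averaged arithmetically, and
    the mean area is converted back to a diameter."""
    areas = [math.pi * r ** 2 for r in radii_cm]   # per-radius basal area
    mean_area = sum(areas) / len(areas)            # arithmetic mean of areas
    return 2.0 * math.sqrt(mean_area / math.pi)    # back-transform to diameter

# Hypothetical stem disc radii (cm) measured towards N, E, S, W:
d = diameter_from_radii([10.2, 9.8, 10.5, 9.9])
```

Because areas are averaged (not radii), the equivalent diameter of a non-circular stem is slightly larger than twice the mean radius, consistent with the 1.2% calliper/tape discrepancy the authors attribute to non-circular stems.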
To compare tree diameters from stem discs with the periodic measurements on living trees, we had to correct them for the missing tree bark and for shrinkage after tree harvest and wood drying. The relative diameter differences (dm − dd)/dd between stem disc diameter (dd) and repeated measurements (dm) were plotted over stem disc diameter in order to obtain bias corrections separately for 1) all diameter measurements with a calliper and 2) diameter measurements with tape. In relation to the stem disc diameter, the diameter of unshrunk wood outside bark measured with a calliper was found to be 7.0% larger, and the diameter of unshrunk wood outside bark measured with tape was found to be 8.2% larger. The difference of 1.2% between calliper and tape measurements is likely caused by non-circular stems. Finally, the diameters of the stem disc measurements were increased by 7.0% to represent stem diameters of unshrunk wood outside bark. This adjusted stem disc diameter can then be compared with the diameters from the repeated measurements, and periodic fluctuations due to measurement error could be eliminated. These periodic fluctuations can be handled by eliminating (1) the absolute diameter difference, (2) the relative diameter difference or (3) the relative basal area difference. We chose the basal area and reduced the differences by eliminating the median of the relative basal area differences for each year and subplot. After this diameter adjustment, outliers causing diameter increments lower than −1 cm and −0.54 cm/year, respectively, and larger than 1 cm/year were removed if enough other measurements were available. Negative diameter increments were eliminated by computing an isotonic (monotone increasing nonparametric) least squares regression which is piecewise constant (isoreg).
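The final isotonic least-squares step (R's `isoreg`) can be sketched with a small pool-adjacent-violators implementation; the diameter series below is hypothetical:

```python
def isotonic_fit(y):
    """Pool-adjacent-violators: least-squares monotone (non-decreasing) fit,
    analogous to R's isoreg() used in the text to remove negative increments."""
    merged = [[v, 1.0] for v in y]  # each block holds [sum, count]
    blocks = []
    for b in merged:
        blocks.append(b)
        # pool while the previous block's mean exceeds the current one's
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * int(c))
    return out

# A diameter series (cm) with one implausible negative increment:
fitted = isotonic_fit([12.0, 13.1, 12.9, 14.0])
# → [12.0, 13.0, 13.0, 14.0]: the 13.1/12.9 violation is pooled to its mean
```

The pooled segments are exactly the "piecewise constant" stretches the text mentions: wherever the raw series decreases, the fit flattens to the local mean.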
To correct for missing values, i.e., individual trees that had not been measured during a regular field campaign, a list of observation dates was built for each plot and each individual tree. A missing tree observation was recorded when a plot observation date had no tree observation and occurred before the last tree observation. Diameters for such missing values were estimated by a spline function using the method "monoH.FC", interpolating between the basal areas of the previous and subsequent measurements of the individual trees. After this completion, it was ensured that these trees also had a monotonically increasing diameter. Periods with no diameter increment were eliminated, where possible, by linear interpolation between the previous and subsequent observations.
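Gap-filling on the basal-area scale with a monotone spline can be sketched as follows. SciPy's `PchipInterpolator` (monotone cubic Hermite) is used here as a Python stand-in for R's "monoH.FC" method; years and diameters are hypothetical:

```python
import math
from scipy.interpolate import PchipInterpolator  # monotone cubic, analogous to "monoH.FC"

def interpolate_missing_diameters(years, diameters_cm, missing_years):
    """Interpolate gap years on the basal-area scale, as described in the text,
    then back-transform to diameter. A monotone interpolant guarantees the
    filled values do not introduce negative increments."""
    basal_areas = [math.pi * (d / 2.0) ** 2 for d in diameters_cm]
    spline = PchipInterpolator(years, basal_areas)
    return [2.0 * math.sqrt(float(spline(t)) / math.pi) for t in missing_years]

# Hypothetical series with one survey year missing for a tree:
filled = interpolate_missing_diameters([1945, 1955, 1960], [18.0, 22.0, 23.5], [1950])
```

Interpolating basal area rather than diameter matches the authors' choice above and keeps the filled series consistent with the basal-area-based bias correction.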
Height adjustment
Tree heights of young trees and of removed trees had been measured directly, either using poles or ladders and a tapeline. The height of larger trees had been calculated by triangulation using observer-tree distances and the angles between the tree base and the tree top. These height calculations had been done either implicitly, i.e., the tree height was read directly from the height measurement instrument, or the measured distances and angles had been recorded and the height calculations had been done in a second step at the office. In the latter case, measurement data had been recorded in fixed format (3 characters, angle given as decimal degrees) on punch cards. Negative angles (direction downslope) had been coded by setting the first digit to 9. Post-measurement calculations of tree height had been done for the measurements in the years 1978, 1983, 1989 and 1993. In 1978, the angles to the tree top and base had been measured, and the variable distance from the observer to the tree had been determined according to ref. 1. In 1983, 1989 and 1993, the slope distance between observer and tree and the angles to the top and bottom of the tree had been measured. The heights in 1978 could be calculated from these measurements, where H is the height in dm, A the distance between the measurement marks on the tree in cm, C a device-dependent constant (in this case 1/0.3), WO the angle to the tree top in °, WU the angle to the measurement mark on the stem in °, and K the height of the measurement mark in dm (typically 13 dm above ground). The heights of 1983, 1989 and 1993 could be calculated analogously, where D is the slope distance between the clinometer and the measurement mark on the stem in dm. Tree height measurements could contain certain biases due to changing measurement technologies, varying observation positions for individual trees if the trees are not exactly perpendicular, and the difficulty of identifying the exact tree top, especially for non-monopodial trees.
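The original display formulas did not survive extraction, so the following is a generic clinometer triangulation of the kind described, not the paper's device-specific formula (the constant C and the punch-card conventions are omitted). Units follow the text: dm for heights and distances, degrees for angles, with angles below the horizontal passed as negative values:

```python
import math

def tree_height_dm(horiz_dist_dm, angle_top_deg, angle_mark_deg, mark_height_dm=13.0):
    """Standard triangulation: height of the tree top above ground, given the
    horizontal observer-tree distance and the angles to the top and to a
    measurement mark of known height (13 dm above ground by default)."""
    rise_above_mark = horiz_dist_dm * (
        math.tan(math.radians(angle_top_deg)) - math.tan(math.radians(angle_mark_deg))
    )
    return mark_height_dm + rise_above_mark

# Observer 200 dm from the stem, sighting 40° up to the top and 2° up to the mark:
h = tree_height_dm(200.0, 40.0, 2.0)
```

With a recorded slope distance (as in 1983-1993), the horizontal distance would first be recovered as D·cos of the angle to the mark before applying the same relation.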
In order to estimate the measurement bias and to remove implausible repeated measurements (i.e. decreasing tree heights in subsequent observations), we corrected the height data with several logical verifications. In a first step we tested for negative height increments larger than 1 m. If such trees did not show signs of crown breaks or other crown damage, negative height increments were considered as measurement bias and removed from the dataset. In a second step we tested for extreme positive height increments of more than 1 m annually. Such values are implausible under the given site conditions and were also removed. For the remaining heights, an adjustment procedure based on stem disc analysis was applied, analogous to that for the tree diameters. As stem discs had been collected at different heights, counting the tree rings allowed identifying the tree age at the respective disc height. It was assumed that this stem disc height was reached in the middle of the vegetation period, and thus the counted age was reduced by half a year. Tree heights at ages between stem discs were estimated with a spline (method "monoH.FC"). For each plot and observation year, the median ratio between the height from the stem analysis and the height from the repeated measurements was levelled out by multiplying the repeatedly measured heights with a plot- and year-specific factor.
Negative height increments of these adjusted tree heights were then eliminated with isoreg in cases where no crown break was recorded; the result is named hMon. Trees without any crown break that reached the final stand age of 109 years in 1997 were used to develop a height-age curve of the form h = c0*log(1+exp(c2)*age**c3)**c1 according to ref. 5. This function was used to estimate the tree height (hLogFun) of every tree as a function of its age. The ratio hMon/hLogFun was fitted with a Generalized Additive Mixed Model (GAMM) using a spline over age, diameter, and x- and y-position for spruce, and over age and diameter only for the other species, with a random effect on the intercept for the individual tree or, where necessary, the species. With this GAMM the height of each tree without a crown break was estimated for each observation period. Negative height increments due to the spline fits were eliminated. These calculated heights were then corrected with a multiplier so that they hit the bias-corrected, monotonically increasing measured heights; this was done by linearly interpolating the ratio of the two heights between the observation dates. This fit was done twice: once with all height measurements and once only up to the age at which a crown break was recorded. The latter height (the height estimate without a crown break, even if a crown break was recorded) was used in the next step for estimating the height of the crown base. Negative increments of this height were eliminated as long as there was no crown break, and zero increments were removed where possible by assigning the average date of the constant-height interval and linearly interpolating the height for the observation dates from the previous and next observations.
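The repeated "brought to monotonically increasing values" steps correspond to isotonic regression, which R's isoreg implements. A minimal pool-adjacent-violators sketch in Python (function name ours):

```python
def isotonic(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y,
    the role played by R's isoreg in the height adjustment."""
    # each block stores [sum of values, count]; its level is the block mean
    blocks = [[v, 1] for v in y]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] / blocks[i][1] > blocks[i + 1][0] / blocks[i + 1][1]:
            # merge the violating neighbours and re-check backwards
            blocks[i][0] += blocks[i + 1][0]
            blocks[i][1] += blocks[i + 1][1]
            del blocks[i + 1]
            if i > 0:
                i -= 1
        else:
            i += 1
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out
```

A decreasing run of heights is thus replaced by its mean, which is exactly how a spurious negative increment between two campaigns gets levelled out.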
The measured height of the crown base must be lower than or equal to the tree height. As spruce cannot expand its crown downwards, the observed crown heights were brought to monotonically increasing values with isoreg. With a GAMM, the height of the crown base and also its ratio (height of the crown base / tree height) were estimated using a spline over age, diameter, the estimated height without a crown break, and x- and y-position, with a random effect on the intercept for the single tree or, where too few observations were available, for the species. From these two crown height estimates (the direct estimate of the height of the crown base and the estimate based on the crown ratio) the average was calculated. This average was brought to monotonically increasing values with isoreg. The crown heights were adjusted to hit the monotonically increasing observations and were checked again for monotonic increase. Zero height changes were removed where possible by linearly interpolating the crown height for the intermediate observation dates from the previous and next observations. Finally, it was verified that the crown base was not higher than the tree height.
Code availability
The code processing the data is included in the dataset and has the file name "dhc.r".
Data Records
The data is provided in comma separated values format (Data Citation 1).
Technical Validation
The precision of the measurement instruments changed over the observation time, and today we cannot precisely judge the quality and observation scale of the instruments used at the beginning of the experiment. We therefore tested whether the diameter readings had an accuracy of 1 mm by checking whether the remainder of the division of the diameter in mm by 10 is randomly distributed. For testing this hypothesis, a Chi-square test was applied in which the observed distribution of the remainder values 0 to 9 was compared with the expected equal distribution. This analysis revealed that many observations do not have an accuracy of 1 mm, probably because traditional callipers typically have a 1 cm scale and finer readings had to be estimated. Such estimates sometimes prefer 0 mm and 5 mm or certain decimal places. A similar instrumental error was expected for the height measurements, and thus we tested whether the height readings had an accuracy of 1 dm by taking the remainder of the division of the height in dm by 10. It turned out that the distribution of the remainders of the given height values was in all cases close to a random distribution.
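The remainder test can be reproduced along these lines; a short Python sketch using scipy's chisquare against a uniform expectation (function name ours):

```python
import numpy as np
from scipy.stats import chisquare

def remainder_uniformity_p(values_mm):
    """Chi-square test whether the last digit (mm) of the readings is
    uniform on 0..9; a small p-value indicates rounding or digit preference."""
    remainders = np.asarray(values_mm, dtype=int) % 10
    counts = np.bincount(remainders, minlength=10)
    # expected: equal counts under an exact 1 mm reading accuracy
    return chisquare(counts).pvalue
```

A strong preference for 0 mm and 5 mm endings, as found for the calliper data, drives the p-value towards zero; truly uniform last digits leave it near one.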
The individual-tree measurements were aggregated, converted to per-hectare values and plotted to check whether their development is plausible. Figure 3 shows the stem number development, which in 1925 is, on three of the plots, still close to the stem number at plantation. The average diameter increases from around 10 cm to 40 cm over time. The visible steps in Fig. 3 arise when the removed trees have a different average diameter than the remaining trees. Tree height increases during the observation period from 10 m to 30 m. The plot with the widest spacing (2 × 2 m) seems to have the best site index (h50 = 34.2 m at age 100 years), the plots with 1 × 1 m (33.2 m) and 1.5 × 1.5 m (33.3 m) spacing an average one, and the plot with 1 × 2 m spacing the lowest (32.6 m). The height-to-diameter ratio (H/D) is an indicator of stability. The cumulative harvested basal area over the diameter of the harvested trees indicates which assortments, and in which amounts, could be harvested. Basal area stock, total increment and cumulative removals over the observation period are also shown. The 1 × 1 m plot has the highest basal area increment; this lead was achieved at young ages through the higher stocks held there. The periodic increment fluctuates from period to period, especially in very short periods, where the "noise" caused by measurement errors seems to be larger than the measured "signal". The mean annual basal area increment culminates for the 1 × 1 m plot around 1950, at a much higher level than for the other three plots, which culminate around 1980. The observed values show plausible developments, but the existing random scatter is problematic for analyses depending on short time intervals.
Integrals of motion in the Many-Body localized phase
We construct a complete set of quasi-local integrals of motion for the many-body localized phase of interacting fermions in a disordered potential. The integrals of motion can be chosen to have binary spectrum $\{0,1\}$, thus constituting exact quasiparticle occupation number operators for the Fermi insulator. We map the problem onto a non-Hermitian hopping problem on a lattice in operator space. We show how the integrals of motion can be built, under certain approximations, as a convergent series in the interaction strength. An estimate of its radius of convergence is given, which also provides an estimate for the many-body localization-delocalization transition. Finally, we discuss how the properties of the operator expansion for the integrals of motion imply the presence or absence of a finite temperature transition.
Introduction
The thermodynamic description of macroscopic bodies, as shown by Boltzmann in his work on the foundations of statistical mechanics, is based on the assumption that the underlying microscopic dynamics is ergodic, so that equilibrium averages are constrained by the conservation of energy and momenta only. In integrable systems, with their extensive set of conserved quantities, the long-time relaxation instead leads only to a restricted (generalized) Gibbs ensemble.
Given this similarity, it was conjectured that, as in the non-interacting limit, an extensive set of (quasi-)local integrals of motion should exist in the MBL phase [33,35,34]. By definition, those do not evolve with time, as they commute with the Hamiltonian. They thus constrain the dynamics to remain very close to the initial condition in which the system was prepared. The existence of such local integrals of motion was recently proven for a particular spin Hamiltonian in [35], under reasonable assumptions bounding potential level attraction. The notion of locality used above refers to the set of degrees of freedom which the conserved operator affects. Conserved quantities in integrable systems are not local in this sense, as they are sums over all space of certain local terms.4 The non-locality (in our sense) of those integrals still allows for finite transport in integrable systems. A further important difference between MBL systems and integrable ones is the fact that MBL is robust with respect to any sufficiently small perturbation of the Hamiltonian, while integrability in 1d systems is broken by generic perturbations.
The aim of this paper is to show that quasi-local integrals of motion exist for weakly interacting disordered electrons, under the same set of assumptions that were made in the original work by Basko et al. [4] (henceforth referred to as BAA). We find such integrals by solving equations for conserved operators within perturbation theory. Our approach reduces the problem to the solution of a single-particle-like hopping problem in operator space, for which we present a solution in the strongly localized regime, and determine the radius of convergence of the construction. This furnishes an estimate of the delocalization very similar to that obtained by Basko et al. [4]. We hope that our technique will help to obtain analytic results on many-body localization in the future.
Outline and summary of this work
Here, we present a short outline of this work, summarizing the main steps, and the problems we address.
We are seeking integrals of motion for disordered electrons with weak short range interactions, as defined in Eqs. (1), (2). In Section 2 we coarse-grain the model, reducing it to an array of coupled quantum dots of size of the order of the single-particle localization length.
The non-interacting model has trivial integrals of motion, namely the occupation numbers of the single-particle eigenstates. We then look for their generalization in the presence of interactions, "dressing" these integrals of motion (Section 3). This leads us to a set of linear equations (Section 4, Eq. (40)) in the space of number conserving operators, which we expand in the basis (28) of products of single particle creation and annihilation operators. For any strength λ of the interaction, these equations define a unique set of conserved operators. The main question to analyze is whether they act locally, or whether they significantly affect a spatially unbounded set of degrees of freedom. We address the question of locality within the so-called forward approximation, introduced in Section 5, where we only determine the leading term in perturbation theory for the expansion coefficients. Since the interaction terms act locally, for the conserved quantities to be non-local increasingly high orders of perturbation theory must contribute to the expansion, i.e., the perturbative expansion must diverge (Section 5.2). We represent diagrammatically the particle-hole creation processes which dominate the forward approximation in Section 5.4. In order to study the statistics of the diagrams at high orders, we need to solve three main technical problems. The first is the estimate of their number, due to the freedom in choosing the interaction vertices and their order. We solve this (Section 5.4) by introducing an integral representation that sums correlated diagrams sharing the same interaction vertices. This reduces the factorially many (in the order N of the perturbation theory) terms to an only sub-exponential number of terms, which are products of N denominators. The second problem concerns their statistical distribution. In the many-body problem the denominators are correlated even within the forward approximation, at variance with one-particle problems.
Therefore determining the statistics of large deviations, which dominate the probability of creating excitations at large distance, is a challenge. We solve it using a transfer-matrix technique (Section 7). Finally, we have to count the number of processes leading to a given configuration in operator space, which is a combinatorial problem in the space of diagrams (Section 8). The last two ingredients allow us to determine the decay rate of the largest of these terms, which dominates the expansion. Requiring a positive spatial decay rate determines the range of convergence of the operator expansion in the forward approximation.
After solving these technical problems, we obtain the final result in Section 9, namely the existence of quasi-local integrals of motion for disordered electrons for sufficiently small interaction λ < λ_c. We find λ_c in the forward approximation: in the same spirit as Anderson's "upper bound" approximation, this is expected to yield a lower bound for the actual phase boundary of the many-body localized phase of the lattice system at infinite temperature. In a final section, we discuss possible scenarios for a localization transition or crossover at finite temperature (Section 10).
Model Hamiltonian and coarse-graining
We consider a Hamiltonian describing weakly interacting, spinless electrons in a disordered background. At variance with the work by BAA, we consider a model on a lattice Λ, with Hamiltonian H = −Δ(Λ) + V_dis + U, where Δ(Λ) is the lattice Laplacian, V_dis is a random disorder potential and U is a short range interaction. We choose to work with a lattice model because in a finite volume its Hilbert space is finite, and both the spectrum and the energy per particle are bounded. This will allow us to take a meaningful limit of infinite temperature, and to make statements about many-body localization in that limit.
It is convenient to write the interaction in the form U = (λ/ν) Σ_{i,j} u(i − j) n_i n_j, where ν is the density of states and u(i − j) is a dimensionless, normalized, short-ranged interaction kernel. The dimensionless parameter λ measures the interaction strength.
We consider a disorder potential such that the single particle part of the Hamiltonian possesses only fully localized wave-functions φ_α, α = 1, ..., |Λ|, with typical localization length ξ. Moreover, we are interested in the disorder regime relatively close to single particle delocalization, where ξ is significantly bigger than the lattice spacing a. Let us denote by δ_ξ = 1/(ν ξ^d) the average level spacing in a localization volume, and by W the band width of the single particle problem. The condition ξ ≫ a ensures that a large number of single particle wave-functions overlap significantly in space. This will provide a large parameter for our analysis.
It is convenient to switch to the basis of single particle wave-functions φ_α, in which the Hamiltonian assumes the form H = Σ_α ϵ_α n_α + Σ_{αβγδ} U_{αβ,γδ} c†_α c†_β c_δ c_γ, where n_α = c†_α c_α, and the Greek indices label single particle eigenstates obtained in the absence of interaction. We also choose a certain ordering relation "<" among the indices β, γ.
Our choice of the basis φ α is different from that of BAA, who worked with Hartree-Fock (HF) orbitals. Our choice allows us to work in full generality in the operator space, while HF orbitals depend on the non-interacting occupation numbers, i.e., the many body state around which one analyzes stability with respect to interactions. In Section 8.1 we will argue, however, that in the approximation in which we are working, we can neglect the interaction vertices U αβ,γ δ with two or more coinciding indices, even without resorting to HF, which resums most of those terms. Thus, the two different choices of basis sets lead essentially to the same combinatoric analysis of diagrams.
To simplify the above model further, we assume the single particle energies ϵ_α to be random and uncorrelated. The interaction term U is antisymmetrized: U_{αβ,γδ} = U_{βα,δγ} = −U_{βα,γδ}. We further simplify it by taking its matrix elements U_{αβ,γδ} to be local in space, i.e., they are assumed to be non-zero only if the corresponding single particle states have their localization centers within one localization volume. Hereby we define the localization center of a single particle state as the mean position ⃗r_α = Σ_i ⃗r_i |φ_α(i)|². Moreover, it is known that the matrix elements decrease rather rapidly (as a power law) when the energy difference between the involved levels exceeds the level spacing in the localization volume, δ_ξ. This motivates the use of a simplified interaction in which we take u_{αβ,γδ} to be non-zero only if the involved energy levels differ pairwise by no more than δ_ξ (7). In these cases we assume U_{αβ,γδ} = λ δ_ξ η_{αβ,γδ}, where η_{αβ,γδ} is a random variable, box-distributed in [−1, 1].
Coarse-graining
Let us now coarse-grain the model: we assume that the interaction U αβ,γ δ connects wavefunctions either on the same localization volume or on neighboring localization volumes. For either vertices we assume the same amplitude λ, as long as the restrictions (7) on the energy levels are respected.
This differs from the coarse-graining by BAA, who divided the sample into d-dimensional regions of linear size ξ , restricted the single particle levels α to those regions, but included a (small) hopping term between localization volumes (elastic processes). In contrast the interaction term, responsible for inelastic processes, was restricted to scattering within a given localization cell.
Integrals of motion and absence of transport
In the absence of interactions (λ = 0), the occupation numbers n_α of single particle levels are mutually commuting, conserved quantities. These operators are quasi-local in real space, as follows immediately from their expansion in the basis of lattice operators: n_α = Σ_{i,j} φ_α(i) φ*_α(j) c†_i c_j (9), where φ_α is the corresponding localized single particle eigenfunction. By quasi-locality of the n_α we mean that an operator c†_i c_j contributes to the expansion with a weight which decays exponentially in the distance between the localization center ⃗r_α of φ_α and the sites on which it acts (its support, here the sites i, j).
By truncating the sum (9) to terms with support within a neighborhood of mξ of ⃗r_α, one obtains an operator whose commutator with the Hamiltonian vanishes up to exponentially small terms. As m → ∞ the truncated operator rapidly converges (in the operator norm) to the conserved n_α. In the non-interacting case this follows directly from the spatial localization of the single particle wave-functions. Our goal is to find an analogue of these operators in the interacting case.
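This exponential convergence of the truncated n_α is easy to verify numerically. The following hedged Python sketch (a 1D Anderson chain with illustrative parameters; function name ours) computes the weight of the expansion of n_α lying outside a window of radius m around the localization centre:

```python
import numpy as np

def n_alpha_tail_weights(ms, L=60, W=4.0, seed=0):
    """Off-window weight of a single-particle occupation number n_alpha in a
    disordered 1D chain: the sum of |phi_i phi_j| over pairs (i, j) with both
    sites farther than m from the localization centre."""
    rng = np.random.default_rng(seed)
    H = np.diag(rng.uniform(-W, W, L))                 # random on-site potential
    H += np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)  # hopping
    eps, phi = np.linalg.eigh(H)
    alpha = int(np.argmin(np.abs(eps)))                # a state near the band centre
    psi = phi[:, alpha]
    center = int(np.argmax(psi ** 2))                  # localization centre r_alpha
    weights = np.abs(np.outer(psi, psi))               # coefficients of c^+_i c_j
    tails = []
    for m in ms:
        mask = np.abs(np.arange(L) - center) > m
        tails.append(float(weights[np.ix_(mask, mask)].sum()))
    return tails

# the off-window weight shrinks roughly exponentially with the radius m
print(n_alpha_tail_weights([2, 5, 10]))
```

With strong disorder the localization length is of order one lattice spacing, so already a window of ten sites leaves a negligible remainder, mirroring the statement that the truncated operator commutes with H up to exponentially small terms.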
That such a generalization should exist was proven by Imbrie [35] under certain hypotheses on the spectrum in a 1d spin chain, for which he constructed a quasi-local unitary rotation U which essentially diagonalizes the Hamiltonian H. More precisely, it brings it to the canonical form U†HU = Σ_i h_i σ^z_i + Σ_{i<j} J_{i,j} σ^z_i σ^z_j + Σ_{i<j<k} J_{i,j,k} σ^z_i σ^z_j σ^z_k + ··· (10), where the k-spin interactions J_{i_1,...,i_k} decay exponentially with the diameter of their index set. Applying the inverse unitary to the conserved spins σ^z_i provides one with integrals of motion of the original Hamiltonian H, I_i = U σ^z_i U†. At the same time, Huse and Oganesyan [33] and, independently, Serbyn et al. [34] argued for the existence of such local integrals of motion in general MBL systems.
Note that the set of conserved and mutually commuting quantities is by no means unique. For example, any set of independent polynomials of the σ^z_i is conserved as well. A nice property of the set of σ^z_i, however, is the binarity of their spectrum, {−1, 1}, or the property that (σ^z_i)² = 1. Knowing the eigenvalues of N independent integrals of motion like this allows one to unambiguously label the 2^N eigenstates of the Hilbert space of an N-spin system [33].
An alternative construction of conserved (but non-binary) quantities is discussed in [36] for a random spin chain, where infinite time averages of local operators are considered (such as the time average of n_i(t) = e^{iHt} n_i e^{−iHt} in our case). By definition, the time average commutes with the Hamiltonian. In an MBL phase one expects the average to remain non-zero, whereas it vanishes due to diffusion in an ergodic delocalized phase.
In this paper, we make a different choice, which nevertheless defines a unique set of binary integrals of motion. Our construction consists in two steps. We will first prove the existence of local integrals of motion in perturbation theory in λ, not requiring the binarity of their spectrum. This is the most difficult task and will take the largest part of the paper.
Construction of exact quasiparticles of the Fermi insulator
Since our procedure will leave us some freedom in the choice of integrals, in Appendix A we will show how this freedom can be used to fix the spectrum to be binary, order by order in the interaction λ. Notice that the latter amounts to the construction of exact quasiparticle occupation numbers of the interacting Fermi insulator. In contrast to Fermi liquids, where such exact quasiparticle operators cannot be constructed, neither in real nor in momentum space, this becomes possible in the MBL phase. Rewritten in terms of these occupation numbers ñ_α, the Hamiltonian can then be seen as an exact quasiparticle energy functional, which determines the energy E^(qp)_α of any quasiparticle as a function of the occupations of all the others, as E^(qp)_α = ∂H/∂ñ_α = ϵ̃_α + Σ_{β≠α} J̃_{αβ} ñ_β + ··· .
Complete set of local integrals implies absence of transport
Before outlining the construction of the integrals of motion, let us first show how their existence implies the absence of any d.c. transport, and hence many-body localization.
In order to show the absence of d.c. transport, consider the Kubo formula for the conductivity σ associated with the local current density J_r of a conserved quantity, such as charge or energy. Let J(ω, r) be the current at frequency ω and position r arising in linear response to a spatially homogeneous field E, and denote by J(ω) = (1/V) Σ_r J(ω, r) the spatially averaged current density, V being the volume of the system. At finite inverse temperature β, the dissipative part of the conductivity is given by the Fourier transform Π(ω, r) of the retarded correlation function of the current operator, which in Lehmann representation yields

Re[σ(ω)] = (π(1 − e^{−βω})/(ω V Z)) Σ_r Σ_{m,m′} e^{−βE_m} ⟨m|J_0|m′⟩⟨m′|J_r|m⟩ δ(ω − E_{m′} + E_m),   (17)

where Z is the partition function, the δ-function is understood as regularized with a level broadening η, and the limit η → 0 is to be taken after the thermodynamic limit.
Let us first show that for a complete set of strictly local conserved quantities the conductivity vanishes with probability one in the thermodynamic limit. By strictly local operators we mean operators that act only on degrees of freedom belonging to a compact spatial region of finite diameter ζ. We call a set of conserved quantities complete if for any two distinct eigenstates m ≠ m′ at least one of those integrals of motion takes different eigenvalues.
For two eigenstates m, m′ let Ĩ be such a distinguishing integral, with eigenvalues Ĩ|m⟩ = Ĩ_m|m⟩ and Ĩ|m′⟩ = Ĩ_{m′}|m′⟩, Ĩ_{m′} ≠ Ĩ_m. For a strictly local current operator and r sufficiently larger than ζ, it follows immediately that one of the two current matrix elements vanishes, since one of the two commutators vanishes. Thus, in Eq. (17) the sum over r can be restricted to r ≲ ζ. Furthermore, for any fixed eigenstate m the sum over eigenstates m′ is restricted to a finite set, since J_{r′}|m⟩ can differ from |m⟩ only in a finite number (≤ exp(cζ^d), with c = O(1)) of integrals of motion. Thus, in the thermodynamic limit, where we have to send η → 0, the contribution to the δ-function vanishes with probability one, and thus Re[σ(ω = 0)] = 0. Note that the potentially singular term from m = m′ does not contribute because ⟨m|J_r|m⟩ = 0 by time reversal invariance. This discussion is of course over-simplified, since the actual integrals of motion are only quasi-local, in the sense that there are corrections to strict locality which decay exponentially with the diameter of their support on a typical scale ζ. However, the derivation above reflects the essential mechanism by which a complete set of integrals of motion suppresses transport. Consider the matrix elements ⟨m′|J_{r′}|m⟩ for eigenstates that differ significantly only in integrals of motion whose support is centered up to a distance xζ from r′. These matrix elements are then not exactly zero, but exponentially small in x. There are also exponentially many states m, m′ which satisfy these criteria, and thus some energy differences E_m − E_{m′} in (17) become exponentially small. One might worry that these exponentially small denominators could contribute to the δ-function in the thermodynamic limit, leading to a non-zero conductivity.
However, the very construction of the local integrals of motion outlined in the following, and the convergence of that procedure, strongly suggest that with probability tending to one as η → 0 the exponential smallness of the energy denominators is dominated by the decay of the matrix elements in (17), in the sense that at small η the contributions to the δ-function come with weights that are almost surely much smaller than η. If this were not the case, resonant energy denominators would systematically appear in the construction of the conserved quantities and prevent their locality. Therefore, the consistency and convergence of the following construction implies the suppression of d.c. transport in systems admitting a complete set of quasi-local conserved operators.
Recipe for the construction of integrals of motion
Let us now come back to the actual construction of quasi-local conserved operators. In order to find a generalization of the single particle occupation numbers to the interacting case, one should construct an extensive set of |Λ| functionally independent operators5 {I_α} which are quasi-local and satisfy [H, I_α] = 0. Since the spectrum of the many-body system is almost surely non-degenerate, it follows that such conserved quantities also satisfy [I_α, I_β] = 0. Their mutual commutativity implies that they form a commutative algebra. As we discussed above, the choice of a basis spanning this algebra is not at all unique. It is worth mentioning that if the operators I_α commute with H and span the algebra of conserved operators, then we can write H = Σ_α Ẽ_α I_α + Σ_{α<β} J_{αβ} I_α I_β + Σ_{α<β<γ} J_{αβγ} I_α I_β I_γ + ···, as we claimed above. The couplings J have a similar exponential decay as those in (10).
Here we present a specific construction of conserved operators, which fixes the arbitrariness in their definition in a unique way. Our construction starts from the idea that at weak interactions the I_α should be expected to be a perturbed version of the n_α. Thus, we look for a perturbative series in λ, I_α = n_α + Σ_{n≥1} λ^n I^(n)_α. For the further discussion it is useful to introduce some natural operator subspaces. I_α can be sought as an element of the space C of particle-conserving operators on the Hilbert space, and without loss of generality we may require it to be Hermitian. Since we will require [H, I_α] = 0 order by order, we distinguish within C the kernel K of the map [H_0, ·], i.e., the operators diagonal in the occupation numbers n_α, and the complementary subspace O spanned by the off-diagonal products of creation and annihilation operators, where the same ordering "<" as previously is chosen for the indices β, γ. 5 Functional independence means that no I_α can be expressed as a function of all the other I_β.
At the nth stage of perturbation theory one has to solve the equation [H_0, I^(n)_α] = −[U, I^(n−1)_α] (24). In order for this equation to have a solution one has to make sure that [U, I^(n−1)_α] ∈ O.6 If this is the case, I^(n)_α is determined up to an element of K. In Appendix A we show how to use this freedom to impose binarity of the spectrum of I_α, spec(I_α) = {0, 1}, i.e., I²_α = I_α. The latter allows these operators to be interpreted as generalized quasiparticle number operators of the interacting Fermi insulator.
Below we describe the construction of conserved I_α based on a simpler choice, however. In particular, we claim that if our Hamiltonian is time-reversal invariant, and thus has real matrix elements in the basis of single particle eigenstates, there is a unique solution of (24) with all corrections I^(n)_α ∈ O. This choice implies that the only term in the expansion of I_α that commutes with H_0 will be the very first one, n_α. To prove this at the perturbative level, we have to show that the diagonal matrix elements x(Ψ_0) = ⟨Ψ_0|[U, I^(n−1)_α]|Ψ_0⟩ vanish in every eigenstate Ψ_0 of H_0. Since the commutator of two Hermitian operators is anti-Hermitian, while U and I^(n−1)_α have real matrix elements, it follows that x(Ψ_0) is both real and purely imaginary, and thus vanishes indeed.
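The recursion [H_0, I^(n)] = −[U, I^(n−1)] can be illustrated on a toy single-particle analogue. The following hedged Python sketch (a random non-degenerate H_0 = diag(E) and a real symmetric perturbation V; all sizes and couplings are illustrative, and the matrix model replaces the many-body operator space) builds I order by order in the eigenbasis of H_0 and checks that each added order suppresses the residual ‖[H, I]‖ by a further factor of λ:

```python
import numpy as np

def conserved_operator(E, V, lam, order):
    """Build I = n_0 + sum_n lam^n I_n by solving [H0, I_n] = -[V, I_{n-1}]
    in the eigenbasis of H0 = diag(E). Off-diagonal elements are divided by
    energy differences; the diagonal of each correction is set to zero (the
    'corrections in O' choice made in the text)."""
    d = len(E)
    denom = E[:, None] - E[None, :]
    np.fill_diagonal(denom, 1.0)          # dummy value; the diagonal is zeroed anyway
    I = np.zeros((d, d)); I[0, 0] = 1.0   # zeroth order: the occupation 'n_0'
    term = I.copy()
    for n in range(1, order + 1):
        # for real symmetric V and term, the diagonal of the commutator
        # vanishes identically (the time-reversal argument of the text)
        rhs = -(V @ term - term @ V)
        term = rhs / denom
        np.fill_diagonal(term, 0.0)
        I += lam ** n * term
    return I

rng = np.random.default_rng(1)
E = rng.uniform(-1.0, 1.0, 6)
V = rng.normal(size=(6, 6)); V = (V + V.T) / 2.0
lam = 1e-4
H = np.diag(E) + lam * V
residuals = [np.linalg.norm(H @ I - I @ H)
             for I in (conserved_operator(E, V, lam, N) for N in (1, 2, 3))]
print(residuals)
```

Truncating at order N leaves exactly [H, I] = λ^{N+1}[V, I_N], so the printed residuals shrink with each order as long as λ is small compared to the level spacings, mirroring the convergence question studied below.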
From the above it follows that we can express the solution of Eq. (24) formally as I^(n)_α = −[H_0, ·]^{−1}[U, I^(n−1)_α] (25), which determines the successive terms in perturbation theory recursively; the inverse of [H_0, ·] is well defined on O. As we show in Appendix A, the recipe to construct a binary operator consists in modifying order by order the terms in the perturbative expansion, by adding to each I^(n)_α a diagonal operator K^(n)_α ∈ K which is determined by the previous orders in perturbation theory. It is plausible that the convergence for binary operators is essentially the same as for the operators constructed below. Based on the above perturbative argument, we make the following ansatz for the conserved quantities:6

I_α = n_α + Σ_{I,J} A^(α)_{I,J} O_{I,J},   (28)

6 Note that it is not obvious from the outset that this simple perturbative scheme should work and produce a local operator. Indeed, we construct perturbation theory for an extensive set of operators which are all null eigenvectors of [H_0, ·]. In principle one should thus use degenerate perturbation theory for all these operators simultaneously, which could turn out to require a non-local change of basis. The further steps below show, however, that this is not the case.
where the sets I, J run over all sets of indices {β_1 < ··· < β_N} of single particle states. Linear constraints on the coefficients A^(α)_{I,J} are found by imposing the conservation condition [I_α, H] = 0. The coefficients result as λ-dependent functions of the random energies ϵ_α and of the random matrix elements U_{αβ,γδ}, which vanish in the limit λ = 0. Since the resulting operators I_α are functionally independent for λ = 0, we expect the same to hold for any finite λ before the delocalization transition. Indeed, it is hard to see how a polynomial of the I_{β≠α} could contain only a single diagonal term n_α.
It is important to note that the expansion (28) should not be seen as an expansion in λ, but rather as an expansion in the support on which the operators O I,J act. A formal expansion in λ must always be re-summed locally when rare, but very small denominators are encountered, implying that the naive perturbative series (24), (25) has vanishing radius of convergence in λ [1]. In Appendix B we discuss a simple example where such a re-summation is necessary.
We point out that in any finite system the above ansatz, even though motivated by a perturbative consideration, uniquely determines a conserved operator even if perturbation theory does not converge despite re-summations. In that case I_α is defined as the finite (possibly exponentially large) sum (28) whose coefficients satisfy the linear system of Eq. (40) below. In a delocalized regime that operator will have support on the whole system.
Convergence criterion
We argue that for sufficiently small λ the expansion (28) converges in operator norm. The convergence holds in probability, that is, for any ϵ > 0,

P( Σ_{(I,J): r(I,J) ≥ R} |A^{(α)}_{I,J}| > ϵ ) → 0 as R → ∞,   (29)

where r(I, J) = max_{β∈I∪J} |r_α − r_β| is the maximal distance between the localization center of the state α and any of the states β that are acted upon by the operator O_{I,J}, and P is the probability measure over the disorder realizations. This ensures that the series defining the operator I_α converges almost surely, since ∥O_{I,J}∥ = 1 for all I, J.
The resulting operator I α is quasi-local in the sense defined above. As will become clear below, cf. Section 5.2, one can associate a length scale to the support of these operators like for the non-interacting case: truncating the expansion at that length scale yields operators that are conserved up to exponentially small corrections. This scale is essentially the localization length pertaining to the interacting problem.
The many-body delocalization transition is expected to happen at a sharply defined critical value λ = λ_c of the interaction strength, at which thermalization and ergodicity are restored. It is natural to expect that this coincides with the delocalization of physically defined conserved quantities, such as the time averages of local operators. There is also a sharply defined interaction strength λ = λ′_c at which our integrals I_α become non-local with probability one. Logically we cannot exclude that λ′_c is slightly smaller than λ_c (since it might be possible to find a prescription for conserved quantities that leads to more local operators than ours); however, we believe that within the approximations we are making, see Section 5, λ_c and λ′_c cannot be distinguished. We therefore use the notation λ_c interchangeably for both critical values.
To discuss the convergence (29), we map the problem of constructing conserved quantities into an equivalent problem of a particle hopping on a disordered lattice whose sites are labeled by the Fock indices (I, J). In particular, the exponential decay of the coefficients of I α corresponds to the localization of the particle on that lattice, in analogy with the non-interacting case (9). In turn, the delocalization of the particle corresponds to the divergence of the operator expansion (28).
Explicit construction of the integrals of motion
In this section we present the equations defining A I,J in (28) and discuss how to solve them. To illustrate the procedure, we first solve exactly a non-interacting case and then proceed with the interacting problem.
Non-interacting single-particle example
Consider a non-interacting one-dimensional disordered Hamiltonian:

H = Σ_i ϵ_i c†_i c_i + t Σ_i (c†_{i+1} c_i + c†_i c_{i+1}),   (30)

where ϵ_i are random energies and the hopping t is treated perturbatively. In this case, the ansatz (28) reduces to

I_k = Σ_{i,j} A^{(k)}_{ij} c†_i c_j.   (31)

Imposing [H, I_k] = 0, we obtain a set of linear equations for the coefficients A^{(k)}_{ij}, one set for each index k. If for identical indices we define

A^{(k)}_{ii} = δ_{ik},   (32)

then the equations for A^{(k)}_{ij} with i ≠ j can be compactly written as

(ϵ_i − ϵ_j) A^{(k)}_{ij} + t (A^{(k)}_{i−1,j} + A^{(k)}_{i+1,j} − A^{(k)}_{i,j−1} − A^{(k)}_{i,j+1}) = 0.   (33)

In view of these equations, one may re-interpret A^{(k)}_{ij} as the wave-function amplitudes of a particle on a square lattice with sites (i, j) and correlated on-site disorder E_{i,j} = ϵ_i − ϵ_j, subject to the constraint (32). An explicit expression for them can be given in terms of the eigenfunctions φ_α of the Anderson problem (30) as

A^{(k)}_{ij} = Σ_α ω^k_α φ_α(i) φ_α(j),   (34)

where the ω^k_α have to be determined from the constraint

Σ_α ω^k_α φ_α(i)² = δ_{ik}.   (35)

The exponential decay of the amplitudes (34) in the distance between the sites i, j follows from the localization in space of the eigenstates φ_α. It implies the convergence of the expansion (31). Therefore, the operators I_k are quasi-local conserved operators, similarly to the particle-number operators n_α in (9).
However, note that these two sets of operators differ; in particular, (31) does not contain any diagonal terms (i = j ≠ k). Using (34), (35) one can also explicitly check that the operators (31) do not coincide with the time average of the operators n_k(t).
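The construction (31)-(35) lends itself to a direct numerical check. The following sketch (our illustration, not part of the paper's appendices; chain length, disorder strength, and hopping are arbitrary choices) diagonalizes a small Anderson chain, solves the constraint (35) for the weights ω^k_α, and verifies that the resulting single-particle matrix commutes with H while satisfying the diagonal condition (32):

```python
import numpy as np

rng = np.random.default_rng(0)
L, t, W = 12, 0.2, 2.0           # sites, hopping, disorder strength (arbitrary)
k = L // 2                       # index of the conserved operator I_k

eps = W * rng.uniform(-1, 1, L)  # random on-site energies
H = np.diag(eps)                 # single-particle Hamiltonian matrix, Eq. (30)
for i in range(L - 1):
    H[i, i + 1] = H[i + 1, i] = t

E, phi = np.linalg.eigh(H)       # phi[:, a] = eigenfunction phi_a

# Constraint (35): sum_a w_a phi_a(i)^2 = delta_{ik}  ->  linear system M w = e_k
M = phi ** 2                     # M[i, a] = phi_a(i)^2
w = np.linalg.solve(M, np.eye(L)[k])

# Amplitudes (34): A_{ij} = sum_a w_a phi_a(i) phi_a(j)
A = phi @ np.diag(w) @ phi.T

comm_norm = np.linalg.norm(H @ A - A @ H)             # [H, I_k] = 0
diag_err = np.linalg.norm(np.diag(A) - np.eye(L)[k])  # constraint (32)
```

Since A is diagonal in the eigenbasis of H, the commutator vanishes identically; the linear system only fixes the diagonal entries, in line with the text.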
Interacting case
We now return to the interacting case. Since the operators I_α will contain strings of c†'s and c's of arbitrary length, we need a way to deal with large index sets. We introduce the following notation. For any index set X = (x_1 ··· x_N), we define the diagonal coefficients to be zero, except if X = {α}, cf. Eq. (36). Moreover, for any l, m (with l < m) and any single-particle labels γ, δ (with γ < δ), we define reduced and extended index sets: in general, the set X^{···}_{···} is obtained from X by eliminating the indices in the subscript and appending the ones in the superscript on the left. Note that the resulting sets are thus not ordered. Let σ[·] denote the sign of the permutation which orders the set. Finally, for index sets with |Y| = |Z|, we define modified, sign-weighted amplitudes. With this notation, the condition [H, I_α] = 0 is equivalent to the set of linear equations (40) for the A^{(α)}_{I,J}, where (I, J) = (α_1 ··· α_N, β_1 ··· β_N) and I ≠ J. The diagonal coefficients appearing on the right-hand side are defined in (36).
Topology of the operator lattice
Similarly to the previous single-particle example, Eq. (40) can be thought of as a hopping problem for a single particle on a lattice with sites given by the Fock indices (I, J) and local, correlated disorder E_{I,J} = Σ_{n=1}^N (ϵ_{α_n} − ϵ_{β_n}). The hopping is provided by the interaction U. The lattice topology, as determined by the interactions, is rather complicated. However, Eq. (40) has a clear hierarchical structure: the equations for index sets I, J of length N are coupled only to amplitudes with index sets of equal or shorter length. Therefore, the sites can be organized into generations, according to the length of their index sets. Hopping is possible only within the same generation (second term in Eq. (40)) or between consecutive ones (third term in Eq. (40)). In the latter case, the hopping is unidirectional, and thus the hopping problem is non-Hermitian.
The connectivity of the lattice is determined by the restrictions in energy, Eq. (7), and in space (particles need to be in the same or in an adjacent localization volume) of the matrix elements U_{αβ,γδ}. Hopping from a site (I, J) in generation N to a site (I′, J′) in generation N + 1 requires a particle (or hole) in a state α to scatter to the closest energy level γ above or below α, while another particle-hole pair of adjacent levels (β, δ) is created. The particle β can be chosen in N_loc ways, with N_loc given in (3), and there are two choices for γ and δ, respectively. Therefore, the number of Fock states (I′, J′) accessible from (I, J) via the decay of a given quasiparticle α is K = 4 N_loc (41). In contrast, hopping from (I, J) to a site of the same generation corresponds to processes where each member of a pair of particles (or holes) scatters to one of the two closest energy levels: there are 4 possible final states to which a given pair can decay. At this point we emphasize that we are not restricting ourselves to a specific many-body state or energy sector. Thus no assumption about the occupation of the levels or about the position of the Fermi level E_F is made. This gives the largest possible connectivity K. It will be reduced to an effective connectivity once we consider the restriction of the integrals I_α to subspaces of definite energy by means of a projector over many-body states, Ĩ_α = P I_α P. This projection alters the connectivity K, so as to reflect the higher probability for some processes to be Fermi-blocked, since the involved levels might already be occupied. This yields an effective connectivity K_eff, whose typical value depends both on the average energy density of the states E_a and on the average filling fraction of the band.
It is not difficult to see that if we use typical values for occupation numbers as given by the Fermi distribution (without assuming the underlying states to be thermal), repeating the above considerations at finite temperature T ≪ E F we obtain K eff ∼ T /δ ξ , in analogy to the analysis in [4].
Simplifications due to large connectivity, ξ ≫ a
The requirement of convergence of the operator expansion, Eq. (29), can be interpreted as a localization condition for the hopping problem on the disordered lattice of Fock indices. In order to investigate under which conditions localization occurs, we introduce the main approximation of this work: we neglect the second term of Eq. (40), that accounts for the hopping between sites in the same generation.
This approximation is motivated by the following consideration, assuming that the number of single particle levels per localization volume, and thus K, is large: for operator sites with a density of Fock indices per localization volume much smaller than the maximally possible ∼ K/ξ d , the connectivity within the same generation is much smaller than the connectivity K among sites in different generations (41). Note, however, that transitions from a given state (I, J) due to the second term of Eq. (40) can involve any pair of particles or holes in the same localization volume. Therefore, for operators with a high density of indices per localization volume those transitions are as numerous as the third class of terms in Eq. (40). Our approximation of dropping the second term is therefore not fully controlled at sufficiently high orders in perturbation theory where operators with a high density of indices per localization volume appear. We postpone further discussions of the subtleties related to this approximation to Section 10.
Once the second term in (40) is dropped, the equations reduce to recursive equations for increasing generations, with the initial condition A^{(α)}_{α_1,β_1} = δ_{α_1,β_1} δ_{α_1,α}. However, only some of the amplitudes A^{(α)}_{I,J} in (28) are determined through the recursion, while we approximate all other amplitudes to be zero: in generation N, the non-zero amplitudes correspond to sites (I, J) that can be reached from (α, α) via directed paths of length N − 1. Retaining only these sites simplifies the structure of the lattice of Fock indices very substantially, see Fig. 1. The amplitudes on these sites (I, J) can be written as the sum over all directed paths that connect them to the root (α, α) in Fig. 1(b):

A^{(α)}_{I,J} = Σ_{paths} ω_path,   (43)

where the path weights are of the form

ω_path = (−1)^{σ_path} Π_{k=1}^{N−1} (λ U_{(k)}) / E_{(k)},   (44)

with U_{(k)} the interaction matrix element of the k-th step and E_{(k)} = E_{I_k,J_k} the disorder energy of the site reached after k steps, in close analogy to forward approximations in single-particle problems [1,37-40].
The factor (−1)^{σ_path} takes into account the global fermionic sign associated with the path, arising from the sign factors in Eq. (40). However, we will see below that these signs are immaterial at the level of our approximation.
Note that the resulting expression for A^{(α)}_{I,J} is of order λ^{N−1}, that is, the lowest possible order in λ for amplitudes of operators involving 2N particle-hole indices. Indeed, at least N − 1 interactions are needed to create the corresponding excitations.
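The forward recursion can be made concrete on a toy example. The following sketch (our illustration; the small layered graph, couplings, and energies are arbitrary stand-ins for the actual operator lattice) computes amplitudes generation by generation via A(site) = (λ/E_site) Σ_parents U A(parent), and checks that the result agrees with the explicit sum over directed paths of weights Π_k λU_k/E_k:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
lam = 0.1
gens = [1, 3, 4, 5]              # number of sites per generation (toy layered DAG)

# Random data: site energies E and hop matrix elements U between layers
E = [rng.normal(size=n) for n in gens]
U = [rng.normal(size=(gens[g], gens[g + 1])) for g in range(len(gens) - 1)]

# Forward recursion: A(site) = (lam / E_site) * sum over parents of U * A(parent)
A = [np.zeros(n) for n in gens]
A[0][0] = 1.0                    # root amplitude at the first generation
for g in range(1, len(gens)):
    A[g] = (lam / E[g]) * (A[g - 1] @ U[g - 1])

# Cross-check: explicit sum over all directed paths of the weights
# omega_path = prod_k lam * U_k / E_k  (cf. the forward approximation)
A_paths = np.zeros(gens[-1])
for path in product(*[range(n) for n in gens]):
    w = 1.0
    for g in range(1, len(gens)):
        w *= lam * U[g - 1][path[g - 1], path[g]] / E[g][path[g]]
    A_paths[path[-1]] += w
```

The agreement is exact by construction; the point is that the recursion evaluates the exponentially large path sum in linear time, which is what makes the forward approximation tractable.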
Probability of resonances on the operator lattice
Let us discuss the configuration in real space of the indices (I, J) with |I| = N which are retained within the forward approximation, cf. Fig. 1(b). Since the amplitudes A^{(α)}_{I,J} are of order λ^{N−1} and the interaction is local, the indices satisfy r(I, J) ≤ N ξ: amplitudes involving single-particle states sufficiently far away from the localization center α must belong to sufficiently high generations. Within the approximations made, the convergence criterion (29) can then be restated in terms of the generation number N, for arbitrary ϵ > 0, as Eq. (45). A sufficient condition for Eq. (45) to hold is that, for some z < 1 and for N* sufficiently big, the bound (46) holds. The left-hand side of Eq. (46) can be interpreted as the probability that no resonance⁷ occurs at large distance from the unperturbed localization center (α, α). Whenever it holds, it implies the quasi-locality of the operators I_α within the forward approximation: indeed, Eq. (46) implies that the first appearance of operators c_β, c†_β in I_α, with |r_β − r_α| ≈ N ξ and N ≫ 1, is with high probability exponentially small in N.
In the following we will show that Eq. (46) holds in a regime of small couplings λ; the critical value λ c at which (46) ceases to hold gives an estimate for the radius of convergence of the operator series, and thus for the boundary of the many-body localized phase.
Similarities and differences with localization problems on trees
The similarity to a one-particle problem allows us to revisit analogies and differences between many-body localization and single-particle problems on lattices which have some features of a Cayley tree [41] (see also [42,43] and references therein).

Fig. 2 (caption). Hoppings on the lattice correspond to vertices U_{α_1α_2,β_1β_2} in the graph. The energy E_{I,J} of an intermediate state is the sum of the energy differences E_{α_1α_2,β_1β_2} = ϵ_{α_1} + ϵ_{α_2} − ϵ_{β_1} − ϵ_{β_2} associated with all preceding scatterings. The three excitations emanating from a vertex are associated to the outgoing legs as follows: the excitation with energy level adjacent to the incoming one is associated with the central leg. The upper and lower leg correspond to the particle and the hole, respectively, of the additionally created pair. The condition (7) requires them to have an energy difference of the order of δ_ξ.

Indeed, in the simplified lattice of Fig. 1(b), the number of sites at distance N from the localization center (α, α) grows as K^N with K given in (41). This exponential growth is analogous to the growth on trees and other hierarchical lattices, see e.g. [44]. However, we caution the reader that, despite superficial similarities, the calculation we will perform does not reduce to studying an equivalent single-particle problem on a Cayley tree as in [45]. Indeed, in the latter problem there is a unique path leading from the root to a given site and thus there are no loops. In contrast, in the operator lattice there are typically exponentially many diagrams (or effective paths) leading to a given site, and thus plenty of loops, similarly as in finite-dimensional lattices. Nevertheless, it is usually the case that among those many paths only very few dominate the sum over all paths, an observation we will heavily rely on in the sequel.
Our present problem also differs from the study of the decay of excitations in a zero-dimensional quantum dot, as considered in [41]. There, no genuine delocalization can take place due to the finite available phase space. Instead, it is essential that our operator expansion leave the localization volume of the initial state α, for delocalization to be possible beyond a critical interaction strength λ_c.
Connection with many-body diagrammatic perturbation theory
Insight into the meaning of the forward approximation at the level of the many-body system is given by a diagrammatic representation of the paths, as shown in Fig. 2.

Fig. 3 (caption). Loops in the many-body lattice corresponding to different processes with the same final state, and the corresponding ordered graphs. The graphs differ only in the order in which the interactions U_1, U_2, U_3 act. The weights of such paths are strongly correlated: they are all proportional to the same product of matrix elements, U_1 U_2 U_3, and have highly correlated denominators. The sum over all these ordered graphs constitutes a diagram.
To any path of length N in the operator lattice we uniquely associate an ordered graph with N vertices. These graphs have two main branches representing the decay of the operators c α and c † α of the initial operator n α . Directed paths of length N on the lattice translate into graphs having the geometry of a tree, with a root and N nodes corresponding to the creation of particle-hole pairs. The intermediate states of the graph correspond to the sites (I, J) along the path in the operator lattice, their energy being E I,J . Note that the order of the sites along the path fixes the order of the interaction vertices in the graph.
Such graphs can be grouped into diagrams: members of the same diagram only differ in the ordering of vertices, while sharing the same geometry and labeling of the legs; they are obviously highly correlated among each other. An example is shown in Fig. 3, where all three paths connect the state (I, J) = (α 2 β 2 β 1 α 3 , γ 2 γ 1 δ 3 γ 3 ) to the root (α, α), and involve the same interaction matrix elements.
Such correlated paths exist for all diagrams with branchings (i.e., vertices where more than one of the outgoing excitations undergo further scattering). The order of the subsequent interactions on different branches can be permuted. This corresponds to different paths on the lattice and different ordered graphs, respectively.
Obviously we should sum over all possible vertex order permutations of branched diagrams with fixed geometry and labeling of legs.
Singly branched diagrams
Consider the sum of the energy denominators⁸ of the three path weights in the example of Fig. 3. It is immediate to check that this sum factorizes, so that the sum over the three path weights in Fig. 3 can be written as a single term ω_Γ, cf. Eqs. (47)-(49). Here E_i is the energy difference between out- and in-going states at the vertex i, and η_i is the random variable associated with the vertex i. More precisely, ω_Γ is the product of two weights of the form (44), describing the independent decay of the particle c†_α and the hole c_α, respectively. It can easily be checked by induction that this factorization generalizes to an arbitrary number of interactions in such singly branched diagrams: for any of them, a weight of the form (49) is obtained by summing over all the path weights. We refer to ω_Γ as the weight of the effective path associated with the diagram, and denote the latter by Γ.
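The factorization of the summed orderings into independent branch weights rests on a classical shuffle identity for products of inverse partial sums: summing 1/Π(partial sums) over all interleavings of two branches equals the product of the two branch weights. A numerical check (our illustration; branch lengths and energies are arbitrary, and positive increments are used to avoid accidental small denominators):

```python
import numpy as np
from itertools import combinations

def branch_weight(energies):
    """Product of inverse partial sums along a single decay branch."""
    s = np.cumsum(energies)
    return 1.0 / np.prod(s)

def summed_orderings(a, b):
    """Sum of path weights over all interleavings of branches a and b."""
    n, m = len(a), len(b)
    total = 0.0
    for pos_a in combinations(range(n + m), n):  # slots taken by branch a
        seq, ia, ib = [], 0, 0
        for slot in range(n + m):
            if slot in pos_a:
                seq.append(a[ia]); ia += 1
            else:
                seq.append(b[ib]); ib += 1
        total += branch_weight(seq)
    return total

rng = np.random.default_rng(2)
a = rng.uniform(0.5, 2.0, size=3)   # energy increments, branch 1
b = rng.uniform(0.5, 2.0, size=2)   # energy increments, branch 2
lhs = summed_orderings(a, b)
rhs = branch_weight(a) * branch_weight(b)
```

For two single-step branches this is the familiar identity 1/(a(a+b)) + 1/(b(a+b)) = 1/(ab); the induction mentioned in the text extends it to arbitrary branch lengths.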
Multiply branched diagrams
Let us now discuss further branchings in the sub-diagrams describing the independent decays of the particle c†_α and the hole c_α. Consider a multi-branched decay of the single particle c†_α, as shown in Fig. 4(a). There the particles γ and δ, which are produced in the first scattering, decay further through n vertices U_{i=1,...,n} and the vertex Ũ, respectively. The possible orderings of this diagram correspond to n + 1 correlated paths, which differ by the relative position of the vertex Ũ with respect to the U_i. Their sum does not simply factorize, but it can nevertheless be written in compact form through the integral representation (51), where ω⁻_i = ω_i − iϵ. Indeed, the sum Σ′ (multiplied by the matrix elements of the corresponding vertices) must be equal to the retarded Green function associated with the independent, parallel decay of the particle γ and the hole δ, computed in the forward-scattering approximation and at energy E_0. For loop-free graphs like the one of Fig. 4(a), the decay processes of the particle γ and the hole δ are independent. In the time domain, the Green function of their joint decay is the product of the individual Green functions, which leads to the convolution (51) in frequency space.
The above formula is rather natural when related to standard many-body perturbation theory. Indeed, after the summation over orderings of vertices, the diagrams of a fixed geometry are in direct correspondence with the diagrams obtained by BAA in the perturbative expansion of the Keldysh self-energy in the imaginary self-consistent Born approximation. The latter neglects the renormalization of the real part of the self-energy and retains only processes where at each vertex an additional particle-hole pair is created. In our formalism, this corresponds to the directed paths jumping from generation to generation, see also the discussion in Appendix B. Not surprisingly, the statistical analysis of this class of diagrams will give an estimate of the radius of convergence for the operator expansion (28) which is similar to the criterion for the breakdown of stability of the localized phase found by BAA, or to its extension to infinite temperature [46]. Our further analysis is also very similar to the calculation in Ref. [47], but differs in some points, which will be indicated.
The expression (51) for a branched diagram is a random variable whose probability distribution is hard to analyze. However, the analytic structure of the integrand can be exploited to rewrite Σ′ as a sum over a much smaller number of terms than the number of orderings in Eq. (50). After performing the integral over ω_2 in Eq. (51), we find a number of poles in the complex plane of ω_1. Using the residue theorem, we can write (51) as the sum over residues of the poles in the half plane which contains fewer poles. In the particular example considered, closing the contour in the upper half plane yields the algebraic identity (52). The two terms in (52) have a similar structure as the denominators in the original path weight (44). For the considered sub-diagram, the sum over all the n + 1 orderings of vertices could thus be reduced to the sum of only two "effective path" weights.
General branched diagrams
A convolution formula analogous to Eq. (51) can be written for any branched diagram: to each branching one associates an integral of the form (51) with one auxiliary frequency per decaying branch, as well as an energy conserving δ-function for the vertex (see Appendix C for an example). Then one eliminates the δ-functions by integrating over the frequency variable, that occurs most often in the denominators. Using the residue theorem, the remaining integrals can be carried out, and the sum over all orderings of a diagram with fixed geometry can be expressed as a much smaller sum of weights of effective paths, as in the example above. The number of such terms is given by the product of the number of residues obtained for each auxiliary frequency.
The number of effective paths associated to a general diagram depends on its structure; to obtain an upper bound on this number, consider the diagram with the maximal number of branchings at fixed order N, see Fig. 4(b). In Appendix C, we show that in this case the number of effective paths scales as exp[(log 3)(log N)² + O(log N log log N)]. This upper bound implies that the number of effective paths associated to an arbitrary diagram is always sub-exponential in N.
Summing diagrams
In this section we show that in the localized region, at any given order of the expansion, a few terms dominate the operator sum. The term with the largest coefficient is in turn dominated by the maximal diagram contributing to it.
Summing over diagrams and their effective paths
Let D_{I,J} denote the set of all diagrams with final state (I, J), each diagram being characterized by its geometry and the labeling of its segments. For any diagram d ∈ D_{I,J}, let P(d) be the set of effective path weights ω_Γ associated with it, following the procedure described in the previous section. The corresponding amplitude on the operator lattice can then be written as the sum over diagrams of the sums S(d) of their effective path weights. As we shall prove in the following section, the ω_Γ are random variables with fat-tailed distributions. The effective paths associated with a diagram d ∈ D_{I,J} all involve the same set of energies in their denominators and are thus correlated. Nevertheless, we argue that the tail of the distribution of their sum, S(d), is still very similar to the tail distribution of a single effective path, since in the case of a large deviation, S(d) is very likely to be dominated by the effective path with the biggest weight. Indeed, consider a rare set of energies E_i which produces an atypically large value of S(d). There is typically one single effective path for which all denominators become simultaneously small, while the combinations of energies in the denominators of other effective paths are very likely to be suboptimal for a fraction of the denominators. Therefore, with high probability, S(d) will approximately equal the maximum over all effective path weights. The sets of energies E_i that optimize distinct effective paths are typically different, and thus these rare events can be approximated as independent of each other. Hence, the tail of the distribution of S(d) is enhanced with respect to the tail of a single path weight by a factor |P(d)|. We shall see, however, that due to the sub-exponential scaling of the number of effective paths, this enhancement is immaterial for the estimate of the radius of convergence of the operator series.
Inspecting the explicit examples of Eq. (52) or Eq. (C.3), one can see that there exist energy realizations for which cancellations occur between effective paths with significant weight. This happens when the single path weights are individually big, but Ẽ is much smaller than all the other energy variables E i , which leads to a cancellation between effective paths. However, such configurations require an atypically small Ẽ and do not occur with significant probability. Therefore the suppression of the tail distribution due to such effects is hardly relevant.
Correlations between effective path weights of different diagrams are even weaker than those above, since they share at most a fraction of all E_i. Therefore we may approximate rare deviations of S(d) and S(d′) as independent if d ≠ d′. Given that the S(d) are themselves fat-tailed random variables, the sum over diagrams is dominated by the largest term. Therefore, the full operator amplitude A^{(α)}_{I,J} is likely to be dominated by one single effective path, cf. Eq. (55), where on the right-hand side the maximum is taken over all effective paths from (α, α) to (I, J). As a consequence, the tail of the probability distribution is approximately that of a single effective path weight, enhanced by a factor P̄(d), the average number of effective paths contributing to a diagram.
Summing over amplitudes: probability of resonances
Similarly to the effective path weights of different diagrams, the amplitudes A^{(α)}_{I,J} associated with different sites (I, J) are weakly correlated, and we treat them as independent random variables. Let us now consider the probability in (46). Here we approximate the probability to satisfy the condition at each generation as independent of the previous generations. As follows from (55) and from the fact that the effective path weights ω_Γ have fat tails, the amplitudes A^{(α)}_{I,J} themselves have a fat-tailed distribution. Their sum is therefore dominated by the maximal amplitude, and each factor on the right-hand side of (56) can be computed as in (57). Using (55), the exponent in (57) can be rewritten as in (58). The probability in (58) is a large-deviation probability: indeed, the weights ω_Γ of effective paths are of order O(λ^N); in order for ω_Γ to be bigger than z^N (with z arbitrarily close to 1), this decay factor must be compensated by an atypical smallness of the energy denominators. We devote the following section to the computation of the probability of these large-deviation events. The calculation will reveal that, for λ sufficiently small, the probability decays exponentially with N. This decay competes with the exponential growth of the total number of effective paths of length N, which we estimate in Section 8 below. The competition between these two terms leads to a transition at a critical value of λ, which we determine in Section 9.
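The competition between the exponentially decaying tail probability and the K^N growth of the number of paths can be caricatured by a directed-polymer-like max-recursion on a tree of branching number K, solved by population dynamics (our schematic illustration, not the calculation of Sections 8-9; all parameters are arbitrary):

```python
import numpy as np

def growth_rate(lam, K=3, depth=30, pool=2000, seed=4):
    """Typical growth rate (1/N) log(max path weight) on a branching-K tree,
    estimated by population dynamics for the max-recursion
    M_N = max over K children of (lam / |E|) * M_{N-1}."""
    rng = np.random.default_rng(seed)
    logM = np.zeros(pool)
    for _ in range(depth):
        # each pool member gets K children with fresh random denominators
        children = np.log(lam / np.abs(rng.normal(size=(pool, K))))
        parents = logM[rng.integers(0, pool, size=(pool, K))]
        logM = (children + parents).max(axis=1)
    return np.mean(logM) / depth

rate_small = growth_rate(lam=0.01)  # deep in the convergent ("localized") regime
rate_large = growth_rate(lam=2.0)   # resonances proliferate with depth
```

A negative rate means the maximal path weight decays exponentially with generation (convergent expansion); a positive rate signals proliferating resonances. The crossover moves to smaller λ as K grows, in qualitative agreement with the competition described in the text.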
Large deviations of paths with correlated denominators
In the previous section we argued that the large deviations of operator amplitudes are essentially determined by the large deviations of effective path weights. The weight of any effective path is the products of two terms, describing the decay of c † α and c α , respectively. In each of those terms (cf. (52) e.g.), the functional dependence on the E i is similar to that in the original path weights (44). We will first discuss the latter and then show that general effective paths behave essentially identically.
Because of the energy restrictions (7), the energy differences E_{αβ,γδ}/δ_ξ are random variables of order O(1). For simplicity, we take them as independent Gaussian random variables with zero mean and unit variance. The denominators in (44) are then partial sums of such energies, cf. Eq. (60). In path weights of the form (60) we are mostly interested in characterizing the distribution of the product of denominators. The numerator behaves as ∼ (λη_typ)^{N−1}, with η_typ = exp[⟨log |η|⟩] = 1/e, and we neglect the Gaussian fluctuations of its logarithm.
The fact that the denominators in (60) are correlated distinguishes the many-body problem from single particle localization. These correlations are a feature that any perturbative treatment of MBL has to deal with, and it is thus important to develop a method to calculate the large deviations in this case.
The distribution function P_N(y) of the logarithm of the product of denominators can be obtained from its generating function by inverse Laplace transform, where B is the Bromwich path in the complex k-plane.
In the present case, the relevant y scales linearly with N, and thus we define ỹ = y/N; the corresponding exponent has a well-defined limit φ(ỹ, k) for large N. In that limit, the integral over k can be done by a saddle-point approximation. The contour has to be deformed to pass parallel to the imaginary axis through k* = k*(ỹ), which satisfies the saddle-point condition. Large deviations correspond to ỹ = O(1). In the case of parametrically small interaction strength λ (which is relevant in the case of large connectivity K) we will see that we can restrict our attention to ỹ ≫ 1, see Section 9. For large values of ỹ, the saddle point tends to k* → −1.
The computation of the generating function G_N is given in Appendix D. Here it suffices to say that the recursive structure of the denominators s_i lends itself naturally to a transfer-matrix expression for G_N, which grows as the N-th power of the largest eigenvalue.
The final result for the exponent at the saddle point, valid for ỹ ≫ 1, yields the large-deviation probability (68), where C contains only negligible logarithmic corrections to the exponent, and the rate function F is defined in (69).
Comparison between correlated and uncorrelated denominators
It is interesting to compare the large-deviation distribution (68) with the tails of the distribution of the analogous random variable built from uncorrelated denominators, where the X_i are i.i.d. Gaussian random variables with zero mean and unit variance. As derived in Appendix D, at leading order in N, and up to a correction F → F − log 2/(2ỹ) + O(1/ỹ²), both have the same form (68). Physically, this result can be understood as follows. By restricting to ỹ ≫ 1, we are concentrating on very rare realizations of Y_N. Those are insensitive to the details of the structure of the denominators. Indeed, atypically big values of objects like (Π_{i=1}^N s_i)^{−1} arise from confining the random walk (s_1, ··· , s_N) to the vicinity of the origin. This boils down to computing the probability that s_i is small conditioned on the fact that s_{i−1} was small. To leading order in the typical smallness of such denominators, one obtains the same result as by minimizing N denominators independently. The leading correction with respect to the case of i.i.d. denominators consists in a small suppression of the tail, since it is slightly less probable to encounter small denominators when they are correlated.
The above reasoning can be extended to more general weights ω_Γ associated with effective paths. Indeed, the corresponding denominators are still products of single energies or partial sums (see Eq. (52) or Eq. (C.3)). In the limit of very large deviations (ỹ ≫ 1) they all share the same tail distribution (68), the only relevant parameter being the total number N of denominators. Therefore, approximating the numerator in ω_Γ by its typical value (λη_typ)^N and using (68), we finally obtain the corresponding tail estimate, with F given in (69).
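The comparison between correlated (random-walk) and i.i.d. denominators can be probed by direct sampling of both tails (our toy check; N, the threshold ỹ, and sample sizes are arbitrary, and ỹ here is only moderately large):

```python
import numpy as np

rng = np.random.default_rng(5)
N, n_samples = 8, 200000

X = rng.normal(size=(n_samples, N))
s_corr = np.abs(np.cumsum(X, axis=1))            # correlated: |partial sums| of a random walk
s_iid = np.abs(rng.normal(size=(n_samples, N)))  # independent denominators

# y = -(1/N) log prod |s_i|; large y means atypically small denominators
y_corr = -np.log(s_corr).sum(axis=1) / N
y_iid = -np.log(s_iid).sum(axis=1) / N

y0 = 1.5   # a moderately rare deviation threshold
p_corr = np.mean(y_corr > y0)
p_iid = np.mean(y_iid > y0)
```

Both tail probabilities are of the same order, with the correlated case slightly suppressed, consistent with the correction F → F − log 2/(2ỹ) discussed above.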
Justification of neglecting interaction vertices with equal indices
We recall that we have neglected interaction terms U_{αβ,γδ} where two or more indices are identical. This significantly simplifies the combinatorics of counting diagrams. Let us now give an a posteriori justification for this approximation, by showing that such terms would make contributions which are down by factors of 1/K. Consider the various scattering processes with one pair of equal indices among the four legs of a vertex, whereby we restrict to one ingoing and three outgoing particles. Consider first the scattering α → β with the simultaneous creation of a pair (γ, α). The constraints |ϵ_α − ϵ_γ| < δ_ξ, |ϵ_α − ϵ_β| < δ_ξ imply that all levels have to lie within δ_ξ of each other. The phase space for such events is smaller by a factor of 1/K with respect to generic scattering processes where γ is unrestricted.
The second case is more subtle. It consists in a scattering α → β from a particle γ, which remains in place. If this is to be a resonant contribution, one needs the energy increment ΔE of the vertex to satisfy |ΔE| = |ϵ_α − ϵ_β| ≲ δ_ξ/K. In a scattering where γ switches to a neighboring state δ, with |ϵ_γ − ϵ_δ| ∼ δ_ξ, one can optimize α, β among the K different choices so as to make ΔE of order δ_ξ/K. However, if γ remains in place, the optimum over the K choices for α, β will yield a parametrically bigger ΔE = ϵ_α − ϵ_β, because of the repulsion between the neighboring levels α, β. Therefore such processes are systematically much less resonant than processes involving four distinct levels.⁹
Combinatorics of diagrams
We now estimate the total number of diagrams N_N at a given order N, cf. Eq. (59). For simplicity, we restrict here to the case of spatial dimension d = 1.
Consider any amplitude A^(α)_{I,J} with index set (I, J) = (α_1 ⋯ α_N, β_1 ⋯ β_N). The localization centers r_{α_i}, r_{β_i}, cf. Eq. (6), of the single particle indices are distributed over a certain number of localization volumina of length ξ around r_α, with a given number of single particle indices per localization length. Due to the energy restrictions imposed on the interactions, particles and holes belonging to the same localization volume are organized in pairs: members of a pair are produced in the same scattering process, and have an energy difference of order δ_ξ.
Due to the fact that the interaction is local, only particle-hole pairs in nearby localization volumina can be involved in the same interaction vertex: this imposes some constraints on the geometry of the diagrams representing the scattering processes with (I, J) as final state. For example, states (I, J) having only one particle-hole pair per localization length must be associated to diagrams with no branchings in the decays of c†_α and c_α, since the particle-hole pairs must be created in a fixed order dictated by their spatial sequence, and thus no permutation is possible. In contrast, final states with several pairs per localization length can be reached by a variety of diagrams.
In the following, we construct the subset of diagrams corresponding to scattering processes with a "necklace structure", in which the particle-hole pairs are created in a sequence of n groups of m_i (i = 1, …, n) pairs, each group belonging to a single localization volume. This furnishes a lower bound on the number of all diagrams. Note that m_i is bounded by the maximal number of particle-hole pairs per localization volume (N_loc = K/4), and ∑_{i=1}^n m_i = N. Due to locality, pairs belonging to the ith and (i + 1)th group belong to neighboring localization volumina in real space; pairs belonging to two groups i and j ∉ {i − 1, i + 1} might belong to the same localization volume.
This construction is done in two steps: first, for every group i we build all possible subdiagrams with final indices corresponding to the indices of the m i pairs, as illustrated in Fig. 5. In a second step, we connect sub-diagrams of neighboring groups by a single scattering vertex. We thus obtain a global necklace diagram, and count how many different diagrams with this structure there are. The counting is similar to Ref. [47], but here we include diagrams corresponding to final states with a non-uniform density of particle and hole indices per localization length, since these have a larger abundance.
A central ingredient for the combinatorics is the number of all possible geometries of diagrams with m interactions in a given localization volume, see Fig. 5. We denote this number by T_m. It equals the number of trees with one root (of connectivity 2) and m nodes (of connectivity 4), which we derive in Appendix E; asymptotically it grows like (27/4)^m. (Footnote 9: this argument neglects a possible dependence of the localization length ξ, and hence of the level spacing δ_ξ, on λ. This in turn might induce a small shift of λ_c. However, since the subsequent analysis boils down to dropping the same terms as we have argued above, this shift is expected to be a 1/K correction.)

Following the reasoning explained in Fig. 5, we find the number of necklace diagrams associated with fixed groups of m_i pairs, given in Eq. (73). The origin of the various factors is explained in detail in Fig. 5: the factor m_i counts the number of pairs which are created subsequently to the first pair entering the volume associated to the group i. One of those m_i pairs belongs to the adjacent localization volume and creates the subsequent cascade of pair creations there. The second factor describes the choice of two levels (the level closest in energy above or below) to which an incoming quasiparticle may scatter at a vertex. The factorial term comes from the choice of assigning the m_i pairs to the final legs of a given tree diagram in the localization volume of group i.

Consider first the case in which only a single group i of pairs occupies a given localization volume. The number of choices of the m_i particle-hole pairs is then given by Eq. (74). Indeed, a configuration of m_i pairs of (disjoint) adjacent levels, and the remaining N_loc − 2m_i untouched levels in the same localization volume, form a set of N_loc − m_i objects, out of which m_i are pairs. This explains the binomial factor. For each pair, one can choose how to assign the two levels to particle and hole, respectively. This yields the factor 2^{m_i}.
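The binomial count for the pair placements can be checked by brute force for small N_loc. The sketch below (our own illustration, not from the paper) enumerates all placements of m disjoint pairs of adjacent levels among N_loc levels and compares with binom(N_loc − m, m) · 2^m, where the factor 2^m assigns particle/hole within each pair.

```python
from itertools import combinations
from math import comb

def count_pair_configs(n_loc, m):
    """Brute-force count: m disjoint pairs of adjacent levels among n_loc levels,
    times 2^m particle/hole assignments per pair."""
    pairs = [(i, i + 1) for i in range(n_loc - 1)]  # candidate adjacent pairs
    count = 0
    for choice in combinations(pairs, m):
        levels = [x for p in choice for x in p]
        if len(set(levels)) == 2 * m:  # pairs must be mutually disjoint
            count += 1
    return count * 2 ** m

def formula(n_loc, m):
    """binom(n_loc - m, m) * 2^m, as in the text."""
    return comb(n_loc - m, m) * 2 ** m

checks = [(8, 2), (10, 3), (12, 4)]
results = [(count_pair_configs(n, m), formula(n, m)) for n, m in checks]
```

For example, for N_loc = 8 and m = 2 both counts give binom(6, 2) · 2² = 60.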
As we will see below, the relevant m_i are of order O(1) ≪ K. We therefore use the approximation Eq. (75). Note that the necklace structure will in general fold back and forth in real space, so that several groups will get to lie in the same volume. Nevertheless, the above approximation remains good as long as the total number of pairs created in a given localization volume is significantly smaller than K.
Combining Eqs. (73)–(75), the total number of necklace diagrams is given by Eq. (76), where the average number of effective paths per diagram, P(d), scales sub-exponentially with N. The factors of 2 arise from the freedom of each group to scatter to the left or the right of the preceding group, as long as there is still significant phase space in the corresponding localization volumina. The correction due to the finiteness of K ≫ 1 is small and was thus neglected.

We now determine the distribution of group sizes {m_i} which dominates the sum (76) by a saddle point analysis. The Lagrange multiplier μ is fixed by the constraint ∑_i m_i = N. The resulting saddle point solution (82) is plotted in Fig. 6(a). The probability that a given pair is created in a scattering process involving a total of m pairs in the same localization volume is plotted in Fig. 6(b). We see that most pairs are created together with a few more pairs within the same localization volume.

Plugging (82) into the saddle point for N_N, we find the number of diagrams to grow like Eq. (83) (dropping pre-exponential factors). This result is based on the approximation that we only allow for diagrams with a necklace structure, where groups of m_i pairs are connected by a single scattering between subsequent localization volumina. Performing the calculation without this restriction is difficult, since it is less easy to control the spatial constraints. However, we can easily obtain an upper bound by realizing that all possible diagrams consist of all geometrically possible labellings of trees of size N. The number of trees grows as (27/4)^N. For each label one has roughly 3K possibilities, as the pair must lie in a localization volume adjacent to, or identical with, the one of the pair preceding it on the tree. This yields the simple upper bound (84), whose growth factor is only about a factor of 2 bigger than the much more conservative estimate (83). Let us thus write N_N ∼ (CK)^N, Eq. (85), with 10.6 < C < 20.25.
Effect of Fermi blocking
The above counting is still not entirely complete. Indeed, eventually the operators we have constructed should act on some many-body states, and get annihilated when attempting to create particles on occupied states or holes on already empty states. In an infinite temperature state at filling fraction ν, each particle-hole creation operator has a chance to annihilate the state with probability 1 − ν(1 − ν); in other words, only a fraction [ν(1 − ν)]^N of all operators will not annihilate a typical infinite temperature state. One should thus modify the number of relevant diagrams accordingly, cf. Eq. (87). In the next section we use this result to determine the radius of convergence of the operator series. Similar considerations apply to finite temperature, as we will discuss below.
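A quick sanity check of the blocking factor (our sketch; occupations are drawn i.i.d. with density ν, and ν = 0.3 is an illustrative value): an attempted creation of a particle on level a and a hole on level b survives only if a is empty and b is occupied, which happens with probability ν(1 − ν).

```python
import random

random.seed(1)
nu = 0.3           # filling fraction (illustrative value)
trials = 200_000
survive = 0
for _ in range(trials):
    a_occupied = random.random() < nu   # level receiving the particle
    b_occupied = random.random() < nu   # level receiving the hole
    if (not a_occupied) and b_occupied:
        survive += 1
rate = survive / trials   # should approach nu * (1 - nu) = 0.21
```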
Structure of the dominant operator terms
Our result differs from the similar analysis in Ref. [47]. The main difference consists in our assumption that the sum of diagrams that add up to the amplitude of a given operator O I,J is dominated by the biggest term (provided the considered amplitude is among the largest ones at that order). In contrast, the authors of [47] assumed that the exponentially many diagrams have comparable amplitudes, but random signs, and applied the central limit theorem to the sum. Moreover, we allow for fluctuations of the number of pairs generated in each localization volume instead of imposing a homogeneous spatial density. We find that in the restricted set of necklace diagrams the optimal distribution of group sizes m i s is peaked at values of order O(1), but still clearly larger than one. Upon folding of the necklace, the number of pairs per localization volume will become even more significantly larger than 1. Thus we see that multiple scattering processes within a localization volume significantly enhance the delocalization tendency. This shows that the many-body problem is genuinely different from an effective one-body problem, in which a simple excitation would propagate nearly ballistically, by shedding one particle-hole excitation in every localization volume.
Estimate of the radius of convergence
We now have all the ingredients to estimate the probability of resonances at generation N , in order to prove that for λ sufficiently small there are no delocalizing resonances and (46) holds true.
Consider the probability in expression (58). Using (71), we estimate it via the large deviation form (88); note that the large deviation result applies since x ≥ log(z/(λη_typ)) ≫ 1. Approximating the integral by the value of the integrand at the extremum, setting z = 1 and neglecting sub-exponential terms in N, we obtain (89). Substitution of (89) and (87) into (58) yields (90), with G(λ, K) defined in (91). Taking into account (56) and (57), we finally obtain (92). If G(λ, K) < 1, then, for N* sufficiently big, each of the factors in (92) is arbitrarily close to 1. Therefore, their product converges to 1 in the limit N* → ∞ (see also [48] for a similar reasoning). This allows us to conclude that, for all values of λ for which G(λ, K) < 1 holds, (46) holds, too, and the series in operator space (28) converges to a quasi-local operator. In this regime, the excitation of the single particle level α, localized at r_α, is very unlikely to create a distant disturbance at r_β with large L = |r_β − r_α|; its probability tends to zero exponentially as L → ∞: there is no diffusion at small λ.
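With the explicit threshold quoted in the Conclusion, λ_c = √(2π)/(C ν(1−ν) 2e K log K), the convergence criterion can be evaluated numerically. The sketch below (our illustration; the function name is ours) tabulates λ_c over the quoted range of C and shows that the threshold shrinks as 1/(K log K).

```python
import math

def lambda_c(K, nu=0.5, C=10.6):
    """Critical reduced interaction strength, sqrt(2*pi)/(C*nu*(1-nu)*2*e*K*log K)."""
    return math.sqrt(2 * math.pi) / (C * nu * (1 - nu) * 2 * math.e * K * math.log(K))

Ks = [10, 100, 1000]
lo = [lambda_c(K, C=20.25) for K in Ks]   # threshold with the upper bound on C
hi = [lambda_c(K, C=10.6) for K in Ks]    # threshold with the lower bound on C
# lambda_c scales exactly as 1/(K log K) at fixed C, nu.
```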
Comparison with a single particle on the Bethe lattice
It is interesting to note that the delocalization threshold (93) looks identical to the critical ratio between hopping and disorder strength for a single particle problem on a Bethe lattice (see Eq. (5.8) in [45]) with effective connectivity K_eff = ν(1 − ν)(C/√(2π))K, which is significantly bigger than the connectivity associated with each vertex, ν(1 − ν)K. This reflects the fact that in the many-body problem the same final state can be reached through many different decay processes. The results are nevertheless similar, because both problems are dominated by very few resonant paths, whereby the large local connectivity in the many-body problem ensures that different resonant paths are likely to be uncorrelated, even if they lead to the same final state.
Possible implications for delocalization in higher dimensions
According to the above calculation, in the dominating decaying processes only groups of O(1) particle-hole pairs are created at the same time in a localization volume. This suggests that the necklace-type diagrams are diffusing back and forth a lot. This contrasts with the model of BAA, where the hopping strength between adjacent volumina was assumed to be parametrically smaller than λ, which favored the particle-hole creation cascade to fully explore a localization volume before moving on to the next volume. The latter led them to conjecture a critical exponent for the localization length in higher dimensions by relating the decay processes of single particle excitations to self-avoiding random walks. This scenario hardly holds in our model, as the optimal processes are not of this kind.
Finite temperature
So far we have been discussing the convergence of the expansion of integrals of motion in the forward approximation. If the expansion converges, we have succeeded in constructing a complete set of quasi-local conserved quantities which entail the absence of transport in whatever state the system is, in particular at any temperature, including the limit T → ∞. Note again, that the latter limit is meaningful because we work on a lattice on which the energy density is bounded.
An interesting question arises when we ask about transport at finite temperature, and the possibility of an MBL transition as a function of temperature, as predicted by BAA. How would this be reflected at the level of integrals of motion? If there is a finite temperature transition, one expects that the localized low-T phase is still governed by local conservation laws which inhibit transport, while local integrals of motion do not exist at higher temperature. Clearly the latter rules out the convergence of the conserved operators in the operator norm. Rather, one has to invoke that the norm of operators O_{I,J}, when restricted to typical low temperature states, becomes exponentially small in N = |I| if the index sets I, J contain a finite fraction of hole excitations above E_F + T or particle excitations below E_F − T. This effect may enhance the convergence of the series expansion. This is certainly so at the level of the forward approximation, where the temperature T essentially replaces the bandwidth in the analytical estimates of our expansion. This will lead to a larger domain of (weak) convergence of the operator expansion, suggesting the possibility of a delocalization transition at finite temperature.
A similar consideration shows that the transition (93) at T = ∞ takes place in a regime where the operator expansion is not convergent in the operator norm, but converges only weakly on typical high energy states. This is due to the Fermi blocking discussed in Section 8.3.

[Fig. 7 caption: Two scenarios for the divergence of the operator series (28). In the pictures, the wave-functions are the single particle states contributing to (I, J) and (I′, J′). Both operators involve degrees of freedom whose maximal distance to the localization center r_α is the same: r(I, J) = r(I′, J′); however, the length of the support N of the operators (shaded in the picture) increases when N grows in the first case, while it remains bounded in the second case.]
However, a different scenario is possible as well. The operator series in Eq. (28), or subsequences of it, can diverge for two reasons: (i) Either the amplitude of terms with growing N do not decrease sufficiently fast, and thus the diameter of the support of these terms grows indefinitely. (ii) There can be subsequences of (28) whose terms have bounded index level N , but supports which wander off to infinity. These two possibilities are illustrated in Fig. 7.
Possibility (i) is what is obtained within the forward approximation. The fraction of terms at λ c , which survive when applied on finite T states, decreases rapidly with N . However, such a projector would not affect the convergence properties of a subsequence of type (ii). Upon restricting to finite T states the norm of the relevant operators is typically reduced by a factor, which remains bounded from below. Therefore the series will continue to diverge despite the projection.
To address the question of whether or not a finite temperature transition is possible one has to consider the interaction strength λ c at which the infinite T transition takes place, i.e., where the integrals of motion delocalize. If there is a subsequence of type (ii), which diverges at this point, the delocalization transition is a function of λ only, but independent of T . In the delocalized phase (λ > λ c ) transport would always remain finite, even though it may become very inefficient and strongly activated at low T . If instead there is no subsequence with bounded index cardinality, which diverges at λ c , a transition in temperature should be expected, as predicted by BAA. Such a transition was recently reported by a numerical study [17].
Physically the scenario (ii) corresponds to transport and delocalization driven by rare, compact, but mobile regions with a local "temperature" above the putative T_c. At first sight one is tempted to rule this out, because one would expect such a hot bubble to diffuse and lose its extra energy forever to the environment. However, the environment, being in the supposed MBL phase, cannot transport the extra energy to infinity, and thus there should be a finite recurrence time until the hot bubble forms again. Whether such a bubble would nevertheless have to remain localized, or whether its internally delocalized state would allow it to move around, is a difficult open question. Recently, it was argued that big enough bubbles could undergo resonant delocalization [49]. At the level of integrals of motion these two scenarii translate into the above dichotomy about critical subsequences.
Note that a divergence of type (ii) by a set of operators with bounded support is made less likely by the large parameter K. We in fact invoked this large parameter to neglect such terms, similarly to BAA. However, it is difficult to exclude the existence of a divergent subsequence of this type that contributes with a finite relative weight, albeit one that is parametrically small in K. In that case, numerical approaches such as [17,50] would not capture this divergence.
It would be interesting to revisit the question of the finite T transition also as a function of density. In the low density limit, the effective connectivity K_eff (resulting from projection onto typical states) can be reduced to K_eff ≪ 1, in which case propagation channels of type (ii) become parametrically favorable, and may be the ones to induce delocalization, if interactions can induce a transition at all under such circumstances.
Conclusion
In this work we have constructed explicit quasi-local integrals of motion within the weakly interacting regime, which we argued to imply the absence of any d.c. transport. We reduced the problem of constructing such operators to a non-Hermitian hopping problem in operator space, an idea that we hope to have potential for further more rigorous studies. We have also obtained an explicit recipe for constructing generalized occupation numbers of a Fermi insulator order by order in perturbation theory.
We have used the large parameter K (proportional to the number of sites in a single particle localization volume) to concentrate on processes where one more particle-hole pair is created at every order of perturbation theory. Within this forward approximation, and based on an analysis of rare resonances at large distance, we found an analytical estimate of the radius of convergence of this perturbative construction, yielding a critical value of the reduced interaction strength λ_c = √(2π)/(C ν(1 − ν) 2e K log K) with 10.6 < C < 20.25, at infinite T and filling fraction ν, similar to the prediction by BAA based on the analysis of the lifetime of a single particle injection.
We believe that the spatial structure of our integrals of motion provides a good picture for the "quantum avalanche" created by injection of an extra particle. We have found that the optimal way of its propagation is by exciting a necklace of groups of O(1) particle-hole pairs per localization volume. Due to the meandering of the necklace structure, several groups of such pairs may be created in the same localization volume, an effect which is enhanced in low dimensions.
The convergence of our construction for the local integrals of motion implies the absence of transport and equilibration at any temperature and density. Taken as such, it appears to be blind to potential phase transitions upon varying those parameters. However, projecting the operator series onto typical states with thermal single particle occupations, one may discuss the weak convergence of the operator expansion. In this vein, we have discussed the question of the existence of a genuine finite temperature transition, depending on the properties of the operator series at its critical point at T = ∞. Further investigations of this question would be interesting.
This procedure leads to a modified expansion for I_α, with B^(m)_α given explicitly in Eqs. (26), (27). In the following, we work by induction on m. We set B^(0)_α = n_α and we omit the index α for simplicity. We define the truncation I^{≤m} of I to mth order, and assume that the property (A.1) holds to order O(λ^{m−1}), namely that (I^{≤m−1})² = I^{≤m−1} + O(λ^m). Note that I^{≤0} is naturally binary, with (I^{≤0})² = I^{≤0}.
We denote by Î^(m) the solution of Eq. (A.4) in the subspace O, cf. Eq. (25), and define Î^{≤m} accordingly. The operator Î^{≤m} is not binary to order O(λ^m); however, we show that it is possible to add to Î^(m) a suitably chosen operator K^(m) in the kernel K of the linear map f(X) = [H_0, X], so that the resulting operator is binary to order O(λ^m). To show this, it is sufficient to show that the difference (Î^{≤m})² − Î^{≤m}, truncated to order O(λ^m), is an element of the subspace K; expanding this difference and using (A.5), one verifies that this is the case, which proves (A.8).
A simple computation shows that, by a suitable choice of K^(m), the condition (A.4) is fulfilled to order O(λ^m). Eq. (27) follows by inspection of this choice.
Appendix B. Local re-summation in the case of small denominators
In the following we present a simple example in which the perturbative expansion in λ, Eq. (22), diverges. Suppose that at order n the series expansion contains a term J_n O, where O = c†_{i_1} ⋯ c†_{i_m} c_{j_1} ⋯ c_{j_{m−1}} is a string of operators with all indices i_l, j_l ∉ {α, β, γ, δ}, and that the amplitude J_n = O(λ^n) therefore contains the energy denominator ΔE. Suppose ΔE to be atypically small. One then easily finds a subsequence of the series (22) which contains arbitrarily high powers of the small denominator. Indeed, let us restrict the interaction to the term U_{αβ,γδ}(c†_α c†_β c_γ c_δ + h.c.) in the interaction U; higher order terms in the perturbative expansion are obtained by subsequent application of (25) to J_n; this produces Eq. (B.3), with ΔE_{αβ,γδ} = ϵ_α + ϵ_β − ϵ_γ − ϵ_δ. By iteration of this procedure, a subsequence of operators containing arbitrarily high powers of (ΔE)^{−1} is generated, preventing the convergence of the series if the term in brackets is larger than 1.
Divergences of this kind are of the same nature as local resonances encountered in single particle localization [1]. They have to be properly re-summed for the series expansion to make sense. For example, all terms multiplying O c†_β c_γ c_δ re-sum into a self-energy correction of the denominator in the first line of (B.3), cf. Eq. (B.4). The term in square brackets in (B.4) contains a very large self-energy correction U²_{αβ,γδ}/ΔE, which compensates the divergence in J_n when ΔE → 0.
Self-energy corrections like this are neglected in the forward approximation. Their main effect is to weaken the role of small denominators: As noticed by Anderson, small denominators essentially neutralize themselves by introducing enormous self-energies for the neighboring sites which then appear as very large denominators [1]. The resummation thus increases the convergence as compared to the naive perturbative expansion in forward approximation in [1]. In single particle localization problems with large connectivity, the critical hopping is increased by a factor e/2 [45], and a similar effect is expected here [4].
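To illustrate the mechanism, here is a toy numerical sketch (our own, with made-up numbers; we assume, for illustration, that the iteration above generates a geometric series in (U/ΔE)²): resumming the series replaces the bare denominator 1/ΔE by ΔE/(ΔE² − U²) ≈ 1/(ΔE − U²/ΔE), which stays bounded as ΔE → 0.

```python
# Toy illustration of resumming a geometric subsequence of small denominators.
dE = 1e-3     # atypically small energy denominator (illustrative)
U = 0.1       # interaction matrix element (illustrative)

ratio = (U / dE) ** 2           # term-to-term ratio of the naive subsequence
resummed = dE / (dE**2 - U**2)  # closed form: 1/(dE - U^2/dE), finite as dE -> 0

# The naive series diverges term by term (ratio > 1), yet the resummed
# denominator is of order 1/U rather than 1/dE: the small denominator has
# neutralized itself through its own self-energy.
```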
Appendix C. Evaluating diagrams as sums over effective paths: a more involved example
As an additional example for the evaluation of diagrams as sums over effective paths, we give the explicit expression for effective path weights associated to diagrams with the geometry of Fig. C.8.
For fixed indices on all segments, there are 105 different orderings of the interactions.¹⁰ Their sum has the integral representation (C.1), where ω⁻_i ≡ ω_i − iϵ, and I_1(ω) is the integral representation (C.2) of the sum of all the weights of the subdiagram in the dashed frame, with incoming energy ω.
By means of the residue theorem, I_0 can be rewritten as the sum over only 8 effective path weights, Eq. (C.3). Note that, as a function of the E_i and Ẽ_i, I_0 has poles only due to denominators which involve the incoming energy E_0, while I_0 remains regular as any of the Ẽ_i → 0, due to cancellations among different terms.
The minimal number of effective paths associated to a diagram equals the product of the numbers of residues of the performed integrals. This number can be determined from the structure of the diagram using the following rules: First, one eliminates the final leaves which are not associated to auxiliary frequencies, since they do not contribute poles in the integral representation (Fig. C.9 represents the diagram of Fig. C.8, with these eliminated branches colored in gray). Then, one determines the directed path (branch) with the maximal number of interactions along it (the red one in Fig. C.9). The auxiliary frequencies along this path are eliminated by integrating the corresponding δ-functions. Every remaining branch contributes one more residue than the number of interactions along it. In the example of Fig. C.9, the three branches that remain after eliminating the red one contribute 2 residues each. The total number of effective paths is obtained by multiplying these numbers, which gives 2³ = 8 in the present case. With the help of these rules, we count the minimal number of effective paths associated to the maximally branched diagram with N interactions, shown in Fig. C.10(a). We denote this number by |P|.
The maximally branched diagram consists of two regular rooted trees with L(N) ≡ log(N + 1)/log 3 generations. Since the weights of the two sub-diagrams factorize, we need to count only the effective paths associated to one of them, and square their number. We therefore consider one sub-diagram, and organize its branches according to the number of interactions along them (cf. Fig. C.10). As claimed in the main text, this number is sub-exponential in N.
Appendix D. Probability of large deviations in products of correlated denominators
Here we derive the probability of large deviations of effective path weights, i.e., the product of correlated denominators, as they occur in perturbation theory in the forward approximation.
We denote by s_k = x_1 + ⋯ + x_k the partial sums of i.i.d. random variables x_i ≡ E_i/δ_ξ. Let us assume the x_i to be unit Gaussian variables with probability density (D.1). Consider the distribution function P_N(y) of the random variable Y_N = −∑_{i=1}^N log|s_i| (D.2) and its generating function G_N(k) (D.3). Let us compute G_N for N ≫ 1. We start by taking the expectation value over the joint distribution of the x_i = s_i − s_{i−1} (s_0 ≡ 0), Eq. (D.4), where the integral operator O_k[·] acting on a function g is given by (D.5). Consider now the basis of even functions (D.6). In this basis the linear action of O_k is given by (D.7), with the matrix (D.8), where c(k) = φ_max,0 · ∑_{m≥0} a_m φ_max,m, and φ_max is the normalized eigenvector corresponding to λ_max. Numerical results for the maximal eigenvalue are shown in Fig. D.11. They are obtained by truncating O_k to an increasing set of basis states (or chain of sites) m ≤ L. For k close to the singularity at k = −1 the results rapidly converge with increasing size L. In this region, we can extract information on the limiting curve λ_max(k) from the truncated chain. In particular, we see from the plot that both the function log λ_max(k) and its negative slope diverge at k = −1, which will also follow from the analysis below. Hence, k close to −1 is the relevant region for the saddle point approximation of Eq. (65), if very large deviations ỹ ≫ 1 are considered.
Due to the proximity to a logarithmic divergence at k = −1, to order O(1 + k) the eigenstate φ_max for k ∼ −1 is localized on the first site (n = 0) of the corresponding hopping chain, with an eigenvalue given by (D.12). Corrections to the maximal eigenvalue (D.12) can be evaluated perturbatively in the matrix elements of (D.8) other than O_00, which yields (D.13). One can show that δλ(k) is analytic around k = −1 and satisfies δλ(k → −1) → 0. This is due to the fact that in nth order perturbation theory λ^(n)_max is proportional to denominators of the form 1/O_00^{n−1} ∼ (k + 1)^{n−1}. The leading term in δλ(k) results from (D.14), which is of order (k + 1) + O((k + 1)²). A plot of the corrections to the maximal eigenvalue (D.12) is given in Fig. D.11.

The saddle point at k = k*(ỹ) is determined by (D.16). To isolate the singularity at k = −1 we use the Laurent expansion of the digamma function ψ^(0)(x) around x = 0, where γ is the Euler constant. This allows us to recast (D.16) in a form involving an analytic function Q(·) with a regular expansion. The resulting equation for c(k*(ỹ))/λ_max(k*(ỹ)) yields only logarithmic corrections to the exponent.

As commented in the main text, when restricting to the linear term in (D.24), the large deviation statistics for the correlated denominators coincides with that of independent identically distributed energy denominators. Indeed, from Eqs. (D.10) and (D.12) it follows that to leading order in k + 1 the exponential growth of G_N is almost equal to that of the generating function g_N(k) = [2^{(k+1)/2} Γ((k+1)/2)/√(2π)]^N associated with products of N independent Gaussian denominators with unit variance. For ỹ ≫ 1, the tail of the distribution is determined by the residue of the pole of the generating function at k = −1, which is identical in the two cases. Repeating the above derivation of large deviations for independent denominators with generating function g_N(k), one finds that it differs only by the correction F → F − log 2/(2ỹ) quoted in the main text.
23 In general, for k-body interactions we have T(n) = C((k−1)n, n)/((k−2)n + 1) diagrams. For k = 3 these are the numbers of binary trees with n vertices, or Catalan numbers.
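The closed form in the footnote is easy to verify numerically. The sketch below (our code; we read k as the number of legs per vertex, with 1 incoming and k − 1 outgoing) checks that T(n) = binom((k−1)n, n)/((k−2)n + 1) reproduces the Catalan numbers for k = 3, and that for the quartic vertices of the main text (k = 4, ternary branching) the growth ratio T(n+1)/T(n) approaches 27/4 = 6.75, as used in the estimate of the number of trees.

```python
from math import comb

def T(n, k):
    """Number of tree diagrams with n interaction vertices, for k-leg vertices
    (1 incoming, k-1 outgoing legs). Fuss-Catalan numbers; the division is exact."""
    return comb((k - 1) * n, n) // ((k - 2) * n + 1)

catalan = [T(n, 3) for n in range(1, 7)]   # 1, 2, 5, 14, 42, 132
ternary = [T(n, 4) for n in range(1, 7)]   # 1, 3, 12, 55, 273, 1428
growth = T(41, 4) / T(40, 4)               # approaches 27/4 = 6.75 for large n
```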
There are two ways to solve Eq. (E.1) and find T_n. The first one is to notice that its generating function 𝒯(x) satisfies 𝒯(x) = T(x)², to write Eq. (E.6) in terms of T, and to use Lagrange's inversion theorem again. Alternatively, one can use the explicit form of T(n) and apply a summation formula for the ratio of four Γ-functions to obtain the closed form with L(N) = log(N + 1)/log 3. Using that 2∑_{k=1}^∞ 3^{−k} log(k) ≈ 0.29, one finds that the minimal number of effective paths |P| for diagrams with this geometry, which is the square of the above, scales as e^{0.58 N}; this should replace the estimate in Eq. (C.7). Among all the possible geometries of diagrams with a fixed number of interactions N, the maximally branched geometry is the one that maximizes the number of effective paths. Thus, Eq. (2) is an upper bound for the average |P(d)| introduced in Eq. (55), which we also expect to have an exponential scaling in N, |P(d)| ∼ e^{αN} with 0 < α < 0.58. Accounting for this correction, the total number of effective paths of length N, N_N in Eq. (59), is modified accordingly. Since the additional factor is only exponential in N, the conclusion about the convergence of the construction of integrals of motion for small enough interactions is unaltered. The effect of the correction is to slightly diminish the radius of convergence of the construction.
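The numerical constant quoted above can be reproduced directly; the snippet below (ours) evaluates 2∑_{k≥1} 3^{−k} log k by truncating the geometrically convergent sum.

```python
from math import log

# Geometrically convergent sum: 200 terms are far more than enough
# (the tail beyond k = 200 is smaller than 3**-198).
s = 2 * sum(3 ** (-k) * log(k) for k in range(1, 200))
# s is approximately 0.29, the per-subtree growth exponent;
# squaring the subtree count doubles it to 0.58.
```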
The precise effect of this correction depends on the relative weight of the effective paths and their mutual interference.
If we make the simplifying assumption that the effective paths associated with the same diagram can be treated as independent random variables, the sum S(d) in Eq. (53) is dominated by the largest term, and the factor |P(d)| enhances the tail of the distribution of A^(α)_{I,J} as compared to the tail corresponding to a single path weight; see Eq. (58). The correction discussed above would thus modify the numerical constant C in Eq. (93) by a factor e^α. (Note, however, that this constant C was already subject to uncertainty, see Eq. (85), due to the approximations going into the estimate of N_N.) Approximating e^α ≈ e^0.58 = 1.79, we find that the result of Eq. (93) holds with the following uncertainty on C: 18.97 < C < 36.25.
The above assumption neglects, however, that the effective paths associated with a given diagram are not independent. Indeed, they involve the same energy variables in the denominators, but in different combinations. These correlations might be relevant when computing the large deviations of S(d). In fact, there could be disorder realizations in which all the energy variables are simultaneously small, in such a way that there is no dominant effective path. In an extreme case, all path weights ω contributing to S(d) might happen to be of the same order of magnitude and atypically large. These contributions will come with different signs and partially cancel, which counteracts the enhancement of the total amplitude. To estimate an upper bound for the effect of the exponential number of effective paths on the constant C, we neglect those partial cancellations and assume that the diagrams dominating the tails of S(d) are such that essentially all effective paths add up constructively with comparable weights. Under this extreme scenario, the large deviations of S(d) would be given in terms of those of a single path weight ω by setting S(d) ∼ |P(d)| ω. In this approximation, Eq. (93) is recovered with the substitution λ → λ e^α. This shifts the estimated interval for C in Eq. (5) only by a logarithmic factor.
How Refractory is Super-Refractory Status Epilepticus- A Personal View!
The term super-refractory was coined recently in status epilepticus (SE) for seizures that continue unabated for a day or more despite rigorous management, leading in most cases to subsequent neuronal injury and even death [1]. Super-refractory status epilepticus is an uncommon but well-recognized form of ongoing seizure activity, and its exact pathophysiology has not yet been studied well. It is commonly encountered in intensive care units, but its exact frequency is not known. Several retrospective studies in the recent past revealed that approximately 15% of cases admitted with status epilepticus became refractory; Holtkamp [2] reported that 20% of his patients with SE had recurrence of ongoing SE even after general anesthesia was withdrawn, and in other studies nearly 50% of those requiring anesthesia became super-refractory in various settings. The ideal management protocol for SE follows a staged approach, beginning with early SE (seizures lasting less than 30 minutes).
Introduction
In stage 1 (early SE), benzodiazepines are administered by intravenous, rectal, intranasal, buccal, and sometimes intramuscular routes; the main aim is to rapidly abort the seizures by any means in order to arrest further seizure propagation. If seizures continue into stage 2 (established SE), long-acting anticonvulsants, namely phenytoin, sodium valproate, and sometimes phenobarbitone, are given by the intravenous route, and control is anticipated within 2 hours. Beyond this period, if the patient reaches stage 3 (refractory SE) within 120 minutes, one should resort to general anesthesia and muscle relaxants to achieve burst suppression on EEG, whereby seizures are aborted both clinically and electrographically. Many protocols have been laid down as algorithmic flow charts in many centers, approved by international bodies, and the main aim of such aggressive management is to abort seizure propagation and avoid further neuronal damage [3].
Despite all these aggressive measures, why do some patients go on to develop resistance to treatment and become super-refractory (24 hours or more)? Although this is often encountered in patients with established epilepsy with an underlying cryptogenic or secondary etiology (head trauma, infection, infarction, cerebral bleed), it is not uncommon in previously healthy people. Probably the normal physiological processes that would terminate the seizure activity have failed, mostly through receptor trafficking at the synaptic surface, with inhibitory receptors internalized and excitatory receptors externalized. In this process, there is a considerable reduction in GABA (inhibitory) receptors and a substantial increase in glutamatergic (excitatory) receptors.
Once GABAergic receptor levels are reduced, GABAergic drugs (phenobarbitone, benzodiazepines) are likely to become ineffective in this ongoing process of seizure activity. Moreover, normally inhibitory GABA-A mediated currents can become excitatory with changes in extracellular chloride concentrations. Besides, activation of the inflammatory cascade during ongoing seizure propagation, blood-brain barrier leakage leading to an increase in extracellular potassium, and mitochondrial failure with oxidative stress could also perpetuate the process of continued insult [4].
Super-refractory status epilepticus is the name given to continuous, unabated, prolonged seizure activity lasting more than 24 hours despite aggressive treatment including general anesthesia. If the seizures continue over a longer time, serious ongoing damage to the neuraxis occurs; the important issue in management is to prevent this damage by interrupting the seizure activity liberally in its initial stage. As the seizures become prolonged, structural as well as molecular changes of the neurons can interfere with management, however aggressive it may be.
Open Access Journal of Neurology & Neurosurgery
Additionally, changes in gene expression within a few minutes of seizures and lack of synchrony of the neuronal network could prevent seizure termination, however aggressive the treatment given. The underlying damage to the cerebral internal milieu is devastating, with necrosis, gliosis, and network reorganization, ultimately leading to cell death. This is initiated first by excitotoxicity and further driven by glutamatergic receptor over-activity. Once the damage is initiated, calcium influx into the cell triggers a cascade of chemical reactions leading to cell necrosis or apoptosis. This chain of events usually takes place as the seizure process continues and leads to neuronal remodeling and activation of several molecular signaling pathways for programmed cell death. As a result, long-term histological structural changes, namely neurogenesis and angiogenesis, are seen. To prevent these irreversible events, aggressive management to the extent of general anesthesia to induce electrographic burst suppression is highly recommended, and several neuroprotective measures (barbiturate coma, hypothermia, steroids, intravenous immunoglobulins, ketamine) are attempted to prevent the sequelae of excitotoxicity, although the efficacy of the latter measures is unknown.
Discussion
The management of super-refractory SE is very challenging; it is always undertaken in intensive care units with general anesthesia, endotracheal intubation and ventilation, and cardiopulmonary monitoring. Maintaining hemodynamic status is the mainstay of the management portfolio, as the drugs used to control seizures at maximum doses invariably give rise to hypotension, bradycardia, and respiratory arrest. Midazolam infusion, thiopental, pentobarbital, and propofol are preferred to induce deep sedation and burst suppression, although many centers have a low threshold for avoiding the last of these drugs in children because of its potentially lethal side effects (propofol infusion syndrome) [5].
Selection among the above medications depends on availability, personal experience, and the acceptable limit of side effects. Midazolam is a safe medication as a continuous infusion because of its strong and established anti-epileptic action, but tachyphylaxis and rapid tolerance are its main disadvantages; as a result, seizures re-emerge in nearly 50% of patients. Barbiturate infusions have been used conventionally in SE, and apart from their known anti-epileptic action, the hypothermia induced by these drugs could theoretically be beneficial as neuroprotection. However, due to their long half-life, the prolonged anesthetic effect even after the drug has been withdrawn may pose a major problem for extubation. Ketamine, an NMDA receptor antagonist, is used in some centers because it has the least cardiac depressant effect, and it could be considered a second choice if the anesthetic drugs fail.
What about the anti-epileptic drug armamentarium beyond the above? Many patients end up taking a cocktail of several drugs, namely carbamazepine, phenytoin, phenobarbitone, valproate, topiramate, and levetiracetam, but there is no evidence that any single drug is superior to the others [6]. So it is the treating neurologist's choice to decide on the combinations, as most of the time these patients are on polytherapy. However, it is recommended not to use more than two anti-epileptic medications with different modes of action at higher doses, and to avoid frequent and abrupt switch-overs of drugs. Magnesium sulfate infusion is a safe and minimally toxic medication and should be attempted in every patient with super-refractory SE. What about steroids and immunomodulators? Recent evidence shows that inflammation plays an important role in seizure propagation; moreover, in the recently recognized autoimmune encephalitis with anti-N-methyl-D-aspartate receptor antibodies presenting as status epilepticus, only steroids or immunomodulators can control the seizure activity [7].
With this background, many centers use these medications, even empirically, in refractory SE with varying results. Other measures, namely the ketogenic diet, hypothermia, and electroconvulsive therapy, are available and being tried as experimental modes of therapy with no conclusive evidence of benefit. In conclusion, super-refractory SE is a grave situation in the evolution of SE, with relatively high mortality and morbidity. There is as yet no consensus regarding its effective management; one has to be aggressive and rational in the initial stages of treatment of SE in order to prevent patients from progressing to this serious stage. Treatable causes of SE must be identified early and managed appropriately. An acceptable treatment protocol and guidelines should be formulated in every center, agreed upon by the governing committee in concurrence with the neurologists, to treat the condition effectively [8].
Four novel Acinetobacter lwoffii strains isolated from the milk of cows in China with subclinical mastitis
Background: Acinetobacter lwoffii (A. lwoffii) is a Gram-negative bacterium common in the environment and part of the normal flora of the human respiratory and digestive tracts. It is a zoonotic and opportunistic pathogen that causes various infections, including nosocomial infections. The aim of this study was to identify A. lwoffii strains isolated from the milk of cows with subclinical mastitis in China and to better understand their antimicrobial susceptibility and resistance profiles. This is the first study to analyze the drug resistance spectrum and corresponding mechanisms of A. lwoffii isolated from raw milk. Results: Four A. lwoffii strains were isolated and identified by PCR. Genetic evolution analysis using the neighbor-joining method showed that the four strains had high homology with Acinetobacter lwoffii. The strains were resistant to several antibiotics and carried 17 drug-resistance genes among them. Specifically, of 23 antibiotics tested, the strains were completely susceptible to 6, including doxycycline, erythromycin, polymyxin, clindamycin, imipenem, and meropenem. In addition, the strains showed variable resistance patterns. A total of 17 resistance genes, including plasmid-mediated resistance genes, were detected across the four strains. These genes mediated resistance to six classes of antimicrobials: beta-lactams, aminoglycosides, fluoroquinolones, tetracyclines, sulfonamides, and chloramphenicol. Conclusion: These findings indicate that multi-drug-resistant Acinetobacter lwoffii strains exist in the raw milk of cows with subclinical mastitis. Acinetobacter lwoffii is widespread in natural environmental samples, including water, soil, bathtubs, soap boxes, skin, the pharynx, conjunctiva, saliva, the gastrointestinal tract, and vaginal secretions. The strains carry resistance genes on mobile genetic elements, which enhances the spread of these genes. Therefore, more attention should be paid to epidemiological surveillance of drug-resistant A. lwoffii.
Background
Mastitis is a common disease in dairy cows that threatens the development of the dairy cattle industry worldwide. The disease causes significant economic losses by reducing milk production and milk quality [1,2]. Mastitis is caused by many pathogens, and the predisposing factors include a dirty environment, improper feeding and management, hormone disorders, breast defects, and other factors. Bacteria, viruses, and fungi are the main causes of mastitis, and the causal microorganisms are complex. In general, Staphylococcus, Streptococcus, and Escherichia coli are the main pathogenic bacteria that cause mastitis, followed by Corynebacterium pyogenes, Pseudomonas aeruginosa, Pasteurella, and Klebsiella. There are two types of mastitis based on clinical manifestations: clinical mastitis (CM) and subclinical mastitis (SCM). Subclinical mastitis is more prevalent than clinical mastitis, and cow-to-cow transmission is the primary route through which the disease spreads [3]. Subclinical mastitis has a systemic effect on the reproductive capacity of the infected animal [4], and it can be diagnosed based on the presence of inflammatory mediators and specific bacteria in milk and a reduction in milk production [5].
Streptococcus and Staphylococcus are the main bacterial species that cause subclinical mastitis. Recent studies have shown that several bacterial species associated with bovine mastitis, such as the environmentally ubiquitous Acinetobacter, Bacillus, Enterobacter, and Enterococcus, are readily isolated from milk samples [6,7]. These pathogens are developing multiple drug resistance (MDR) to common antimicrobial agents used in mastitis therapy. Acinetobacter are Gram-negative bacilli, with 112 species in the genus. The majority of species are nonpathogenic types readily found in environmental materials. Members of the Acinetobacter genus easily cause infection in immunocompromised individuals and animals. The most common infections are nosocomial, predominantly respiratory tract infections, septicemia, meningitis, endocarditis, wound and skin infections, and urogenital tract infections. The most common Acinetobacter species causing infections is Acinetobacter baumannii, followed by Acinetobacter calcoaceticus and Acinetobacter lwoffii [8]. All species are ubiquitous in nature and can easily be isolated from soil, water, food, and sewage [9]. In this study, we isolated 4 A. lwoffii strains from raw milk samples of cows with subclinical mastitis in Jilin Province, China, in 2021. The antimicrobial susceptibility and resistance of the isolates, and the genes conferring the resistance, were determined.
Identification of strains and genetic evolution analysis
The milk samples from cows with subclinical mastitis were analyzed for the presence of mastitis-related bacteria. Four Acinetobacter strains, namely JL1, JL2, JL3, and JL4, were obtained from the analyzed raw milk samples.
A phylogenetic tree constructed using the neighbor-joining method showed that the four strains grouped together in one clade and showed high homology with Acinetobacter lwoffii (Fig. 1).
Antimicrobial susceptibility phenotypes
The susceptibility and resistance profiles of the four strains were tested against 23 antibiotics. The antimicrobial susceptibility results showed that the four strains were resistant to multiple drugs (Fig. 2). All the strains were resistant to ampicillin, oxacillin, ceftazidime, cefoxitin, cefazolin, ceftiofur, ciprofloxacin, enrofloxacin, tetracycline, amikacin, streptomycin, gentamicin, and trimethoprim/sulfamethoxazole (the JL1 and JL3 isolates were resistant to Ampicillin-Oxacillin-Ceftazidime-.
Presence of resistance genes
The distribution of resistance genes across the four strains is shown in Table 1. Two beta-lactamase genes were detected: blaTEM was detected in all four strains, while blaSHV was detected in two isolates. Among the aminoglycoside resistance genes, 4 aminoglycoside-modifying enzyme genes were identified. The aadA1 gene, which confers resistance to streptomycin, was detected in all 4 isolates, as were the aac(3')-IIc gene (resistance to gentamicin), the aph(3')-VII gene (resistance to kanamycin), and the aac(6')-Ib gene (resistance to kanamycin and amikacin). Only one 16S rRNA methylase gene, rmtB, was detected, in JL1. Three plasmid-based antibiotic resistance-associated genes were detected: oqxA and oqxB were present in all four isolates, and qnrB was present in two. Among the tetracycline and sulfonamide resistance genes, tet(A), tet(C), and tet(G) were present in all four isolates, tet(K) in one, sul1 in two, and sul2 in three. Among the chloramphenicol resistance genes, the cat2 gene was detected in all 4 isolates.
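The gene-by-strain counts described above can be tabulated to confirm the totals (Table 1 itself is not reproduced; the mapping below simply transcribes the carrier counts stated in the text):

```python
# Resistance gene -> number of the four isolates carrying it, as stated above.
carriers = {
    "blaTEM": 4, "blaSHV": 2,
    "aadA1": 4, "aac(3')-IIc": 4, "aph(3')-VII": 4, "aac(6')-Ib": 4, "rmtB": 1,
    "oqxA": 4, "oqxB": 4, "qnrB": 2,
    "tet(A)": 4, "tet(C)": 4, "tet(G)": 4, "tet(K)": 1,
    "sul1": 2, "sul2": 3,
    "cat2": 4,
}

assert len(carriers) == 17                                 # "a total of 17 resistance genes"
assert sum(1 for n in carriers.values() if n == 4) == 11   # genes found in all four isolates
```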
Discussion
Mastitis is a common disease in dairy cows that influences the lactation period and milk production. The quality and shelf life of raw milk and related products are reduced by the increase in microbial load [10]. Pathogenic bacteria in raw milk can cause serious food safety problems and even affect human health. Because mastitis is debilitating and painful, it also touches on animal welfare issues [11]. Staphylococcus, Pseudomonas, Streptococcus, Pasteurella, Enterobacter, Klebsiella, Corynebacterium, Enhydrobacter, Bacillus, Lactococcus, Lactobacillus, Paenibacillus, Bacteroides, Massilia, Chryseobacterium, Enterococcus, Psychrobacter, and Acinetobacter have been previously detected in the raw milk of cows with mastitis [12-19]. In this study, 4 Acinetobacter lwoffii strains were detected in raw milk. This study is important because little is known about the role of foods in the transmission of Acinetobacter spp., and no standard protocols for recovering these species from foods exist [20,21]. Acinetobacter lwoffii is an aerobic, Gram-negative coccobacillus common in the environment and part of the normal flora of the human respiratory and digestive tracts [22]. According to the most recent scientific literature, members of the Acinetobacter genus are the second most common nonfermenting pathogens isolated from clinical samples, after Pseudomonas aeruginosa [23]. Acinetobacter strains also colonize the respiratory and urinary tracts of animals, including food animals, fish, chickens, birds, and dogs [24-28].
Acinetobacter is widely distributed in the external environment, such as in water, soil, baths, soap boxes, and other wet places [29,30]. The bacterium has strong adhesion and easily adheres to various medical materials, where it may become a reservoir of bacterial infections. These bacteria survive on inanimate objects, in dry conditions, in dust, and in moist conditions for several days. A study showed that in the raw milk of cows with mastitis, the detection rate of Acinetobacter baumannii was higher than that of Acinetobacter lwoffii [31]. Acinetobacter baumannii remains the most common Acinetobacter species causing infections; other prominent species include Acinetobacter ursingii and Acinetobacter parvus, and among them Acinetobacter lwoffii has been increasingly reported. Ribeiro Júnior JC isolated 9 Acinetobacter lwoffii strains from 20 refrigerated raw milk samples [32].
Since bovine mastitis results in huge economic losses, its prevention and treatment have attracted global attention. Antibiotics are the main treatment options for the disease. In recent years, Acinetobacter species have been among the most common pathogens associated with opportunistic infections resistant to multiple antibiotic classes. In this study, the A. lwoffii strains isolated were resistant to multiple antibiotics in variable patterns. All the strains were susceptible to only 6 of the 23 commonly used antibiotics. In a previous study, Acinetobacter strains detected in milk samples of cows suffering from clinical mastitis were resistant to all the antibiotics tested (oxytetracycline, vancomycin, lincomycin, nitrofurantoin, ceftriaxone-tazobactam, cefotaxime, erythromycin, amoxicillin-sulbactam, and penicillin) [6]. Acinetobacter species isolated by Raylson Pereira de Oliveira showed variable phenotypic resistance to antimicrobials and were completely resistant to ampicillin, penicillin, and vancomycin [31]. Acinetobacter strains isolated from human milk were resistant to oxacillin, ampicillin, clindamycin, cephalothin, amoxicillin, and erythromycin [33]. Acinetobacter species isolated from birds on a free-range farm were resistant to ampicillin, cefazolin, ceftazidime, chloramphenicol, nitrofurantoin, rifampicin, and tetracycline, which are on the WHO list of essential medicines [34]. The resistance profile of the strains detected in this study has never been reported before. The differences in the drug resistance patterns of A. lwoffii are due to the use of different drugs in different regions. The high antimicrobial resistance of the A. lwoffii strains may be due to the overuse and abuse of antimicrobials in disease treatment. Acinetobacter baumannii, an important pathogen in healthcare-associated infections, shows serious multiple-drug resistance [35,36]. No Acinetobacter baumannii strain was isolated from the analyzed milk samples in this study. However, this suggests that other Acinetobacter species may play a role in maintaining severe antibiotic resistance in milk.
Many reports concerning the resistance mechanisms of Acinetobacter lwoffii have been published. In Sofia Mindlin's study, Acinetobacter lwoffii carried resistance genes (including heavy metal resistance genes) on plasmids [37]. Liang detected a novel plasmid-encoded ANT(3")-IId in an Acinetobacter lwoffii strain isolated from a chick on an animal farm in China [38]. Two β-lactamase-encoding genes, OXA-496 and OXA-537, were reported for the first time in Acinetobacter lwoffii and Acinetobacter schindleri isolates from a chicken farm [39]. In this study, a total of 17 genes mediating resistance to beta-lactams, aminoglycosides, fluoroquinolones, tetracycline, sulfonamides, and chloramphenicol were detected across the four strains, and some of these genes were carried on plasmids. The relationship between the carriage of resistance determinants and the phenotypic resistance profile was not always concordant. Some strains carried resistance genes while remaining susceptible to the corresponding antimicrobial; perhaps these resistance genes were not expressed, so that resistance was not manifested. Other strains were resistant to an antimicrobial without carrying the related resistance genes, possibly reflecting other resistance mechanisms. The metabolic abilities of Acinetobacter spp. are often attributed to their plasmid-encoded genes, because these genes encode proteins that can degrade organic compounds [40]. Plasmid-mediated gene transfer plays an important role in the transmission of antibiotic resistance genes, pathogen degradation pathways, and pathogenicity determinants. The transfer of mobile genetic elements such as plasmids, insertion sequences (ISs), transposons, and integrons plays an important role in the acquisition of resistance determinants or features providing a selective advantage [41]. These transposable elements can move within the bacterial genome. Plasmid-based genes encode numerous features that provide a selective advantage to the bacteria, and they can be transferred horizontally to other bacteria of the same or different species. Therefore, plasmids are believed to play an essential role in the evolutionary events of a given microbial community [42]. Numerous articles have documented the presence of Acinetobacter spp. in raw milk, dairy products, and powdered bovine milk [21]. Acinetobacter spp. are common microbes found throughout nature, and Acinetobacter spp. in raw milk may have originated from environmental sources. Acinetobacter spp. can also contaminate other animals, people, medical devices, and environmental surfaces. In addition, Acinetobacter spp. strains that carry plasmid-based resistance genes may transfer these genes to other strains through horizontal mechanisms. Acquisition of plasmids that mediate antibiotic resistance transfers these traits to the recipient bacteria, which seriously threatens effective clinical treatment of diseases caused by such bacteria. The prevalence and mechanisms of antibiotic resistance have been widely reported, but the transfer mechanisms of multi-drug resistance genes remain unclear. Research on antibiotic resistance gene transfer is valuable for devising innovative solutions to combat the current antibiotic resistance crisis [43].
Conclusion
We isolated 4 A. lwoffii strains from raw milk samples of cows with subclinical mastitis in China. Genetic evolution analysis with the neighbor-joining method showed that the 4 strains displayed high homology with Acinetobacter lwoffii. Antimicrobial susceptibility testing, performed by Kirby-Bauer disk diffusion with reference to Clinical and Laboratory Standards Institute criteria, showed that all the strains were multi-drug resistant and were completely susceptible to only 6 of the 23 tested antibiotics. The Acinetobacter lwoffii strains, which inhabit the udder of cows, showed considerably variable multidrug resistance patterns. The antibiotic resistance genes were diverse and varied across the four strains. A total of 17 resistance-associated genes, including plasmid-based genes, were detected. These genes conferred resistance to six drug categories: beta-lactams, aminoglycosides, fluoroquinolones, tetracyclines, sulfonamides, and chloramphenicol. Our findings suggest that Acinetobacter lwoffii can contaminate milk, human and animal bodies, medical devices, soil, water, and other environmental features. Thus, keen attention should be paid to the epidemiological surveillance and drug resistance of Acinetobacter lwoffii in these sources.
Sample collection and isolation of bacteria
In 2021, four raw milk samples were collected from cows diagnosed with subclinical mastitis on one farm in Jilin Province, China. The collection was performed aseptically, and the samples were placed in sterile tubes and immediately stored under refrigerated conditions until analysis.
In the laboratory, each milk sample was inoculated on Trypticase Soy Agar plates supplemented with 5% sheep blood and incubated aerobically at 37 °C for 48 h. After bacterial growth, colonies of suspected pathogens were further sub-cultured for identification. For DNA extraction, 300 µL of 1×TE buffer was added to a small portion of a bacterial colony and transferred to a 1.5 mL centrifugation tube. The tube was heated at 100 °C for 10 min before incubation on ice for 5 min. The mixture was then centrifuged at 17,000 × g for 5 min, and the supernatants were collected and stored at 4 °C until further use. The isolates were identified by PCR targeting the 16S rRNA gene with universal primers. The quality of the PCR products was analyzed using 1% agarose gel electrophoresis and visualized under UV light. All positive PCR products were sequenced by Kumi Biotechnology (Jilin) Co., Ltd (Jilin, China) and identified with the BLAST program against the National Center for Biotechnology Information (NCBI) database.
Genetic evolutionary analysis
The bacterial sequences were aligned using the ClustalW program in MEGA 7.0 software. Phylogenetic trees were built from evolutionary distances using the neighbor-joining method, and p-distances for nucleotides were computed with the same software. The reference strains of Acinetobacter were all published strains. The clustering stability of the neighbor-joining tree was evaluated by bootstrap analysis with 1,000 replicates.
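As a rough illustration of the p-distance computation that MEGA performs on the aligned sequences (this is not the authors' code; the toy sequences and the gap handling are simplifications):

```python
def p_distance(seq_a, seq_b):
    """Proportion of differing sites between two aligned nucleotide sequences
    (the p-distance used when building the neighbor-joining tree).
    Sites where either sequence has a gap ('-') are excluded."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != '-' and b != '-']
    if not pairs:
        raise ValueError("no comparable sites")
    diffs = sum(1 for a, b in pairs if a != b)
    return diffs / len(pairs)

# Hypothetical aligned 16S fragments, for illustration only.
assert p_distance("ACGTACGT", "ACGTACGT") == 0.0
assert p_distance("ACGTACGT", "ACGAACGT") == 0.125
assert p_distance("ACG-ACGT", "ACGTACGA") == 1 / 7   # the gapped column is skipped
```

A matrix of such pairwise distances is the input the neighbor-joining algorithm clusters into a tree.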
Antimicrobial susceptibility testing
The bacterial isolates were inoculated in Trypticase Soy Broth and incubated at 37 °C for 6 to 8 h until turbidity reached the 0.5 McFarland standard. A small inoculum of the bacterial culture was spread onto sterile Mueller-Hinton agar plates using sterile cotton swabs. The plates were incubated at 37 °C for 16 h, and the diameters of the growth inhibition zones were recorded.
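The recorded zone diameters are then interpreted against drug-specific breakpoints from the CLSI tables. A minimal sketch of that interpretation step, with placeholder breakpoint values rather than real CLSI numbers:

```python
def interpret_zone(diameter_mm, susceptible_at, resistant_below):
    """Classify a Kirby-Bauer inhibition-zone diameter against CLSI-style
    breakpoints: 'S' if >= susceptible_at, 'R' if < resistant_below,
    otherwise 'I' (intermediate). Breakpoints are drug-specific and must be
    taken from the current CLSI tables; the numbers below are placeholders."""
    if diameter_mm >= susceptible_at:
        return "S"
    if diameter_mm < resistant_below:
        return "R"
    return "I"

assert interpret_zone(20, susceptible_at=17, resistant_below=14) == "S"
assert interpret_zone(15, susceptible_at=17, resistant_below=14) == "I"
assert interpret_zone(10, susceptible_at=17, resistant_below=14) == "R"
```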
Table 1. Resistance genes detected in the isolates
Low energy consumption layout of exhibition buildings in Yangtze River Delta region
The issue of high energy consumption in exhibition buildings has long been a focal point in the field of architectural design. However, current energy consumption assessments for exhibition buildings mainly focus on post-construction evaluations, lacking corresponding guidance during the initial design phase. To address this issue, this study selected 48 well-known exhibition buildings both domestically and internationally as research subjects. Utilizing scatter plot curve fitting, it was discovered that there exists a nonlinear quadratic relationship between the building area of the first floor and the courtyard area. Based on this relationship, four typical layouts were established to match the climatic characteristics of Hangzhou, a representative region in the Yangtze River Delta of China. Taking into account regional architectural features, the study specifically examined the impact of different orientations and window-to-wall ratios on energy consumption levels. The influence of these factors on energy consumption was analyzed using the DesignBuilder software. The results revealed that there exists an optimal window-to-wall ratio for exhibition buildings, with parallel, L-shaped enclosed south-facing courtyards, and U-shaped enclosed east-facing courtyards showing greater energy efficiency. This research provides guidance for designing exhibition buildings that are energy-efficient and foster a harmonious indoor–outdoor relationship.
As urbanization progresses, the size of cities continues to increase. Exhibition architecture, as a building type serving cultural communication, has received growing attention, but it has also brought numerous issues 1 . For example, compared to ordinary buildings, exhibition architecture has higher energy consumption due to the nature of its exhibitions, including long-term mechanical ventilation, the use of heating and cooling systems, continuous use of lighting equipment, and large display screens used for demonstrations. The diversity of interior spaces in exhibition buildings leads to insufficient natural lighting, low utilization of spatial resources, and high heat loss 2 . Compared with regular office buildings, exhibition architecture presents more complex energy consumption challenges. Firstly, exhibition buildings require large-scale spaces to accommodate a substantial volume of exhibited works, which results in substantial energy consumption during daily operation. Because the indoor space of an exhibition building can exceed 100 square meters with a height of over 15 m, while human activity is concentrated within roughly 2 m of the floor, the mismatch between the spatial characteristics of exhibition buildings and the human body scale leads to significant energy waste. Secondly, to compensate for insufficient natural lighting, exhibition buildings often incorporate courtyard spaces and extensive glass curtain walls to enhance visitors' leisure experience, which also contributes to heat loss and energy consumption. Therefore, the energy-saving design of exhibition buildings has its own specificity: the mismatch between spatial characteristics and human body scale, together with the thermal losses caused by the envelope, is both a distinctive feature and a challenge of energy-saving design for this building type. The current situation is due in part not only to
the strict physical environmental control requirements for the collections in museum exhibition buildings but also, to a larger extent, to the lack of effective physical environmental control schemes in the initial design phase. This also highlights, from another perspective, the importance of overall layout design for energy saving. With the formulation of carbon-neutral roadmaps, researchers are increasingly focusing on reducing building energy consumption and carbon emissions [3][4][5][6] . Studying exhibition architecture is therefore of significant importance for promoting carbon neutrality.
To date, many researchers have attempted various approaches to reduce energy consumption in exhibition buildings, yet they still face some challenges. For instance, the Shanghai Twenty First Century Minsheng Art
Literature review
In the field of building energy efficiency, the research literature can be broadly categorized into two main types: studies of the legal regulations and design standards related to building energy efficiency, and studies of theoretical experiments and statistical analyses. The following sections review the research dynamics, both domestic and international, in these two areas. In terms of legal regulations and design standards, the "Law of the People's Republic of China on the Conservation of Energy" issued in 1997 was the first to incorporate building energy efficiency standards into law. Subsequently, various energy consumption statistical standards for civil buildings were introduced, laying an important foundation for scientific research on building energy efficiency.
In theoretical experiments and statistical analyses, research in China started relatively late, focusing primarily on field measurements. The surveyed building types were mainly office buildings, and there has been limited research on exhibition buildings. The historical development can be summarized as follows. The team led by Tu 25 conducted the earliest unit-area energy consumption surveys and data statistics for government office buildings. In 1990, scholars from Shenyang Jianzhu University conducted field measurements and on-site surveys of energy-related data for buildings in cold regions, laying the foundation for energy consumption research there. In 2010, Lin et al. 26 developed a predictive model that can quickly respond to the energy demand of buildings in the design phase, considering the limited known parameters and multiple optimization options at that stage; by combining it with Matlab's genetic algorithm, the optimal solution could be obtained automatically. In 2017, Sun et al. 27 constructed the GANN-BIM energy-saving design platform by studying the information model of public buildings in severe cold regions; the platform varied building form parameters across a large number of experiments to derive energy-saving design strategies.
In the field of low-energy building research, Western countries, represented by the United States, had an early start. In terms of legal regulations and design standards, the Energy Policy Act of 1992, passed by the U.S. Congress, comprehensively addressed energy-related areas, aiming to improve the energy efficiency of facilities such as civil buildings and electrical equipment. In the realm of statistical analysis, the research organization D&L International Ltd conducted comprehensive statistical analyses of building energy consumption and related data in 1980 and published the Building Energy Consumption Statistical Yearbook in 2000. From 2006 to 2015, the energy consumption per unit area of public buildings in the United States decreased by 2% annually. In the theoretical research field of energy-efficient design, in 2011 the team led by Austrian architect Ursula Frick first developed the parametric performance design plugin Geco. Suyoto et al. 28 used the Geco tool to conduct parametric design research based on solar radiation, taking a public building as an example and proposing a logic for parametric performance design. Subsequently, the development of tools such as DesignBuilder and Ladybug + Honeybee 29 has enabled architects to simulate the performance impact of parameters such as floor area, orientation, window-to-wall ratio, and envelope structure, leading to widespread application in foreign research on parametric energy-efficient design.
As standards continue to advance, theoretical research on low-energy buildings has developed rapidly. Researchers have placed greater emphasis on passive energy-saving technologies to reduce building energy consumption, among which optimizing the window-to-wall ratio is considered an effective and important method 30 . Chi et al. 31 studied variations in residential orientation and identified optimal solutions for different orientation angles and window-to-wall ratios (compliant with Chinese building regulations); the results showed that a favorable window-to-wall ratio significantly reduced building energy consumption. Troup et al. suggested that the window-to-wall ratio can help reduce energy consumption in office buildings and demonstrated its potential correlation with building area 32 . Asfour 33 , studying residential buildings in the Arab region, found a significant correlation between passive energy saving through window-to-wall ratio control and courtyard layout; furthermore, the layout of the courtyard and the orientation of the building also exhibited strong correlations with building energy consumption.
In summary, due to differing levels of development among countries, energy conservation goals vary, resulting in significant differences between domestic and international regulations and standards for building energy efficiency. Furthermore, variations in energy consumption calculation tools contribute to differences in energy consumption assessments. OpenStudio is a commonly used energy assessment tool in the United States, while in China self-developed predictive models, EnergyPlus, or other tools based on the EnergyPlus core algorithm are more prevalent in energy-efficient design. Therefore, it is necessary to discern the findings of existing parametric energy-efficient design research abroad and to establish energy-efficient design strategies applicable to different climatic regions in accordance with China's specific conditions and energy-saving standards. Regarding variable parameters, the form factor, which describes the building form, is included in China's energy-saving standards, but its application in building energy-efficient design standards is not yet rigorous enough 34 . Both domestic and international research agree that the building envelope affects energy consumption. The window-to-wall ratio, as a crucial indicator of the envelope's insulation performance, holds significant research value. Additionally, considering the unique nature of courtyard space in exhibition buildings mentioned earlier, orientation affects the layout of courtyards and exerts a significant impact on energy consumption. Currently, research on the envelope's window-to-wall ratio and on courtyard orientation has not yet encompassed exhibition buildings, which has had an adverse impact on reducing energy consumption in this building type.
www.nature.com/scientificreports/
Research hypothesis
Based on the findings of previous researchers, we propose the following hypotheses regarding the impact of the window-to-wall ratio on energy consumption, based on the four typical layout models in this study:

H1: When the window-to-wall ratio of a building is extremely low, a significant amount of energy is required for lighting and heat dissipation. As the window-to-wall ratio gradually increases, the energy required for lighting and heat dissipation decreases. However, when the window-to-wall ratio becomes extremely high, the large glass area can lead to severe heat loss. Therefore, there may exist an optimal window-to-wall ratio that minimizes energy consumption in building design.

H2: During the operational period of a building, the optimal window-to-wall ratio for minimum energy consumption may be related to the building's floor area.

H3: During the operational period of a building, the optimal window-to-wall ratio for minimum energy consumption may be related to the courtyard layout.

Based on the optimal window-to-wall ratio for minimum energy consumption, we also make hypotheses regarding the influence of orientation on building energy consumption:

H4: When the exhibition space avoids excessive daylighting and the courtyard space provides good indoor-outdoor visual interaction, there exists an optimal courtyard orientation that minimizes energy consumption.

H5: When the layout types of exhibition buildings differ, the optimal orientation angles for minimum energy consumption also vary.

H6: When the orientation of the courtyard affects building energy consumption, the determining factor is the projected area of windows facing west.
Research methodology
The energy-saving efforts of many developed countries started earlier than China's, and their research methods on energy consumption are of great reference value. EnergyPlus combines the heat balance and weighting factor methods and is suitable for the schematic design phase of building projects. It not only helps architects intuitively establish building models and window-to-wall types, but its algorithms are also simple and do not require long computation times 36,37 . DesignBuilder, a simulation software based on EnergyPlus, underwent building energy consumption simulation tests by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). Its annual cumulative heating and cooling load predictions were compared with those of eight other national energy agency-designated energy consumption programs, showing good accuracy with a maximum calculation deviation not exceeding 5.2%. Moreover, DesignBuilder has a powerful built-in database that includes most commonly used building materials and their corresponding parameters 38 . Its strong energy simulation functionality effectively integrates HVAC systems, natural ventilation, building components, and indoor lighting equipment, meeting various computational needs 39 . Therefore, in this study, we simulated energy consumption using DesignBuilder.
Research roadmap
The research roadmap of this study is illustrated in Fig. 1. A total of 48 well-known exhibition buildings from both domestic and international sources were selected as the research objects. Data on the first-floor building area, courtyard area, floor height, and floor plan were collected and organized. Scatter curve fitting revealed a non-linear quadratic relationship between the first-floor building area and the courtyard area.
Based on this functional relationship, four typical layout prototypes were established and drawn in CAD software. Energy consumption simulations and analyses were conducted using DesignBuilder, focusing on Hangzhou, which represents the typical climate of the Yangtze River Delta region of China. Taking regional architectural features into account, particular attention was paid to the impact of different orientations and window-to-wall ratios on energy consumption. The optimal layout forms for exhibition buildings were identified, providing guidance for the schematic design phase. Lastly, a case study of the Liangzhu Culture Museum in Hangzhou was conducted to validate the findings by comparing energy efficiency before and after optimization.
Collection of research data
To conduct comparative research, this study selected 48 well-known exhibition buildings from both domestic and international sources, including works by Pritzker Prize laureates and works drawn from architectural websites (such as ArchDaily and gooood) and relevant architectural journals (Table 1). In these buildings, the first-floor area is represented by the gray-white region, while the courtyard area is represented by the yellow region. Given the complexity of actual layouts, the layout forms can be roughly categorized into five types: parallel, L-shaped enclosure, U-shaped enclosure, square enclosure, and segmented enclosure. The parallel layout mainly exhibits a side-by-side arrangement of building and courtyard. The L-shaped enclosure features a courtyard enclosed on two sides in the form of the letter "L". The U-shaped enclosure features a courtyard enclosed on three sides in the shape of the letter "U". The square enclosure presents a courtyard enclosed on all four sides in the shape of the Chinese character "口". The segmented enclosure demonstrates a building with multiple sections enclosing three sides. (Segmented layouts with enclosure on two or four sides have limited examples and are not included in the scope of this study.)
According to Table 1, the first-floor building area X and the courtyard area Y were used for regression analysis in Excel, yielding four candidate functional relationships: quadratic, logarithmic, power, and linear (Fig. 2).
According to the "MedCalc Common Statistical Analysis Tutorial" 40 , a coefficient of determination (R²) greater than 0.3 is deemed meaningful in a regression equation. Specifically, R² is approximately 0.75 for the quadratic function, 0.52 for the logarithmic function, 0.66 for the power function, and 0.69 for the linear function. This indicates a correlation between the first-floor building area and the courtyard area, which is most significant for the quadratic function, expressed as Eq. (1). This non-linear quadratic function is suitable for exhibition buildings with a first-floor building area ranging from 2000 to 14,000 m². In particular, when the first-floor building area is below 7000 m², the scatter of data points on both sides of the curve is relatively low, indicating that the function accurately represents the relationship between the first-floor building area and the courtyard area. Based on this relationship, twelve experimental groups were set up for energy consumption simulations to investigate the impact of window-to-wall ratio and orientation on energy consumption in exhibition buildings of different sizes. This function has significant implications for the schematic design phase of exhibition buildings.
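The quadratic fit and R² values above were obtained in Excel. Conceptually, the same computation can be sketched in pure Python by solving the 3×3 normal equations for y = ax² + bx + c; the data below are illustrative, not the 48 surveyed cases:

```python
def quadratic_fit(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the 3x3 normal equations."""
    n = len(xs)
    s1 = sum(xs); s2 = sum(x ** 2 for x in xs)
    s3 = sum(x ** 3 for x in xs); s4 = sum(x ** 4 for x in xs)
    t0 = sum(ys)
    t1 = sum(x * y for x, y in zip(xs, ys))
    t2 = sum(x * x * y for x, y in zip(xs, ys))
    M = [[s4, s3, s2], [s3, s2, s1], [s2, s1, n]]  # normal-equation matrix
    v = [t2, t1, t0]
    # Forward elimination (no pivoting; adequate for well-conditioned fits)
    for i in range(3):
        for j in range(i + 1, 3):
            f = M[j][i] / M[i][i]
            M[j] = [mj - f * mi for mj, mi in zip(M[j], M[i])]
            v[j] -= f * v[i]
    # Back-substitution
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (v[i] - sum(M[i][k] * coeffs[k] for k in range(i + 1, 3))) / M[i][i]
    return tuple(coeffs)  # (a, b, c)

def r_squared(xs, ys, a, b, c):
    """Coefficient of determination for the fitted quadratic."""
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - (a * x * x + b * x + c)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Illustrative data following an exact quadratic y = 2x^2 + 3x + 1
xs = [0, 1, 2, 3, 4, 5]
ys = [2 * x * x + 3 * x + 1 for x in xs]
a, b, c = quadratic_fit(xs, ys)
print(round(a, 6), round(b, 6), round(c, 6))
print(round(r_squared(xs, ys, a, b, c), 6))
```

With real, noisy area data the R² falls below 1, and values above 0.3 are treated as meaningful per the criterion cited in the text.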
Simulation condition settings
The aforementioned architectural cases can be classified into five typical layouts: parallel, L-shaped enclosure, U-shaped enclosure, square enclosure, and segmented enclosure. Since the segmented enclosure involves multiple building units and has a relatively low proportion among exhibition building cases, this study focuses on the first four single-unit layouts (Table 2). In the diagrams, the white regions represent the buildings, while the grid regions represent the courtyards. According to the "Energy-saving Design Standard for Public Buildings" (GB50189-2015), the window-to-wall ratio for each orientation of a building should not exceed 0.70 41 . Additionally, to meet the requirements of ventilation, heat dissipation, and winter wind protection, public buildings in the Hangzhou area should have a window-to-wall ratio greater than 0.10 16 , with a preference for courtyard layouts facing south, west, and east 42 . Moreover, based on the "Daylighting Design Standard for Buildings" (GB/T 50033), to ensure an illuminance of 200 lx to 300 lx on the exhibition hall floor and to avoid glare and the adverse effects of direct sunlight on the exhibition experience, the window-to-wall ratio of the west-facing side is uniformly set at 0.10.
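The window-to-wall ratio constraints just described (a 0.70 ceiling per orientation from GB50189-2015, a 0.10 floor for Hangzhou, and a west facade fixed at 0.10) can be captured in a small validation helper. A sketch, assuming a simple orientation-to-ratio mapping of the author's invention:

```python
# Window-to-wall ratio (WWR) limits used in this study: a 0.70 ceiling per
# orientation (GB50189-2015), a 0.10 floor for ventilation in Hangzhou,
# and a west facade fixed at 0.10 to limit glare and western sun exposure.
WWR_MIN, WWR_MAX, WWR_WEST = 0.10, 0.70, 0.10

def validate_wwr(wwr_by_orientation):
    """Return a list of constraint violations for a candidate facade design."""
    problems = []
    for orientation, wwr in wwr_by_orientation.items():
        if orientation == "west":
            if abs(wwr - WWR_WEST) > 1e-9:
                problems.append(f"west WWR must be fixed at {WWR_WEST}, got {wwr}")
        elif not (WWR_MIN <= wwr <= WWR_MAX):
            problems.append(f"{orientation} WWR {wwr} outside [{WWR_MIN}, {WWR_MAX}]")
    return problems

print(validate_wwr({"south": 0.40, "north": 0.35, "east": 0.30, "west": 0.10}))  # []
```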
From the above cases, it can be observed that the first-floor area of exhibition buildings is concentrated in the range of 2000 m² to 13,000 m². To further investigate the correlation between first-floor area and energy consumption, the first-floor area was divided into twelve experimental groups ranging from 2000 to 13,000 m². Equation (1) was then used to calculate the corresponding courtyard area (Yn) from the first-floor area (Xn), as shown in Table 3.
The energy consumption simulation software used in this study was DesignBuilder V6.1. The meteorological parameters of Hangzhou, a typical city in the Yangtze River Delta region, were selected as the simulation conditions (Tables 4, 5, 6). The activity mode chosen was "Display and Public Areas" under the category "Libraries/Museums/Galleries". The personnel density, lighting system, ventilation conditions, and indoor temperature all
The impact of window-to-wall ratio on energy consumption
In order to investigate the variation of the minimum energy-efficient window-to-wall ratio among the four layout types under different first-floor areas, four layout models were constructed in DesignBuilder (Table 7). The first-floor areas ranged from 2000 to 13,000 m², giving 12 groups in total. The simulation calculated the annual energy consumption of the building model with a first-floor area of 2000 m², considering window-to-wall ratios from 0.1 to 0.7 (with the west-facing window-to-wall ratio fixed at 0.1). The results are shown in Fig. 3.
From the figure above, it can be observed that when the first-floor area is 2000 m², the annual energy consumption per unit area first decreases and then increases as the window-to-wall ratio increases. Among the four layout types, the parallel layout has the highest minimum energy-efficient window-to-wall ratio and the square layout the lowest, in the order parallel > L-shaped > U-shaped > square. Additionally, from the upward trend of the curve, it can be inferred that energy consumption does not vary significantly near the minimum energy-efficient window-to-wall ratio but increases rapidly as the ratio deviates further from it. The square layout shows the largest variation, indicating a significant impact of the window-to-wall ratio on the envelope of the square layout. The simulation results of the 12 experimental groups are consistent with these conclusions.
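Given simulated annual energy use at each candidate window-to-wall ratio, the minimum energy-efficient ratio is simply the argmin of the swept curve. A sketch with illustrative (hypothetical) numbers, not the study's simulation output:

```python
def optimal_wwr(results):
    """Return the (wwr, annual kWh/m^2) pair with the lowest simulated
    energy use from a swept window-to-wall-ratio curve."""
    return min(results, key=lambda pair: pair[1])

# Hypothetical sweep for one layout type (illustrative numbers only):
curve = [(0.1, 118.0), (0.2, 112.5), (0.3, 109.8), (0.4, 110.6),
         (0.5, 113.9), (0.6, 118.7), (0.7, 124.9)]
best_wwr, best_energy = optimal_wwr(curve)
print(best_wwr, best_energy)  # 0.3 109.8
```

In the study this selection is repeated for each of the 12 floor-area groups and each of the four layout types to trace how the optimal ratio shifts with building size.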
To further investigate how the minimum energy-efficient window-to-wall ratio varies with first-floor area, the minimum energy-efficient window-to-wall values identified above were used in simulation experiments; the results are shown in Fig. 4. As the first-floor area increases, the magnitude of the change in energy consumption per unit area gradually decreases. When the first-floor area is ≥ 9000 m², the difference in energy consumption per unit area is below 2 kWh/m², indicating a minimal impact on total annual energy consumption. This suggests that as the first-floor area increases, the change in energy consumption per unit area gradually approaches zero.
As shown in Fig. 5, the minimum energy-efficient window-to-wall ratio increases with the first-floor area. When the first-floor area is less than 9000 m², the minimum energy-efficient window-to-wall ratio increases rapidly; when it is greater than or equal to 9000 m², the change stabilizes. The difference in annual energy consumption per unit area is below 2 kWh/m², indicating minimal impact on total annual energy consumption. When the first-floor area is 9000 m², the minimum energy-efficient window-to-wall ratios for the parallel, L-shaped, U-shaped, and square layouts are 39.7%, 38.6%, 32.6%, and 30.6%, respectively. Fig. 6 compares the curves of these layouts at window-to-wall ratios of 39.7%, 38.6%, 32.6%, and 30.6% with the original minimum energy-efficient window-to-wall ratio. When the first-floor area is greater than or equal to 9000 m², the two curves almost overlap, indicating the existence of a critical value of the minimum energy-efficient window-to-wall ratio in exhibition building design. This value provides a crucial reference for low-energy, emissions-reducing design of exhibition buildings.
In conclusion, after comparing the simulation results of the 12 model groups, the annual energy consumption per unit area of exhibition buildings exhibits a decreasing-then-increasing trend with increasing window-to-wall ratio. This indicates the existence of a minimum energy-efficient window-to-wall ratio, with the parallel layout having the highest ratio and the square layout the lowest, in the order parallel > L-shaped > U-shaped > square. Variation in the window-to-wall ratio has a particularly significant impact on the square layout. When the first-floor area is less than 9000 m², the minimum energy-efficient window-to-wall ratio for the four layout types increases as shown in the graph. When the first-floor area is greater than or equal to 9000 m², the minimum energy-efficient window-to-wall ratios for the parallel, L-shaped, U-shaped, and square layouts are 39.7%, 38.6%, 32.6%, and 30.6%, respectively. Furthermore, under this window-to-wall ratio condition, the annual energy consumption per unit area remains relatively constant as the first-floor area increases. This window-to-wall ratio is applicable to walls of different orientations in exhibition buildings, consistent with energy-saving design specifications and avoiding excessive solar heat gain and glare.
The impact of building orientation on energy consumption
In order to investigate the energy consumption trends of the four layout types under different orientations, models were built in DesignBuilder with the layout settings shown in Table 8. Since the courtyard side requires the maximum amount of view windows, the window-to-wall ratio of the courtyard-facing walls was set to 0.6, while that of the remaining walls was set to the optimal value for each layout type identified above. Energy consumption simulations were conducted for the 12 experimental groups, each consisting of the four layout types. Considering the different courtyard orientations, a total of 20 plan forms were examined. The simulation results for a first-floor area of 2000 m² are shown in Table 8. From the data in the charts and tables, the west-facing layouts of the parallel, L-shaped, and U-shaped enclosures exhibit the highest annual energy consumption. An excessive window-to-wall ratio on the west side can significantly contribute to heat loss 43 , making it unsuitable to place the courtyard on the west side. The higher energy consumption of the south-tilted-west and south-tilted-east layouts of the square enclosure is due to the larger window projection area facing west. Further analysis considering massing and breakage design is required in practical engineering.
From the annual total energy consumption, it can be observed that the orientation of the courtyard is an important factor affecting the energy consumption of exhibition buildings. According to Table 6, the parallel enclosure designated Type As has the lowest energy consumption, which increases when the orientation deviates towards the east or west. The L-shaped enclosure designated Type Bs likewise has the lowest energy consumption, which increases when the orientation deviates towards the east or west. The U-shaped enclosure designated Type Cs has the highest energy consumption, which decreases when the orientation deviates towards the east or west, with Type Ce having the lowest energy consumption among them. In the square enclosure, the west-facing window projection areas of Types Dsw and Dse are larger, resulting in higher energy consumption than Types Dw, Ds, and De. To further analyze the reasons for the variation of energy consumption with orientation, Fig.
7 is provided. The main reason for the decrease in energy consumption in Types As and Bs is the reduction in heating energy consumption during winter, by approximately 5% and 3%, respectively. This indicates that for the parallel and L-shaped enclosures, a south-oriented courtyard better maintains indoor temperatures during winter and reduces heating energy consumption. For Type Ce, the main reason for the decrease in energy consumption is the reduction in cooling energy consumption during summer, by approximately 11%, which far exceeds the 1% increase in heating energy consumption during winter. Therefore, an east-oriented courtyard is more suitable for the U-shaped enclosure. The variation in summer cooling energy consumption is greater for square enclosures, as they are more influenced by western sun exposure. Types De, Ds, and Dw have lower energy consumption than Types Dse and Dsw, which have larger west-facing projection areas. In conclusion, the parallel, L-shaped, and U-shaped enclosures are not suited to placing the courtyard on the west side, while the square enclosure should minimize the window projection areas on the east and west orientations. The south-oriented courtyard in the parallel and L-shaped enclosures effectively reduces winter heating energy consumption, while the east-oriented courtyard in the U-shaped enclosure significantly reduces summer cooling energy consumption. Therefore, a south-oriented courtyard is more suitable for the parallel and L-shaped layouts, an east-oriented courtyard is more suitable for the U-shaped layout, and east-, west-, or south-oriented courtyards are more energy-efficient for the square enclosure. These conclusions are applicable to exhibition buildings with a first-floor area ranging from 2000 to 13,000 m², although the influence of orientation on energy consumption weakens as the first-floor area increases.
Example verification: Liangzhu Culture Museum
The Liangzhu Culture Museum, completed in 2018, is located on the banks of the canal in the Liangzhu Cultural Zone in Hangzhou. The project occupies an area of approximately 46,595.9 m², with an exhibition area of around 4000 m² and a building height of 14.42 m. The functional areas are shown in Fig. 8, with the outdoor space highlighted in green. The first-floor area of the museum is approximately 6700 m², and the courtyard space occupies about 2000 m², which aligns with the non-linear functional relationship described above.
However, in the actual operation of the museum, because of the building's excessive height, the architects installed the air-conditioning system at the top of the exhibition halls in order to achieve a clean indoor space. This not only caused inconvenience during the initial equipment installation but also reduced the cooling and heating effectiveness of the air-conditioning system during later operation, causing significant energy waste. A performance simulation of the building's energy consumption using DesignBuilder (Fig. 9) found that the winter heating and cooling energy consumption of the exhibition building was high for this region. The specific values are shown in Table 9.
To achieve good indoor-outdoor interaction and low energy consumption in the Liangzhu Culture Museum, the following optimization strategies are proposed from the perspectives of orientation and window-to-wall ratio, as shown in Fig. 10. Firstly, based on Fig. 4, the ground-floor area of the museum is 6700 m². For Courtyards ①, ②, and ⑤, which are in a parallel layout, the optimal window-to-wall ratio for minimum energy consumption is between 38.7% and 39.2%. For the square-enclosed Courtyard ③, the optimal window-to-wall ratio is between 29.1% and 29.8%, and for the U-shaped-enclosed Courtyard ④, between 31.1% and 31.8%. In addition, considering that exhibition halls do not have opening windows and that extensive west-facing glazing should be avoided, the window-to-wall ratio of the exhibition halls' exterior walls and of west-facing walls is set to 10%. Secondly, in terms of orientation, the long side of the building should preferably face south to minimize the impact of west-facing sunlight. Since the project has already been completed and changing the orientation would also affect the outdoor wind environment, the corresponding energy loss should instead be mitigated by adjusting the window-to-wall ratio; in future bidding and design processes, orientation can be given priority.
As shown in Table 10, the DesignBuilder software provides more intuitive data reflecting the changes in energy consumption before and after the design optimization. After the optimization, the annual heating energy consumption can be reduced by 92,999.51 kWh, the annual cooling energy consumption by 12,865.83 kWh, the annual lighting energy consumption by 1634.54 kWh, and the annual internal power equipment consumption by 1747.29 kWh. The total energy savings for the year amount to 109,247.17 kWh, an energy saving rate of approximately 7.0%. By calculation, 109,247.17 kWh is equivalent to the energy produced by the combustion of 13,437.06 kg of standard coal. Converted to smokeless washed anthracite coal (conversion coefficient 0.9000 relative to standard coal), this is 14,930.07 kg. The market price of smokeless washed anthracite coal in China is approximately 970 yuan/ton, so after the design optimization the annual energy cost can be reduced by approximately 14,482 yuan.
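The savings arithmetic above can be checked end to end. The kWh-per-kilogram energy content of standard coal is back-derived from the paper's own figures (≈8.13 kWh/kg) and should be treated as an assumption of this sketch:

```python
# Reproduce the reported savings arithmetic. The standard-coal energy factor is
# back-derived from the paper's own numbers, not an independently sourced value.
savings_kwh = {
    "heating": 92_999.51,
    "cooling": 12_865.83,
    "lighting": 1_634.54,
    "equipment": 1_747.29,
}
total_kwh = sum(savings_kwh.values())                    # 109,247.17 kWh

KWH_PER_KG_STANDARD_COAL = total_kwh / 13_437.06         # ~8.13, implied by the text
standard_coal_kg = total_kwh / KWH_PER_KG_STANDARD_COAL  # 13,437.06 kg
anthracite_kg = standard_coal_kg / 0.9000                # conversion coefficient 0.9000
cost_yuan = anthracite_kg / 1000 * 970                   # 970 yuan per tonne

print(round(total_kwh, 2), round(anthracite_kg, 2), round(cost_yuan))
```

The totals match the text: 109,247.17 kWh, 14,930.07 kg of anthracite, and roughly 14,482 yuan per year.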
Scope and methodology of the study
Firstly, the sample size of well-known exhibition architecture cases from both domestic and international sources is crucial for the scientific validity of this research 8. Considering the feasibility of data collection and on-site investigations during the actual research process 17, this study covers 48 exhibition architecture works, mainly designed by Pritzker Prize laureates and renowned architectural design teams. This approach aims to support a more comprehensive, in-depth, and reliable analysis of exhibition architecture worldwide 12,22. Similar studies typically draw on a minimum of 30 cases to ensure the scientific rigor of the data analysis. Therefore, based on the examination of 48 well-known exhibition architecture cases from both domestic and international sources, this paper proposes a nonlinear quadratic functional relationship between the ground floor area and the courtyard space area. This "data experience" approach offers architects a reference for exhibition architecture design. However, validating the feasibility of the function itself requires support from additional case studies. To address this, in subsequent research the author will continue to provide more reliable data support through site visits, multi-channel data collection, and similar methods. Secondly, there are certain limitations regarding the scale and form characteristics of the research objects 25. The literature shows that, owing to the technical limitations of computer software and the difficulty of quantifying complex forms, many studies have resorted to simplifying architectural forms and summarizing typical layouts for simulation-based research 44. In this study, we focus on the relationship between the ground floor area and the courtyard area, primarily examining regular architectural forms with land sizes ranging from 10,000 to 50,000 square meters. This is because the outdoor wind environment
and building energy consumption are more likely to be influenced by such architectural typologies. However, specific research on complex architectural forms, irregular courtyard spaces, the number of courtyards, and elevated building levels has not yet been conducted; this will be a direction of future research. In our subsequent preliminary studies, we are encouraged to find that quantitatively parameterizing layout factors and conducting quantitative simulation analysis of complex forms from a specific entry point offers a fresh perspective. (2) Under the minimum energy-efficient window-to-wall ratio condition, the annual per-unit-area energy consumption remains relatively constant as the ground floor area increases. (3) As the ground floor area increases, the impact of orientation on energy consumption decreases. Parallel and L-shaped enclosures are more suitable for south-facing courtyards, while U-shaped enclosures with the courtyard placed on the east side are preferable. None of the mentioned layouts are suitable for a courtyard placed on the west side. For the square enclosure, analysis of the window projection on the west side is needed, aiming to minimize energy consumption loss by reducing the west-facing window projection surface. In conclusion, this study provides a reference for the design of low-energy exhibition buildings with a good indoor-outdoor interactive relationship. It should be noted that this research still has certain limitations. The case study land sizes range from 10,000 to 50,000 square meters and the forms are regular, without specific research on complex forms and multiple building clusters (such as segmented enclosures). Additionally, the square enclosure shows similar energy consumption results for east, west, and south orientations in the simulation, but in actual projects the situation becomes more complex owing to the need for block interruptions and the establishment of elevated levels to ensure building use. This will be the direction of future research.
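The proposed nonlinear quadratic relationship between ground floor area X and courtyard area Y can be recovered from surveyed cases by ordinary curve fitting. The sketch below uses three invented sample points and Cramer's rule; the coefficients (a = 2e-5, b = 0.08, c = 300) are placeholders and do not reproduce the function fitted to the paper's 48 cases.

```python
# Sketch: recover a quadratic Y = a*X^2 + b*X + c linking ground floor area X
# (m^2) to courtyard area Y (m^2). All numbers are invented for illustration.
def fit_quadratic(points):
    """Solve for (a, b, c) through three exact (x, y) points via Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = points

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[x1 * x1, x1, 1], [x2 * x2, x2, 1], [x3 * x3, x3, 1]]
    d = det3(A)
    coeffs = []
    for i in range(3):  # replace column i with the y vector (Cramer's rule)
        M = [row[:] for row in A]
        for row, yv in zip(M, (y1, y2, y3)):
            row[i] = yv
        coeffs.append(det3(M) / d)
    return tuple(coeffs)  # (a, b, c)


# Three hypothetical surveyed cases lying on y = 2e-5 x^2 + 0.08 x + 300:
cases = [(x, 2e-5 * x**2 + 0.08 * x + 300) for x in (2000, 5000, 9000)]
a, b, c = fit_quadratic(cases)
```

With noise-free points the quadratic is determined exactly; a real fit over the 48 cases would instead use least squares across all points.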
The simulation model complied with the "Energy-saving Design Standard for Public Buildings" (GB 50189-2015) and the "Daylighting Design Standard for Buildings" (GB 50033-2013). The air-conditioning system employed fan coil units with fresh air systems and air-cooled chillers. The target illuminance for the work surface was set at 200 lx, with a lighting energy consumption of 13 W/m².
Figure 2. Functional relationship between first floor building area X and courtyard area Y.
Figure 4. Correlation between standard floor area and energy consumption per unit area.
Figure 7. Energy consumption situations with a first-floor area of 2000 square meters.
Figure 9. The energy consumption simulation results of the Liangzhu Cultural Museum.
Figure 10. The before and after comparison of the form with optimized window-to-wall ratio.
Table 1. Data of 48 well-known exhibition buildings.
Table 2. Different layout model settings.
Table 3. Areas of the 12 experimental groups.
Table 4. Building model parameters.
Table 6. Exhibition building opening schedule.
Energy consumption comparison curve between the minimum energy-efficient window-to-wall ratio and the 9000 m² window-to-wall ratio.
Table 9. Specific values of energy consumption of the Liangzhu Culture Museum.
Table 10. Comparison of energy consumption before and after optimization. Annual energy savings = total annual energy consumption (original solution) − total annual energy consumption (after optimization).
Epigenomic priming of immune genes implicates oligodendroglia in multiple sclerosis susceptibility
SUMMARY
Multiple sclerosis (MS) is characterized by a targeted attack on oligodendroglia (OLG) and myelin by immune cells, which are thought to be the main drivers of MS susceptibility. We found that immune genes exhibit a primed chromatin state in single mouse and human OLG in a non-disease context, compatible with transitions to immune-competent states in MS. We identified BACH1 and STAT1 as transcription factors involved in immune gene regulation in oligodendrocyte precursor cells (OPCs). A subset of immune genes presents bivalency of H3K4me3/H3K27me3 in OPCs, with Polycomb inhibition leading to their increased activation upon interferon gamma (IFN-γ) treatment. Some MS susceptibility single-nucleotide polymorphisms (SNPs) overlap with these regulatory regions in mouse and human OLG. Treatment of mouse OPCs with IFN-γ leads to chromatin architecture remodeling at these loci and altered expression of interacting genes. Thus, the susceptibility for MS may involve OLG, which therefore constitutes novel targets for immunological-based therapies for MS.
INTRODUCTION
Genome-wide association studies (GWASs) have allowed the identification of hundreds of single-nucleotide polymorphisms (SNPs) associated with susceptibility for multiple sclerosis (MS), many of which are located near genes expressed in the central and peripheral immune system (International Multiple Sclerosis Genetics Consortium, 2019). The etiology of MS is thought to involve malfunction of peripheral immune cells, which migrate to the central nervous system (CNS) and target oligodendroglia (OLG)-derived myelin (Sen et al., 2020), and many of the current therapeutical approaches in MS target different modes of action of immune cells (Tintore et al., 2019). Oligodendrocyte precursor cells (OPCs) in the adult CNS are recruited to MS lesions and are thought to contribute to remyelination during disease remission (Franklin and Ffrench-Constant, 2017), although this capacity is hindered upon disease progression (Yeung et al., 2019). OPCs transition to an immune-like state in the context of MS and demyelination (Absinta et al., 2021; Falcão et al., 2019; Fernández-Castañeda et al., 2020; Jäkel et al., 2019; Kirby et al., 2019). Immune OPCs can present antigens to CD4 and CD8 T cells (Fernández-Castañeda et al., 2020; Kirby et al., 2019).
Here, we have investigated the chromatin accessibility (CA) landscape at single-cell level in OLG in the experimental autoimmune encephalomyelitis (EAE) mouse model of MS, using single-cell assay for transposase accessible chromatin using sequencing (scATAC-seq) alone and in combination with scRNA-seq. We found that a cohort of immune genes increases CA in the context of EAE, whereas others already exhibit open chromatin in control (Ctr)-OLG, suggesting chromatin priming for these genes. Interferon gamma (IFNγ) leads to their transcriptional activation, by a combination of increased CA, changes in the chromatin architecture, removal of histone marks, and/or regulation of the binding and activity of the transcription factors signal transducer and activator of transcription (STAT)1 and BTB and CNC homology (BACH)1. Single-cell multi-ome analysis of the healthy human brain indicated that chromatin priming occurs in several neural cell types. Moreover, MS susceptibility SNPs overlap with CA regions in OLG, some of which associated with genes regulated by IFN-γ, suggesting that MS risk might not be exclusively associated with immune cell types.
Differential CA allows identification of disease-specific OLG in the EAE mouse model of MS
We profiled CA of OLG at the single-cell level in EAE by performing two iterations of scATAC-seq on the CNS of Sox10:Cre-RCE:LoxP(EGFP) mice (Matsuoka et al., 2005; Sousa et al., 2009) induced with MOG35-55 peptide in complete Freund's adjuvant (CFA). The tissue was collected at the peak of EAE, score 3, or from control mice treated with CFA alone (CFA-Ctr) (Figure 1A). We sorted and pooled GFP+ (labeling OLG) and GFP− cells 4:1 from freshly dissociated spinal cords of 4 CFA-Ctr and 2 EAE mice, before tagmentation and droplet generation (10x Genomics Chromium scATAC-seq, Figure 1A). We also used plate-based scATAC-seq (Pi-ATAC [Chen et al., 2018]), with fixed dissociated GFP+ and GFP− cells from brains and spinal cords of 2 CFA-Ctr and 3 EAE mice tagmented and sorted in single wells, prior to barcoding and amplification (Figure 1A).
Clustering of the 10x scATAC-seq dataset, considering genome-wide differences in CA, led to the identification of 20 clusters ( Figure 1B). CA at the Sox10 locus was detected in all clusters except clusters 4, 10,12,15,16,17, and 18 ( Figure S1A). Clusters 4, 10, 15, and 17 presented CA at the Aif1 locus (marker for microglia and related cells, which might include macrophages [MiGl]). CA at the Ptprz1 (marker of OPCs) and Mog (marker of mature oligodendrocyte [MOL]) loci was detected in smaller distinct subsets of clusters presenting CA at Sox10 ( Figure S1A). Notably, both OPC and MOL populations derived from EAE mice clustered separately from CFA-Ctr animals, whereas MiGl came exclusively from EAE ( Figures 1C and 1D).
CA was observed mainly in intergenic (26.43%), intronic (48.83%), and promoter regions (13.84%) (Figure S1B). To classify cell types, we integrated EAE scRNA-seq data (Figure S1C) with the scATAC-seq gene activities over promoter regions by identifying shared correlation patterns (Butler et al., 2018). We classified MiGl, vascular leptomeningeal cells (VLMCs), pericyte-like cells (PLCs), OPCs, and different MOL populations, segregated between CFA-Ctr and EAE (Figures 1D, S1D, and S1E). Similar results were obtained using Pi-ATAC, with a stronger EAE segregation in the spinal cord than in the brain (Figure S1F). Genes expressed in specific populations showed enriched CA in the scATAC-seq populations, for example, Pdgfra and Ptprz1 for OPCs, Mbp and Plp1 for MOLs, and Aif1 and Cxcr3 for MiGl (Figure S1G). These results indicate that OLG derived from EAE mice have an altered CA state.
Increased CA at promoters and enhancers of immune genes in single OPCs and MOLs in EAE mice
We inspected the genes closest to peaks differentially accessible in different OLG populations between CFA-Ctr and EAE mice, performing gene ontology (GO) analysis with ClueGO (Bindea et al., 2009) and the Genomic Regions Enrichment of Annotations Tool (GREAT; McLean et al., 2010) (Table S1). Genes nearest to the regions with higher CA in EAE-OPCs were involved in "neuroinflammatory response," whereas in MOL1/2-EAE they related to "response to molecule of bacterial origin," among others (Figure S2A; Table S1), and in MOL5/6-EAE to "positive regulation of cytokine-mediated signaling pathway" and "response to virus," among others (Figure S2A; Table S1). These GO terms indicate that CA changes in EAE-OLG populations are related to immune pathways.
We then investigated which gene loci have CA both in OPCs/OLs and MiGl, the resident immune cells in the CNS, and which are specific for each of these cell types. We found that while there is overlap between MiGl and OLG, most genes were unique to MiGl ( Figure S2B). Gene set enrichment analysis (GSEA) (Borcherding et al., 2021) indicated that biological processes involved in IL6/JAK/STAT3 and TNF-α (via NF-κB) signaling are active specifically in MiGl, whereas IFN-γ and IFN-α responses are also induced in OPCs and MOLs in EAE (Figures 1E and S2C). The chromatin at gene loci involved in the inflammatory response is open in MiGl and, to some extent, in Ctr-and EAE-OPCs, unlike Ctr-or EAE-MOLs ( Figure 1E), suggesting a differential chromatin immune profile within OLG, depending on their differentiation stage.
Cell-type-specific expression has been associated with distal enhancers, while promoters have been suggested to provide limited information (Heintzman et al., 2009;Thurman et al., 2012). We observed that 11.48% of differential CA peaks between OLG from EAE and CFA-Ctr mice were at promoter regions ( Figure S1B). Tnfrsf1a ( Figures 1F and S2D), a MS susceptibility gene encoding for TNF receptor superfamily member 1A (De Jager et al., 2009), had increased CA at the promoter in both MOL1/2-EAE and MOL5/6-EAE populations, which also presented increased gene expression in EAE scRNA-seq data (Figure S2E). Similar correlations could be observed at the major histocompatibility complex (MHC)-I locus (Psmb9/Tap1 and Psmb8; Figures 1F, S2D, and S2E). Ciita, a master regulator of the MHC-II pathway, had EAE-OPC-specific CA at one intronic promoter (promoter IV, which is IFN-γ-responsive and active in non-professional antigen presenting cells [Muhlethaler-Mottet et al., 1997;Nikcevich et al., 1999]), whereas MiGl had CA at several promoter sites ( Figure S2D). Thus, CA at promoters provides relevant information to predict cell types and possibly cell states.
Furthermore, 33.65% and 49.02% of differential CA peaks between OLG from EAE and CFA-Ctr mice were at intergenic and intronic regions, respectively (Figure S1B). Promoter-enhancer contacts can be estimated from scATAC-seq data by assessing chromatin co-accessibility. We calculated peak-to-gene interactions by correlating peak CA and gene expression data with ArchR (Pliner et al., 2018), allowing prediction of the overall promoter-enhancer interactome of OLG in EAE and CFA-Ctr. We observed CA at putative enhancers distal to immune genes in different EAE populations. Putative enhancers at the Tnfrsf1a locus that linked to the Tnfrsf1a promoter had CA in MiGl only, or in all EAE populations (Figures 1F and S2D). Although the promoter of the MHC-I pathway gene Tap2 did not have differential CA, it connected with a putative enhancer downstream that did present increased CA in EAE-OLG and linked to promoters of other MHC-I pathway genes in the same locus (Figures 1F and S2D). Similarly, the MHC-II gene H2-ab1, which did not present increased CA at the transcription start site (TSS), linked to putative enhancers upstream and downstream with CA in MiGl and EAE-OPCs, or in all EAE populations (Figures 1F and S2D). Thus, EAE-derived OLG have increased CA both at the promoters and at the enhancers of genes related to immune pathways.
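Peak-to-gene linkage of this kind rests on correlating a peak's accessibility with a nearby gene's expression across cells or cell groups. Below is a minimal Pearson-correlation sketch with invented per-group values; the 0.45 cutoff mirrors ArchR's default `corCutOff` for peak-to-gene links, but the data and variable names are hypothetical.

```python
from math import sqrt


def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Hypothetical values across 8 aggregated cell groups: accessibility of one
# putative enhancer peak, and expression of a candidate target gene.
peak_accessibility = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85, 0.3, 0.7]
gene_expression = [0.0, 0.1, 0.05, 1.2, 1.5, 1.3, 0.4, 1.0]

r = pearson(peak_accessibility, gene_expression)
is_linked = r > 0.45  # ArchR's default correlation cutoff for peak-to-gene links
```

In practice the correlation is computed over many pseudo-bulk groups with multiple-testing control; this sketch only shows the core statistic behind a single link call.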
Primed CA of immune genes in single OPCs and MOLs
We observed that several immune genes, such as Psmb9, Tap1, Tap2, and H2-ab1, with increased expression in EAE-OLG, were accessible in Ctr-OLG at their promoters ( Figures 1F and S2D), suggesting chromatin priming. OPCs had more CA at immune genes than MOL in Ctr conditions (e.g., at the Tap1 and Tnfrsf1a loci; Figures 1F and S2D), which suggests that OPCs might be more amenable to transition to immune states than MOLs. We then assessed the correlation of CA over 500-bp promoter regions (gene activity score) with gene expression from the scRNA-seq populations. Genes with increased expression in EAE and increased CA (Type1) could be divided into (1) genes with no or very low CA in Ctr, and (2) genes already presenting CA in Ctr. Top GO terms in OPCs (EAE versus Ctr) for Type1a genes were "response to IFN-β" and "negative regulation of viral genome replication," whereas Type1b genes were involved in "antigen processing and presentation of peptide antigen" (Figure 2A; Table S2).
Genes with increased expression in EAE-OPCs, but no change in CA (Type2), could also be divided in (1) genes that had high CA in Ctr and in EAE (Type2a; GO terms related to "mitotic cell cycle" and "DNA metabolic process" [ Table S2]) and (2) genes with low CA in both Ctr and EAE (Type2b). Likewise, genes with higher expression in MOL1/2-EAE and MOL5/6-EAE, and increased (Type1) or similar (Type2) CA, are associated with immune processes (Figures 2A and S2F; Table S2). Type2a genes present a higher level of expression than Type1 genes in both OPCs and MOLs, which might be consistent with already open chromatin at promoters. Genes that lose expression in EAE-OPCs/MOLs, with GO terms related to the regulation of cilium assembly and metabolic processes, do not lose CA (Type3; Figures 2A and S2F).
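The Type1a/1b/2a/2b/Type3 partition described above is, at heart, a threshold rule on paired accessibility and expression changes. A toy classifier follows; the numeric thresholds (`ca_open`, `delta`) and the function name are invented placeholders, not the cutoffs used in the paper.

```python
def classify_gene(ca_ctrl, ca_eae, expr_ctrl, expr_eae,
                  ca_open=0.2, delta=0.1):
    """Toy version of the Type1a/1b/2a/2b/Type3 partition described above.

    ca_* are promoter gene-activity scores and expr_* mean expression in
    Ctr vs EAE; the thresholds are illustrative, not the paper's."""
    expr_up = expr_eae - expr_ctrl > delta
    expr_down = expr_ctrl - expr_eae > delta
    ca_up = ca_eae - ca_ctrl > delta
    if expr_up and ca_up:
        # chromatin closed in Ctr (Type1a) vs already accessible (Type1b)
        return "Type1a" if ca_ctrl < ca_open else "Type1b"
    if expr_up:
        # CA unchanged: primed-open in Ctr (Type2a) vs low CA throughout (Type2b)
        return "Type2a" if ca_ctrl >= ca_open else "Type2b"
    if expr_down:
        return "Type3"  # expression lost in EAE, CA retained
    return "unchanged"
```

For example, a gene with closed promoter chromatin in Ctr that gains both accessibility and expression in EAE falls into Type1a, while a primed gene whose accessibility is already high and unchanged but whose expression rises falls into Type2a.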
We then assessed the distribution of immune genes found in GO terms such as "immune response" (GO:0002250) and "immune system process" (GO:0002376) within the different RNA/CA types. Only a minority of these immune genes are expressed or present CA in OLG (Figure S2G; Table S2), consistent with the regulation of a small subset of immune genes in OLG when compared with MiGl (Figure S2B). Although up to a fourth of these genes were in the Type1a category, nearly half of the genes in OPCs, MOL1/2, and MOL5/6 in immune GO terms were found in the intermediate primed category (Type1b) and the primed category (Type2a), confirming that most of these immune genes are already primed at the chromatin level in OLG (Figure S2G; Table S2). We also analyzed the normalized aggregated scATAC coverage over the putative ArchR-predicted enhancers for Type1-Type4 genes (Figure 2A) and found that the primed state of the chromatin also applies to enhancers (Figure S2H). GSEA on the gene loci with accessible chromatin (Figures 2B and S3A) showed an enrichment within the hallmarks IFN-γ and IFN-α response for all the EAE-OLG populations, like the enrichment in MiGl (Figure S3B). Immune-related genes, such as Ifit2, Nlrc5 (Figure 2C), H2-Ab1, B2m, and Tap2 (Figure S3C), already exhibited CA in Ctr-OPCs and Ctr-MOLs, but with no or low expression in Ctr and induced expression in EAE. For other MHC-I-related genes, such as Psmb9, Tap1, and Psmb8, an increase in CA and an induction of expression occurred simultaneously in EAE-MOLs (Figure S3C), which confirms our previous findings with unimodal scATAC-seq and scRNA-seq. Thus, our analysis indicates that Ctr-OLG are already in a primed immune chromatin state, with the observed increased expression of primed immune genes in EAE most likely controlled by mechanisms other than CA.
IFN-γ modulates CA only in a subset of immune genes in OPCs
IFN-γ induces expression of immune genes in OPCs, resembling the phenotype observed in EAE (Kirby et al., 2019). The IFN-γ response is one of the hallmarks induced in EAE-OLG at the CA level. To investigate whether modulation of CA is involved in IFN-γ-mediated activation of immune transcriptional programs, we treated primary mouse OPCs with IFN-γ for 48 h and performed bulk ATAC-seq and RNA-seq. Interferon response genes Stat1, Stat2, and Irf1, and MHC-I and MHC-II genes H2-k1, H2-q7, H2-ab1, and H2-aa, among other genes involved in immune regulation, presented increased expression (Figures 3A and S4A-S4C).
There were fewer CA sites altered upon IFN-γ treatment (47 with log 2 fold change ≥1.5), when compared with the number of genes with altered expression (867) ( Figure 3A; Table S3). Nevertheless, we found similar GO terms when analyzing peaks within proximal (promoter) regions of genes, including "response to IFN-β" and "defense response to protozoan" ( Figure S4D; Table S3). We also found more changes in CA at annotated enhancers (463) ( Figure S4E; Table S3). Fewer genes were downregulated upon IFN-γ treatment (171 versus 696) ( Figure 3A), with GO terms related to system and CNS development ( Figure S4C; Table S3). Among these genes were Sox8, Myrf, and Plp1 ( Figure S4F), consistent with the negative impact of IFN-γ on OPC differentiation (Chew et al., 2005;Kirby et al., 2019). We did not find any loss of CA at their promoters or annotated enhancers ( Figure S4F).
A larger subset of the upregulated immune response genes already had CA at their promoter in control OPCs (272 Type2), and only a few genes altered their CA upon IFN-γ exposure (78 Type1) ( Figure 3B). We could again divide the Type1 and Type2 in two subgroups. The top GO terms for those type of genes that were already accessible in Ctr but had both increased CA and expression (Type1b, 56 genes) were "antigen processing and presentation of peptide antigen," "defense response to virus," and "response to IFN-β" ( Figure 3B; Table S3). Type2a genes, which had high CA in Ctr that did not change upon IFN-γ treatment while their expression was induced, had top GO terms as "positive regulation of proteolysis," "IκB kinase/NFκB signaling," and "positive regulation of apoptotic signaling pathway," and Type2b genes, which had low CA, had top GO terms as "cellular response to IFN-β" and "negative regulation of innate immune response" ( Figure 3B; Table S3). Some of the Type2 genes, such as Irf1, Stat3, and Ifit2, had enhanced CA within their gene body or in the regions upstream of the TSS, pointing toward promoter priming and specific enhancer regulation of expression. Other genes, such as the Gbp gene family, presented increases in both CA and gene expression upon IFN-γ treatment ( Figure 3C).
Although there were commonalities between the immune regulation observed in OPCs in vivo in EAE and the effects of IFN-γ on OPCs ex vivo, there were also differences within the different Types (Figures 2A and 3B; Tables S2 and S3). Intersection between OPCs in vivo in EAE and OPCs upon IFN-γ treatment shows low overlap between genes associated with the different Types, with Type2a genes showing the highest intersection: 22 genes shared between EAE-OPCs and IFN-γ-treated OPCs (Figure S4G). The environment surrounding OPCs in EAE is complex, owing to exposure to additional cytokines other than IFN-γ, which might contribute to the observed differences. Nevertheless, our data indicate that OPCs already exhibit primed CA at a large subset of immune genes, prior to inflammatory insults both ex vivo and in vivo.
Transcription factors involved in immune regulation have increased motif accessibility during the transition to immune OLG
Since transcription factors (TFs) are key modulators of transcription, we applied chromVAR (Schep et al., 2017) to the 10x scATAC-seq dataset to determine TF motif variability in EAE. As expected, we observed clusters of TFs of the Ets and AP-1/bZIP families with enriched accessible motifs in MiGl, whereas members of the SRY-related HMG-box (SOX) and basic helix-loop-helix (bHLH) families had increased motif accessibility (MA) in Ctr-OPCs and Ctr-MOLs ( Figure 4A; Table S4). We found differential TF activity for 147 (OPCs), 105 (MOL1/2), and 94 (MOL5/6) non-redundant motifs between EAE and Ctr, respectively (fold change ≥1; adjusted p value < 0.05), including FOS, SMARCC1, and TFs with known immunoregulatory functions, such as the TF BTB and CNC homology (BACH)1, BACH2, and the basic leucine zipper ATF-like TF BATF ( Figures 4A, 4B, S5A, and S5B; Table S4).
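chromVAR-style motif variability starts from a per-cell "raw deviation": observed reads in motif-bearing peaks versus the count expected from population-wide coverage alone. Below is a stripped-down sketch of that core quantity; chromVAR's background-peak correction and z-scoring are omitted, and all counts are invented.

```python
def raw_deviation(cell_counts, population_totals, motif_peaks):
    """Simplified chromVAR-style raw accessibility deviation for one cell.

    expected = cell sequencing depth x population-wide fraction of reads
    falling in motif-bearing peaks; returns (observed - expected) / expected."""
    depth = sum(cell_counts)
    pop_total = sum(population_totals)
    motif_frac = sum(population_totals[i] for i in motif_peaks) / pop_total
    expected = depth * motif_frac
    observed = sum(cell_counts[i] for i in motif_peaks)
    return (observed - expected) / expected


# Hypothetical counts over 4 peaks; peaks 0 and 1 carry the motif of interest.
pop = [100, 100, 100, 100]
neutral_cell = raw_deviation([10, 0, 5, 5], pop, [0, 1])  # reads as expected
active_cell = raw_deviation([12, 8, 0, 0], pop, [0, 1])   # motif peaks enriched
```

A positive deviation (as in `active_cell`) indicates that the cell's accessibility is skewed toward the motif-bearing peaks, the signal chromVAR z-scores against matched background peak sets.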
A subset of TFs in OLG showed low or no expression in both Ctr and EAE (67 TFs in OPCs, 136 in MOL1/2, and 210 in MOL5/6 were expressed in less than 50% of cells in both conditions), such as Bach2 in OPCs, Bach1 in MOL1/2, and Klf4 in MOL5/6 (Figure S5B), indicating that they will not be driving the MA changes. However, some of the identified TFs did exhibit increased expression in OLG from EAE mice (22 TFs in OPCs, 26 TFs in MOL1/2, and 21 TFs in MOL5/6 with delta EAE-Ctr expression > 0.1; Figure S5B). In addition, a subset of these TFs had similar expression in Ctr- and EAE-OLG (40 TFs in OPCs, 77 TFs in MOL1/2, and 187 TFs in MOL5/6 with delta EAE-Ctr expression < 0.1), such as Sox8 (in MOL1/2), Smarcc1 (in OPCs and MOL1/2, but not MOL5/6), and Nfix (Figure S5B). Thus, TFs such as SOX8, implicated in OLG development (Stolt et al., 2004) and MS susceptibility (International Multiple Sclerosis Genetics Consortium, 2019), might work in concert with TFs other than the ones they usually cooperate with during development and homeostasis, in order to regulate immune gene transcription in OLG in EAE/MS.
STAT1 and BACH1 are involved in IFN-γ-mediated regulation of immune genes in OPCs
BACH1, which regulates inflammatory macrophage differentiation (Igarashi et al., 2017), is expressed in both Ctr- and EAE-OPCs, presenting increased MA in the latter and, despite very low expression, also in MOL1/2 (Figures 4B and S5B). We transfected primary OPCs with siRNAs targeting Bach1 before treating with IFN-γ for 6 h (Figure S6A, n = 4). qRT-PCR and bulk RNA-seq of IFN-γ-treated OPCs upon Bach1 knockdown indicated downregulation of the target gene and upregulation of Hmox1 (Figures 4C and S6B-S6D; Table S5), consistent with the role of BACH1 as a transcriptional repressor of Hmox1 (Igarashi et al., 2017). We also observed an increased expression of MHC-I genes, such as H2-Q4 and H2-Q7, and the MHC-II pathway gene Cd74, upon IFN-γ treatment (Figures 4C, 4D, and S6D; Table S5). Thus, our data suggest that BACH1 negatively regulates IFN-γ-mediated induction of a subset of immune genes in OPCs.
We analyzed the binding of STAT1 in OPCs by performing CUT&Tag (Kaya-Okur et al., 2019). STAT1 binding at regulatory regions of the immune-related genes identified by bulk RNA-seq was increased in OPCs upon IFN-γ treatment ( Figures 4G and S6I). Moreover, genes that were up-or downregulated by STAT1 knockdown had increased binding of STAT1 in OPCs treated with IFN-γ compared with Ctr-OPCs ( Figures 4G, 4H, and S6I). Thus, our data indicate that TFs as BACH1 and STAT1 participate in the regulation of IFN-γ-induced immune gene expression in OPCs.
Enhancer-promoter contacts at immune genes in OPCs are altered upon IFN-γ treatment
Mechanisms other than increased CA must be required for the transcriptional activation of primed immune genes in the context of EAE/IFN-γ treatment. We first profiled H3K27ac with CUT&RUN (Skene and Henikoff, 2017) to identify active enhancers. An increase in H3K27ac was observed at enhancers and promoters of immune genes, such as Tgtp1/2, Zbp1, Gbp9, Cd74, Nlrc5, H2-aa, and H2-eb1 (Figure 5A; Table S6). CCCTC-binding factor (CTCF) is required for acute inflammatory responses in macrophages (Stik et al., 2020), and an increase in CTCF-mediated promoter-enhancer interactions at the MHC-I and MHC-II loci occurs in B cells (Majumder and Boss, 2010; Majumder et al., 2014). We observed increased binding of CTCF at enhancers and at the promoters of many immune genes (Figure 5A; Table S6).
To evaluate whether promoter-enhancer interactions were modulated by IFN-γ treatment, we performed HiChIP (Mumbach et al., 2016) targeting H3K27ac in OPCs treated with IFN-γ. We used the activity-by-contact (ABC) model (Fulco et al., 2019) to predict promoter-enhancer interactions in both Ctr-and IFN-γ-OPCs based on CA and H3K27ac-HiChIP. We found that 46.26% (322 genes, 223 upregulated and 99 downregulated) of differentially expressed genes upon IFN-γ treatment exhibited reconfiguration of the enhancer-promoters contacts ( Figure 5B; Table S6). For instance, we observed an increase in predicted interactions at the Psmb9-Tap2 locus ( Figure 5C). An enhancer downstream of Tap2 harbors a STAT1/STAT2 motif ( Figure S5C) with STAT1 binding in IFN-γ-OPCs ( Figure S6I), which also has increased CA and H3K27ac upon IFN-γ treatment and might be the major enhancer connecting with all the genes in the locus ( Figure 5C). The CA ( Figure 1F) and STAT2 MA ( Figure S5C) at this enhancer were also increased in EAE-OLG. Interactions between numerous enhancers and promoters in the neighboring H2-ab1-H2-eb1 loci were also increased in IFN-γ-OPCs ( Figure 5C).
We analyzed the normalized ATAC coverage over the putative enhancers predicted by ABC for Type1-Type4 genes ( Figure 3B). There was an increase in CA upon IFN-γ treatment versus Ctr in enhancers at Type1 genes, but we also observed an increase at Type2 genes ( Figure S7A), suggesting that CA at enhancers might play a role in transcriptional activation. These results suggest that altered CA, together with increased CTCF binding, H3K27ac deposition, and promoter-enhancer interactions, might contribute to the activation of an immune gene transcriptional program in primed OPCs.
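The activity-by-contact (ABC) model used above scores each candidate enhancer for a gene as activity × contact, normalized over all candidates for that gene, with activity commonly taken as the geometric mean of ATAC and H3K27ac signal (Fulco et al., 2019). The sketch below applies that formula to invented numbers; it is not the authors' pipeline.

```python
from math import sqrt


def abc_scores(elements):
    """Activity-by-contact sketch: for one gene, each element's score is
    activity x contact normalized over all candidate elements, where
    activity = geometric mean of the ATAC and H3K27ac signals."""
    products = [sqrt(atac * k27ac) * contact
                for atac, k27ac, contact in elements]
    total = sum(products)
    return [prod / total for prod in products]


# Hypothetical candidate enhancers near one promoter: (ATAC, H3K27ac, contact)
candidates = [(4.0, 9.0, 0.5),     # strong activity, moderate contact
              (1.0, 1.0, 1.0),     # weak activity, high contact
              (16.0, 4.0, 0.125)]  # strong activity, low contact
scores = abc_scores(candidates)
```

Here the first candidate dominates (score 0.6) because high activity and moderate contact outweigh either extreme alone, which is the intuition behind ABC's multiplicative form.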
Inhibition of EZH2, the enzyme responsible for the deposition of H3K27me3 ( Figure S7C), with EPZ011989 (Burr et al., 2019) (EZH2i) led to a reduction of H3K27me3 overall ( Figure S7D) and at specific genomic loci ( Figures S7E and S7F) and a derepression of TFs involved in specification and morphogenesis ( Figures 6D and S7G; Table S7). Importantly, EZH2i led to an upregulation of the expression of a subset of immune genes, such as H2-d1 and Tap1, whereas others remained repressed. Spiking EZH2-inhibited cells with IFN-γ for 6 h led to an increased upregulation of MHC-I and MHC-II genes H2-q6, H2-q7, H2-ab1, H2-aa, and H2-eb1 and genes involved in cytokine signaling including Cxcl9 and Cxcl11 ( Figures 6E, S7G, and S7H; Table S7). These results suggest that H3K27me3 removal synergizes with IFN-γ to promote immune gene transcription in OPCs.
SNPs and outside variants conferring susceptibility in MS are located at accessible regulatory regions in OLG in the mouse and human CNS
To investigate the CA of immune genes in the human brain, we performed simultaneous scATAC- and RNA-seq (10x Genomics multi-ome) on brain gray matter from two healthy individuals, identifying the major CNS cell types in both modalities (Figures 7A and S8A). Strikingly, the promoters of the MHC-I pathway genes TAP2, PSMB8, TAP1, PSMB9, and HLA-B were accessible not only in MIGL, but also in OPCs, MOLs, ASTRO, inhibitory neurons (INHNEU), and excitatory neurons (EXCNEU) (Figures 7B and 8B). Nevertheless, PSMB9 and PSMB8 were expressed only in MIGL, albeit at very low levels, whereas TAP2 was expressed in most cell types except MOLs and EXCNEU (Figure 7B). MHC-II-related genes showed greater cell-type specificity. Analyzing Type1 genes from mouse OLG (Figure 2A) in the human multi-ome dataset, we found that most genes are expressed and accessible in human MIGL and, to a lesser extent, in ASTRO, but not in OLG (except a small subset in OPCs), in concordance with their low expression in mouse Ctr-OPC and Ctr-MOL (Figure S8C). Type2 genes show the same expression pattern in human but are accessible in all cell types (Figure S8C). We also analyzed snATAC-seq from the adult human brain of 10 healthy individuals. As in the multi-ome data, we observed CA at the regulatory regions of MHC-I-related genes in OLG, but not at MHC-II genes (Figure S8B). Thus, our data suggest that chromatin priming at a subset of immune genes occurs in human OLG and in other neural cell types.
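The "accessible but not expressed" pattern described here can be made concrete as a simple per-gene classification from paired accessibility and expression values. A toy sketch with hypothetical normalized values and arbitrary thresholds (the study's Type1-Type4 classification is richer, combining conditions and cell populations):

```python
def classify_priming(genes, acc_thr=1.0, expr_thr=0.5):
    """Flag 'primed' genes: accessible promoter but low/no expression.

    genes: dict gene -> (promoter_accessibility, expression), both on
    arbitrary normalized scales; thresholds are illustrative only.
    """
    status = {}
    for gene, (acc, expr) in genes.items():
        if acc >= acc_thr and expr < expr_thr:
            status[gene] = "primed"     # open chromatin, silent gene
        elif acc >= acc_thr:
            status[gene] = "active"     # open chromatin, expressed
        else:
            status[gene] = "closed"     # inaccessible promoter
    return status

# Hypothetical (accessibility, expression) pairs for three loci.
demo = {"PSMB9": (2.3, 0.1), "TAP2": (1.8, 1.4), "HLA-DRB1": (0.2, 0.0)}
```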
Given that some of these immune loci bear SNPs associated with MS susceptibility, we assessed whether we could identify putative MS susceptibility loci associated with CA in OLG. We cross-referenced the coordinates of mouse OLG and MiGl peaks from the EAE scATAC experiments with the location of MS susceptibility SNPs (International Multiple Sclerosis Genetics Consortium, 2019) and outside variants (Factor et al., 2020). Linkage disequilibrium score regression (LDSC) (Finucane et al., 2018), multi-marker analysis of genomic annotation (MAGMA) (de Leeuw et al., 2015), and the genomic regulatory elements and GWAS overlap algorithm (GREGOR) (Schmidt et al., 2015) indicated an enrichment at chromatin-accessible regions and associated genes mainly in MiGl, consistent with the association of MS susceptibility with gene expression in MiGl (International Multiple Sclerosis Genetics Consortium, 2019) (Figures 7C and S8D; Table S8). We also found enrichment for OLG, although not at the same significance level as for MiGl (LDSC: MiGl versus All, coefficient p value 1.05E-6; MOL5/6 [Ctr versus EAE], coefficient p value 0.008834828; OPC [Ctr versus EAE], coefficient p value 0.048301018). Similar results were obtained with MAGMA, although only for the mouse scATAC-seq EAE dataset (Table S8). GREGOR analysis of the human multi-ome regulatory regions suggested an overrepresentation of MS-associated SNPs in ASTRO and OLIGO (Figure 7C), whereas the same analysis of the human scATAC data suggested enrichment in MiGl regulatory regions, but also in neural cells (Figure S8D; Table S8). This analysis also indicated an enrichment, to a lesser extent, in OLG from the mouse EAE and Ctr spinal cord (Figure 7C; Table S8). Moreover, mouse primary OPCs, regardless of treatment with IFN-γ, also presented enrichment of CA at SNPs associated with MS susceptibility (Figure 7C).
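At their core, these enrichment tools all start from the overlap of GWAS variant coordinates with cell-type peak sets. A minimal sketch of that intersection step, with hypothetical peak and SNP coordinates (LDSC, MAGMA, and GREGOR additionally model LD structure and matched background SNPs before computing enrichment statistics):

```python
from bisect import bisect_right

def count_snp_overlaps(peaks, snps):
    """Count how many SNPs fall inside accessible-chromatin peaks.

    peaks: dict chrom -> sorted, non-overlapping list of (start, end)
    snps:  list of (chrom, pos)
    """
    starts = {c: [s for s, _ in iv] for c, iv in peaks.items()}
    hits = 0
    for chrom, pos in snps:
        if chrom not in peaks:
            continue
        # Find the rightmost peak starting at or before the SNP.
        i = bisect_right(starts[chrom], pos) - 1
        if i >= 0 and peaks[chrom][i][0] <= pos < peaks[chrom][i][1]:
            hits += 1
    return hits

# Hypothetical microglial peak set and MS-associated SNP positions.
migl_peaks = {"chr17": [(34_000_000, 34_001_000), (35_500_000, 35_500_500)]}
ms_snps = [("chr17", 34_000_250), ("chr17", 35_499_000), ("chr1", 1_000)]
n_overlapping = count_snp_overlaps(migl_peaks, ms_snps)
```

Comparing such per-cell-type overlap counts against a matched background is what turns raw intersections into an enrichment test.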
Analysis of multi-ome and CA (scATAC-seq) data from adult human brain derived from healthy individuals revealed that these MS susceptibility SNPs overlapped with CA in diverse CNS cell types in a non-disease context (Figure 8A). We found that the AA_DQβ1_position_−5_L (HLA-DQB1/H2-Ab1) (Figure 8B), rs10918297 (UCK2/Uck2), and rs35703946 (IRF8/Irf8) SNPs overlap with OLG-specific ATAC peak regions in human and/or mouse (EAE) and are classified as Type2a genes for MOL1/2 (H2-Ab1, Uck2) and OPC (Irf8) (Tables S2 and S8). Irf8 is an intermediate primed gene (Type1b) in MOL1/2 and MOL5/6 (Table S2). In addition, SNPs located at the human MHC locus HLA-B (SNP ID AA_B_position_45_TK) and the non-MHC locus ZHX3 (SNP ID rs62208470) coincided with CA in human MiGl and OLG and at the corresponding mouse loci (H2-d1/H2-l and Zhx3) in both EAE and CFA-Ctr (Figures 8A, 8B, and S8B; Table S8). Furthermore, a putative enhancer region of H2-eb1 also presented CA in EAE-OLG, but neither in human OLG nor in MiGl (SNP ID rs67476479_CA at HLA-DRB1). The outside variant SNP rs6498169 (Factor et al., 2020), located between the loci of the master MHC-II regulator Ciita and Socs1, a major regulator of inflammation, also presented CA specifically in EAE-OPC and EAE-MOL, but not in Ctr-OLG or MiGl (Figure S8E; Table S8). Socs1 is defined as Type1b in MOL5/6 (Table S2). The non-MHC SNP rs7975763 within the Pitpnm2 locus overlapped mainly with CA in OPC-EAE (Figure S8E; Table S8). This suggests that the CA at these SNPs might increase only in MS patients and not in healthy individuals.
Our data suggest that these SNPs might be involved in the modulation of regulatory regions in OLG, leading to altered transcriptional output and ultimately to altered OLG function in the context of MS. We investigated potential TF-binding sites at specific SNPs by intersecting our results with predictions of SNPs with a potential effect on TF binding (SNP2TFBS) (Kumar et al., 2017). We found that the binding of TFs such as EGR1, SP1, ETS1, and SOX10 might be affected in OLG in EAE or in OPCs treated with IFN-γ (Table S8). Moreover, scanning the region 500 bp upstream and downstream of MS-associated SNPs, and filtering for expression (scRNA-seq), identified several other TFs whose binding might be affected by OLG cell state transitions in the context of EAE (Table S8).
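The SNP2TFBS-style reasoning, that one allele can create or destroy a TF-binding site, can be illustrated by comparing motif matches between the two alleles of a variant. A toy sketch using a consensus-string match and an invented GC-rich, EGR1-like motif; real predictions score position weight matrices rather than exact strings:

```python
import re

def allele_motif_hits(flank5, ref, alt, flank3, motif):
    """Count matches of a consensus motif (regex) for each SNP allele.

    A difference between the two counts suggests the variant may create
    or destroy a binding site. All sequences below are hypothetical.
    """
    counts = {}
    for allele in (ref, alt):
        seq = flank5 + allele + flank3
        counts[allele] = len(re.findall(motif, seq))
    return counts

# Hypothetical SNP where the alternative allele completes a GC-rich,
# EGR1-like consensus (GCGGGGGCG is used purely for illustration).
hits = allele_motif_hits(
    flank5="AAGCGGGG", ref="T", alt="G", flank3="CGTTA",
    motif="GCGGGGGCG",
)
```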
Genes associated with MS susceptibility SNPs can be modulated in OPCs by IFN-γ
We investigated whether IFN-γ treatment would affect the chromatin landscape and transcription of genes associated with MS SNPs and outside variants. The SNP located at H2-Ab1 (SNP ID AA_DQβ1_position_−5_L at HLA-DQB1) overlaps with high levels of H3K27me3 in OPCs (Figure S8F; Table S8). Upon IFN-γ treatment, H3K27me3 is removed at the SNP locus, together with an increase in H3K27ac, H3K4me3, CTCF binding, and interactions with an H2-ab1 enhancer (Figure S8F). IFN-γ also led to altered promoter-enhancer interactions in OPCs (Table S6).
Many SNPs are in intergenic regions; therefore, we investigated whether any SNP not assigned to a gene interacts with genes with altered expression upon IFN-γ treatment. SNP rs7191700 downstream of the SOCS1 locus did not overlap with CA in Ctr-OPCs or human OLG and MIGL ( Figures 8A, 8C, and S8B). However, upon IFN-γ treatment, connections from the SNP locus were formed to the Socs1 promoter together with increased CA and expression ( Figure 8C; Table S8).
The SNP rs2248359-C, neighboring the CYP24A1 gene encoding a protein involved in vitamin D3 degradation, has been suggested to link MS risk and vitamin D metabolism in the brain (Ramasamy et al., 2014). We found that a regulatory region overlapping with another SNP (rs2248137) at the Cyp24a1 promoter presented increased interactions with the Bcas1 promoter upon IFN-γ treatment ( Figure 8C; Table S8). Bcas1 is important for early myelination (Fard et al., 2017;Ishimoto et al., 2017), and its expression is decreased upon IFN-γ treatment ( Figure 8C). The interacting SNP region overlaps with high levels of H3K27me3, which might explain decreased expression of Bcas1 upon IFN-γ treatment ( Figure 8C). Thus, our data suggest that SNPs at the CYP24A1 locus might not only be involved in vitamin D metabolism in MS, but also in OL differentiation and possibly myelination. Moreover, regulatory regions overlapping with MS susceptibility SNPs control the expression of neighboring genes and are amenable to modulation by IFN-γ in OPCs, most likely affecting both the transition of OPCs to immune and differentiated states.
DISCUSSION
In this study, we find that OLG are primed at the chromatin level to be able to activate immune gene programs in the context of disease. This primed immune chromatin state is a cellular state in which immune genes have CA in homeostatic conditions at their promoter regulatory regions but have low or no expression. This state might also constitute epigenetic memory from past biological processes the cell has endured. Type1b genes involved in antigen presentation are already accessible in OLG ( Figures 2A, 3B, and S2F) but have a further increase in CA when the cells are exposed to an inflammatory environment. Type2a genes involved in immune processes in MOL1/2 and primary OPCs have high CA that does not change when the cell is exposed to an inflammatory environment. The observed differences between scATAC-seq and scRNA-seq indicate that OLG present open chromatin at (1) genes regulating biological processes that are already transcriptionally active and (2) genes that might be transcriptionally activated upon a given stimulus. Exposure to an inflammatory environment such as EAE and IFN-γ treatment might not significantly change the CA status but lead to gene expression by other chromatin-related mechanisms such as resolution of the bivalency for H3K4me3 and H3K27me3, rearrangement of CTCF binding, and remodeling of promoter-enhancer interactions. While our data support the role of distal enhancers in establishing cell identities (Heintzman et al., 2009;Thurman et al., 2012), they also highlight the importance of promoters not only in this process, but more importantly, in the transition between functional and disease-specific cell states within the same cell identity.
The expression of immune genes in OPCs and MOLs in the context of MS adds new roles to the functional portfolio of OLG, which might also occur in Alzheimer's disease (AD) (Zhou et al., 2020) and in aging (de la Fuente et al., 2020;Dulken et al., 2019;Spitzer et al., 2019;Ximerakis et al., 2019). Primed immune epigenomic programs might also be present in other cell types that unexpectedly activate immune transcriptional programs, such as structural cells (Krausgruber et al., 2020) and intestinal stem cells (Biton et al., 2018), suggesting a second line of immunological responses mediated by non-specialized immune cells. Our data indicate that most cell types in the CNS present immune chromatin priming, including neurons, consistent with the induction of MHC-I expression in neurons expressing ApoE in the context of AD (Zalocusky et al., 2021).
SNPs in MS are in most cases located near genes involved in immune regulation (International Multiple Sclerosis Genetics Consortium, 2019). Thus, MS susceptibility has been linked to immune cells within the CNS or in the periphery. Our data indicate that a subset of SNPs present in MS patients lies in regulatory regions with CA in OLG in homeostatic conditions, in mouse and in healthy human individuals, or exhibits increased CA in OLG in the EAE mouse model of MS. Some of the genes associated with these regulatory regions have altered expression in EAE. Recent findings suggest that SNPs located in regulatory regions of genes involved in transcriptional elongation might be involved in dysregulation of OL differentiation in the context of MS (Factor et al., 2020). Thus, susceptibility for MS might lead to disease onset, progression, or remission through the activation of abnormal immune and non-immune transcriptional programs not only in immune cells but also in OLG, which therefore constitute novel targets for immunological-based therapies for MS.
STAR★METHODS
Detailed methods are provided in the online version of this paper and include the following:
RESOURCE AVAILABILITY
Lead contact-Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Gonçalo Castelo-Branco (goncalo.castelo-branco@ki.se).
Materials availability-The study did not generate new unique reagents.
Data and code availability-IGV session links for hg19, hg38 and mm10 are available at https://github.com/Castelo-Branco-lab/Meijer_Agirre_scATACEAE_2020, alongside code, which is also available at https://doi.org/10.5281/zenodo.5781403. The scATAC-seq dataset can be explored at the web resources listed in the key resources table, and is available at https://ki.se/en/mbb/oligointernode and http://cells.ucsc.edu/?ds=olg-eaems. All raw and processed data generated in this work are available through SuperSeries accession GEO: GSE166179. For the human samples generated in this study, raw data can be accessed through accession EGA: EGAS00001005911 and processed data files through GEO: GSE166179. Human scATAC-seq processed control samples were retrieved from accession GEO: GSE147672.
EXPERIMENTAL MODEL AND SUBJECT DETAILS
Animals-The mouse line used in this study was generated by crossing Sox10:Cre animals (Matsuoka et al., 2005) (The Jackson Laboratory mouse strain 025807) on a C57BL/6j genetic background with RCE:loxP (EGFP) animals (Sousa et al., 2009) (The Jackson Laboratory mouse strain 32037-JAX) on a C57BL/6xCD1 mixed genetic background. Females with a hemizygous Cre allele were mated with males lacking the Cre allele, whereas the reporter allele was kept in hemizygosity or homozygosity in both females and males. In the resulting Sox10:Cre-RCE:LoxP (EGFP) animals, the entire OL lineage was labeled with EGFP. Breeding males carrying a hemizygous Cre allele in combination with the reporter allele with non-Cre carrier females resulted in offspring in which all cells were labeled with EGFP; such crosses were therefore avoided. For primary cell culture, animals of both sexes were sacrificed at P4-P6. For EAE experiments, males between the ages of 9 and 13 weeks were immunized and sacrificed 10-17 days later, at the peak of the disease.
All animals were free from the most common mouse viral pathogens, ectoparasites, endoparasites and mouse bacterial pathogens harbored in research animals. The battery of screened infective agents met the standard health profile established in Karolinska Institutet animal housing facilities. Mice were kept with the following light/dark cycle: dawn 6:00-7:00, daylight 7:00-18:00, dusk 18:00-19:00, and night 19:00-6:00; they were housed to a maximum number of five per cage in individually ventilated cages (IVC Sealsafe GM500, Tecniplast). Cages contained hardwood bedding (TAPVEI), nesting material, shredded paper, gnawing sticks and a cardboard box shelter (Scanbur). The mice received regular chow diet (either R70 or R34, Lantmännen Lantbruk, or CRM-P, 801722, Special Diet Services). General housing parameters such as relative humidity, temperature and ventilation followed the European Convention for the Protection of Vertebrate Animals used for Experimental and other Scientific Purposes treaty (ETS No. 123). Briefly, a consistent relative air humidity of 50% and a temperature of 22°C were maintained, and the air quality was controlled with the use of stand-alone air handling units, supplemented with high-efficiency particulate air-filtered air. Husbandry parameters were monitored using ScanClime (Scanbur) units. Water was provided in a water bottle, which was changed weekly. Cages were changed every other week. All cage changes were done in a laminar airflow cabinet. Facility personnel wore dedicated scrubs, socks and shoes. Respiratory masks were used when working outside of the laminar airflow cabinet. All experimental procedures on animals were performed in accordance with European and local regulations. EAE was induced by immunization with MOG35-55 peptide in CFA; control animals underwent the same treatment, but CFA without MOG35-55 peptide (CK-2110 kit from Hooke Laboratories) was used instead. Spinal cords and brains were collected at the peak of the disease, when a clinical score of 3 (representing limp tail and complete paralysis of hind legs) had been reached.
Animals that did not reach this clinical score were not analyzed in this study.
METHOD DETAILS
Tissue dissociation for single-cell ATAC-seq experiment-Mice were sacrificed with a ketaminol/xylazine intraperitoneal injection followed by intracardiac perfusion with PBS. Brain and spinal cords were collected and dissociated using the adult brain dissociation kit (130-107-677, Miltenyi), following the manufacturer's instructions, which included myelin debris removal but not the red blood cell removal step.
Single-cell ATAC-seq (10x Genomics)-Immediately after dissociation, cells were stained with DAPI (0.5 μg/ml, D9542, Sigma) and sorted on a FACS Aria III cell sorter (BD Biosciences). Sox10-GFP+/DAPI− cells were collected in PBS + 0.5% BSA and pooled with Sox10-GFP−/DAPI− cells at a 4:1 ratio. The pool of cells was then lysed and washed according to the Demonstrated Protocol: Nuclei Isolation for Single Cell ATAC Sequencing (10x Genomics) as follows: the cells were centrifuged for 10 min at 300 × g and 4°C, resuspended in ATAC lysis buffer (containing 0.1% IGEPAL (CA-630), 0.1% Tween-20, 0.01% Digitonin, 1% BSA, 10 mM Tris-HCl pH 7.4, 10 mM NaCl, 3 mM MgCl2) and incubated on ice for 3 min. After the incubation, wash buffer (containing 0.1% Tween-20, 1% BSA, 10 mM Tris-HCl pH 7.4, 10 mM NaCl, 3 mM MgCl2) was added on top without mixing and the nuclei were centrifuged for 5 min at 500 × g and 4°C. Nuclei were washed once in Diluted Nuclei Buffer (10x Genomics) containing 1% BSA and incubated for 60 min at 37°C in tagmentation mix (10x Genomics). The Chromium Single Cell ATAC v1 chemistry was used to create single-cell ATAC libraries. Two EAE and four CFA-Ctr animals were used as independent replicates. Libraries were sequenced on an Illumina Novaseq 6000 with a 50-8-16-49 read setup and a minimum of 25,000 read pairs per cell.
Plate-based single-cell ATAC-seq (Pi-ATAC)-Immediately after dissociation, cells were fixed in 1% formaldehyde (28906, Thermo Fisher Scientific) for 10 min, quenched with glycine (125 mM) for 5 min at room temperature, and then washed and stored in 0.5% BSA in PBS with 0.1% sodium azide at 4°C until further processing. The cells were counted and aliquots of 500,000 cells were centrifuged for 10 min at 1,000 × g and room temperature. Cells were resuspended in lysis buffer (containing 0.05% IGEPAL (CA-630), 10 mM Tris-HCl pH 7.4, 10 mM NaCl, 3 mM MgCl2) and incubated for 5 min at room temperature. After a 20-min centrifugation at 1,000 × g at room temperature, the cells were incubated with anti-GFP antibody (FITC-conjugated, 1:100, Ab6662, Abcam) and DAPI (0.5 μg/ml) in PBS containing 5% BSA for 20 min at room temperature. The cells were centrifuged for 10 min at 1,000 × g, resuspended in tagmentation mix (dH2O, 2x TD buffer [Wang et al., 2013] and Tn5 enzyme [Picelli et al., 2014]) and incubated for 30 min at 37°C. For both lysis and tagmentation buffers the volume was scaled up to match the number of cells (50 μl per 50,000 cells). The tagmentation reaction was stopped by addition of 40 mM EDTA. Cells were then centrifuged for 10 min at 1,000 × g and room temperature and resuspended in PBS + 0.5% BSA. Sox10-GFP+/DAPI+ cells were sorted on an Influx (BD Biosciences) into reverse crosslinking buffer (Chen et al., 2018), with single cells in each well of 96-well plates. Sox10-GFP−/DAPI+ cells were also sorted as a negative control for the OL lineage. For reverse crosslinking, the plates were incubated overnight at 65°C, ending with 10 min at 80°C the next day. PCR master mixes (NEBNext High Fidelity, M0541S, NEB) containing unique barcoding primers per well were dispensed on top of the reverse crosslinking buffer and DNA was amplified with the following cycling conditions: 72°C for 5 min, 98°C for 30 s; 20 cycles of 98°C for 10 s, 63°C for 30 s, and 72°C for 1 min.
PCR products were purified with the MinElute purification kit (Qiagen) and then PAGE purified to remove adapter dimers. Three EAE and two CFA-Ctr animals were used for independent replicates. Libraries were sequenced on an Illumina Hiseq 2500 with a 50-8-8-50 read setup and a minimum of 25,000 read pairs per cell.
Single-cell multi-omics (10x Genomics)-mouse-Instead of myelin debris removal (Miltenyi), Percoll (Cytiva 17-0891-01) mixed with HBSS was used to generate a 38% Percoll gradient to remove myelin debris. Immediately after debris removal, cells were stained with DAPI (0.5 μg/ml, D9542, Sigma) and sorted on a FACS Aria III cell sorter (BD Biosciences). Sox10-GFP+/DAPI− cells were collected in PBS + 0.5% BSA. The pool of cells was then lysed and washed according to the Demonstrated Protocol: Nuclei Isolation for Single Cell ATAC Sequencing (10x Genomics) with modifications from the Demonstrated Protocol: Nuclei Isolation for Single Cell Multiome ATAC + Gene Expression Sequencing (10x Genomics) as follows: the cells were centrifuged for 10 min at 300 × g and 4°C, resuspended in ATAC lysis buffer (containing 0.01% IGEPAL (CA-630), 0.01% Tween-20, 0.001% Digitonin, 1% BSA, 1 mM DTT, 1 U/μl RNase inhibitor, 10 mM Tris-HCl pH 7.4, 10 mM NaCl, 3 mM MgCl2) and incubated on ice for 3 min. After the incubation, wash buffer (containing 0.1% Tween-20, 1% BSA, 1 mM DTT, 1 U/μl RNase inhibitor, 10 mM Tris-HCl pH 7.4, 10 mM NaCl, 3 mM MgCl2) was added on top without mixing and the nuclei were centrifuged for 5 min at 500 × g and 4°C. Nuclei were washed once in Diluted Nuclei Buffer (10x Genomics) containing 1% BSA, 1 mM DTT and 1 U/μl RNase inhibitor, and incubated for 60 min at 37°C in tagmentation mix (10x Genomics). The Chromium Next GEM Single Cell Multiome ATAC + Gene Expression v1 chemistry was used to create single-nuclei ATAC and RNA libraries from the same cell. Two EAE and two CFA-Ctr animals were used as independent replicates. Libraries were sequenced on an Illumina Novaseq 6000 with a 50-8-24-49 read setup for ATAC (minimum 25,000 read pairs per cell) and a 28-10-10-90 read setup for RNA (minimum 20,000 read pairs per cell).
Single-cell multi-omics (10x Genomics)-human-Tissue samples and associated clinical and neuropathological data were supplied by the Multiple Sclerosis Society Tissue Bank, funded by the Multiple Sclerosis Society of Great Britain and Northern Ireland, registered charity 207495. All procedures used by the Multiple Sclerosis and Parkinson's Tissue Bank at Imperial College London in the procurement, storage and distribution of tissue have been approved by the relevant Multicentre Research Ethics Committee (18/WA/0238). Samples were collected under the IRAS Project ID: 246227. Nuclei were isolated from two 10 mg fresh-frozen human gray matter brain samples (PD003: male, age 58, post mortem interval (PMI) 9 hours, RNA integrity number (RIN) 8.9; PD004: male, age 90, PMI 12 hours, RIN 7.8), using the Nuclei PURE Prep Nuclei Isolation Kit (Sigma-Aldrich) with the following modifications. The tissue was lysed in Nuclei PURE Lysis Solution with 0.1% Triton X, 1 mM DTT and 0.4 U/μl SUPERase-In RNase Inhibitor (ThermoFisher Scientific) freshly added before use, and homogenized with the help first of a 23G and then of a 29G syringe. Cold 1.8 M Sucrose Cushion Solution, prepared immediately before use with the addition of 1 mM DTT and 0.4 U/μl RNase Inhibitor, was added to the suspensions before they were filtered through a 30 μm strainer. The lysates were then carefully and slowly layered on top of 1.8 M Sucrose Cushion Solution previously added to new Eppendorf tubes. Samples were centrifuged for 45 min at 16,000 × g at 4°C. Pellets were resuspended in Nuclei Storage Buffer with RNase Inhibitor, transferred to new Eppendorf tubes and centrifuged twice for 5 min at 500 × g at 4°C. Finally, purified nuclei were resuspended in Nuclei Storage Buffer with RNase Inhibitor, stained with trypan blue and counted using a Countess II (Life Technologies).
After counting, nuclei permeabilization was carried out following the demonstrated protocol for Single Cell Multiome ATAC + Gene Expression Sequencing from 10x Genomics. A total of 12,000 estimated nuclei from each sample were used for the transposition step and then loaded on the Chromium Next GEM Single Cell Chip J. ATAC library and gene expression library construction was performed using the Chromium Next GEM Single Cell Multiome ATAC + Gene Expression kit, according to the manufacturer's instructions. Libraries were sequenced using an Illumina NovaSeq 6000 System and NovaSeq 6000 S2 Reagent Kit v1.5 (100 cycles), aiming at a minimum sequencing depth of 30,000 reads/nucleus.

Tissue dissociation for primary OPC cultures-Brains from P4-P6 mouse pups were collected and dissociated with the neural tissue dissociation kit (P; 130-092-628, Miltenyi), according to the manufacturer's protocol. OPCs were obtained either by FACS with Sox10-GFP+ selection or by MACS with CD140a microbeads (CD140a microbead kit, 130-101-547, Miltenyi). For each experiment, multiple brains were pooled to obtain a sufficient number of cells.
Bulk ATAC-seq-ATAC-seq was performed as previously described (Buenrostro et al., 2013) with minor adaptations. Primary OPCs were incubated with TrypLE (Gibco #12605010) at 37°C for 5 min and collected in cell culture media. 60,000 cells per condition were washed with PBS and lysed with lysis buffer (containing 0.1% IGEPAL (CA-630), 10 mM Tris-HCl pH 7.4, 10 mM NaCl, 3 mM MgCl2) and centrifuged for 20 min at 500 × g and 4°C. Cells were then resuspended in tagmentation mix (dH2O, 2x TD buffer [Wang et al., 2013] and Tn5 enzyme [Picelli et al., 2014]) for 30 min at 37°C. The DNA was purified pre-and post-PCR with the MinElute purification kit (Qiagen) and then PAGE purified to remove adapter dimers. Three replicates per condition were performed with primary OPCs obtained from different litters. Libraries were sequenced on an Illumina Novaseq 6000 with a 50-8-8-50 read setup.
RNA-seq-0.1-1 μg RNA was used to make RNA-seq libraries with the TruSeq Stranded Total RNA Library Prep Gold kit (Illumina), according to the manufacturer's instructions. Four replicates per condition for the IFNγ experiment, three replicates for the EZH2i experiment, and four replicates for each of the transcription factor knockdowns were performed with primary OPCs obtained from different litters. Libraries were sequenced on an Illumina Novaseq 6000 with a 150-8-8-150 read setup.
Cut&Run-Cut&Run was performed as previously described (Skene and Henikoff, 2017) with minor adaptations. Primary OPCs were incubated with TrypLE (Gibco #12605010) at 37°C for 5 min and collected in cell culture media. Cells were centrifuged for 5 min at 300 × g and room temperature and then resuspended in wash buffer (20 mM HEPES pH 7.5, 150 mM NaCl, 0.5 mM Spermidine, 0.01% BSA, 1x Roche Complete Protease Inhibitor tablet). 250,000 cells per condition were centrifuged for 3 min at 600 × g and room temperature and resuspended in wash buffer. Activated Concanavalin-A beads (Bangs Laboratories BP531) in binding buffer (20 mM HEPES pH 7.5, 10 mM KCl, 1 mM CaCl2, 1 mM MnCl2) were added to each condition and incubated for 10 min on a rotator at room temperature. Beads with nuclei were then kept on a magnetic stand and washed with Dig-wash buffer (0.05% digitonin in wash buffer). After discarding the liquid, beads were resuspended with primary antibody in antibody buffer (2 mM EDTA in Dig-wash buffer) and incubated overnight at 4°C on a nutator. The beads were then washed with freshly prepared Dig-wash buffer and incubated with 2 μg/ml Protein A-MNase (Schmid et al., 2004) in Dig-wash buffer for 1 h at 4°C on a nutator. After two washes with Dig-wash buffer and one wash with low-salt rinse buffer (20 mM HEPES pH 7.5, 0.5 mM spermidine, 0.05% digitonin), ice-cold incubation buffer (3.5 mM HEPES pH 7.5, 10 mM CaCl2, 0.05% digitonin) was added to the beads, which were subsequently placed in a metal block in an ice-water bath maintained at 0°C for 5 min. The beads were then placed on a magnet stand and the liquid discarded. Stop buffer (170 mM NaCl, 20 mM EGTA, 0.05% digitonin, 50 μg/ml RNase A, 25 μg/ml glycogen, 2 pg/ml yeast spike-in DNA) was added to the beads and incubated for 30 min at 37°C. Beads were placed on a magnet stand and the supernatant was collected. 2 μl of 10% SDS and 2.5 μl of 20 mg/ml Proteinase K were added to the supernatant and incubated for 1 h at 50°C.
DNA from the samples was purified using the MinElute PCR purification kit (Qiagen), according to the manufacturer's instructions. Antibodies were used against H3K27me3 (Cell Signaling 9733S; rabbit; 1 μg), H3K4me3 (Diagenode C15410003-50; rabbit; 1 μg), H3K27ac (Abcam ab177178; rabbit; 1 μg) and CTCF (Cell Signaling 3418S; rabbit; 1:100).
Sequencing libraries were prepared using KAPA HyperPrep kit (Roche #07962363001) and KAPA Unique Dual-Indexed Adapters (Roche #08861919702), according to the manufacturer's instructions, but with the following adjustments: two post-adapter ligation clean-ups were performed using 0.7 × and 1.1 × AMPure XP beads, respectively. PAGE purification was performed on the post-amplification libraries to remove remaining adapter dimers. Two or three replicates per condition for the IFNγ experiment and three replicates for the EZH2i experiment were performed with primary OPCs obtained from different litters. Libraries were sequenced on the Illumina Novaseq 6000 with a 50-8-8-50 read setup.
CUT&Tag for STAT1-CUT&Tag was performed as previously described (Kaya-Okur et al., 2019) with minor adaptations. Primary OPCs were washed from the plate with PBS at room temperature. Cells were centrifuged for 5 min at 300 × g and room temperature and then resuspended in wash buffer (20 mM HEPES pH 7.5, 150 mM NaCl, 0.5 mM Spermidine, 1x Roche Complete Protease Inhibitor tablet). 100,000 cells per condition were centrifuged for 5 min at 300 × g and room temperature and resuspended in wash buffer. Activated Concanavalin-A beads (Bangs Laboratories BP531) in binding buffer (20 mM HEPES pH 7.5, 10 mM KCl, 1 mM CaCl2, 1 mM MnCl2) were added to each condition and incubated for 10 min on a rotator at room temperature. Beads with nuclei were then kept on a magnetic stand and washed twice with antibody buffer (2 mM EDTA, 0.1% BSA and 0.05% digitonin in wash buffer). After discarding the liquid, beads were resuspended with primary antibody (STAT1, Cell Signaling 14994S; rabbit, monoclonal, 1:100) in antibody buffer and incubated overnight at 4°C on a rotator. The beads were then washed with freshly prepared Dig-wash buffer (0.05% digitonin in wash buffer) and incubated with secondary antibody (guinea pig anti-rabbit, NBP1-72763, Novus Biologicals, 1:100) in Dig-wash buffer on a rotator for 30-60 min. After three washes with Dig-wash buffer, the beads were incubated with pA-Tn5 (Kaya-Okur et al., 2019) adapter complex (1:250) in Dig-300 wash buffer (300 mM NaCl and 0.05% digitonin in wash buffer) for 1 h at room temperature on a rotator. After two washes with Dig-300 wash buffer, the beads were resuspended in tagmentation buffer (10 mM MgCl2, 300 mM NaCl and 0.05% digitonin in wash buffer) and incubated at 37°C for 1 h in a PCR cycler with a heated lid. To stop the tagmentation reaction and to solubilize DNA fragments, 2.5 μl of 0.5 M EDTA, 5 μl of 10% SDS, and 2 μl of 20 mg/ml Proteinase K were added to the beads, followed by an incubation of 1 h at 55°C.
DNA from the samples was purified using the Zymo DNA Clean & Concentrator kit (D4014, Zymo), according to the manufacturer's instructions, then amplified according to the number of cycles analyzed with qRT-PCR. SPRI bead (B23317, Beckman Coulter) purification was performed on the post-amplification libraries to remove remaining adapter dimers. Three replicates per condition were performed with primary OPCs obtained from different litters. Libraries were sequenced on the Illumina Nextseq 500/550 with 37-8-8-37 bp setup.
H3K27ac-HiChIP-HiChIP was performed as previously described (Mumbach et al., 2016) with minor adaptations. Primary OPCs were incubated with TrypLE (Gibco #12605010) at 37°C for 5 min and collected in cell culture media. 1-3 million cells were washed once with PBS and crosslinked using freshly prepared 1% formaldehyde (methanol-free, Pierce, 28906) diluted in PBS for 10 min at room temperature with gentle rotation. Formaldehyde was quenched by addition of glycine (125 mM) and incubated for 5 min at room temperature with gentle rotation. Fixed cells were then washed once with ice-cold PBS, flash frozen, and stored at −80°C until further processing. Chromatin was sonicated using the Covaris ME220 with settings 75 PIP, 5% duty cycle, and 200 cycles/burst for 2 min (for 1-3 million cells). The immunoprecipitation was performed using 2 μg H3K27ac antibody (Abcam, Ab177178) and 20 μl Protein A Dynabeads (Thermo Fisher, 007613560), with 0.75 μl in-house produced Tn5 for tagmentation, and 15-16 cycles of final PCR amplification (NEBNext High Fidelity 2x PCR master mix, M0541L). Barcoded libraries were gel-purified, quantified using a Bioanalyzer, and mixed in equimolar ratio. Three replicates per condition were performed with primary OPCs obtained from different litters. Libraries were sequenced on an Illumina Novaseq 6000 with a 50-8-8-50 read setup.
Western Blot-Cells were collected in 2x Laemmli buffer (120 mM Tris-HCl pH 6.8, 4% SDS, 20% glycerol) and sonicated for 5 min at high power with 30 s on/off cycles at 4°C. Protein concentrations were measured with a NanoDrop and equalized with 2x Laemmli buffer. Bromophenol blue (0.1%) and β-mercaptoethanol (10%) were added to the protein prior to a 5-min incubation at 95°C to denature the protein. Equal volumes were loaded on 4%-20% Mini-Protean TGX precast protein gels (4561094, Bio-Rad) and transferred to a PVDF membrane (GE Healthcare) activated in methanol. Membranes were then blocked in blocking buffer (TBS, 0.1% Tween-20, and 5% BSA) for 1 h at room temperature and incubated overnight with primary antibody (diluted in blocking buffer) at 4°C. The membranes were then washed 3 times for 10 min in TBS-T (TBS, 0.1% Tween-20) and incubated with a horseradish peroxidase-conjugated secondary antibody for 2 h at room temperature. Proteins were visualized with ECL Prime (GE Healthcare) on a ChemiDoc XRS imaging system (Bio-Rad). Primary antibodies were used against H3K27me3 (rabbit monoclonal, 9733S, Cell Signaling, 1:1000) or GAPDH (rabbit monoclonal, 5174S, Cell Signaling, 1:1000), with anti-rabbit (A6667, Sigma, 1:5000) as secondary antibody.

scATAC-seq 10X Genomics preprocessing-scATAC-seq (10X Genomics) data were processed with default parameters with the cellranger-atac (version 1.2.0) count function. Reads were aligned to the mm10 reference genome. As part of the cellranger-atac pipeline, peaks were called individually for each of the samples and then merged. A normalized single peak-barcode matrix combining all the samples was calculated with cellranger-atac aggr --normalize=depth, subsampling all fragments to the same effective depth to avoid batch effects introduced by sequencing depth, which resulted in a median of 21,836 fragments per cell.
The number of fragments in peaks, the fraction of fragments in peaks, and the ratio of reads in ENCODE blacklist sites computed by cellranger-atac were used as QC metrics in downstream processing with the package Signac v0.25 (https://satijalab.org/signac/). scATAC 10X Genomics cells with the following metrics were selected: peak_region_fragments > 1000 & peak_region_fragments < 20000 & pct_reads_in_peaks > 15 & blacklist_ratio < 0.05 & nucleosome_signal < 10 & TSS_enrichment_score > 2, which resulted in 4,895 cells.
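The cutoffs above amount to a simple conjunction of per-cell metrics. A minimal Python sketch of that gate (illustrative only — the actual filtering was done in R with Signac, and the dictionary field names here are hypothetical):

```python
def passes_qc(cell):
    """Apply the scATAC QC cutoffs from the text as one conjunction."""
    return (1000 < cell["peak_region_fragments"] < 20000
            and cell["pct_reads_in_peaks"] > 15
            and cell["blacklist_ratio"] < 0.05
            and cell["nucleosome_signal"] < 10
            and cell["TSS_enrichment_score"] > 2)

# Hypothetical per-cell QC records mirroring the metrics in the text.
cells = [
    {"peak_region_fragments": 5000, "pct_reads_in_peaks": 40,
     "blacklist_ratio": 0.01, "nucleosome_signal": 4, "TSS_enrichment_score": 6},
    {"peak_region_fragments": 500, "pct_reads_in_peaks": 40,
     "blacklist_ratio": 0.01, "nucleosome_signal": 4, "TSS_enrichment_score": 6},
]
kept = [c for c in cells if passes_qc(c)]  # second cell fails the fragment cutoff
```

A cell failing any single threshold is dropped, which is how the full dataset was reduced to the 4,895 cells reported above.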
scATAC-seq Pi-ATAC preprocessing-scATAC-seq (Pi-ATAC) data were processed following the https://carldeboer.github.io/brockman.html pipeline (de Boer and Regev, 2018). Reads were trimmed and aligned to the mm10 reference genome using Bowtie2 (Langmead and Salzberg, 2012). Reads with alignment quality below Q30, incorrectly paired, or mapped to mitochondria were discarded. Duplicates were removed using Picard tools. Peaks were called individually for each sample using MACS2 (Zhang et al., 2008) (https://github.com/macs3-project/MACS) with the following parameters: -q 0.05 --nomodel --nolambda --shift -100 --extsize 200 --call-summits. Called peaks were merged using bedtools merge, and peaks overlapping ENCODE blacklisted regions were removed. The summit peaks were resized and extended to the same size and used as input in chromVAR (Schep et al., 2017) to obtain the fragment counts in peaks, using all the individual cell bam files as input. The fraction of fragments per peak was calculated and filtered using chromVAR, resulting in 1,029 cells. The output fragment matrix and peak annotations were used as input for Seurat (Butler et al., 2018) to perform non-linear dimension reduction, normalization, and clustering.
Normalization and clustering of scATAC-seq-Normalization and linear dimensional reduction were performed with Signac v1.1.0 (Stuart et al., 2020). Signac first performs a term frequency-inverse document frequency (TF-IDF) normalization, which normalizes across cells and peaks. A feature selection was then performed using all the peaks as input. The dimensional reduction was performed on the TF-IDF-normalized matrix with the selected peaks using a singular value decomposition (SVD). The RunUMAP, FindNeighbors, and FindClusters functions from Seurat v3.2.1 (Butler et al., 2018) were used for clustering and visualization with 30 dimensions.
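For intuition, the TF-IDF step can be sketched in pure Python. This is a hedged approximation of Signac's default log-TF-IDF scheme (per-cell term frequency scaled by inverse peak frequency, then log1p), not the package's exact implementation:

```python
import math

def log_tfidf(counts):
    """Approximate log TF-IDF on a peaks x cells count matrix (list of
    rows): tf = count / per-cell depth, idf = n_cells / number of cells
    in which the peak is open, value = log1p(tf * idf * 1e4)."""
    n_peaks, n_cells = len(counts), len(counts[0])
    depth = [sum(counts[i][j] for i in range(n_peaks)) for j in range(n_cells)]
    open_in = [sum(1 for j in range(n_cells) if counts[i][j] > 0)
               for i in range(n_peaks)]
    return [[math.log1p((counts[i][j] / depth[j])
                        * (n_cells / max(open_in[i], 1)) * 1e4)
             for j in range(n_cells)]
            for i in range(n_peaks)]
```

The SVD on this matrix (together, LSI) would be the next step; that part is omitted here because it is delegated to linear algebra routines in the real pipeline.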
Gene activity scores and integration with scRNA-seq data-We assumed a correlation between promoter accessibility and gene expression. First, different promoter lengths were tested (2 kb, 1 kb, 500 bp, and the region around the TSS). We extracted the gene coordinates and extended them to include the different promoter lengths. Then, the number of reads from the pooled scATAC-seq samples intersecting the coordinates was counted to calculate a pseudobulk accessibility score. Using scRNA-seq data, we generated a pseudobulk scRNA-seq signal for each of the annotated OLG and MiGl cell types and calculated normalized gene expression for the pooled cells. We directly correlated the pooled scATAC accessibility score over the tested promoter regions with the pooled gene expression. The promoter length with the highest Spearman correlation coefficient between pseudobulk scATAC and scRNA samples, 500 bp, was selected. However, 500-bp promoters showed only a slight improvement in the correlation (Spearman rho ~0.62 for 500 bp, ~0.59 for 1 kb, and ~0.56 for 2 kb).
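The promoter-length choice rests on a Spearman correlation between pseudobulk accessibility and expression; a self-contained Python version of that statistic (average ranks with tie handling, then Pearson on the ranks) might look like:

```python
def average_ranks(xs):
    """Ranks with ties replaced by their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Applied to the pooled accessibility and expression vectors for each candidate promoter length, the length with the largest rho (here 500 bp) would be retained.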
Final gene activities were computed over the 500-bp region upstream of the TSS of annotations with the ENSEMBL79 biotype protein_coding using Signac. scATAC-seq cells were annotated based on scRNA-seq data from Falcão et al. (2019). The shared correlation patterns between gene activity and the scRNA-seq-annotated expression matrix were calculated in Signac/Seurat (Butler et al., 2018) with the FindTransferAnchors function, using the precomputed scRNA-seq as reference and the scATAC activity scores as query. scATAC-seq cells were annotated using a minimum classification score of 0.4. Manual inspection of the classification scores identified mismatches, where some cells from EAE or Ctr were incorrectly classified; we manually curated the final annotations and discarded cells that showed ambiguity. A small subset of cells from EAE mice were classified as Ctr-MOL populations and vice versa, revealing a certain degree of ambiguity in the promoter-based scATAC-seq classification score (Figure S1D). These cells were manually corrected as MOL-EAE and MOL-Ctr (Figure 1D) and not included in further analyses directly comparing the two conditions. Single-cell tracks were obtained with samtools 1.10 using the CB tag from cellranger-atac aligned bam files and the cluster cell type annotations.
Differentially accessible peaks-Differential accessibility was calculated between cell type clusters, and within cell types between EAE and Ctr conditions, with Signac with parameters min.pct = 0.2, test.use = LR, latent.vars = peak_region_fragments. For each peak, the closest gene was found using the ClosestFeature function combined with EnsDb.Mmusculus.v79 annotations. Identified genes were used in downstream analyses such as GO enrichment. Unique peak and gene lists per cluster cell type were obtained by comparing unique lists of candidates with adjusted p value less than 0.05.
Gene ontology analysis-GO analysis was performed with ClueGO (version 2.5.5) (Bindea et al., 2009), a plug-in of Cytoscape (version 3.7.2) (Smoot et al., 2011), with the following settings: GO Biological Process, minimum GO level: 3, maximum GO level: 8, minimum number of genes: 3, minimum percentage: 4.0, correction method: Bonferroni step down, p value cutoff: 0.05. GREAT v4.0.4 (McLean et al., 2010) was run on EAE-enriched peak regions for MOL1/2, MOL5/6, and OPCs (Mouse: GRCm38, UCSC mm10, Dec 2011) with whole-genome background regions and the basal plus extension gene regulatory domain definition (proximal 5 kb upstream, 1 kb downstream, plus distal up to 1,000 kb) on GO Biological Process categories and mouse genotype-phenotype associations mapped to human genes. Combined statistical tests include the binomial test over genomic regions and the hypergeometric test over genes. The significance threshold used for FDR-corrected q values was 0.05.

scATAC peak annotations-Peaks were annotated using HOMER v4.11 (Heinz et al., 2010) with annotatePeaks.pl and gencode.vM20 annotations, for all the peaks used in the analysis and for the set of peaks that showed differential accessibility between EAE- and Ctr-OLG. For the donut plots, the basic annotations are shown.
MS-associated SNPs-To enable comparison between mouse open chromatin regions
and human MS-associated SNPs, liftOver was used with parameter minMatch=0.5 to convert mm10 coordinates to hg19 genomic coordinates. Then, as a double check, we reciprocally lifted the coordinates back to mm10 and retained only the peaks that mapped to their original position. To define the set of peaks used in the analysis, the properly aligned reads from annotated OLG cell types and MiGl were combined to generate pseudobulk ATAC-seq bam files specific for each cell type. Then, peak calling was performed for each of the annotated cell types with MACS2 with parameters -q 0.05 --nomodel --nolambda --shift -100 --extsize 200. Peaks were sorted and merged to non-overlapping meta peaks. Using bedtools intersect -wo, the set of SNPs overlapping open chromatin regions was retrieved.
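The reciprocal liftOver double-check reduces to simple set logic: keep a peak only if its hg19 image maps back to the original mm10 interval. A toy Python sketch (real liftOver chains are interval maps, not dictionaries, and the tuple coordinates here are hypothetical):

```python
def reciprocal_filter(peaks, mm10_to_hg19, hg19_to_mm10):
    """Keep only mm10 peaks whose hg19 image lifts back to the original
    mm10 interval; returns (mm10, hg19) pairs for the survivors."""
    kept = []
    for p in peaks:
        h = mm10_to_hg19.get(p)
        if h is not None and hg19_to_mm10.get(h) == p:
            kept.append((p, h))
    return kept

# Toy example: the second peak shifts on the way back and is dropped.
m2h = {("chr1", 100, 200): ("chr1", 1100, 1200),
       ("chr2", 50, 80): ("chr3", 10, 40)}
h2m = {("chr1", 1100, 1200): ("chr1", 100, 200),
       ("chr3", 10, 40): ("chr2", 55, 90)}
survivors = reciprocal_filter(list(m2h), m2h, h2m)
```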
In Figure 8, the signal of the mouse scATAC-seq is shown in the mm10 reference, the signal of the human scATAC-seq is shown in hg38, and the SNP coordinates have been lifted over to the corresponding reference. MOL, MiGl, and OPC scATAC-seq peaks from healthy donors in hg38 were lifted over to hg19. Using bedtools intersect -wo with the selected SNPs with evidence from mouse scATAC-seq lifted-over peaks, the intersecting hg19 peaks were retrieved.
MS GWAS loci overlapping-The GREGOR suite
TF motifs differential accessibility-Motif activity between single cells was calculated using chromVAR (Schep et al., 2017). Motif variability was calculated for all cells on the peaks selected in the Seurat/Signac (Butler et al., 2018) analysis, using the chromVARmotifs library (mouse_pwms_v2) motifs. For visualization purposes, the top 100 most variable motifs were selected to build a matrix of the normalized deviation scores (Z scores), as shown in the heatmap in Figure 4A. Deviation Z scores from chromVAR are shown on the UMAP coordinates in Figure 4B.
Signac was used to identify the overrepresented TF motifs in the sets of differentially accessible peaks between cell types and between EAE versus Ctr in OLG. Signac performs a hypergeometric test to obtain the probability of observing a specific motif at a given frequency by chance. The motif enrichment was performed with the chromVARmotifs PWM mouse_pwms_v2 library. Then, we cross-checked expression levels of the significantly enriched motifs in the scRNA-seq data to select a set of TFs for further analysis (Table S4).
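The hypergeometric test behind this motif enrichment asks how likely it is to see at least k motif-bearing peaks among n differential peaks, given K motif-bearing peaks among N total. A stdlib-only Python version (the numbers in the example are hypothetical):

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the chance of drawing
    at least k motif-bearing peaks when n peaks are sampled from N total
    of which K carry the motif."""
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / comb(N, n)

# e.g. 4 of 5 differential peaks carry a motif found in 5 of 10 peaks overall
p = hypergeom_sf(4, N=10, K=5, n=5)
```

Signac's implementation works on much larger peak universes, but the statistic is the same.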
The distribution of the binding sites was annotated using HOMER (Heinz et al., 2010) annotatePeaks.pl with the basic annotations. The closest genes were assigned to the predicted binding sites to count the number of genes per cell type cluster for each specific TF.
Bulk RNA-seq alignment and differential gene expression-The bulk RNA-seq samples were preprocessed for adapter/quality trimming and then aligned to the transcriptome using STAR (Dobin et al., 2013) version 2.7 --quantMode --sjdbOverhang 99 with EnsEMBL v75 gtf annotations. Only uniquely mapped reads were retained for downstream analysis using SAMtools. Aligned samples were converted to bedGraph files using deepTools bamCoverage for each strand and normalized to the total number of reads. Filtered fastq files were used in Salmon 0.8.2 to recover the raw read counts and transcripts per million (TPM) values per transcript and gene. The differential gene expression analysis was performed with DESeq2 (Love et al., 2014). Differential expression results were plotted using the EnhancedVolcano package with log2 fold change and adjusted p value from DESeq2.

CUT&RUN alignment and processing-CUT&RUN samples were processed with the CUT&RUNTools pipeline, which includes read trimming, alignment (Bowtie2, mm10 reference genome), and peak calling with MACS2 (https://bitbucket.org/qzhudfci/cutruntools/src/master/). For TF CUT&RUN samples, narrow peaks from <120-bp fragments were used; for histone modifications, broad peaks from all fragments were used in downstream analysis (Zhu et al., 2019). Differential CUT&RUN enrichments were calculated using pyicos over the previously defined 500-bp promoter regions. Using bedtools intersect, all reads intersecting called peaks were counted and used to calculate enrichment fold change with pyicos (Althammer et al., 2011) pyicoenrich -counts -pseudocount parameters. Z score-associated p values and Benjamini-Hochberg corrected p values were computed in R with 2*pnorm(-abs(zscore)) and p.adjust(method="BH").
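The final two statistics translate directly from R to Python: the two-sided normal p-value 2*pnorm(-abs(z)) equals erfc(|z|/sqrt(2)), and Benjamini-Hochberg is a ranked rescaling with a running minimum. A sketch:

```python
import math

def z_to_p(z):
    """Two-sided normal p-value: 2 * pnorm(-|z|) == erfc(|z| / sqrt(2))."""
    return math.erfc(abs(z) / math.sqrt(2))

def bh_adjust(pvals):
    """Benjamini-Hochberg adjustment, matching R's p.adjust(method='BH'):
    rescale each p by n/rank, then enforce monotonicity by taking a
    running minimum from the largest rank downward."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adj = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adj[i] = running_min
    return adj
```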
HiChIP-Paired-end sequencing reads from HiChIP experiments were aligned to mm10 genome and filtered for duplicates using the HiC-Pro pipeline. The pipeline's hicpro2juicebox.sh script was used to generate .hic files, which were loaded into Juicebox for viewing contact maps.
ABC model-The ABC model (Fulco et al., 2019) was computed following https://github.com/broadinstitute/ABC-Enhancer-Gene-Prediction. Processed bam files, as explained above, from bulk ATAC-seq and H3K27ac CUT&RUN from Ctr and IFNγ-treated OPCs were provided as input for the model. H3K27ac-HiChIP data from cultured primary OPCs were used to estimate contact frequency. Default parameters were used for generating the candidate enhancer list and for quantifying enhancer activity. Regulatory loops with an ABC score > 0.05 were used for downstream analyses.
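At its core, the ABC score for an enhancer-gene pair is the element's activity times its contact frequency, normalized by the sum over all candidate elements for that gene. A simplified single-gene Python sketch (activity taken as the geometric mean of ATAC and H3K27ac signal, following the spirit of Fulco et al., 2019; the input values are hypothetical and the real pipeline handles many additional details):

```python
def abc_scores(elements, contacts, cutoff=0.05):
    """Single-gene ABC sketch: score(e) = A(e) * C(e) / sum over all
    candidate elements, keeping scores above the cutoff. `elements`
    maps element -> (atac, h3k27ac) signal; `contacts` maps element ->
    HiChIP contact frequency."""
    activity = {e: (atac * k27) ** 0.5 for e, (atac, k27) in elements.items()}
    weights = {e: activity[e] * contacts[e] for e in elements}
    total = sum(weights.values())
    return {e: w / total for e, w in weights.items() if w / total > cutoff}

# A strong, well-connected element dominates a weak one for this gene.
kept = abc_scores({"e1": (4, 9), "e2": (1, 1)}, {"e1": 10, "e2": 1})
```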
Correlation between expression and chromatin accessibility-The pseudobulk scATAC reads per annotated cell type were intersected using bedtools coverage with 500-bp promoter regions (mm10 reference genome, EnsEMBL v75) and normalized by total number of reads and region length to obtain a normalized activity score comparable to normalized expression values. Pseudobulk gene expression and pseudobulk activities at promoters were combined and clustered based on enrichment in EAE compared to Ctr.
For bulk RNA-seq and ATAC-seq Ctr-OPCs and IFNγ-treated OPCs, accessibility was calculated as the bulk ATAC-seq coverage signal on 500-bp promoter regions. Normalized read coverage at 500 bp-promoter regions was calculated as RPKM.
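The RPKM normalization over a fixed promoter window is a one-liner; for clarity:

```python
def rpkm(reads_in_region, region_length_bp, total_mapped_reads):
    """Reads per kilobase of region per million mapped reads, e.g. for
    a 500-bp promoter window."""
    return reads_in_region / ((region_length_bp / 1e3)
                              * (total_mapped_reads / 1e6))

# 10 reads on a 500-bp promoter in a library of 1 million mapped reads
value = rpkm(10, 500, 1_000_000)  # -> 20.0
```

Dividing out both region length and library size is what makes the bulk ATAC-seq promoter scores comparable across samples.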
Gene set types definition for single-cell and bulk samples-Four types of genes (with additional subtypes) were defined based on the expression and activity values; the non-redundant list of genes for each type was used for GO analysis. Type1 (genes with increased expression in EAE and increased chromatin accessibility), Type2 (genes with increased expression in EAE but no change in chromatin accessibility), Type3 (genes with reduced expression in EAE but no change in chromatin accessibility), and Type4 (genes with reduced expression and chromatin accessibility in EAE) gene sets were defined based on the results from differential gene expression and differential accessibility. Each Type was further divided into subtypes based on ranked expression and chromatin accessibility within the Type. For Ctr-OPCs and IFNγ-treated OPCs, we defined the types (and additional subtypes) as explained above for the single-cell samples. For plotting the resulting Types, ComplexHeatmap (Gu et al., 2016) was used.
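The Type1-Type4 assignment can be summarized as a lookup over the significance calls from the differential expression and differential accessibility tests. An illustrative Python sketch (the boolean inputs are hypothetical significance flags, not actual pipeline outputs):

```python
def gene_type(expr_up, expr_down, ca_up, ca_down):
    """Map significance calls from differential expression (expr_*) and
    differential chromatin accessibility (ca_*) to the four Types."""
    if expr_up and ca_up:
        return "Type1"  # expression and accessibility both up in EAE
    if expr_up and not (ca_up or ca_down):
        return "Type2"  # expression up, accessibility unchanged
    if expr_down and not (ca_up or ca_down):
        return "Type3"  # expression down, accessibility unchanged
    if expr_down and ca_down:
        return "Type4"  # expression and accessibility both down
    return None         # not assigned to any Type
```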
Enhancer to promoter interactions from scATAC-seq-Predictions of enhancer to promoter interactions were performed using the R package ArchR Peak2gene function with cutoff 0.5 and resolution 1, which integrates scRNA-seq and scATAC-seq information to correlate expression and accessibility.
Predicted putative enhancer coordinates from the peak2gene output were used to calculate the aggregated normalized scATAC-seq signal for different gene Types in different cell types. scATAC-seq signal was counted using bedtools multicov -bams, normalizing the signal by putative enhancer length and total number of reads.
CUT&RUN peaks intersection-Peaks were annotated with HOMER annotatePeaks.pl to basic annotations. All the peaks annotated to genes from Type1 (genes with increased expression in EAE and increased chromatin accessibility) and Type2 (genes with increased expression in EAE but no change in chromatin accessibility) were intersected with bedtools intersect -c against Ctr-OPC and IFNγ-treated OPC peaks to build the intersection matrix. UpSet plots were built with the UpSetR package.
Number of predicted interactions in Ctr-OPCs and IFNγ treated OPCs-For
each of the differentially upregulated genes in IFNγ-treated OPCs, the number of interactions predicted with the H3K27ac ABC model was counted in Ctr and IFNγ-treated conditions.
Single-cell multi-omics (mouse) data processing-Single-cell multi-ome mouse (10X Genomics) data were processed with default parameters with the cellranger-arc (v2.0.0) count function. Reads were aligned to the mm10 (refdata-cellranger-arc-mm10-2020-A-2.0.0) reference genome. A normalized single feature-barcode matrix combining all the samples from both modalities was calculated with cellranger-arc aggr with default parameters; samples were normalized to depth for both ATAC and gene expression modalities, which resulted in a median of 12,928 fragments per cell, a median of 4,240 UMI counts per cell, a median of 1,849 genes per cell, and 5,769 cells.
Gene activity was calculated over the 500-bp promoter regions of annotated protein_coding genes from mm10 EnsEMBL79. Pooled replicate cells were identified based on a CHRX and CHRY module score calculated from biomaRt. Data processing was performed on the RNA and ATAC modalities separately with Seurat/Signac. The RNA modality was integrated with Harmony (Korsunsky et al., 2019) based on the sample variable, normalized, and reduced with PCA, dims=1:50. For the ATAC modality, we performed latent semantic indexing (LSI) on the Harmony-integrated space for dims=2:50. Using the weighted nearest neighbor (WNN) approach from Seurat v4, we computed the joint neighbor graph of the RNA and ATAC modalities. We ran FindClusters on the ATAC graph, the RNA graph, and the joint WNN graph. Cells were annotated using transfer anchors on the RNA modality to the Falcão et al. (2019) scRNA-seq data and manually curated based on known gene markers, which allowed us to identify astrocytes. By annotating the data to the Falcão et al. (2019) reference, we can compare the cell clusters identified in the scATAC-seq data to the single-cell multi-ome cell clusters. We then called the peaks again using MACS2 on the identified cell clusters.
A normalized single feature-barcode matrix combining the two individuals from both modalities was calculated with cellranger-arc aggr with default parameters; samples were normalized to depth for both ATAC and gene expression modalities. We used cellranger-arc called peaks for pre-QC measurements. After QC, we ran MACS2 within Seurat/Signac to call peaks, to recover a consistent peak set across the whole dataset. To obtain clean cell types on the ATAC called peaks, we discarded cells expressing combinations of canonical markers of different main cell types in the RNA modality. We then called peaks using MACS2 grouped by defined cell type.
Bulk RNA-seq KD alignment and processing-The bulk RNA-seq samples were preprocessed for adapter/quality trimming and then aligned to the transcriptome using STAR (Dobin et al., 2013) version 2.7 --quantMode --sjdbOverhang 99 with EnsEMBL v75 gtf annotations. Only uniquely mapped reads were retained for downstream analysis using SAMtools. Aligned samples were converted to bedGraph files using deepTools bamCoverage for each strand and normalized to the total number of reads. Filtered fastq files were used in Salmon 0.8.2 to recover the raw read counts and transcripts per million (TPM) values per transcript and gene. The differential gene expression analysis was performed with edgeR (McCarthy et al., 2012; Robinson et al., 2010) for paired samples, calcNormFactors = TMM and design ~day+condition. Differential expression results were plotted using the EnhancedVolcano package with log2 fold change and FDR from edgeR.
GSEA in single-cell ATAC-seq-Gene set enrichment analysis was performed using the escape R package (Borcherding et al., 2021) with GSEABase (Morgan et al., 2021; R package version 1.54.0) and the Mus musculus Hallmarks gene sets. escape was run on the ACTIVITY assay; gene activities were calculated as the average ATAC signal per cell on the 500-bp promoters of protein_coding annotated genes from EnsEMBL79.
Motifs analysis around MS-associated SNP locations-SNPs were extended 500 bp upstream and downstream of the SNP location in hg19.
We used MEME v4.12.0 and ran FIMO (Grant et al., 2011) with an hg19 5th-order background model and the CIS-BP (Weirauch et al., 2014) Homo_sapiens.meme PWMs on the extended SNP regions. For each TF motif, we retrieved pseudobulk RNA expression and gene activity from single-cell ATAC-seq and single-cell RNA-seq. MS-associated SNPs with predicted changes in TF binding sites were extracted from the SNP2TFBS ftp repository snp2tfbs_JASPAR_CORE_2014_vert.bed (Kumar et al., 2017).
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.

(E) GSEA of the nearest genes to enriched CA peaks for immune Hallmarks categories. (F) Integrative genomics viewer (IGV)-merged normalized tracks of CA with 100 randomly selected cells for each cluster, with MiGl clusters grouped together. scATAC co-accessibility connections are shown. Highlighted with gray boxes are regions with differential CA in specific clusters or promoter priming and connections between regulatory regions. Genomic coordinates are shown. OPC, oligodendrocyte precursor cell; VLMC, vascular leptomeningeal cell; PLC, pericyte-like cell; MiGl, microglia; NFOL, newly formed oligodendrocytes; MOL, mature oligodendrocyte.
Figure 2. Primed CA of immune genes in single OPCs and MOLs
(A) Genes in OPCs (left) or MOL1/2 (right) are clustered based on gene expression differences between EAE versus Ctr and correlation with CA activity score (CA over 500-bp promoter region). Top GO terms for Type1 genes (with increased expression and CA in EAE; [Type1a] low and [Type1b] high CA in Ctr-OPCs), Type2 (with increased expression in EAE, but no change in CA; [Type2a] high and [Type2b] low CA in Ctr-OPCs), and Type3 (with reduced expression in EAE, but no change in CA). Type4 (with reduced expression and CA in EAE) had no GO terms. (B) UMAP based on CA (right) and RNA-seq (left) of 10x Genomics multi-ome (simultaneous scATAC and RNA-seq) of Sox10-GFP cells sorted from the spinal cord of Ctr (2) and EAE mice (2, at disease peak). Label transfer from matched scRNA-seq data.
(C) (Left) IGV tracks of CA for each selected cluster, with MiGl clusters grouped together. scATAC co-accessibility connections are shown. Highlighted with gray boxes are regions with differential CA in specific clusters or promoter priming and connections between regulatory regions. Genomic coordinates are shown. (Right) Violin plots depicting the expression of Ifit2 and Nlrc5 in each cluster. Cell acronyms as in Figure 1.

(A) Volcano plots showing differential gene expression in RNA-seq (left) and CA at promoter regions in ATAC-seq (right) between Ctr-OPCs and OPCs treated with 100 ng/mL IFN-γ for 48 h. Genes with adj. p value < 0.05 and log 2 fold change >1.5 are shown in red. (B) Genes in IFN-γ-treated OPCs and Ctr-OPCs are clustered based on their chromatin activity score (CA over 500-bp promoter region) and gene expression correlation. Top GO terms are shown for Type1-Type4 as defined in Figure 2A. (C) IGV tracks are shown for ATAC-seq and RNA-seq in IFN-γ-treated OPCs and Ctr-OPCs for selected genes. Highlighted with gray boxes and arrows are regions with differential CA in IFN-γ-treated OPCs or promoter priming; black arrows, promoter regions. Merged tracks of 3 biological replicates are shown for ATAC-seq and 4 biological replicates for RNA-seq.
Figure 4. STAT1 and BACH1 have increased motif accessibility (MA) in OLG from EAE and are involved in IFN-γ-mediated regulation of immune genes in OPCs
(A) ChromVAR clustering of TF motif variability from scATAC-seq. Each row presents a TF motif, whereas each column represents a single cell. Scale, blue (low TF MA) to yellow (high). (B) TF motif variability projected on top of the UMAP of scATAC-seq. (C) Volcano plots showing differentially expressed genes in RNA-seq upon transfection of primary OPCs with siRNAs targeting Bach1 before treating with IFN-γ for 6 h. 4 biological replicates.

Figure 5. H3K27ac, CTCF binding, and enhancer-promoter contacts at immune genes in mouse OPCs are altered upon IFN-γ treatment
(A) Volcano plots showing differential H3K27ac (left) and CTCF binding (right) between IFN-γ-treated OPCs and Ctr-OPCs, assessed with Cut&Run. 3 biological replicates. Genes with adj. p value < 0.05 and log 2 fold change >1.5 are shown in red. (B) Number of genes (y axis) in Ctr-OPCs (blue) and IFN-γ-treated OPCs (green) with n predicted interactions (x axis). (C) IGV tracks showing CTCF binding and H3K27ac occupancy, assessed with Cut&Run, ATAC-seq in IFN-γ-treated OPCs, and Ctr-OPCs for MHC-I and MHC-II loci. Predicted enhancer/promoter contacts computed by the activity-by-contact (ABC) model (Fulco et al., 2019) based on CA and H3K27ac-HiChIP. Highlighted with gray boxes are regions with increased H3K27ac, CTCF binding, CA, and/or predicted interactions in IFN-γ-treated OPCs. Highlighted with a red arrow is an enhancer region interacting with multiple genes in the MHC-I and MHC-II loci. Merged tracks for 3 biological replicates per condition.
Figure 6. H3K27me3 modulation is involved in IFN-γ-mediated immune gene activation in OPCs
(A) Volcano plots for H3K27me3 and H3K4me3 in IFN-γ-treated OPCs versus Ctr-OPCs assessed with Cut&Run. Two replicates. Genes with adj. p value < 0.05 and log 2 fold change >1.5 are shown in red. (B) Cut&Run IGV tracks for selected MHC-I and MHC-II genes with increased H3K4me3 or decreased H3K27me3 in OPCs upon IFN-γ treatment. Merged tracks of two replicates. (C) Upset plots of Cut&Run Ctr-OPCs and IFN-γ-treated peak intersection in Type2 genes. Top barplot shows the number of intersecting peaks per combination, left barplot shows the size of each peak dataset, and the matrix shows the Cut&Run peak sets (dots) and shared (connecting line) in each combination. (D) Venn diagram showing the number of genes enriched in OPCs upon treatment with 1.5 μM EZH2 inhibitor EPZ011989 (EZH2i) for 4 days, with and without subsequent co-treatment with 100 ng/mL IFN-γ for the last 6 h (and the genes enriched in both), and the top gene ontology biological terms for the genes in each category. (E) RNA-seq IGV tracks for MHC-I, MHC-II, and cytokine genes with increased expression upon EZH2i in IFN-γ-spiked OPCs. Merged tracks of three biological replicates.

(C) Overlap with MS-associated GWAS variants. For each peak set, expected (x axis) versus observed (y axis) number of SNP hits overlapping the human healthy individuals hg19 multi-ome scATAC-seq peaks, scATAC-seq cell-type-specific peaks from Ctr and EAE mice, and Ctr-OPC and IFN-γ-treated primary OPC ATAC-seq peaks. Dot size scaled to adjusted p value; adjusted p values < 0.01 in red. ENDO, endothelial cells; ASTRO, astrocytes; EXCNEU, excitatory neurons; INHNEU, inhibitory neurons; MIGL, microglia; OLIGO or MOL, mature oligodendrocyte; OPC, oligodendrocyte precursor cell; VLMC, vascular leptomeningeal cell; PLC, pericyte-like cell.
Bridging topological and functional information in protein interaction networks by short loops profiling
Protein-protein interaction networks (PPINs) have been employed to identify potential novel interconnections between proteins as well as crucial cellular functions. In this study we identify fundamental principles of PPIN topologies by analysing network motifs of short loops, which are small cyclic interactions of between 3 and 6 proteins. We compared 30 PPINs with corresponding randomised null models and examined the occurrence of common biological functions in loops extracted from a cross-validated high-confidence dataset of 622 human protein complexes. We demonstrate that loops are an intrinsic feature of PPINs and that specific cell functions are predominantly performed by loops of different lengths. Topologically, we find that loops are strongly related to the accuracy of PPINs and define a core of interactions with high resilience. The identification of this core and the analysis of loop composition are promising tools to assess PPIN quality and to uncover possible biases from experimental detection methods. More than 96% of loops share at least one biological function, with enrichment of cellular functions related to mRNA metabolic processing and the cell cycle. Our analyses suggest that these motifs can be used in the design of targeted experiments for functional phenotype detection.
In the last two decades, PPI Networks (PPINs) have been analysed with a wide range of statistical and mathematical tools 1 to address biological questions related to the evolution of different species 2,3 , the identification of disease-related proteins and interactions [4][5][6] and, more recently, the process of drug discovery [7][8][9] . Many of these studies pointed out that essential protein interactions in cellular mechanisms in healthy and diseased states are often attributable to a few connected nodes in the network 10 . Therefore PPIN analysis can represent a powerful tool in biomedical research, allowing for the identification of crucial target proteins to manipulate or treat the observed functional phenotypes. However, exploiting this potential requires carefully validated PPI 11,12 data and the ability to identify a minimal set of proteins that are best suited for drug targeting.
Over the years, high-throughput experimental methods to map PPIs have constantly improved: mapping of binary interactions by yeast two-hybrid (Y2H) systems 13 and mapping of membership and identity of protein complexes by affinity- or immuno-purification followed by mass spectrometry (AP-MS) 14 , recently extended to large-scale biochemical purification of protein complexes and identification of their constituent components by MS (BP-MS) 12 . At the same time, theoretical tools and more advanced experimental techniques have highlighted limits in the quality of the data and have stimulated renewed efforts to improve their quality. The current challenges of network biology are in the identification of standardised approaches to reduce methodological biases 11,12 , to increase data reproducibility 15 and to assess the scope and limitations of PPIN models 16,17 . This has been paralleled by computational efforts to improve algorithms and methodologies for larger datasets and for data integration of different types of cellular networks 4 . A paradigmatic example is represented by studies complementing PPINs with 3D structural data [18][19][20] .
Particularly important for the identification of experimental biases and of truly relevant biological information is the problem of finding a reference (null) model for network analysis 21,22 . Indeed, each property calculated from PPINs should be compared with a corresponding family of reference random graphs 21 . It is essential to prove that specific values of network properties are statistically different from random and can be safely related to biological functions 4 . Indirectly, this procedure can be used to identify experimental biases by network comparison 11 . Several approaches were developed to extract meaningful properties from PPINs using graph theory 23 . These properties can be broadly classified according to the level of detail: global properties describing the features of the whole network or local properties encompassing only parts of the network. The former include measures of connectivity (average degree, degree distribution, average shortest paths) 23 , measures of grouping (average clustering connectivity) 23 , and measures of the relationship between nodes (assortativity coefficient 23 , degree-degree correlation 11,21 ). The latter include indices aimed at identifying sub-networks defining functional modules 24 , recurring patterns of connected nodes 25 , fully connected groups of nodes (cliques) 26 , induced subgraphs (graphlets) 27 or simplified representations of subgraphs (Power Graphs) 28 .
Among all local properties, motifs have been particularly exploited, as they have been demonstrated to be associated with biological functions and their interactions are modified in diseases 29 . They act as building blocks of cellular networks 30 . Different definitions (and motif types) have been proposed; all of them generally assume that a motif is a pattern appearing more frequently than expected given the network 31 . Motifs were initially detected in transcriptional regulatory networks 31 and later in different types of cellular networks 30 . Motifs of two, three and four proteins have been classified and associated with specific regulatory functions in accordance with their transcriptional patterns 29 . In addition, there is evidence from previous studies that motifs related to functional units can be successfully mined from PPINs 26,28 .
A specific type of motif is represented by loops, defined as nonintersecting closed paths in PPINs. These were shown to be functionally critical in particular cases 12,32 , but no exhaustive investigation has been performed to assess their biological relevance or the relationship between loop length and biological functions. To the best of our knowledge, no study has so far estimated if PPINs are consistently enriched in loop motifs compared to randomised networks with similar properties and under comparable topological constraints.
This study demonstrates that short loops of length three, four and five are of critical importance in PPINs by a) assessing their statistical significance compared to randomised networks with the same degree and degree-degree correlation and b) evaluating their specialised biological role through functional annotation. In detail, we calculated the number of short loops in a set of PPINs from different organisms and estimated their resilience and statistical significance by comparison with a tailored graph ensemble generated by Markov chain graph dynamics. We investigated the relationship between the variation in loop number upon randomisation and the initial topological properties of the networks. We characterised the composition of loops resilient upon randomisation. Finally, we used Gene Ontology (GO) 33 and KEGG 34 pathway annotation to identify preferentially represented functions in loops of different lengths in the human PPINs.
Results
The results are presented according to a two-fold scheme of investigation: a) statistical relevance of short loop motifs with respect to random models; b) functional enrichment in short loops.
Number and essentiality of short loops in PPINs. Survey of the occurrence of loops in PPINs. We selected a set of 30 PPINs from the literature (Table 1 and Methods) to cover a range of source organisms and experimental techniques. The set includes early milestone studies on model organisms 35 as well as one of the most recent high-confidence human PPINs 12 . The number of short loops of length 3, 4, 5 and 6 in each of the PPINs was counted using the loop-length-bounded Depth First Search algorithm (Methods). In all cases the number increases nearly exponentially with loop length (Table 1). No significant correlation is seen between loop numbers and any of the topological properties of the network, except for the first eigenvalue of the graph adjacency matrix (Supplementary Table S1). This property is related to the occurrence of hub nodes, suggesting that networks richer in hubs also have more loops. The unusual value of zero loops in S. cerevisiae XII could be related to the quality of this specific network.
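The triangle (length-3) case of the loop count can be illustrated with a simple neighbour-intersection pass over adjacency sets. This is a minimal stdlib-Python sketch with names of our own choosing (`count_triangles`), not the C implementation described in Methods:

```python
def count_triangles(edges):
    """Count length-3 loops (triangles) in an undirected simple graph.

    Each triangle {a, b, c} is counted once. `edges` is an iterable of
    2-tuples of hashable node identifiers.
    """
    adj = {}
    for u, v in edges:
        if u == v:
            continue  # self-interactions are discarded, as in the paper
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u, v in {tuple(sorted(e)) for e in edges if e[0] != e[1]}:
        # every common neighbour of an edge's endpoints closes a triangle
        count += len(adj[u] & adj[v])
    # each triangle is found once per each of its 3 edges
    return count // 3
```

Counting longer loops requires the length-bounded depth-first search described in Methods.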
Short loops are an intrinsic property of PPINs. Previous studies demonstrated the importance of defining reference (null) models for network analysis. Ideally an analytical formulation for such models would guarantee a statistically reliable comparison 21,22 . Such an analytical formulation is not currently available for short loops, therefore we introduced a reference model by a process of randomisation of the original network using Markov Chain Graph Dynamics (MCGD; Methods), rewiring the network under topological constraints to generate a tailored ensemble of random graphs directly comparable to the original one. To obtain null models tailored to each network, two sets of constraints were selected: a) the degree distribution and b) the degree distribution and degree-degree correlation. Such constraints provide an avenue to independently test the influence of the degree-degree correlation on the number of loops and on their change upon randomisation. In this respect our previous study 11 demonstrated its usefulness in detecting experimental biases embedded in PPINs. The degree-degree correlation is related to the assortativity, which is simply the Pearson coefficient of the degree-degree correlation distribution (Supplementary Material).
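Constraint a) can be sketched with the classic double edge swap move, which rewires a graph while leaving every node's degree unchanged. The function below (`degree_preserving_swap`, our naming) only illustrates the degree-distribution constraint; reproducing constraint b) would additionally require a Metropolis-style acceptance rate on the degree-degree correlation, which is omitted here:

```python
import random

def degree_preserving_swap(edges, n_swaps, seed=0):
    """Randomise an undirected simple graph by repeated edge swaps.

    Each accepted move replaces edges (a,b),(c,d) with (a,d),(c,b),
    preserving all node degrees. Moves creating self-loops or duplicate
    edges are rejected. A sketch of the degree-distribution constraint
    only; MCGD's degree-degree-correlation constraint is not modelled.
    """
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    edge_list = list(edge_set)
    done = 0
    while done < n_swaps:
        i = rng.randrange(len(edge_list))
        j = rng.randrange(len(edge_list))
        (a, b), (c, d) = edge_list[i], edge_list[j]
        new1, new2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        if a == d or c == b or new1 in edge_set or new2 in edge_set:
            done += 1  # rejected proposals still count as steps
            continue
        edge_set.discard(edge_list[i])
        edge_set.discard(edge_list[j])
        edge_set.add(new1)
        edge_set.add(new2)
        edge_list[i], edge_list[j] = new1, new2
        done += 1
    return edge_list
```

Repeating such moves many times (e.g. 100 x NI, as in the study) drives the graph towards a random member of the ensemble with the same degree sequence.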
For all datasets we performed five independent simulations of MCGD of 100 x number of interactions (NI) edge swapping moves, measuring the number of loops of length 3, 4, 5 and 6. The extent of randomness was monitored by measuring the Hamming distance between the original and the randomised networks. In all simulations the distance dropped to less than 0.02 within the first 10 x NI steps, confirming that no memory of the global structure of the original network was retained during MCGD. Therefore the randomisation process effectively removes the local structure of the original network. After this initial change, the number of loops generally stabilised to a constant value when the simulations reached convergence to a fully randomised state. Figure 1a-d reports the variation in the number of loops during MCGD for a H. sapiens PPIN 12 (Supplementary Figure S1 for all other networks). The low variability across the replicas (error bars in the figures) confirms the reproducibility of the MCGD procedure. The trend of variation is the same independently of loop length. The number of loops decreases steeply within the first 10 x NI steps under both constraints. However, the reduction is smaller when the degree-degree correlation is constrained (blue line), suggesting that the wiring of the original network is influenced by this topological property. The structure of these loops may be dependent on the connectivity of the surrounding nodes and the relative degree-degree distribution. Conversely, this implies that the information contained in such properties may be associated with the occurrence of loops in the original network. However, the degree-degree correlation is insufficient to fully reconstruct loop wiring in networks, due to the lack of correlation between this property and the number of loops (Supplementary Table S1).
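The Hamming-distance monitoring can be sketched as a comparison of edge sets; normalising by the size of the edge union is our assumption, and the paper's exact definition may differ:

```python
def hamming_distance(edges_a, edges_b):
    """Normalised Hamming distance between two undirected edge sets.

    Counts edges present in exactly one of the two graphs and divides
    by the number of distinct edges in either graph: 0 for identical
    graphs, 1 for edge-disjoint graphs. The normalisation is our
    assumption, not necessarily the paper's.
    """
    a = {tuple(sorted(e)) for e in edges_a}
    b = {tuple(sorted(e)) for e in edges_b}
    if not a and not b:
        return 0.0
    # symmetric difference over union
    return len(a ^ b) / len(a | b)
```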
Short loops are related to the quality of the PPIN. The trend of change in the number of loops during MCGD is similar for different loop lengths in the same network. Therefore, for simplicity we focused our comparative analyses on loops of length 3. These are related to the clustering coefficient commonly used to characterize the structure of networks (Supplementary Material). To assess the effects of the different data sources and the contribution of loop wiring to the entire topological wiring, we compared the human PPINs obtained by different methods (Figure 1e-h). The observed trends in the change of the number of loops during MCGD are more similar for related experimental sources. In line with our previous results 11 , Figure 1e-h highlights that the information from the degree-degree correlation is sensitive to the different experimental biases reflected in the derived PPINs 11 . This suggests that the quality of the PPINs may have a strong effect on the number of loops and on their variation upon randomisation.

Table 1 | The datasets cover a range of source organisms and a variety of experimental techniques (Methods). The names of the datasets, their detection methods and references are presented along with the properties of each network: the number of proteins (NP) and interactions (NI), measures of connectivity such as the average (k_mean) and maximum degree (k_max), indices of node centrality such as the average betweenness (btwn), the average eigenvector centrality (evc_mean) and the first eigenvalue of the graph adjacency matrix (ev), as well as measures of the relationship between nodes such as the assortativity coefficient (assort) 42 , the transitivity ratio (transitivity) 29 and the average degree-degree correlation (kk_corr) 20 .
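The relation between length-3 loops and the clustering/transitivity of a network can be made concrete: the transitivity ratio is three times the number of triangles divided by the number of connected triples. A stdlib-Python sketch (assuming orderable node labels):

```python
def transitivity(edges):
    """Transitivity ratio: 3 x (number of triangles) / (number of
    connected triples). A connected triple is a central node with two
    distinct neighbours; each triangle contains three such triples.
    Assumes orderable node labels (used to count each edge once).
    """
    adj = {}
    for u, v in edges:
        if u == v:
            continue
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    triples = sum(len(nb) * (len(nb) - 1) // 2 for nb in adj.values())
    # each triangle is seen once per each of its three edges
    triangles = sum(len(adj[u] & adj[v]) for u in adj for v in adj[u] if u < v)
    triangles //= 3
    return 3 * triangles / triples if triples else 0.0
```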
While highly variable at first glance, the trends of loop numbers upon MCGD can be classified into a few general patterns by comparing the number of loops in the original network and in the two randomised ensembles obtained by MCGD (Methods). Four distinct patterns were detected in our simulations, represented in the schematic shown in Figure 2a. The number can increase under both constraint sets (purple frame, top left), increase in one case and decrease in the other (pink frame, top right), or decrease in both cases (cyan/green frames, bottom panels). For the first two patterns, imposing only a constraint on the degree distribution generates an increase in the number of loops, and this is always steeper than with the more stringent constraint of the degree-degree correlation. When decreases in the number of loops are detected for both constraint sets, the decrease can be steeper (cyan) or flatter (green) in the presence of a constraint on the degree-degree correlation term.
A few networks show irregular patterns under MCGD (grey labels in Table 1), but in general the pattern of change in loop number is consistent for networks from the same experimental source (Table 1). This suggests that the quality of the initial network or some of its topological properties may play a role in defining the evolution of loop wiring under randomisation. To investigate these aspects we performed a Principal Component Analysis (PCA) on variables describing some typical topological properties of networks (Methods). A projection of the networks in the space defined by the first two PCs is reported in the biplot in Figure 2b. The direction of the original variables in this space is indicated by orange arrows and the networks are colour-coded according to the pattern colours in Figure 2a. The plot confirms that the degree-degree correlation is an effective index to discriminate between networks from different experimental sources 11 , but it also highlights the role of the network size (n. edges) and the relationships between nodes (assortativity/average eigenvector centrality) in defining different behaviours under randomisation. There is a clear separation between the networks with a specific pattern (green) and the others. Interestingly, these correspond to the networks generally considered of higher quality 11,12 . The pattern associated with these high-quality networks shows that a constraint on the degree-degree correlation is helpful in preserving some of the original loops (higher number of resilient loops in the green frame of Figure 2a).
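The assortativity used as a PCA variable here is the Pearson correlation of the degrees at the two endpoints of each edge. A stdlib-Python sketch (for regular graphs, where the degree variance vanishes, we return 0 by convention):

```python
import math

def assortativity(edges):
    """Degree assortativity: the Pearson correlation of the degrees at
    the two ends of each edge. Each undirected edge contributes both
    (deg(u), deg(v)) and (deg(v), deg(u)), so the measure is symmetric.
    Returns 0.0 for degenerate (regular) graphs, by our convention.
    """
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy) if sx and sy else 0.0
```

For a star graph (one hub, several leaves) the coefficient is -1, reflecting the disassortative hub-leaf wiring.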
Resilient loops have functional importance. It is particularly relevant to identify and characterise how many and which loops are preserved upon randomisation with a constraint on the original degree-degree correlation. In the high-confidence human PPIN (BP-MS) 12 , in general 13-18% of loops were retained after randomisation (Supplementary Table S2). Specifically, the loops common across the replicas account for 8,342 and 219,217 loops of length 3 and 4, involving 58 and 60 proteins respectively. The sub-network of proteins including only these loops shows a highly connected set with a predominance of ribosomal proteins and RNA processing proteins (Figure 3). This suggests an essential core set that may be resilient due to its functional importance. Indeed, these proteins and their interactions in resilient loops are consistent with cluster structures detected by computational methods such as MCODE 39 and ClusterONE 40 (Supplementary Tables S3-5, Figure S2). In addition, while these methods mainly identify the ribosomal protein complex as the most important cluster, with the inclusion of few additional proteins, the set of resilient loops after MCGD includes a considerably larger number of critical accessory proteins (Supplementary Table S6) connected to the ribosomal complex, supporting the hypothesis of an important functional role for short loops. The detection of a resilient loop set could complement cluster analysis in the functional annotation of core sets in PPINs.
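Identifying loops that survive every randomisation replica amounts to intersecting the replicas' surviving loop sets after mapping each loop to a rotation- and direction-independent canonical form. A sketch under our own naming (`canonical_loop`, `resilient_loops`), with loops given as node sequences without the repeated endpoint:

```python
def canonical_loop(loop):
    """Canonical (rotation- and direction-independent) representation
    of a closed path given as a node sequence without the repeated
    endpoint: the lexicographically smallest rotation over both
    traversal directions."""
    best = None
    n = len(loop)
    for seq in (list(loop), list(loop)[::-1]):
        for s in range(n):
            rot = tuple(seq[s:] + seq[:s])
            if best is None or rot < best:
                best = rot
    return best

def resilient_loops(replicas):
    """Loops present in every replica's survivor set, compared in
    canonical form. `replicas` is a list of lists of loops."""
    sets = [{canonical_loop(l) for l in rep} for rep in replicas]
    return set.intersection(*sets)
```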
The resilient loops contain proteins that are known to interact and have functions in transcription, hnRNA splicing and translation. Specifically, the ATP-dependent helicase DHX9 is involved in unwinding double-stranded DNA and in RNA-dependent processes in all three of these functions 41 . Additionally, DHX9 binds another protein on the list, ILF3, to regulate gene expression 42 . ILF3 and ILF2 interact and are core components of the NFATc transcription factor, which regulates gene expression during T cell activation, including the IL2 gene [43][44][45] . DHX9 is also a component of the coding region determinant (CRD) complex containing HNRNPU that stabilises MYC mRNA 46 and is required for the translation of mRNA containing the 5′ post-translational control element sequence 47 . A number of ribonucleoproteins in the U2 snRNP splicing complex recognise the 3′ splice site for hnRNA 48 . These include U2AF1, U2AF2, SF3A1 and HNRPM, and each of these, together with NCSTN and DHX9, were independently identified in soluble nuclear protein complexes 12 . The diversity of proteins and their functions suggests that resilient loops are not limited to the predominant ribosomal proteins but also include other protein interactions governing functional processes of the cell.

Figure 2 | Networks from Table 1, coloured according to the trend in change of loop numbers (Figure 2a). Vectors representing the original variables included in the PC analysis are projected into the PC1/PC2 plane and reported as orange arrows. Details on network properties are reported in Table 1.
Functional specialisation of short loops in PPINs.
Short loops have a high degree of functional consensus. The evidence for functional importance of specific short loops suggests that in general loop motifs may perform dedicated biological functions. This was shown for regulatory networks 29 but no exhaustive study has been performed on PPINs. In this study, a human PPIN of 622 soluble protein complexes detected by BP-MS 12 was employed to investigate the biological function of short loops. The original study reported some examples of relations between protein complexes, evolutionary conservation and disease. This study presents a comprehensive functional analysis of short loop interactions in the BP-MS network in comparison with other human PPINs.
We reasoned that if all the proteins in a loop share a common function or process, the loop might be the essential unit delivering that function or process. To test this hypothesis we annotated the proteins with GO terms 33 and defined the concept of functional consensus (Figure 4). This is the percentage of common terms among all proteins in a loop, independently of the level in the GO hierarchy. The results of the functional consensus analysis are reported in Figure 5. The barplot in panel 5a shows the fraction of loops having a specific percentage of common GO terms in the BP-MS network of protein complexes 12 . The majority of short loops share at least one biological function. This confirms that the degree of functional consensus is generally high (Figure 5a). To address the influence of highly connected complexes and the effects of including other human PPINs, additional datasets were examined (Figure 5b-d). First, we removed all proteins of the large ribosomal subunit to reduce possible biases towards this large set of extensively interacting proteins with well-annotated functional terms (Figure 5b). Secondly, we generated an integrated human PPIN (Figure 5c) from datasets obtained with different detection methods such as BP-MS 12 , Y2H 49 , database collection 50 , and the 3D interactome database 19 . Finally, we measured the functional consensus for the integrated human PPIN obtained after excluding data from BP-MS (Figure 5d). The results demonstrate that the extent of functional consensus is not biased by highly connected complexes (Figure 5a-b) or by the network source (Figure 5a and 5c). The statistical significance of these results was verified by a resampling randomisation test. The results in Figure 5e show that the functional consensus in short loops is significantly higher than in the random set. These data confirm that the enrichment in functional specialisation of loop motifs is a property of PPINs.
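The functional consensus of a loop can be sketched as the fraction of GO terms shared by all of its proteins. Normalising by the union of the loop's terms is our reading of "percentage of common terms"; the paper's exact normalisation may differ:

```python
def functional_consensus(loop_proteins, annotation):
    """Fraction of GO terms shared by all proteins in a loop.

    `annotation` maps protein -> set of GO terms. The consensus is the
    number of terms common to every protein divided by the number of
    distinct terms annotated to any protein in the loop. Normalisation
    by the union is our assumption. `loop_proteins` must be non-empty.
    """
    term_sets = [set(annotation.get(p, ())) for p in loop_proteins]
    union = set().union(*term_sets)
    if not union:
        return 0.0  # no annotation at all
    common = set.intersection(*term_sets)
    return len(common) / len(union)
```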
Short loops are enriched in biological functions associated with specific cellular mechanisms. In addition to the high degree of functional consensus in short loops, specific biological functions are more highly represented in short loops compared to the original network. Figure 6a describes the frequency of functional terms for the network and for loops of different lengths. Three distinct trends were identified: Trend 1 is associated with a group of GO terms enriched in loops compared with the overall network. In contrast, Trend 2 is a group of terms with higher occurrence in the network. Trend 3 shows a remarkably similar percentage of occurrence in short loops, which decreases with the loop length (12 ± 2%, 7.1 ± 0.7%, 3.8 ± 0.4%). These results suggest a complementarity between the occurrence of GO terms in the network and in motifs. As for the analysis of functional consensus, the calculation was replicated after excluding the highly connected 60S ribosome complex (as in Figure 5b). Interestingly, only two trends are visible in this case (Figure 6b). All terms of Trend 3 have a higher occurrence in the network, but as a part of Trend 1 (now combined in Trend 4). On the other hand, the frequencies of the remaining terms of Trend 1 decrease and follow Trend 2 (now combined in Trend 5). Figure 6c summarises these changes and reports the number of terms in each of the groups (detailed terms in Supplementary Table S7).
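The per-term frequencies compared across the network and the loop sets can be sketched as the percentage of proteins in a group annotated with a given term; this per-protein formulation of the comparison is our own assumption:

```python
def term_frequencies(group_proteins, annotation):
    """Percentage of proteins annotated with each GO term, for an
    arbitrary protein group (e.g. the whole network, or the proteins
    taking part in loops of a given length).

    `annotation` maps protein -> iterable of GO terms. Returns a dict
    term -> percentage over the distinct proteins in the group.
    """
    proteins = set(group_proteins)
    counts = {}
    for p in proteins:
        for t in annotation.get(p, ()):
            counts[t] = counts.get(t, 0) + 1
    n = len(proteins)
    return {t: 100.0 * c / n for t, c in counts.items()}
```

Comparing the same term's frequency in the full network and in the loop groups reproduces the kind of trend analysis shown in Figure 6.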
The comparison of terms in the network and short loops shows that biological functions are more enriched if proteins in the network are associated with global processes such as "organismal process" and "developmental process", but also a few specific functions such as "DNA-templated transcription" and its regulation (terms in Trend 2 and about half of the terms in Trend 5), while "nucleobase-containing compound metabolic process", including "mRNA metabolism", "gene expression", and "viral processes", always emerges in short loops independently of the presence of highly connected ribosomal proteins (28 terms of Trend 1). However, biosynthetic processes including "RNA biosynthetic process", "protein complex subunit organization", and "localization" functions involving "transport" and "protein localization" are particularly enriched in short loops but strongly affected by the inclusion/removal of ribosomal proteins (half of the terms in Trend 5 deriving from Trend 1). Some groups of functions such as "cell cycle" regulation processes and "antigen processing" are enriched in loops when the ribosomal proteins are excluded (Trend 4 from Trend 3). Overall, these results indicate that short loops perform specialized functions complementary to the ones performed by complex protein communication pathways distributed across the whole PPIN, which include metabolism, cell growth and death, and immune functions. This suggests that loops can be used to extend or predict the functional annotation in PPIN or pathway analyses. For example, Figure 7 presents the KEGG 34 pathway of cell cycle regulation annotated with the proteins from short loops of length 3 and 4 with the GO term "cell cycle" (Supplementary Tables S7-8). The sub-network of short loops is strongly wired to the KEGG pathway throughout the cell cycle stages, although only a small number of proteins (in red) map directly to the pathway.
Loop proteins extend the scope of the KEGG annotation: some of the proteins and their interactions have a role in connecting to functional components of the cell cycle such as DNA replication, DNA repair, DNA damage checkpoint, and structural maintenance of chromosomes (clusters in green backgrounds). Also, several proteins interconnect proteins from different functions or different phases of the cell cycle such as MSH2 and MSH6, DNA mismatch repair proteins, belonging to a loop with PCNA and RAD21.
These results suggest a scenario in which specific functions are delivered through local, short-range units and regulated by large, long-range modules. This is in line with an emerging vision of PPINs as a modularised system composed of sub-networks of proteins (i.e. communities) of different sizes, in which local motifs, such as loops, collaborate to regulate the entire network through a complex set of interactions.
Discussion
Several strategies can be used to identify a minimal group of nodes in a graph by either extracting clusters under specific topological constraints 20,51 or by selecting nodes consistently with an annotated property. A different approach is based on looking for pre-existing simplified motifs that can be computationally detected relatively easily 31 . Previous studies reported the detection of motifs based on their overrepresentation within networks 52 or their occurrence in pre-compiled representative subgraph sets (Power Graphs 28 or Graphlets 22 ). Our contribution differs from previous approaches on three levels. First, we directly counted the occurrence of motifs independently from the local subgraph environment of the motif. Secondly, we selected a specific motif type, non-intersecting closed loops, of different lengths without imposing specific interaction patterns (i.e. feed-forward loops). Thirdly, we estimated the statistical significance of motifs by comparison with tailored random graph ensembles 21 with comparable topological constraints, instead of using a general random model. Among the different motifs, short loops have a two-fold advantage: their relevance can be directly validated with information-theoretic approaches and their functional unity can easily be challenged by targeted experiments, such as selective knockout or siRNA/RNAi silencing experiments.
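The comparison against tailored random graph ensembles can be quantified with a z-score of the observed motif count against the counts measured in the ensemble. A minimal sketch (using the population standard deviation is our choice):

```python
import math

def motif_zscore(observed, ensemble_counts):
    """Z-score of an observed motif count against counts measured in an
    ensemble of randomised graphs. Large |z| marks counts unlikely under
    the null model; the population standard deviation is used here.
    """
    n = len(ensemble_counts)
    mean = sum(ensemble_counts) / n
    var = sum((c - mean) ** 2 for c in ensemble_counts) / n
    sd = math.sqrt(var)
    if sd == 0:
        # degenerate ensemble: any deviation is "infinitely" surprising
        return 0.0 if observed == mean else math.inf
    return (observed - mean) / sd
```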
The inclusion of loop motifs in PPINs can be explained by their ability to perform specialised functions. We demonstrated this by annotating the proteins in a series of human PPINs with GO terms and then by estimating the degree of consensus in the functional terms for each loop. The results showed that, statistically, proteins in a loop are specialised to perform common functions. While previous studies demonstrated functional specialisation for specific regulatory motifs 31 or loops in specific cellular sub-networks 53 , this is the first comprehensive analysis covering loops of different lengths, networks from different species and extensive functional annotation. Moreover, these specialised functions are highly enriched in the loops compared to the overall network, while it is the opposite for regulatory functions. This suggests a model of cellular life in which regulatory processes are distributed over the network and they cover single functions that are performed by simple local motifs. This is consistent with a previous study reporting that local motifs are critical for the delivery of biological functions and their tendency to aggregate in functional units is not a trivial effect of statistical enrichment 54 .
Overall our results show evidence of three important roles of loop motifs in PPINs: first, loops contribute to define the wiring and topological properties of the network; second, they have a critical role in performing dedicated biological functions; and third, they can provide an indirect measure of the quality of the network model.
Evidence for a specific role of loops in defining the wiring of the networks was demonstrated by comparative analysis of their occurrence in PPINs from different species and from different experimental sources. In particular, we tested the effect of constraining the degree-degree correlation 11,21 during a randomisation process. Indeed, the information contained in this topological measure further contributes to defining the occurrence and structure of loops, as previously shown for other network features 11 . We suggest that loops contain unique information on the biology of the system. Indeed we found that their number and resilience under randomisation are related to the quality of the underlying network: higher quality (i.e. more biologically consistent) networks have similar properties regarding loop occurrence and resilience. Therefore, we reinforce the importance of core units in PPINs but, differently from previous reports 6,55 , we demonstrate here that these units are composed of geometric short loop motifs. To quantify this we implemented a novel and efficient protocol that can be extended to the study of other network motifs under different topological constraints.
Evidence for the functional role of loops was shown by the analysis of common terms after GO annotation. We found that generally loops have a functional purpose, as shown by the consistency of GO terms associated with their proteins. Indeed, proteins are recruited to form a complex to perform a set of specific biological functions, and loops may act as the basic unit to build more complex assemblies 54 . Additionally, a high degree of functional consensus may be exploited to predict biological processes of partially annotated protein complexes 56,57 . More interestingly, loops of different lengths show a slightly different enrichment for some terms, but strong differences in functional annotation when compared with the remaining proteins in the network. We found that the most resilient group of loops is associated with essential functions that include transcription, splicing and translation. By comparing different human PPINs we also found that functional consistency decreases with decreasing network quality. This is in line with recent evidence 55 that over the years the human interactome from published data has become more compact and less sparse. A defined functional core has emerged with the increase in quality. This is also associated with the discovery of a core sub-network of functional importance that is generally the target of diseases 6 . Therefore, our findings show convincing evidence for a practical use of loops in investigating the quality of detected PPINs. As previously discussed, the network quality in terms of accuracy of determination correlates directly with a) the pattern of change in the number of loops under randomisation, b) the degree of functional consensus and c) the occurrence of resilient core modules after randomisation.
On the basis of this we suggest that newly determined PPINs could be validated against recently published high quality networks 12 by comparison of their loop properties, measured against a null model of network interactions.
We demonstrate here that PPI loops contain significant information on functional mechanisms underlying the biology of the cell. They can be instrumental in the identification of essential modules delivering critical functions. Additionally they contribute to complete/validate functional annotation and to extend the annotation provided by pathway analysis, as shown in the case of cell cycle proteins. Finally, their suitability for experimental targeting allows for direct validation of predictions and identification of unannotated proteins in complexes that are abnormal in specific diseases.
Methods
Data Set. PPINs are graph models where proteins are described by nodes and interactions by edges. They are conventionally represented by binary matrices where the presence (or absence) of an interaction between each pair of proteins is recorded as 1 (or 0). In this study self-interactions and duplicate interactions were removed. A data set of 30 PPINs covering 11 species was derived from the literature (Table 1). The data set includes 25 PPINs previously described in a large-scale analysis study from our lab 11 and four recently published PPINs. The set includes five eukaryotic species (Caenorhabditis elegans, Drosophila melanogaster, Homo sapiens, Plasmodium falciparum and Saccharomyces cerevisiae) and six bacterial species (Campylobacter jejuni, Escherichia coli, Helicobacter pylori, Mesorhizobium loti, Synechocystis and Treponema pallidum). These interaction data were originally derived by six different methods: Yeast-two-Hybrid (Y2H), Affinity Purification-Mass Spectrometry (AP-MS), biochemical isolation of protein complexes by MS (BP-MS), Protein Complementation Assay (PCA), database deposition, and data integration. The most recently added PPINs include a network of human soluble proteins 12 with high-confidence physical interactions and three human 3D interactome networks [18][19][20] .
Algorithm for loop detection. In this study a loop is defined as a closed path without repeating nodes or edges (Supplementary Figure S5). To detect all loops in the network, an algorithm based on depth-first search (DFS) bounded by loop length was implemented in C. From a node assumed as the origin of a loop, a path is extended in depth by adding two directly connected forward nodes. The connected nodes are then tested for the existence of a common neighbour (directly interacting) node. Once found, the common node is added to the loop and the extension step is performed again until no common nodes are detected or the length of the path is equal to six. The algorithm finds all possible loops of the network in time O(n^l), where n is the number of proteins in the network and l is the loop length.
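A stdlib-Python sketch of the loop-length-bounded DFS idea follows; it is a simplification of the C implementation (it counts rather than enumerates loops, and assumes orderable node labels so that each cycle is explored only from its smallest node):

```python
def count_loops(edges, max_len):
    """Count simple cycles (closed paths without repeated nodes or
    edges) of length 3..max_len by a loop-length-bounded DFS.

    Returns {length: count}. To count each cycle once, paths are only
    extended through nodes larger than the origin, so every cycle is
    explored solely from its smallest node; each cycle is still found
    twice (once per traversal direction), hence the final halving.
    A sketch of the approach, not the C implementation of the study.
    """
    adj = {}
    for u, v in edges:
        if u == v:
            continue  # self-interactions are discarded
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    counts = {l: 0 for l in range(3, max_len + 1)}

    def dfs(origin, path, visited):
        last = path[-1]
        for nxt in adj[last]:
            if nxt == origin and len(path) >= 3:
                counts[len(path)] += 1  # path closes back to the origin
            elif nxt not in visited and nxt > origin and len(path) < max_len:
                dfs(origin, path + [nxt], visited | {nxt})

    for origin in adj:
        dfs(origin, [origin], {origin})
    return {l: c // 2 for l, c in counts.items()}
```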
Degree-Constrained Graph Dynamics Based on Edge Swaps. We compare the values of observables in our protein interaction networks with those observed in suitable null models, i.e. random networks which share some properties of the networks under study. We use two types of null models: random networks with the same degree distribution as the original protein interaction networks, and random networks with the same degree distribution and degree-degree correlations (Supplementary Material). Such tailored graph ensembles with controlled degree distribution and degree-degree correlations constitute a significant improvement, as null models, over fully random graph ensembles, which assume uncorrelated, Poissonian-distributed degrees. They can generate highly sophisticated null models by exact and unbiased algorithms. In addition, our method is efficient, because it does not require preprocessing and runs in linear time compared to other PPIN analysis methods 58 .
In order to generate the above null models we use rewiring algorithms that randomise protein interaction networks, yet conserving the degrees of its nodes, by repeated applications of edge swaps that act on quadruplets of nodes. Edge swaps are proposed at each time step and accepted with an acceptance rate which ensures convergence of the graph dynamics to equilibrium networks with controlled degreedegree correlations (Supplementary Material).
The observables under study are monitored during the whole graph dynamics until they stabilise to their equilibrium values, against which observations in the original protein interaction networks are benchmarked. The use of two different null models, random networks with the same degree distribution and degree-degree correlations of the original PPINs and uncorrelated networks with the same degree distribution, respectively, allow us to quantify the extent to which degree-degree correlations are responsible for the behaviour that we observe in the PPINs.
Detection of changes in loop number during MCGD. In this study, tailored ensembles of randomised graphs were generated by Markov Chain Graph Dynamics (MCGD) to assess the difference in the number of loops between biological and random networks of the same family 21 . To perform the randomisation while preserving specific topological features of the initial networks, the simulations were constrained by 1) the original degree distribution or 2) the degree distribution and degree-degree correlations (see the previous paragraph for details). The changes in the number of loops during MCGD showed a series of different patterns depending on the constraints, the loop length and the original network. These patterns were classified into eight groups according to whether the number of loops in the initial network was higher or lower than in the final randomised network. Considering the simulations under both constraints 1) and 2), there are six possible trends. Four of these trends were detected in the simulations and are shown schematically in Fig. 2a.
Classification of PPINs according to their topological properties. Principal Component Analysis (PCA) was performed on a set of variables describing the topological properties of the 30 PPINs in order to group them according to their network features. After a correlation analysis, four independent variables were selected: the number of interactions, the degree-degree correlation, the assortativity, and the average eigenvector centrality. These variables describe the size of the networks, their connectivity and the centrality of their nodes. The location of the networks in the space spanned by the first two PCs was used to identify groups by visual inspection. This grouping was then compared with the grouping associated with the pattern of decrease/increase in the number of loops after randomisation.
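The projection onto the first two principal components can be sketched compactly with NumPy (a generic standardised PCA via singular value decomposition, shown as an illustration rather than the authors' exact analysis):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project the rows of X (here: networks x topological variables)
    onto the first principal components, after centring each variable
    and scaling it to unit variance."""
    X = np.asarray(X, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # The right singular vectors of the standardised data matrix are the
    # principal axes, ordered by decreasing explained variance.
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:n_components].T
```

Plotting the resulting 30 x 2 score matrix gives the PC1/PC2 plane in which the network groups were identified by visual inspection.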
Analysis of functional enrichment by GO annotation. The recent high-confidence human soluble protein interaction network 12 was used for the functional analyses. To reduce possible biases from large, assembled and extensively annotated proteins 12 , a data set excluding the large ribosomal protein complex was also analysed. The 'biological process' domain of the GO vocabulary was used for the functional analysis of each PPIN. The enrichment in functional annotation was recorded for the set of proteins in short loops of different lengths compared with the remaining proteins in the network. Additionally, we defined the concept of functional consensus as the fraction of annotated GO terms that are common to all the proteins in a loop. The functional consensus can be considered a microscopic measure of functional enrichment. In the analysis of the frequency of functional terms, all general terms at the top of the GO hierarchy were excluded, as they are common to all annotated proteins. GO terms with more than 4 different child terms at level 2 were also excluded.
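The functional consensus measure can be sketched as follows (Python; the choice of the union of the loop's annotated terms as the denominator is one plausible reading of the definition above, not something the text states explicitly):

```python
def functional_consensus(loop_annotations):
    """Fraction of GO terms common to all proteins in a loop, relative
    to the union of terms annotated to the loop's proteins (denominator
    choice is an assumption).  Input: one set of GO term identifiers
    per protein in the loop."""
    sets = [set(a) for a in loop_annotations]
    union = set().union(*sets)
    if not union:
        return 0.0  # no annotation at all: no measurable consensus
    common = set.intersection(*sets)
    return len(common) / len(union)
```

A loop whose proteins share every annotated term scores 1.0, while a loop whose proteins share no term scores 0.0, matching the intuition of a microscopic enrichment measure.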
Software for network visualisation and statistical analysis. Loop detection and Markov Chain Graph Dynamics were implemented in C. Functional and statistical analyses were performed using in-house Python scripts, R 3.0.2, the Bioconductor 59 packages UniProt.ws and GO.db, and QuickGO. Network images were generated with Cytoscape 3.0.2 60 .
Clinical and demographic characteristics of secluded and mechanically restrained mentally ill patients: a retrospective study
Background Restraint or seclusion measures in acute psychiatric care are used as a last resort when all other methods for removal of physical threat have failed. The purpose of this study is to find correlations between coercive measures and demographic characteristics within this patient group, and to identify factors associated with shortened periods of restriction. Methods This is a one-year retrospective study conducted in a male acute closed ward of a psychiatric hospital in Israel. The data from January 1, 2014 to December 31, 2014 were retrieved from the records of patients who underwent restraint and/or seclusion interventions during this period. The analyzed data included age, psychiatric diagnosis, marital status, education, race, ethnicity, length of hospital stay, legal status during admission, type of coercive measure (mechanical restraint, seclusion), number and duration of coercive episodes, reasons for coercion, time of event, number of previous hospitalizations, aggression in past and present treatment, and treatment during events. Results During this time period, there were 563 admissions in the study ward. Over this period, 176 subjects (31.3%) underwent 488 restraint and/or seclusion episodes; 98% of them had been aggressive in the past. (Although some results reached statistical significance, we prefer to emphasize here only the most important results, while the others will be presented in the text.) Patients with personality disorders were physically restricted for the longest time, while schizophrenia patients were restricted for the shortest time compared with other diagnoses (p = 0.007). A negative correlation was found between the length of coercion and the number of academic female nurses on duty (p = 0.005), as well as with the administration of sedative medications during the restriction procedure. Conclusions We believe that the presence of registered, academic female nurses on duty and the administration of medication during coercive measures can reduce the length of restriction.
Background
Every country has different laws regarding mentally ill patients who need involuntary medical intervention, and its own system for coping with them. The public mental health administration in Israel is managed by district psychiatrists, who have the legal authority to impose hospitalization and/or treatment after a voluntary or forced psychiatric examination.
Since 1991, in accordance with the Israeli law for treating mentally ill persons, there have been three categories for involuntary hospitalizations: a) by a district psychiatrist, b) by court for forensic observation, and c) by court for forced treatment.
The problem of violent behavior in psychiatric facilities remains very relevant in the clinician's everyday practice. Despite significant progress in the pharmacological treatment of mental disorders, mechanical restraint and seclusion are still used in daily practice for psychiatric inpatients. Professional staff usually prefers to avoid using these procedures, since they limit a patient's freedom and impair the patient's dignity. However, sometimes it is necessary to do so in order to contain extreme episodes of dangerous behavior threatening the patient's surrounding environment. In general, restraint is needed for immediate control of physically threatening and/or agitated behavior. This behavior usually resolves and does not recur following the patient's release [1,2].
The correct use of coercive measures in a psychiatric ward is a major challenge. It is essential to ensure that these interventions are used only when other approaches have failed. Most of the studies in this field were published over the first decade of the twenty-first century; and according to a recent review, restraint use remains frequent [3]. About 6-17% of psychiatric inpatients undergo this intervention and its prevalence may reach as high as 38% in some involuntarily admitted psychiatric patients [3,4]. For various reasons, about 24% of all patients admitted to a psychiatric emergency department require restraint or a combination of seclusion and restraint [5]. Published data concerning coercive measures use in various countries are presented in Table 1.
A literature review and survey of international trends in psychiatric hospitals in developed countries found great variations in the incidence of seclusion, from 3.6 to 15.6%, and in the incidence of restraint, 1.2 to 8.0% [6]. However, several other studies have reported a range in the incidence of seclusion without restraint in adult psychiatric settings between 4 and 44% [7], and a range in use of seclusion with restraints between 4% [8] and 12% [9]. Longer hospital stay has been associated with a greater risk of seclusion with or without restraint [10,11]. Studies have also found that patients who have been secluded stay longer in the hospital [9][10][11], and they have had more previous admissions [12]. In recent decades, psychiatric staff has tried intentionally to reduce the use of coercion, in accordance with international professional ethical guidelines [13].
The present study investigates characteristics of adult inpatients in a closed male psychiatric ward in Israel who underwent restraint and/or seclusion in an upholstered room, and identifies correlations between demographic, clinical, and other factors with the length of the restriction periods.
Materials and methods
This is a one-year retrospective evaluation study of restraint and seclusion in an upholstered room in a male acute, closed psychiatric ward in a government-owned mental health center, performed from January 1st, 2014 to December 31st, 2014. The data examined were from a period prior to recently issued guidelines on the use of restraint and seclusion by the Israeli Ministry of Health. The ward contains 50 beds and admits voluntary and involuntary patients. In this hospital there are only gender-separated acute closed wards, as is the case in most acute closed wards in Israel. The separated wards minimize interactions between male and female patients. The study population consisted of all ward admissions during the study period. The data were retrieved from each patient's record. The analysis included the data on the following variables: age, marital status, education, race, ethnicity, psychiatric diagnosis according to ICD-10, length of hospital stay in days, type of event (seclusion or restraint), reasons for coercion, time of event, number of events per patient, total length of coercion in hours, number of previous hospitalizations, aggression in past and present treatment, and treatment during event. We evaluated correlations between the type of coercion (seclusion or restraint) and the number of staff members, their educational level, and the percent of inpatient occupancy in the ward at the time of event.
Seclusion is defined as placing the patient in a locked upholstered room. During seclusion, patients are observed by video monitoring, and there is communication via intercom. Mechanical restraint refers to the use of belts attached to a bed in a special single-bed room in order to restrict movement of each of the patient's arms and legs. According to our internal hospital guidelines, mechanically restrained patients should be monitored continuously via a closed-circuit TV system and intermittently by the nursing staff, who reassess the patient's level of agitation and vital signs every half hour. During the use of coercive measures, patients receive either their regular treatment, but earlier than usual, or additional sedative drug therapy. There were two groups of additional sedative medications: 1) antipsychotics, and 2) benzodiazepines. Choice of the medication was based on the clinician's judgment. The study was approved by the Institutional Review Board.
Statistical analysis
The statistical analysis was performed in three phases. First phase: descriptive statistics. In this phase we assessed the mean and standard deviation for all the quantitative, normally distributed variables and the median and range for all the quantitative, non-normally distributed variables. We assessed percentages for the categorical variables. Second phase: univariate analysis. In this phase we compared the dependent variable with each of the independent categorical variables, using either the Mann-Whitney or the Kruskal-Wallis test, as appropriate. For comparisons of the dependent variable with independent quantitative variables, we used Spearman's rank correlation coefficient. Third phase: multivariable analysis. In this phase we used multivariate linear models. The regression model was constructed to include covariates that were found to be statistically significant in the univariate analysis and/or were clinically significant.

Results

The patients' characteristics are presented in Table 2. Time spent in the upholstered room or being physically restrained was longest in patients with personality disorders (median 4 h, range 1.5-182.5 h) compared with other diagnoses (p = 0.007), and shortest among schizophrenia patients. The data are presented in Table 3.
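The univariate phase described in the Statistical analysis section can be sketched as follows (Python with SciPy, shown only as an illustration; the function name and the synthetic data are hypothetical, not the study's data):

```python
from scipy import stats

def univariate_tests(duration, group, covariate):
    """Univariate phase: compare coercion duration across a categorical
    grouping (Mann-Whitney for two groups, Kruskal-Wallis for more) and
    correlate it with a quantitative covariate (Spearman's rho)."""
    levels = sorted(set(group))
    samples = [[d for d, g in zip(duration, group) if g == lv]
               for lv in levels]
    if len(levels) == 2:
        _, p_group = stats.mannwhitneyu(*samples)
    else:
        _, p_group = stats.kruskal(*samples)
    rho, p_corr = stats.spearmanr(duration, covariate)
    return p_group, rho, p_corr
```

The rank-based tests are appropriate here because coercion duration is not normally distributed, which is exactly why the authors chose Mann-Whitney, Kruskal-Wallis and Spearman over their parametric counterparts.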
A negative correlation was found between the length of coercion (in hours) and: the age of the patients (r = −0.119, p = 0.001); the number of previous hospitalizations (r = −0.019, p < 0.001); the duration of the present hospitalization (r = −0.255, p < 0.001); the number of registered academic female nurses on duty (r = −0.128, p < 0.005); and the number of registered academic male nurses on duty (r = −0.097, p < 0.05) (Table 4). Table 5 presents the results of applying multivariate linear regression to identify factors associated with the duration of the coercive measures. There is a significant association between additional medication and the duration of coercion.
Patients who were restrained had a longer duration of coercion than patients who were secluded. In addition, patients who were divorced had a longer duration of coercion than those who were married.
Variables that were negatively associated with the length of coercion included: a diagnosis of schizophrenia, single marital status, the number of academic male nurses on duty, and the number of academic female nurses on duty. Variables that were positively associated with the length of coercion included: restraint (as opposed to seclusion), divorced marital status, and the administration of additional sedative medication.
Discussion
Coercive measures are sometimes necessary to manage a patient with severely aggressive behavior. These measures can include physical force, mechanical devices, or drugs that temporarily restrict freedom of movement or control behavior. Although they are not a part of the patient's standard treatment, according to the Joint Commission on Accreditation of Health Care Organizations (JCAHO) criteria [14], such intervention may be implemented as emergency treatment for patients whose behavior is dangerous to themselves or others. Appropriate management by psychiatric ward staff of patients with disruptive behavior creates a sense of security and can help to find the balance between required therapeutic interventions and the need to preserve patients' dignity. The indications for limitations of freedom in psychiatric facilities are not always well defined and may be prone to abuse. In many countries, there are no national guidelines regarding the use of coercive measures.
The Israeli law for the treatment of mentally ill patients (1991) sets a policy for using means of coercion, as follows: A. the term restraint may refer to either seclusion or restriction; B. coercive measures should be used on hospitalized patients only to the extent required for the medical treatment of the patient or to prevent danger to himself or others; C. the medical directive for the use of a coercive measure should be given in writing by a physician for a limited period; in a state of emergency and in the absence of a physician, a nurse is permitted to give a coercive instruction.
Published studies performed in different countries indicate that coercive measures are used in 100% of the wards in Germany, in 60% of those in Switzerland, and, for all practical purposes, in none of the wards in Great Britain, where physical restraint is applied only along with pharmacological restraint and for a very short period of time (mean 12 min) [4]. Such differences can be explained by cultural and demographic differences between patients in different countries, the number of personnel per patient, overcrowding, and the physical conditions of the facilities.
Our findings from this retrospective study demonstrate that during the study period about one third of all admitted patients (31.3%) required the use of coercive measures. These results correspond to those of other studies performed on a similar sample size over a similar duration [11,15]. In our study, since most of the patients needing coercive measures were dangerous to themselves, more were restrained than secluded, as it is hard to prevent self-harm without restraints.
Our findings concerning age and the number and duration of hospitalizations are similar to those of other studies. McLaughlin et al. found that all coercive measures were associated with patients staying longer in hospital [16]. Caqueo-Urizar et al. [17] noted that younger age is associated with more violent behavior, since young adulthood is a critical period at risk for aggressive behavior. According to the study performed by Dumais et al., younger age and a longer stay in hospital are predictors of an episode of seclusion with restraint [18]. Although patients with a principal diagnosis of a personality disorder were least likely to undergo coercive measures in our sample (about 3%), these patients were restricted for the longest period in comparison with patients with other diagnoses. Use of physical restraint in this particular group brought temporary relief from the feelings of regret and remorse about the trouble they had caused others during their repeated cycles of violence and aggression [19]. In our opinion, the explanation for this discrepancy may be related to the fact that patients with personality disorders, especially those with aggressive and violent behavior, frequently evoke a negative countertransference in the staff. In some cases, hopefully rare, staff might use these measures as punishment. We suggest that this unconscious reaction may lead to a longer time of restriction. Therefore, it is important to educate staff and raise their awareness of this issue. In addition, physical restraint is often not the sole solution for patients with a personality disorder who exhibit aggressive and violent behavior. They also need pharmacological and psychological treatment in order to achieve long-term, effective management of their behavior.
Conversely, patients with schizophrenia (the most prevalent diagnosis, about 53%) were under coercive measures for the shortest time in comparison with patients with other diagnoses. A similar trend was observed in studies performed by Dumais et al. [20], Caqueo-Urizar [17], and Huber et al. [21]. According to Beck et al. [10], schizophrenia or other psychoses appear to be a risk factor for a single episode of restriction, but not for multiple episodes. The review by Beghi et al. [3] notes that in some studies a diagnosis of schizophrenia may increase the risk of aggressiveness and restraint.
Interestingly, we found that female nurses generally used less restraint than male nurses, and that nurses with academic degrees, both male and female, used coercive measures less. Similar data were described in some studies, which found that a male staff member is more likely to use restraint than a female staff member [22,23]. These researchers believe that aggressiveness tends to be directed against people of the same gender and, given that more male patients are restrained, this is more likely to be done by male staff. Our results emphasize that the level of academic education of the staff is fundamental and influences the duration of coercion. In our opinion, the key principles and initial management of the agitated patient should be part of the syllabus for nursing students.
Some authors have found that inpatient overcrowding is associated with an increased rate of aggressive behavior [17,18,21,28,29]. In contrast, we did not find any association between the degree of crowding in the ward and frequency or duration of coercive measures.
Like others [10,11,28,29], we did not find an association between patients' educational level or ethnic origin and the frequency or length of coercive measures. However, some studies have found associations between low educational level or ethnicity and a higher level of aggression and a longer duration of restraint [15,17,30,31].
The main limitation of our study is that it was a retrospective one. Thus we cannot be certain that the associations we have demonstrated are causal.
It should be noted that over the last two years there has been a public debate in Israel concerning the restraint of patients in psychiatric units. The practice of restraint has diminished dramatically following this debate. Future studies should be designed to evaluate the results of these changes, for example, whether they have had an influence on patients' violence against staff and other patients.
Conclusions
Physical restraint should be based on the prevailing needs of the patient and used only as a "last resort" [19]. According to the Human Rights Working Group, seclusion is "the restriction of a person's freedom, without his or her consent, by locking him or her in a room. It can be justified only on the basis of a clearly identified and significant risk of serious harm to others that cannot be managed with greater safety by any other means" [32].
Locking someone alone in a room is a serious intervention and must be carefully regulated and monitored, in order to avoid abuse. We believe that it can be done, when essential, by applying a rigorous set of principles for its use and ensuring that there is a careful framework for monitoring the practice at the local level. The use of seclusion, while less restrictive, should always and only be applied for the benefit of the patient. Even so, the use of restriction can cause distress and psychological harm and can increase the potential risk of self-harm [19]. Therefore, evaluation of the intended benefit should be carried out after each episode of restraint.
We assume that three factors associated with the length of coercion should be taken into account: 1) a personality disorder may lengthen it; 2) the presence on duty of male and female nurses with academic degrees can reduce it; and 3) giving sedative medication during coercive measures can also diminish the length of restriction.
Coercive measures should be applied as an adjunct to pharmacological treatment and psychological support, to achieve long-term, effective management of the patients' disturbances. Furthermore, staff should have access to routine supervision with regard to their practice. To prevent the risks associated with the use of restraint in psychiatry, it is necessary to train staff with courses that encourage the use of diverse methods of managing aggressive patients (de-escalation techniques) [33][34][35].
We believe that restraint use should be minimized, but it is sometimes necessary to prevent and manage violent, aggressive, and self-harming behavior in patients. It will be used most fairly and effectively when it results from a decision-making process by highly skilled staff based on careful observation and is then carefully monitored.
DESIGN AND EVALUATION FRAMEWORK FOR RELEVANT CHEMISTRY-RELATED EDUCATIONAL CARD AND BOARD GAMES
During the 21st century, new generations of both commercial board games and digital games have appeared, and in their wake, game-based learning has been extensively studied in recent years. There has also been some research on and development of card and board games for learning chemistry. Most of this research has been conducted in the field of regular and educational digital games. Many different classification, evaluation and assessment frameworks and tools are available for digital games. Few have been developed for card or board games, but many general rules for good educational games have been offered in research articles. Based on a literature review, a novel design and evaluation framework for card and board games for chemistry education on the lower secondary level has been developed. The aim of this framework is to help designers and teachers to design new educational card and board games, to support them in evaluating the viability of already existing chemistry-related educational games and instructing them in supporting student learning with a game.
Introduction
Game-based learning is one of the topical teaching approaches studied around the world.
There is a need for designing more relevant learning environments to engage students, especially in chemistry. Game-based learning has been extensively studied in recent years.
The research has focused especially on digital games, but there has also been some research on and development of educational card and board games related to chemistry (e.g., Tüysüz, 2009; Kavak, 2012; Rastegarpour & Poopak, 2012; Bayir, 2014). During the 21st century, new generations of both commercial board games and digital games have appeared (Keskitalo, 2010).
Definition of Educational Game
The term game can be defined as "a system in which players engage in an artificial conflict, defined by rules, that results in a quantifiable outcome" (Salen & Zimmerman, 2003). An educational game is a game with a certain didactical meaning that aims to support, improve and advance the learning process (Dondi & Moretti, 2007).
Educational Games as a Teaching Method in General and in Chemistry in Particular
As a teaching method, educational games employ the idea of a student participating actively in his or her own learning. Active participation in the learning process enables deeper conceptual understanding (Lujan & DiCarlo, 2006; Tüysüz, 2009). Educational games are mentioned as one motivating teaching method for group work on the lower secondary level in the new Finnish National Curriculum Framework 2016 (OPS 2016, 2015).
There are many opportunities and challenges in using educational games in chemistry education. In general, pre-, in- and post-game guidance and instructions given by a teacher have a positive effect on students' motivation and learning performance, ensuring a positive attitude towards the topic of learning (Casbergue & Kieff, 1998; Ke, 2009). Using educational games has been found to support especially students who are lower achievers, but not without a teacher's guidance (Ke, 2009, 21-22). If a teacher is not involved, the students just learn how to play and are not immersed in the learning topics (Ke, 2009).
Challenges and conflicts as well as rewards and feedback during play give alternating feelings of frustration and pleasure to players. These alternating feelings are one of the engaging elements of games and playing (e.g., Tüysüz, 2009; Annetta, 2010). A co-operative goal structure, as opposed to competitive or individualistic goals, has been found to enhance positive attitudes towards the learning content. Boys have been found to engage better in co-operational and problem-based playing than girls (Ke, 2008). According to Ke (2009), competition is the difference between a game and a simulation, but in-game competition should not necessarily be against other players, but towards the goal. Playing a computer game based on uncertainty improves the factual memorisation and emotional engagement of students aged 10-16. Purportedly, this is also effective in non-digital games and other areas of learning (Howard-Jones & Demetriou, 2009). With adults, it has been suggested that uncertainty and changes in game difficulty engage the player in both the learning and the game (Chanel, Rebetez, Bétrancourt, & Pun, 2008; Tüysüz, 2009). The effects of uncertainty or different difficulty levels in chemistry-related educational card or board games have not yet been studied.
Both computer and card games have been demonstrated to have a positive effect on chemistry learning and attitudes towards chemistry (Sherman & Sherman, 1980; Kavak, 2012; Rastegarpour & Poopak, 2012). Supporting the idea of chemistry learning with computer-based games, statistically significant positive effects (n = 176, p < 0.05) were found in chemistry achievement and attitudes towards chemistry among pre-service primary-level teachers, although no significant difference in their metacognitive levels was observed (Tüysüz, 2009). Bayir (2014) has studied three different chemistry-related card and board games with 250 students (grades 9-12) and 30 in-service and pre-service teachers. The themes of the games were compounds, the elements and the periodic table. The results of Bayir's research indicate that using these card and board games was beneficial to both the teachers and the students. According to the teachers, the three main benefits of using the games were facilitating the learning of the main concepts, teaching the concepts in an interesting and enjoyable way, and enabling the students to understand the relationships between the concepts. The research suggests that games spark students' interest in chemistry and facilitate the learning of the chemistry concepts in question.
There is also evidence against games as effective learning tools (e.g., Randel, Morris, Wetzel, & Whitehill, 1992;Emes, 1997).The conclusions of these studies propose that games are effective only for certain learning content and in situations where the objectives are clearly defined.
Evaluation Criteria for Educational Games
Different criteria for evaluating digital games exist, including, for example, the flow of the game. Design research in science education is an effective research method that is connected to real teaching situations and real problems of science education. Design research has three main parts: problem analysis, design procedure and design solution. There might be some variation and the parts may be repeated during the research process (Edelson, 2002), but the process always starts with problem analysis to ensure that the design approach is based on a real-life problem and that it encompasses a theoretical framework (Pernaa, 2013).
Methods
The main aim is to develop a design and evaluation framework for educational card and board games related to chemistry learning on the lower secondary level. This framework is intended to support both teaching and learning and to help develop new, high-quality educational games. The objective of this research project is to answer the main research question: What kind of framework for designing and evaluating games for chemistry education would support the creation of games for better and more relevant teaching and learning?
Theoretical Problem Analysis
The problem analysis for this design research project was theoretical. It was executed as a literature review. This systematic literature review was conducted in accordance with the criteria and models for integrating and systematic data collection (Salminen, 2011; Koskinen, Kangas, & Krokfors, 2014).
The literature review proceeded in three phases: searching, data extraction and data synthesis. After the searching phase, data extraction was executed with a directed content analysis.
Directed content analysis is one of the three different approaches to qualitative content analysis in which a theoretical framework serves as a basis for initial codes (Hsieh & Shannon, 2005).
The last phase of the literature review was data synthesis, where the codes from the data extraction phase were clustered and evaluated. During the evaluation, the codes which were found to be irrelevant or impossible to take into account in card and board games were discarded.
Design Procedure
After clustering, a design and evaluation framework for chemistry-related educational card and board games on the lower secondary level was developed by forming design and evaluation criteria to give the framework a structure that teachers could easily understand.
The section on lower secondary level chemistry in the Finnish National Curriculum Framework 2016 (OPS 2016, 2015) was used as an example for incorporating curricular chemistry aspects into the framework. The main topics from the curriculum were gathered and included in the design and evaluation framework. The structure of the framework was modified and additional subclasses and details were added in order to make the structure and content of the framework as clear and usable as possible.
In accordance with the rules for qualitative design research presented by the Design-Based Research Collective (2003), the design solution produced in this study is a guiding model which is transferable to different fields of teaching.
Results of the Literature Review
The literature review was conducted to broaden the theoretical framework and determine the theory-driven central concepts in the assessment and evaluation of both games and educational games. These central concepts are listed in Table 1. After the synthesis, some of the central concepts were excluded because they were mainly connected to digital games and much easier to include in the context of, for example, video games than in card or board games. The excluded concepts were: navigation, number of attempts, identity or customisable characters, and narrativity or richness of storyline. Self-esteem was also excluded, even though it could probably be evaluated with a post-game questionnaire or some sort of self-assessment tool.
Design Solution
Based on the results of both the theoretical problem analysis (Table 1) and the design process (Chapter 2.2), a novel design and evaluation framework was developed for card and board games related to chemistry learning on the lower secondary level (Figure 1). This framework includes the classes and subclasses which should be included in a good chemistry-related educational game. The tool also helps teachers to assess and support a playing session. The detailed subclasses may be used to guide game design and evaluation.
The best research-based alternatives are in bold.
Discussion and conclusions
Results of the literature review (Table 1) clearly demonstrate a consensus among developers of digital educational games about what the central and important concepts and contents in games are. These results support and add to both the theoretical framework about games as a teaching method and the main lower secondary level chemistry content of the Finnish National Curriculum Framework 2016. Both the curriculum and the developers of educational games focus on sociality and co-operation; evaluation, assessment and feedback; problem solving and challenges; and real-life connections. All of these concepts are also connected to the socio-constructivist concept of learning and to supporting guidance and formative assessment in helping students learn better.
The design solution (Figure 1) is a framework which makes it easy for a teacher to evaluate the educational quality of a certain card or board game. The framework can also be used for games in other formats. The curriculum content in the framework can be adjusted to support curricula in different countries.
The next point of study will be how new games can be created by using the developed design and evaluation framework.
The phases of the literature review were:
1. Defining the objective: to review the relevant literature relating to the classification and evaluation of educational games.
2. Defining criteria for articles: the article includes a tool, framework or other relevant, research-based information for classifying or evaluating games or educational games; the article does not include only simulations or commercial games; the article presents general information and does not focus only on one game (exceptions to this rule are articles about chemistry-related games); the entire article is available without extra cost; sources are from 2000 to 2014.
3. Defining key words for articles: first search with games and classification, second search with games and evaluation, third search with games and quality assessment, fourth search with educational games and quality assessment.
4. Defining data sources: the Nelli Search Portal (including databases, journals and e-journal sources in the Helsinki University Library and the National Library of Finland).
5. Literature search (see below).
6. Data extraction: a directed content analysis with coding (see below and Chapter 3).
7. Data synthesis: a directed content analysis with clustering (see below and Chapter 3).

A total of 11 articles from the four searches were accepted into the data extraction phase of this literature review. The first search with the key words games and classification yielded 132,219 results. When sorted by relevance, none of the first ten articles were considered relevant for this research. These articles only classified different games by violence or cheating, for example, or their content concerned something other than playable games (e.g., weakly acyclic games). The second search with the key words games and evaluation yielded 339,307 results, a number far too high to wade through. The decision was made to only include the top 90 articles when sorted by relevance: 6 of these 90 were considered to comply with the criteria set for articles and were accepted. This search was then adjusted to only include articles from the year 2014, and three more articles were accepted. The third search with the key words games and quality assessment resulted in 126,075 articles; the relevant top 30 of these were waded through and two more articles were accepted. The fourth and last search with the key words educational games and quality assessment yielded 51,080 results. Two more articles were accepted from the top 30 of articles sorted by relevance.

An additional 15 articles were chosen to complement the data search described above. These articles were either referenced in the searched articles or they addressed developing, evaluating or researching chemistry-related educational card and board games.
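The inclusion screening described above can be sketched as a simple filter. The record fields below are hypothetical and only illustrate the criteria; they are not the actual search data:

```python
# Hypothetical article records; the field names are illustrative assumptions.
articles = [
    {"title": "A framework for evaluating educational games",
     "year": 2012, "free_access": True, "single_game_only": False},
    {"title": "Review of one commercial game",
     "year": 2013, "free_access": True, "single_game_only": True},
    {"title": "Game-based learning taxonomy",
     "year": 1998, "free_access": True, "single_game_only": False},
]

def meets_criteria(article):
    # Criteria from the review: published 2000-2014, available without
    # extra cost, and not focused on only one game.
    return (2000 <= article["year"] <= 2014
            and article["free_access"]
            and not article["single_game_only"])

accepted = [a["title"] for a in articles if meets_criteria(a)]
```

The chemistry-related exception to the single-game rule is omitted here for brevity; in practice it would be one more clause in `meets_criteria`.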
Figure 1. The design and evaluation framework for chemistry-related educational card and board games on the lower secondary level.
Table 1. Coding and clustering of central concepts in the field of game and educational game assessment and evaluation, based on the literature review (n=26).
Equilibrium States in Open Quantum Systems
The aim of this paper is to study the question of whether or not equilibrium states exist in open quantum systems that are embedded in at least two environments and are described by a non-Hermitian Hamilton operator H. The eigenfunctions of H contain the influence of exceptional points (EPs) and external mixing (EM) of the states via the environment. As a result, equilibrium states exist (far from EPs). They are different from those of the corresponding closed system. Their wavefunctions are orthogonal even though the Hamiltonian is non-Hermitian.
Introduction
In many recent studies, quantum systems are described by a non-Hermitian Hamilton operator H. Mainly, only the eigenvalues of H are considered, because the main interest of these studies is in answering the question of whether the eigenvalues of the non-Hermitian Hamiltonian are real or complex [1]. Additionally, the conditions for changing from real to complex eigenvalues and vice versa are considered in many papers. In other studies, calculations for realistic small quantum systems are performed by using the complex scaling method [2]. Recently, topological insulators have also been considered in the presence of non-Hermiticity [3][4][5]. In the description of realistic systems, the non-Hermiticity of the Hamiltonian arises from embedding the system into an environment, as explained in References [6,7]. It is derived from the full Hamiltonian describing the system together with the environment, which is, of course, Hermitian.
In order to describe the properties of a realistic quantum system, one needs not only the eigenvalues of the Hamiltonian but also its eigenfunctions. This is the long-standing experience obtained from standard quantum mechanics calculations performed with Hermitian operators, and there is no reason why it should be different in the description of a system by means of a non-Hermitian Hamiltonian.
The characteristics of the eigenvalues together with those of the eigenfunctions of a non-Hermitian Hamiltonian H have, so far, been studied for a small open quantum system only in the papers of one group; see, e.g., Reference [6]. Their theoretical results are compared with those known experimentally [7]. These theoretical studies are performed in the same manner as those in standard quantum mechanics, with the only difference being that the Hermitian Hamiltonian is replaced by a non-Hermitian operator H. The results therefore allow us to find a direct answer to the question of whether or not it is necessary to describe a system (in a special situation) by a non-Hermitian Hamilton operator. Additionally, the results show that a unique answer does not exist; it depends on the conditions under which the system is considered.
Of special interest is the challenging question of whether or not an equilibrium state exists in an open quantum system that is embedded in at least two environments. The challenge of this question arises from the fact that it combines two conflicting concepts. On the one hand, the system is open, meaning that its properties are not fixed and will vary under the influence of the environment. On the other hand, the system is expected to be in equilibrium. It is therefore no wonder that this question has not been considered up to now.
The present paper is aimed at a study of this question in the framework of the non-Hermitian formalism. For this purpose, we will sketch some results obtained by using the formalism worked out in Reference [6]. We will then consider the parametric evolution of an open quantum system and the possibility of forming an equilibrium state. In any case, such a state is expected to be different from the equilibrium state of the corresponding closed system (described by the Hermitian Hamilton operator H).
The paper is organized in the following manner. In Section 2, the non-Hermitian formalism is shown to differ from that of the Hermitian formalism by two basic features: (i) the existence of singular points, which are mainly called exceptional points (EPs) [8] and (ii) the possibility of a mixing of the eigenfunctions of H via the environment, usually called external mixing (EM) [9]. In Section 3, the information entropy is defined, while in Section 4, the possible formation of an equilibrium state in an open quantum system (coupled to at least two environments) is considered. Section 5 is devoted to the different aspects of the question of whether or not it is really necessary to consider the non-Hermiticity of the Hamiltonian when describing an open quantum system. Here, we consider the most common case; that the system is coupled to more than one environment. The conclusions are contained in the last Section 6. In Appendix A, a few experimental results are listed which cannot be explained in the framework of Hermitian quantum physics.
Non-Hermitian versus Hermitian Formalism
In standard quantum theory, the Hamilton operator of a many-body system is assumed to be Hermitian. Its eigenvalues are real and provide the energies of the states. The lifetimes of the states cannot be calculated directly; they are usually obtained from the probability that the particles tunnel from inside the system to the outside. The Hermitian Hamilton operator describes a closed system, and has provided many numerical results which agree well with experimental observations. A few experimental results, however, are not explained in spite of much effort; they remain puzzling (see Appendix A for a few examples).
Full information on a quantum system can be obtained only when it is observed either by a special measurement device or in a natural manner, that is by the environments in which the system is embedded. Such a system is open and is described best by a Hamilton operator which is non-Hermitian [6].
Quantum systems are usually localized in space, meaning that they have a finite extension with a characteristic shape. The environments are however infinitely extended. The environments of an open quantum system can, generally, be parametrically varied and thus allow a parameter dependent study of various properties of quantum systems. Studies of such a type are now performed theoretically and experimentally on various different open quantum systems [7]. They provided many interesting results, which are necessary for a deeper understanding of quantum physics.
From a mathematical point of view, the non-Hermitian formalism is much more complicated than the Hermitian one. One of the most important problems is the existence of singularities [8] that may appear in the non-Hermitian formalism. At these singular points, mainly called exceptional points (EPs), two eigenvalues of the Hamiltonian coalesce and, even more relevantly for the system properties, the eigenfunctions deviate from the expected ones in a certain finite neighborhood around these points [6,9]. The deviations are caused by nonlinearities of the involved equations at and near the EPs.
At an EP, the trajectories of the eigenvalues ℰ_{1,2} ≡ E_{1,2} + (i/2)Γ_{1,2} of the two crossing states of H [10] do not move linearly as a function of a certain parameter. The eigenvalue trajectories are rather exchanged: the trajectory of state 1 continues to move as the trajectory of state 2 and vice versa. The same holds true for the trajectories of the eigenfunctions. An eigenvalue trajectory is influenced by an EP not only at the position of the EP but also in its vicinity. Here, Im(ℰ_i) may be exchanged while the corresponding Re(ℰ_i) are not exchanged, or the other way around. The first case (exchange of the widths) can be traced to discrete states [11], which is nothing but the Landau-Zener effect, known very well (but not fully explained) in Hermitian quantum physics. The trajectories of the eigenfunctions show a behavior which corresponds to that of the eigenvalue trajectories, as illustrated first in Reference [11]. In any case, a linear motion of the eigenvalue trajectories occurs only far from an EP.
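The coalescence of two eigenvalues at an EP can be illustrated with a minimal two-level toy model; the matrix and the parameter values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def two_level_H(w, g=1.0):
    # Toy non-Hermitian 2x2 Hamiltonian: one state with gain +ig,
    # one with loss -ig, coupled with strength w.
    return np.array([[1j * g, w],
                     [w, -1j * g]])

def splitting(w, g=1.0):
    # Distance between the two eigenvalues; it vanishes at the EP w = g.
    e1, e2 = np.linalg.eigvals(two_level_H(w, g))
    return abs(e1 - e2)

# Far from the EP (e.g., w = 2g) the eigenvalues are well separated;
# at w = g they coalesce and the two eigenfunctions become parallel.
```

For this model the eigenvalues are ±sqrt(w² − g²), so the splitting closes exactly at w = g; near that point the eigenvalue trajectories no longer depend linearly on w, which is the nonlinear behavior described above.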
Another important problem is the fact that all of the states of the system may interact via the environment. This is a second-order process since every state is directly coupled to the environment and hence-via the environment-to another state. This process is usually called external mixing (EM) of the states [6,9].
A naturally arising question is what are the conditions under which a system can be nevertheless described by a Hermitian Hamilton operator? In other words, when it is really necessary to describe a system by means of a non-Hermitian Hamilton operator?
The main results of many studies on realistic quantum systems are the following: (i) Far from EPs and at low level density (where every state is well separated from neighboring states), the description of a system by means of a Hermitian Hamilton operator provides good results; this fact is well known from countless calculations over many years. (ii) Near EPs, the non-Hermitian formalism provides results which are counterintuitive; these results agree (at least qualitatively) with puzzling experimental observations (see Appendix A). (iii) The EM of all states via the environment changes the eigenfunctions of the Hamiltonian even though EM is a second-order process [6,9]. (iv) In approaching an EP, the EM increases to infinity [9]; it is therefore impossible for any source (such as light) to interact with the system at an EP (for an example, see Appendix A).
The answer to the above question is therefore that the non-Hermiticity of the Hamiltonian must be taken into account in all numerical calculations for which the influence of EPs and EM cannot be neglected. All other cases can be well described numerically by means of the standard methods with a Hermitian Hamilton operator. This, however, does not mean that the Hamiltonian of an open quantum system is really Hermitian. Quite the contrary: the singular EPs cause, in either case, nonlinear processes in a finite parameter range around their position [9]. These nonlinear processes, being inherent in the non-Hermitian formalism, determine the (parametric) evolution of an open quantum system.
The results obtained from calculations for a system that is embedded in only one environment seem to contradict these statements: such a system can be described either by taking into consideration the influences of EPs and EM or by neglecting them, and the results are the same. The reason for this unexpected result is that nonlinear processes are able to restore the correct results (corresponding to the calculations with inclusion of EPs and EM) even when EPs and EM are explicitly neglected in the calculation. These results feign a good description of the system by means of a Hermitian Hamilton operator [9]. It should be underlined, however, that this occurs only when the system is embedded in no more than one well-defined environment.
New interesting effects occur in systems that are embedded in more than one environment (which is commonly the case). A realistic example is transmission through a small system (e.g., a quantum dot) which needs at least two environments. One of the two environments is related to the incoming flux and the other one to the outgoing flux. In this case, one of the most important new effects is the appearance of coherence [12] in addition to the always existing dissipation in open quantum systems. Coherence is correlated with an enhanced transmission, and is therefore of relevance for applications.
Information Entropy and Equilibrium States
The properties of open quantum systems depend on many different conditions, mainly simulated by different parameter values. In many studies, the coupling strength ω between the system and environment is kept constant at a sufficiently high level. In this approach, the influence of a possible variation of the value of the coupling strength ω on the results is largely suppressed.
According to the results of many calculations, the main difference between the Hermitian and non-Hermitian formalisms of quantum mechanics lies in two effects: the (mathematical) existence of singular EPs [8] and the (physical) possibility of EM of the eigenfunctions [9]. The most challenging question is the following: do equilibrium states exist in a system with EPs and EM? In other words, do equilibrium states exist in an open quantum system, and what do they look like?
In order to find an answer to this question, we first define the information entropy. For this purpose, let us consider a system consisting of N eigenstates of a non-Hermitian Hamiltonian H, the eigenvalues of which are ℰ_i ≡ E_i + (i/2)Γ_i [10]. Here, E_i stands for the energy of the ith eigenstate and Γ_i for its width, which is inversely proportional to the lifetime, τ_i, of the state. Let us choose E_i ≈ E_{j≠i} and Γ_i = Γ_{j≠i}. Following Shannon [13], the information entropy H_ent for this case can be defined by

H_ent = −∑_{i=1}^{N} p_i log₂ p_i,    (1)

where p_i is the probability to find state i and −log₂ p_i is the information content of state i. The value H_ent is maximal when the different p_i are equal. In this case, the system is in an equilibrium state.
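The information entropy defined above can be checked numerically; the probability values below are arbitrary illustrations:

```python
import math

def information_entropy(p):
    # H_ent = -sum_i p_i log2(p_i)  (Shannon information entropy)
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

uniform = [0.25] * 4            # equal p_i: maximum entropy, log2(4) = 2 bits
skewed = [0.7, 0.1, 0.1, 0.1]   # unequal p_i: lower entropy
```

For N states the maximum is log₂ N, reached only when all p_i are equal, which is the equilibrium situation described in the text.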
Equilibrium State of an Open Quantum System
An open quantum system (normally embedded in more than one environment) can be in an equilibrium state only under two conditions: (i) the system is far from an EP and (ii) EM is taken into account. The first condition arises from the fact that the properties of the system are not stable when the system is near an EP. The second condition corresponds to the fact that EM is inherent in the eigenfunctions Φ i of the non-Hermitian Hamiltonian H.
It should be repeated here that it is indeed a challenging question whether or not an equilibrium state exists in an open quantum system. This is because it combines two conflicting concepts: the system is open (meaning that its properties are influenced by the environment) and is, nevertheless, expected to be in equilibrium. It is, therefore, no wonder that this question has not been considered, up to now. To overcome this problem, we introduce the concept of maximum entropy. It is meaningful for an open quantum system as will be shown in the following.
Numerical calculations have shown the following (unexpected) result. By tracing the results for a system embedded in two environments while varying a certain parameter, the eigenfunctions Φ_i of the non-Hermitian operator H can become orthogonal [12]. Thus, the system does not evolve further.
This result can be understood by considering that the evolution of a system described by the non-Hermitian operator H is driven by nonlinear terms. These nonlinear terms originate from the EPs and are involved in the corresponding Schrödinger equation. They are responsible for the fact that the system does not evolve without limit; instead, the evolution of the system proceeds up to a certain final state, at which the eigenfunctions of H are orthogonal (instead of biorthogonal).
The calculations further show that the eigenfunctions Φ_i of the final state are strongly mixed in the set of wavefunctions Φ⁰_k of the individual original states:

Φ_i = ∑_k b_{ik} Φ⁰_k.    (2)

Here, the Φ⁰_k are the eigenfunctions of the non-Hermitian Hamiltonian H⁰, which differs from the full operator H by the disappearance of the non-diagonal matrix elements. This means that the Φ⁰_k are the wavefunctions of the original (unmixed) states and a mixing of the states via the environment is involved in the final Φ_i.
According to numerical calculations, when the entropy is maximal, each of the Φ⁰_k appears in Equation (2) with the same probability in all of the eigenfunctions Φ_i [12]. This means that the final state is an equilibrium state. It should be underlined once more that a general definition of an equilibrium state in an open quantum system is difficult (or even impossible). It is, however, always possible to find a state with maximum entropy.
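This trend toward equal weights can be illustrated with a toy 2x2 model; the complex energies and coupling values below are assumptions for illustration only, not the calculations of Reference [12]:

```python
import numpy as np

def mixing_weights(w, e1=1.0 - 0.1j, e2=2.0 - 0.3j):
    # Toy model: H0 = diag(e1, e2) holds the unmixed states Phi0_k;
    # w couples them (standing in for mixing via the environment).
    # Returns the normalized weights |b_ik|^2 of each Phi0_k in the
    # eigenfunctions Phi_i of H = H0 + coupling.
    H = np.diag([e1, e2]) + w * np.array([[0.0, 1.0], [1.0, 0.0]])
    _, Phi = np.linalg.eig(H)   # columns are the eigenvectors Phi_i
    b = Phi.T                   # b[i, k]: component of Phi_i on Phi0_k
    weights = np.abs(b) ** 2
    return weights / weights.sum(axis=1, keepdims=True)

# Weak coupling: each Phi_i is dominated by a single Phi0_k.
# Strong coupling: every Phi0_k enters each Phi_i with weight near 1/2,
# the equal-probability situation of maximum entropy.
```

The toy model only mimics the equal-weight end point; the nonlinear EP-driven dynamics discussed in the text are not contained in it.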
Summarizing the results obtained for a system that is embedded in at least two environments, we state the following: (i) the final state of the evolution of the open quantum system (described by a non-Hermitian Hamiltonian H) is an equilibrium state, and (ii) the eigenfunctions of H at the final state of evolution are orthogonal (not biorthogonal).
Numerical Studies for Concrete Systems versus Evolution of Open Quantum Systems
Let us come back to the question raised in the Introduction: whether (and when) it is necessary to describe a realistic open quantum system by a non-Hermitian Hamilton operator. The results obtained in different studies are sketched above. They show clearly that the answer to this question depends on the conditions under which the system is considered.
From the point of view of numerical results, which describe special properties of the system, the answer is the following. The system can be well described by a Hermitian Hamiltonian, H, when its individual eigenstates are distant from one another, so that the different states of the system (almost) do not influence each other. This happens at low level density. The system, however, has to be described by a non-Hermitian Hamiltonian, H, when the influence of singular points (EPs) cannot be neglected in the considered parameter range. This occurs especially at high level density, where the different eigenstates are no longer independent of one another.
This means that the properties of many realistic systems can be described well by means of a Hermitian Hamiltonian, H. This statement corresponds to the long experience of standard quantum mechanics.
The situation is completely different when general questions, such as the (parametric) evolution of an open quantum system or the formation of an equilibrium state, are considered. In this case, the nonlinear processes caused by the EPs and involved in the non-Hermitian formalism are decisive. Among other outcomes, they are responsible for the fact that an open quantum system (embedded in at least two environments) will finally stop evolving; the evolution occurs up to the formation of a certain final state. The wavefunctions of this final state are orthogonal (not biorthogonal) even though the Hamiltonian of the system is non-Hermitian. The final state is an equilibrium state of the whole system (including its second-order components related to EM). It is different from the equilibrium state of the corresponding closed system.
Conclusions
Non-Hermitian quantum physics is a fascinating field of research. It not only provides answers to some long-standing problems of quantum physics (for examples, see Appendix A and Reference [9]); above all, it justifies the use of standard Hermitian quantum physics for the description of experimental results in many cases. Non-Hermitian quantum physics is therefore nothing but a realistic extension of standard quantum physics.
From a mathematical point of view, non-Hermitian quantum physics is much more complicated than Hermitian quantum physics. An important difference is the existence of nonlinearities arising from the EPs. However, they only play a role in the vicinity of the EPs.
In spite of these differences, the non-Hermitian and Hermitian formalisms show analogous physical features. An example is the existence of equilibrium states. As shown in the present paper, equilibrium states exist not only in closed quantum systems (described by a Hermitian Hamilton operator) but also in open quantum systems (described by a non-Hermitian Hamilton operator), although with certain restrictions. The equilibrium states of an open and a closed system differ, of course, from one another.
Funding: This research received no external funding.
Conflicts of Interest: The author declares no conflict of interest.
Appendix A. Puzzling Experimental Results
Over the years, different calculations with non-Hermitian Hamiltonians have been performed in order to explain puzzling experimental results. Here, a few of them will be listed.
1. Some years ago, the evolution of the transmission phase, monitored across a sequence of resonance states, was studied experimentally in work where a multi-level quantum dot was embedded into one of the arms of an Aharonov-Bohm interferometer [14][15][16]. These experiments revealed the presence of unexpected regularity in the measured scattering phases (so-called "phase lapses") when the number of states occupied by electrons in the dot was sufficiently large. While this behavior could not be fully explained within approaches based upon Hermitian quantum theory, it has been established that the phase lapses can be attributed to the non-Hermitian character of this mesoscopic system, and to changes of the system that occur as the number of electrons in the dot is varied [17]. The observed regularity arises from the overlap of the many long-lived states with the short-lived one, all of which are formed in the regime of overlapping resonance states.
2. An example of an environmentally induced transition that is different in character from that described above is the spin swapping observed in a two-spin system embedded in an environment of neighboring spins [18][19][20]. In describing the damped dynamics of the two-spin system, its interaction with its environment may be taken to be inversely proportional to some characteristic time (τ_SE), which degrades the spin-swapping oscillations on some "decoherence time" (τ_φ). In the experiment, two distinct dynamical regimes were observed. In the first of these, the expected proportionalities ω ∝ b and τ_φ ∝ τ_SE were found as the interaction strength, b, between the spins was increased. This behavior agrees with Fermi's golden rule. On exceeding a critical environmental interaction, however, the swapping was instead found to freeze while the decoherence rate dropped according to τ_φ^(−1) ∝ b² τ_SE. The transition between these two dynamical regimes was not smooth, but rather had the characteristics of a critical phenomenon, occurring once ω becomes imaginary. For such conditions, damping of the spin motion decreases with increased coupling to the environment, in marked contrast to the behavior obtained when ω is real. The observed results are related, in References [18][19][20], to the non-Hermitian Hamiltonian describing the system and to the presence of an EP.
3. The high efficiency of the photosynthesis process (used by plants to convert light energy in reaction centers into chemical energy) is not understood in Hermitian quantum physics [21][22][23][24][25]. Using the formalism for the description of open quantum systems by means of a non-Hermitian Hamilton operator, fluctuations of the cross section near singular points (EPs) are shown to play the decisive role [26]. The fluctuations appear in a natural manner, without any excitation of the internal degrees of freedom of the system; they therefore occur with high efficiency and very quickly. The excitation of resonance states of the system by means of these fluctuations (the second step of the whole process) takes place much more slowly than the first step, because it involves the excitation of the internal degrees of freedom of the system. The two-step process as a whole is highly efficient and the decay is bi-exponential. The characteristic features of the process, obtained from analytical and numerical studies, are the same as those of light harvesting in photosynthetic organisms [26].
4. Atomic systems can be used to store light and thus to act as a quantum memory. According to experimental results, optical storage can be achieved via stopped light [27]. Recently, this interesting phenomenon has been related to non-Hermitian quantum physics: it has been revealed that light stops at exceptional points (EPs) [28]. The authors of Reference [28] restrict their study to parity-time (PT)-symmetric optical waveguides. This restriction is, however, not necessary. The phenomenon is rather characteristic of non-Hermitian quantum physics; the external mixing (EM) of the eigenfunctions of a non-Hermitian Hamilton operator becomes infinitely large at an exceptional point (EP) [9], such that any interaction with an external source (such as light) vanishes in approaching an EP.
Design of Intellectual Vehicles with Path Memorizing Function
Based on the deficiencies of current intelligent vehicles, we design a new vehicle system with a path-storing function. Its algorithm recognizes the position and path of the vehicle without relying on external positioning methods such as GPS or WiFi, or on electromagnetic or photoelectric rails. The system is scarcely affected by the ground condition, so no ground markers are needed.
Introduction
Nowadays there are two kinds of guided vehicles in increasingly wide use for the short-distance transport of goods.
Vehicles guided by an electromagnetic track.
Vehicles guided by a photoelectric track. Both rely on a track laid on the floor and require a good ground condition. Their limitations are as follows. In a factory or warehouse, the ground condition may not be good: debris, water, dust and other problems often make it difficult for photoelectric sensors to detect the lane accurately, and electromagnetic guidance is likewise disturbed by water, metal debris and other adverse factors.
Before applying either guidance method, workers must lay black wire or electromagnetic wire on the ground as a "track". This is inconvenient when the factory changes its production line, and it is difficult to lay "tracks" on unpaved ground such as dirt roads and brick roads [1]. Besides ground tracks, wireless location technologies such as GPS and WiFi could also guide vehicles, but both have obvious defects that are hard to overcome. GPS is vulnerable to interference from buildings, and its accuracy is poor indoors. WiFi positioning is usually applicable only to indoor spaces; moreover, before the vehicles can be positioned, WiFi transceivers must be installed at key locations in the room and their location information entered into the positioning system, a process that generally takes a long time and is very cumbersome [2]. Also, whenever the indoor space changes, for example when a large new machine is installed in the plant, the location information must be reset.
Based on the above points, it is necessary to design a transportation system that is not limited by the ground condition, does not rely on GPS, WiFi or markers on the ground, and can accurately transport goods from one place to another.
Therefore, we designed an intelligent vehicle with a path-memory function, which can record the path between two locations and quickly and accurately transport goods from the starting point to the end point without human operation.
The hardware structural design
The whole hardware design consists of 9 modules: MCU, ultrasonic ranging module, communication module, speed-measuring module, motor drive module, electronic compass, CCD camera, SD card and LCD module [3]. The hardware structure is shown in Fig. 1.
Fig. 1. The hardware structure of the vehicle. The following are some of the important details of the hardware design.
Ultrasonic ranging module
The ultrasonic ranging module can measure the distance between the vehicle and the surrounding obstacles to prevent the vehicle from hitting them.
Moreover, when the vehicle moves near the end point of the transport, the ultrasonic sensor, cooperating with the camera, measures the distance between the vehicle and the marker set at the end point, eliminating the position deviation accumulated during transport so that the vehicle reaches the destination more accurately.
Optical encoder and electronic compass
The photoelectric encoder records the distance the vehicle has travelled, and the electronic compass detects its current heading. With the data from both sensors, we can determine the position of the vehicle relative to the starting point, as well as its deviation from the preset travel path.
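The combination of encoder distance and compass heading is classic dead reckoning, and can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the sampling format of (distance increment, compass heading) pairs is an assumption:

```python
import math

def dead_reckon(samples):
    """Integrate (distance_increment, heading_deg) samples into an
    (x, y) position relative to the starting point. Heading follows
    compass convention: 0 deg = north (+y axis), clockwise positive."""
    x = y = 0.0
    for dist, heading_deg in samples:
        rad = math.radians(heading_deg)
        x += dist * math.sin(rad)  # east component
        y += dist * math.cos(rad)  # north component
    return x, y

# Drive 1 m north, then 1 m east.
pos = dead_reckon([(1.0, 0.0), (1.0, 90.0)])
```

Feeding the routine the stored samples reproduces the recorded path, while feeding it live samples gives the current position for comparison against that path.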
CCD camera
The camera detects the terminal markers. Using the data from the optical encoder and the electronic compass, once the MCU determines that the vehicle is about to reach the terminal point, the camera searches for the terminal marker in its field of view and determines the relative angle to the end point. From this relative angle and the relative distance, the MCU can adjust the vehicle so that it reaches the end point more accurately.
Vehicle body structure
As shown in Fig. 2, the vehicle body has three wheels: "A" and "B" are drive wheels and "C" is a mecanum wheel. Compared with a four-wheeled vehicle, the tricycle requires no steering gear and costs less.
The MCU can drive "A" and "B" at different speeds to achieve pivot steering, which a four-wheeled vehicle cannot do. Therefore, in correcting movement errors, the three-wheeled body has a great advantage [4].
Vehicle forward path identification algorithm
The vehicle designed in this paper can travel along an established route stored on the SD card. There are two sources for this route. One is manually entered data, such as distances and steering angles. The other is remote control: an operator drives the vehicle forward while it automatically records the path in real time [5]. This requires the vehicle to detect its path in real time, as follows. The MCU reads the speed data from the photoelectric encoders every 500 ms. Since the speeds of the two drive wheels are never exactly the same during that interval, simply integrating the average speed as the vehicle's path would introduce a large deviation. Instead, within each 500 ms interval the vehicle is considered to move along an arc M, as shown in Fig. 3, and we need to determine the straight-line (chord) distance L for the interval. Here "a" is the heading change during the 500 ms, measured by the electronic compass; Formula 3, derived from Formulas 1 and 2, gives the line distance L from M and a.
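The paper's Formulas 1-3 are not reproduced here, but the chord of a circular arc is standard geometry; a sketch consistent with the description (arc length M swept while the heading changes by a) is:

```python
import math

def chord_length(arc_m, turn_deg):
    """Straight-line distance L covered while the vehicle sweeps an
    arc of length arc_m, turning through turn_deg degrees (the
    compass heading change over one 500 ms interval).

    Standard chord-of-arc geometry, offered as a reconstruction:
    radius R = M / a, chord L = 2 * R * sin(a / 2). As a -> 0 this
    reduces to L = M (straight-line motion)."""
    a = math.radians(turn_deg)
    if abs(a) < 1e-9:  # effectively straight
        return arc_m
    return 2.0 * (arc_m / a) * math.sin(a / 2.0)

# A quarter circle of radius 1 (arc length pi/2) has chord sqrt(2).
L = chord_length(math.pi / 2, 90.0)
```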
Assuming the path is as shown in Fig. 4, the recorded chords are combined pairwise: Formulas 4 to 7 yield the direction (ang4) and the length of the path AC from AB and BC; applying the same formulas again gives the direction and length of the path AD, and so on, until the straight-line distance and direction of the whole path AM are obtained [6]. The direction and distance of a path can be stored on the SD card as the ideal path to guide the vehicle. In addition, by calculating in real time the path the vehicle has actually travelled, the error between the actual path and the ideal path can be determined.
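The pairwise combination of chords amounts to vector addition; the following is an illustrative reconstruction of what Formulas 4 to 7 compute, with compass-style headings assumed:

```python
import math

def combine_path(segments):
    """Collapse a list of (length, heading_deg) chords into the
    straight-line distance and heading from start to end by plain
    vector addition (AB + BC -> AC, AC + CD -> AD, ... -> AM)."""
    x = sum(L * math.sin(math.radians(h)) for L, h in segments)
    y = sum(L * math.cos(math.radians(h)) for L, h in segments)
    dist = math.hypot(x, y)
    heading = math.degrees(math.atan2(x, y)) % 360.0
    return dist, heading

# 3 m north then 4 m east: 5 m at heading atan2(4, 3).
d, h = combine_path([(3.0, 0.0), (4.0, 90.0)])
```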
Vehicle routing correction algorithm
When the vehicle runs autonomously without an operator, its real-time position and heading are obtained from the path identification algorithm, and the ideal forward route is obtained from the information stored on the SD card.
As shown in Fig. 5, the straight line CD is the ideal forward route of the vehicle, the length AB is the distance deviation between the vehicle and the ideal route, and ANG is the angle deviation between the vehicle's heading and the ideal travelling direction [7]. The overall deviation is obtained by Formula 8, where "Deviat" is the overall deviation value and serves as the error input of the PID algorithm. In this correction algorithm, "m" is the ratio adjustment factor between the linear deviation and the angle deviation: when m increases, the control algorithm focuses more on reducing the distance deviation; when m decreases, the angle deviation has more influence on the control.
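The exact form of Formula 8 is not reproduced here; a weighted sum matching the description (larger m emphasises the distance term) is assumed in this sketch, together with a minimal PID controller acting on the combined error:

```python
def overall_deviation(dist_dev_ab, ang_dev, m):
    """Combine the lateral deviation AB and the heading deviation ANG
    into one scalar error. The weighting is an assumption: larger m
    makes the controller concentrate on the distance deviation."""
    return m * dist_dev_ab + ang_dev

class PID:
    """Minimal positional PID on the combined deviation (sketch)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev is None else (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

e1 = overall_deviation(0.10, 2.0, m=5.0)  # distance-weighted error
u = PID(1.0, 0.0, 0.0).step(e1, 0.5)      # proportional-only output
```

The output u would drive the speed difference between wheels "A" and "B" for pivot-style correction.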
With this error input, it is easy to add PID control to the correction algorithm [8].
Testing and conclusion
We obtained the following data by testing the intelligent vehicle (the test environment was cement ground).
In 10 random tests (human remote control) in which the vehicle travelled 10 m in a straight line, the maximum end-point location error was 2.7 cm, and the probability of an error within 2 cm was 80%.
There is a 90% probability that the steering error for a 90° turn is within 0.3°.
As shown by the routes in Fig. 6, there is a 90% probability that the end-point position error is less than 8 cm. With the end-point recognition algorithm added, the probability of an error below 4 cm rises to 95%, and the probability that the vehicle fails to recognize the end point is about 1.5%. This basically meets the demand. The system can transport goods without any signs or tracks and can be applied in a variety of indoor and outdoor settings; even on rainy days or on dirt roads, the transport still proceeds well. The transport path can be changed quickly, which is convenient when a plant changes its production line. However, there are still some defects and shortcomings in this design, mainly concerning transport accuracy, which we will continue to improve.
The path recognition algorithm performs well on level ground and on downhill paths, and the error is within an acceptable range on dry dirt roads. However, in potholes or on muddy roads the performance is still not ideal, so we will continue to optimize the recognition algorithm and focus on improving the vehicle's adaptability to pavements in poor condition.
Fig. 3. The calculation of the line distance L.
Fig. 4. The vehicle's actual travel path and the algorithm for calculating the A-M straight-line distance and relative angle.
Two waves of pro-inflammatory factors are released during the influenza A virus (IAV)-driven pulmonary immunopathogenesis
Influenza A virus (IAV) infection is a complicated process. After IAVs spread to the lung, extensive pro-inflammatory cytokines and chemokines are released, which largely determine the outcome of infection. Using a single-cell RNA sequencing (scRNA-seq) assay, we systematically and sequentially analyzed the transcriptome of more than 16,000 immune cells in the pulmonary tissue of infected mice, and demonstrated that two waves of pro-inflammatory factors were released. A group of IAV-infected PD-L1+ neutrophils were the major contributor to the first wave at an earlier stage (day 1–3 post infection). Notably, at a later stage (day 7 post infection), when IAV was hardly detected in the immune cells, a group of platelet factor 4-positive (Pf4+) macrophages, probably the precursors of alveolar macrophages (AMs), generated another wave of pro-inflammatory factors. Furthermore, single-cell signaling maps identified inter-lineage crosstalk between different clusters and helped better understand the signatures of PD-L1+ neutrophils and Pf4+-macrophages. Our data clarified the infiltrating immune cells and their production of pro-inflammatory factors during the development of immunopathogenesis, and deciphered important mechanisms underlying IAV-driven inflammatory reactions in the lung.
Introduction
Aberrant pulmonary immune responses correlate with the pathogenesis of multiple human respiratory viral infections, including IAV infection [1]. Immune responses in the lung tissue include both antiviral and inflammatory factors, which play crucial roles in host protection and immunopathogenesis [2,3]. Through the integrated action of different pro-inflammatory factors, different immune cells are recruited into the airway. CC chemokines (such as Ccl2, Ccl7, and Ccl8) and CXC chemokines (such as Cxcl2 and Cxcl12) are important for the recruitment of leukocytes into the microenvironment of the airway. Infected monocytes and macrophages are the main contributors to the rapid production of pro-inflammatory and chemotactic cytokines, which enhance the migration of leukocytes and result in an effective defense against viral invasion. However, elevated cytokine and chemokine production has been associated with poor clinical outcomes [4]. Studies have highlighted a correlation between IL-6, IL-1, and TNF-α levels and the severity of disease symptoms [5,6,2]. Though these infiltrating cells are required for host protection and recovery, they can also exacerbate immune injury to the lung and worsen clinical symptoms. The activation of neutrophils was recently reported to be associated with the most severe and acute IAV infections in patients [7], and old mice infected with IAV developed excessive levels of neutrophils and higher levels of cytokines [8], indicating that neutrophils play important roles in IAV-driven immunopathogenesis. Because of the double-edged-sword role played by these infiltrating immune cells and the cytokines/chemokines they produce, it is necessary to further explore the immune reaction profiles of the lung at different time points during IAV infection.
Alveolar macrophages (AMs) are critical for lung homeostasis and immune responses to pathogens [9]. As tissue-resident macrophages, AMs can self-maintain locally without the contribution of bone marrow (BM)-derived monocytes under normal conditions [10]. TGF-βR signaling can up-regulate the expression of PPAR-γ, a signature transcription factor that is essential for the development of AMs [11]. A dramatic reduction of AMs was found in lungs infected with IAVs. Of note, AMs undergo M1 and M2 polarization upon stimulation by different cytokines. During IAV infection, a large number of AMs undergo apoptosis. When viruses are eliminated by neutralizing antibodies or T cells, the local repopulation of AMs depends on BM-derived monocytes in the short term [12]. However, the long-term recovery of AMs depends on the local proliferation of the tissue-resident AMs. The mechanism by which AMs return to the steady state during IAV-derived lung damage remains to be clarified.
Single-cell RNA sequencing (scRNA-seq) has been applied to investigate the immune system under physiological and pathological conditions [13][14][15][16]. It allows a detailed understanding of the complicated immune system at single-cell resolution [17][18][19]. In particular, scRNA-seq is a powerful tool for defining viral target cells, as it is convenient for analyzing viral mRNAs and host signature genes in a single cell [20][21][22][23]. In addition, it can precisely examine the patterns of cytokine release in each immune cell and the inter-lineage crosstalk between single cells with ligand-receptor interaction maps [24,25]. In this study, we analyzed >16,000 immune cells in lung tissue isolated from mice infected with IAV at different time points post infection, and scRNA-seq enabled us to clarify the complicated immune responses in the lung tissue across the whole course of IAV-driven pneumonia.
Identification of cell clusters in the lung during IAV infection
To investigate the whole immune cell population in the lung during IAV infection, we collected total suspended cells of lung tissue from C57BL/6 mice uninfected (day 0, 2762 cells) or infected with A/PR/8/34 (H1N1) virus at 5 time points, day 1 (2185 cells), day 3 (3074 cells), day 5 (2526 cells), day 7 (2572 cells), and day 12 (3305 cells) (3 mice per group), for droplet-based scRNA-seq transcriptional profiling using the 10x Chromium platform (Fig 1A and S1 Table). Any cell with fewer than 200 genes or more than 30% mitochondrial unique molecular identifier (UMI) counts was filtered out, and only genes with at least one UMI count detected in at least one cell were used for further analysis. The scRNA-seq profiles passing quality control were aggregated and analyzed using CellRanger software, which provides stable and accurate clustering solutions for 10x Genomics scRNA-seq data [26]. The sequencing quality control showed that the six samples from the six time points were qualified for further scRNA-seq analysis (S1A Fig and S1 Table). Graph-based clustering was run using t-distributed Stochastic Neighbor Embedding (tSNE) to group together cells with similar expression profiles and to build a sparse nearest-neighbor graph without pre-specifying the number of clusters.
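The stated QC thresholds can be mirrored in a short filter. This is an illustrative sketch, not the CellRanger pipeline; the mouse "mt-" gene-name prefix for identifying mitochondrial genes is an assumption:

```python
import numpy as np

def qc_filter(counts, gene_names, min_genes=200, max_mito_frac=0.30):
    """Drop cells with fewer than min_genes detected genes or with a
    mitochondrial UMI fraction above max_mito_frac, then keep only
    genes seen in at least one remaining cell.
    counts: cells x genes UMI matrix."""
    counts = np.asarray(counts)
    mito = np.array([g.lower().startswith("mt-") for g in gene_names])
    genes_per_cell = (counts > 0).sum(axis=1)
    total = counts.sum(axis=1)
    mito_frac = counts[:, mito].sum(axis=1) / np.maximum(total, 1)
    keep_cells = (genes_per_cell >= min_genes) & (mito_frac <= max_mito_frac)
    filtered = counts[keep_cells]
    keep_genes = (filtered > 0).any(axis=0)
    return filtered[:, keep_genes], keep_cells

# Toy example with relaxed thresholds: cell 2 is 90% mitochondrial,
# cell 3 has too few genes; both are removed.
genes = ["Actb", "mt-Nd1", "Cd274"]
mat = [[5, 1, 2],
       [1, 9, 0],
       [1, 0, 0]]
kept, cell_mask = qc_filter(mat, genes, min_genes=2, max_mito_frac=0.30)
```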
Based on the tSNE dimensionality reduction and unsupervised cell clustering, we identified 18 distinct cell clusters, named C1-C18 according to their total cell numbers, which expressed unique transcriptional profiles and occurred sequentially at different time points (Fig 1B, S1B Fig and S2 Table). Pairwise Pearson correlations between clusters were calculated based on the mean expression of each gene across all cells in the cluster for hierarchical clustering, showing the distinct relationships among different clusters (S1C Fig and S3 Table). We also found with FACS analysis that the changes of several major clusters during IAV infection were similar to those in the scRNA-seq data (S1D Fig and S2 Table), further confirming the scRNA-seq data. To identify genes enriched in a specific cluster, the mean expression of each gene was calculated across all cells in the cluster and the log2 fold-change of differentially expressed genes was calculated relative to the other clusters. Some significant genes (log2 fold change >1, P-value <0.01, Benjamini-Hochberg adjusted) and highly expressed known markers of major cell types are shown in Fig 1C. For example, many cells in cluster 5 (denoted C5) showed high expression of SiglecF and CD11c (Itgax), and were labeled as pulmonary alveolar macrophages (AMs) (S2A Fig). These results, combined with further principal component analysis, demonstrated that the immune cells in the lung comprised all major immune lineages, with the clusters mainly comprising a monocyte/macrophage/DC lineage (C1, C5, C8, C7, C10, and C17), a lymphocyte lineage (C2, C3, C4, C6, C11, and C12), a granulocyte lineage (C13, C14 and C16), an erythrocyte lineage (C9 and C15), and an epithelial-cell lineage (C18) (S2 Fig). The heterogeneous composition of cells in the lung during IAV infection highlights the necessity of single-cell analysis for dissecting the IAV-related immune cells in the lung in detail.
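The per-cluster marker scoring described here (mean expression in a cluster versus the rest, as a log2 fold-change) can be sketched as follows; the pseudocount is an assumption, and the exact CellRanger procedure may differ:

```python
import numpy as np

def cluster_log2fc(counts, labels, cluster, pseudo=1.0):
    """log2 fold-change of each gene's mean expression in one cluster
    relative to all other cells, with a pseudocount for stability."""
    counts = np.asarray(counts, dtype=float)
    labels = np.asarray(labels)
    inside = counts[labels == cluster].mean(axis=0)
    outside = counts[labels != cluster].mean(axis=0)
    return np.log2((inside + pseudo) / (outside + pseudo))

# Gene 0 is a marker of cluster "C13" in this toy matrix;
# gene 1 is depleted there.
fc = cluster_log2fc([[8, 0], [8, 0], [0, 4]],
                    ["C13", "C13", "C5"], "C13")
```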
IAV infection initiates in the respiratory tract and spreads in the lung, which triggers widespread pulmonary immune responses. We assigned the cells from the 18 clusters to the samples collected at the different time points. The distribution patterns of the 18 clusters changed sequentially after IAV infection (S3 Fig).
PLOS PATHOGENS | Landscape of IAV-driven immunopathogenesis
Massive changes to the transcriptional landscape were found between the normal lung tissue (day 0) and tissue from mice challenged with A/PR/8/34 (H1N1) viruses (day 1, 3, 5, 7, and 12). We also analyzed the expression of genes involved in the inflammatory response at different time points and in different clusters. There are 372 genes related to the GO term inflammatory response (GO: 0006954) (S4 Fig). The inflammatory response was analyzed within various cells. C1/C5/C7/C10-monocytes and C13/C14/C16-granulocytes generated many pro-inflammatory factors at various time points (Fig 1D and S5 Fig).
Host cells infected with IAV were quantitatively identified by tracking intracellular IAV segment mRNAs at single-cell resolution. To better understand the infected cells in the lung, the UMI counts of IAV genes were sought in the single-cell transcriptional data (S5 and S6 Tables), and more than 5000 cells were detected containing at least one count of an IAV transcript (S6 Fig).
To reduce the false-positive rate of infected cells, only cells with highly expressed IAV genes (at least one transcript per gene per cell) were defined as highly infected cells. There were 668 cells expressing more than 8 copies of viral mRNAs: 135 cells at day 1 p.i., 352 at day 3 p.i., 180 at day 5 p.i., 1 at day 7 p.i., and 0 at day 12 p.i. The viral mRNA-positive cells mainly appeared in samples from day 1, day 3 and day 5 p.i., while few RNA copies of virus genes were found at day 7 and day 12 p.i., showing that clearance of the viruses occurred by day 7 p.i. (S7 Fig and S6 Table). Significant amounts of viral mRNAs were mainly detected in cells from nine clusters (C1, C5, C7, C8, C13, C14, C15, C16 and C18) p.i. (S8 Fig). According to the expression counts of IAV transcripts, the cells in the clusters susceptible to IAV infection can be divided into highly infected cells (total UMI counts of viral transcripts ≥8), potential or lowly infected cells (total UMI counts of viral transcripts ≥1), and undetected cells (UMI counts of viral transcripts = 0) (S8 Fig). However, the three types of cells exhibited similar responses to IAV in most clusters, indicating no visible correlation between viral load and the host response level of single cells in the clusters infected by IAV (Fig 1E). Cells under both extracellular exposure and intracellular infection can exhibit a significant response in addition to a significant bystander response [27,28]. Uninfected or lowly infected immune cells could be activated by cytokines or chemokines from other immune cells and generate antiviral or other pro-inflammatory factors. These results depict the dynamic landscape of the lung immune response during IAV infection and describe the composition of immune cells during IAV-driven pneumonia.
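The three-way classification by total viral UMI count maps directly to a small helper, with the thresholds taken from the text:

```python
def classify_infection(viral_umis, high=8):
    """Bucket each cell by its total viral UMI count: 'I' highly
    infected (>= high), 'P' potential/lowly infected (>= 1),
    'N' undetected (0)."""
    labels = []
    for count in viral_umis:
        if count >= high:
            labels.append("I")
        elif count >= 1:
            labels.append("P")
        else:
            labels.append("N")
    return labels

states = classify_infection([0, 3, 12])
```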
PD-L1+ neutrophils infected by IAV are the major contributor to the first wave of pro-inflammatory factors
Sequential transcriptional profile analysis specifically for pro-inflammatory factors revealed two waves of pro-inflammatory factor production during the whole IAV infection process. The first wave occurred during the early stage of IAV infection, and the second wave occurred after day 7 p.i., when viral replication was no longer detectable (Fig 2A). Further study of the pro-inflammatory factors at day 1 p.i. indicated that although C1-macrophages generated some factors, such as Ccl22 and Cxcl19, and C5-pulmonary AMs generated IL18, the C13 cells were the major contributor to the generation of various pro-inflammatory factors, including high levels of Ccl3, Ccl4, Cxcl2, Cxcl10, TNF-α, and IL1α at day 1 p.i. (Fig 2B and 2C).
(Fig 1 legend, continued: expressed genes (y axis) by cluster (x axis); dot size represents the fraction of cells in the cluster that express the gene; colour indicates the mean expression (Z-score) in expressing cells, relative to other clusters. (D) The summed UMI counts of the 372 host genes related to the inflammatory response (GO: 0006954) in different cell clusters (x axis); dots indicate cells from different clusters, coloured according to sample. (E) The summed UMI counts of the same genes in the cell clusters susceptible to IAV infection, divided into highly infected cells (I), potential or lowly infected cells (P), and undetected cells (N). https://doi.org/10.1371/journal.ppat.1008334.g001)
The heatmap constructed from the log2 fold change of these pro-inflammatory genes in the cell clusters with high ratios at day 1 p.i. indicated that most pro-inflammatory factors upregulated at day 1 p.i. were enriched in C13 cells (S9 Fig). The tSNE plots of some classical pro-inflammatory genes highly expressed in the single-cell library at day 1 p.i. were also enriched in the C13 cells (S10 Fig). The UMI counts of these pro-inflammatory gene transcripts at single-cell resolution likewise showed the highest values in C13 cells at day 1 p.i. compared with other clusters (S11 and S12 Figs). Pearson correlation analyses of the patterns of antiviral factors and the inflammatory response further demonstrated that C13 cells showed the characteristics of granulocytes rather than macrophages/monocytes (S13 Fig). Moreover, the analysis confirmed that the character of C13 cells was significantly different from that of eosinophils and basophils (whose sequencing data were deposited in the NCBI Sequence Read Archive database under accession code SRP040656), but much closer to that of C14 and C16, which were typical neutrophils (Fig 2D). The cell transition among C14-C16-C13 was further demonstrated by constructing single-cell trajectories in pseudotime (S14 Fig). Based on the highly expressed pro-inflammatory factors, the C13 cells were activated to a higher degree than C14 and C16 cells. As PD-L1 (CD274) was highly and specifically expressed in C13-neutrophils (Fig 2E), we chose PD-L1 as a marker for isolating C13 cells. Data from flow cytometry showed that PD-L1+ neutrophils accounted for 30-70% of the total CD11b+Ly6G+ neutrophils in the lung during the early stage of IAV infection (Fig 2F and 2G).
The isolated PD-L1+ neutrophils (PD-L1+CD11b+Ly6G+) at day 2 p.i. harbored much higher viral RNA and IL-1α mRNA levels, as well as higher Ccl3, Ccl4, and IL-1β mRNA levels, compared with PD-L1- neutrophils (PD-L1-CD11b+Ly6G+), indicating that the C13 PD-L1+ neutrophils generated high pro-inflammatory cytokine mRNA levels in the lung at the early stage of IAV infection (Fig 2H and S15A Fig). High mRNA and protein levels of the viral HA were also detected in C13 PD-L1+ neutrophils (S5 Table and S15B Fig). Furthermore, gene ontology (GO) enrichment analysis showed that the primary functions of C13-granulocytes were related to pro-inflammatory responses and neutrophil chemotaxis (false discovery rate (FDR) <1E-10) (Fig 2I). Therefore, we identified a group of PD-L1+ neutrophils as the major contributor to the first wave of pro-inflammatory factors at day 1-3 p.i. However, it is notable that C18-epithelial cells, which have been considered the major target of IAV infection, accounted for only 1% of the total lung cells in our preparation. Since the lungs were homogenized using a lung dissociation kit rather than enzymatic digestion, adhesive epithelial cells, especially highly infected epithelial cells undergoing apoptosis, would be largely removed after filtering through a 70 μm nylon mesh. Therefore, our data cannot exclude the possibility that epithelial cells could also release a significant amount of pro-inflammatory factors at the earlier time.
(Fig 2 legend, excerpt: (H) The relative mRNA expression of pro-inflammatory genes in PD-L1+ neutrophils (CD11b+Ly6G+PD-L1+) and PD-L1- neutrophils (CD11b+Ly6G+PD-L1-) from the lungs of mice infected with 0.5 LD50 of influenza A/PR/8/34 (H1N1) virus, analyzed by qRT-PCR at day 2 p.i. Data are shown as means ± SD from one of three independent experiments. **, P < 0.01; ***, P < 0.001 (Student's t test, n = 3); n.s., not significant. (I) The top ten Gene Ontology (GO) biological-process terms significantly enriched among the genes highly expressed in C13-neutrophils; the transformed false discovery rate (FDR) is indicated on the x axis, and the GO term neutrophil chemotaxis is highlighted with a rectangle. https://doi.org/10.1371/journal.ppat.1008334.g002)
Pf4+-macrophages are the major contributor to the second wave of pro-inflammatory factors
To identify the cell types involved in generating the second wave of pro-inflammatory factors, we compared the transcriptional profiles of inflammatory responses among the major cell types at day 7 p.i. Although C6-active T cells expressed some pro-inflammatory factors, such as Ccl5, most pro-inflammatory factors, such as Ccl7, Ccl8, Cxcl2, Ccl2, Ccl9, Ccl12, and Cxcl10 (Fig 3A and S16-S18 Figs), TNF-α and the complement family member C1q, were mainly expressed in C8 cells, which were characteristically Pf4-positive (Fig 3A and 3B). The heatmap constructed from the log2 fold change of these pro-inflammatory genes in the cell clusters with high ratios at day 7 p.i. indicated that most pro-inflammatory factors upregulated at day 7 p.i. were enriched in C8 cells (S16 Fig). The tSNE plots of some classical pro-inflammatory genes highly expressed in the single-cell library at day 7 p.i. were also enriched in the C8 cells (S19 Fig).
The UMI counts of these pro-inflammatory gene transcripts at single-cell resolution also showed the highest values in C8 cells at day 7 p.i. compared with other clusters (S19 and S20 Figs). The Pf4+ cells were significantly increased at day 7 p.i. in the lungs of mice infected with IAV (Fig 3C and 3D). We confirmed with an immunohistochemistry assay that the pro-inflammatory factors Ccl2 and Ccl8 were expressed in these Pf4+ cells in the lung at day 7 p.i. (Fig 3E and S21 Fig). As Pf4-positive megakaryocytes were recently identified in the lung [29,30], we compared the transcriptional profile of C8 with those of C5, C1, and C17, which were close to C8 in the tSNE maps, as well as those of megakaryocytes identified in lung and bone marrow [29]. We found that C8 was much closer to the C5/C1/C17 macrophage lineage in the Pearson correlation analysis (Fig 3F) and in PCA analyses of the z-score-normalized mean expression profiles of these cell clusters (Fig 3G). We therefore proposed that these C8 Pf4-positive cells belong to the macrophage lineage rather than the megakaryocyte lineage. Accordingly, we found that a group of Pf4+-macrophages was the major contributor to the second wave of pro-inflammatory factors at day 7 p.i.
To further decipher the function of C8 Pf4-positive macrophages in the lung after IAV infection, we utilized a Pf4-cre-induced DTR line (Pf4-cre; iDTR) in our study [31]. Pf4+CD41+ macrophages are depleted in the presence of diphtheria toxin (DT). The Pf4-cre; iDTR mice were intraperitoneally injected daily with DT starting 4 days after inoculation with 0.5 LD50 of A/PR/8/34 (H1N1) viruses (Fig 4A). Of note, both Pf4-cre+; iDTR mice and Pf4-cre-; iDTR mice were injected with DT after infection with A/PR/8/34 (H1N1) viruses. The cell numbers of neutrophils and other macrophages were unaffected in the DT system (S22A Fig), implying that Pf4+ macrophages were specifically depleted in Pf4-cre+; iDTR mice. Of note, Pf4+ macrophages belong to CD11b+Ly6C+ macrophages (S22B Fig). To examine whether the depletion of C8 Pf4-positive macrophages would affect the secretion of the second wave of pro-inflammatory factors in vivo, we analyzed the expression of Ccl7, Ccl8, Ccl12, Cxcl12, Spp1, and Cxcl3 by RT-PCR (Fig 4B), and the expression of Ccl8 and Ccl2 by ELISA (Fig 4C and 4D) at day 8 p.i. We found that the expression of pro-inflammatory factors was decreased in Pf4-cre+; iDTR mice compared with Pf4-cre-; iDTR mice, further indicating that C8 Pf4-positive macrophages were the major contributor to the second wave of pro-inflammatory factors.
Besides releasing a wave of pro-inflammatory factors, C8 Pf4-positive macrophages showed high expression of Pparg (Fig 4E), a signature transcription factor that is essential for the development of AMs. TGF-β (encoded by Tgfb1), one of the main cytokines for the development of monocyte-derived AMs, was also highly expressed in C8 Pf4-positive macrophages (Fig 4F and 4G). Moreover, the secretion of TGF-β was reduced after the depletion of C8 Pf4-positive macrophages, indicating that these cells were the major contributors to TGF-β secretion (Fig 4H). Thus, we hypothesized that C8 Pf4-positive macrophages are the precursors of AMs during IAV infection. To this end, we analyzed the percentage of AMs at day 8 p.i. and found that it was significantly decreased after the depletion of C8 Pf4-positive macrophages (Fig 4I), while the percentage of T lymphocytes was
Identification of ligand/receptor pairs among C13, C8, and other clusters
To comprehensively elucidate the regulatory network of the various infiltrated inflammatory cells in the lung during viral infection, we systematically analyzed the receptors and ligands of chemokines, interleukins, interferons, and other inflammatory factors (Fig 5, S23 and S24 Figs). High expression of IL-1 family inflammatory factors such as IL-1α, IL-1β, and IL-1RN was found in C13-PD-L1+ (CD274+) neutrophils at the early stage of infection (day 1 p.i.) (Fig 5A). Correspondingly, IL1R2 and IL1RAP, receptors for these cytokines, were significantly upregulated in the neutrophil clusters C13 and C16 and in C10-monocytes, whereas IL1R1 was not. Other chemokines such as Ccl3 and Ccl4 were also highly expressed in C13, and their major receptors such as Ccr1 were abundant in C13, C16, C10, C5-alveolar macrophages, and C1-M1-macrophages. Moreover, the Cxcl2/Cxcr2 axis, which regulates NLRP3 inflammasome activation, was found in C13 and C16. To study the receptors of C13, we systematically summarized them at day 1 p.i. (Fig 5B). IL1R2 and Ccr1 were the most highly expressed receptors in these clusters, and the corresponding ligands were also highly expressed in C13 and C16, consistent with the ligand analysis (Fig 5A). In addition, C1 (M1-macrophages), C2, and C3 (NK cells) also contributed substantial amounts of Ccr1 ligands such as Ccl2, Ccl3, Ccl4, and Ccl5.
The C8 cluster secreted various cytokines at day 7 p.i., when virus was almost undetectable. High levels of the Ccl2/Ccr2 and Ccl2/Ccr5 pairs existed in all clusters, including C8 (Fig 5C and 5D). Similar pairings were also observed for Ccl3, Ccl4, and Ccl5 with Ccr1 and Ccr5. New chemokines such as Ccl7 and Ccl8 emerged in C8 at day 7 p.i. Their known receptors Ccr1, Ccr2, and Ccr5 were also highly expressed in almost all clusters. Importantly, we observed strong interactions between C8 and other cell clusters such as the C5-AM cluster. Thus, through the ligand/receptor interaction map of the lung during IAV infection, we identified various interlineage crosstalks and further confirmed the correlation between C8 and other cell clusters during convalescence from IAV infection in the lung.
Discussion
The involvement of immune cells in the lung during IAV infection is a dynamic and complex process. To provide an atlas of the immune response in the lung during IAV infection, we performed scRNA-seq analysis to build a transcriptional profile database of pulmonary immune cells during IAV infection. Previous reports unveiled novel antiviral factors at the early stage of IAV infection using single-cell analysis, by comparing infected and uninfected samples [23]. However, the exact profile of IAV-driven immunopathogenesis in the lung remains unclear. Using transverse (cell-to-cell) and longitudinal (day-to-day) analyses of immune cells in the lung, we gained insights into the pulmonary immune processes during IAV infection.
PLOS PATHOGENS
Landscape of IAV-driven immunopathogenesis
Considering the strong connection between the severity of IAV infection and cytokine/chemokine production, we further analyzed the cells releasing pro-inflammatory factors [32]. By sequentially analyzing the transcriptomes of infiltrated immune cells, we identified two waves of pro-inflammatory factor release. C13, C10, and C16 contributed the most to early pro-inflammatory factor release (days 1 and 3). In particular, the IAV-infected C13-PD-L1+ neutrophils were the major contributor to the first wave of pro-inflammatory factor release. Signaling through IL-1R contributed to both host protection and immunopathogenesis following IAV infection, as previously reported [33]. In our study, high expression of both IL-1α and IL-1β was detected in C13/C16-neutrophils, while IL-1α was generated only by C13-neutrophils. Whether the C13-neutrophils were recruited from the bone marrow or transformed in situ in the lung remains to be explored. The high level of virus infection in C13-neutrophils was revealed by single-cell sequencing and qRT-PCR. It is notable that the immune reaction and virus replication were simultaneously so extensive in this cell cluster. NS of IAV is the most important viral protein counteracting the IFN system, and we found that the level of NS RNA was particularly high in C13-neutrophils compared with other clusters. The balance between immune reaction and immune antagonism in this cluster during IAV-driven immunopathogenesis remains to be clarified.
Notably, after IAV copies were hardly detectable in the lung, a second wave of pro-inflammatory factors was generated, mainly by a group of special cells identified as Pf4+ macrophages. C8 Pf4+ macrophages expanded to up to 18% of total cells at day 7 p.i. This cluster expressed high levels of cytokines and chemokines such as Ccl7, Ccl8, Cxcl2, Ccl2, Ccl6, Ccl9, Ccl12, Cxcl10, TNF-α, Trem2, and the complement family member C1q at day 7 p.i. The depletion of Pf4+ macrophages led to a reduction in cytokines such as Ccl2, Ccl7, and Ccl8, which were reported to recruit monocytes and facilitate the maturation of tissue-specific macrophages. Besides pro-inflammatory factors such as TNF-α that directly worsen the immune injury, C8 Pf4+ macrophages may assist the recruitment of monocytes or T lymphocytes into the lung by generating Ccl2 and Ccl7, which bind to Ccr2. Furthermore, high-level expression of C1qa, C1qb, and C1qc was also detected in C8 Pf4+ macrophages. As a recent study reported that C1q can regulate the activation of CD8+ T cells in autoimmunity and viral infection, it is possible that C8 Pf4+ macrophages assist T lymphocytes in adaptive immunity [34]. Collectively, we identified the origin of the cytokines/chemokines released at the late stage of immunopathogenesis, which could contribute to the persistence of severe IAV-driven pneumonia.
Interestingly, C8 Pf4+ macrophages also expressed high levels of TGF-β at day 7 p.i., which was reported to promote the maturation of AMs in the lung. Considering that PPAR-γ was highly expressed in C8 Pf4+ macrophages, and that the depletion of Pf4+ macrophages led to a significant reduction in AM formation in the lung at the late stage of IAV-mediated pneumonia, we hypothesized that C8 Pf4+ macrophages may be the precursors of AMs as the lung recovers from IAV infection. The characteristics of monocyte-derived AMs differ from those of tissue-resident AMs. These cells may transform into AMs while releasing large amounts of cytokines, and the autocrine secretion of TGF-β by C8 Pf4+ macrophages further promotes their differentiation into AMs. Collectively, we identified the origin of the cytokine/chemokine release occurring at the late stage of immunopathogenesis, in cells that are probably the precursors of AMs.
Inflammatory factors play important roles in infectious diseases, and studying the communication networks between immune cells can deepen our understanding of them. Dampening the immune response during a cytokine storm while enhancing the immune response to eliminate pathogens efficiently are the aims of treatment for IAV-induced pneumonia. Through comprehensive analysis of the ligand and receptor interactions of infiltrating immune cells in the lung, we found strong autocrine loops in immune cells at both the early and convalescent stages of infection. For instance, both the ligands (IL-1α, IL-1β, and IL-1RN) and the receptors (IL1R2 and IL1RAP) of the IL-1 system were highly expressed in the C13 neutrophil cluster compared with other clusters. In addition, the upregulation of Ccl3 and Ccl4 was accompanied by high levels of Ccr1, and the Cxcl2/Cxcr2 axis was also remarkable in C13 and C16. Controlling these autocrine loops to avoid the excessive release of inflammatory factors is critical for preventing a cytokine storm. IAV-induced pneumonia causes extensive apoptosis of epithelial cells and inflammatory lung injury. After virus elimination at day 7 p.i., monocytes/macrophages massively infiltrated the lung. Moderate levels of monocytes/macrophages and inflammatory factors favor the repair of lung damage; however, uncontrolled cell infiltration and cytokine release can induce serious pulmonary immunopathology or pulmonary fibrosis. The mechanism by which C8 Pf4+ monocytes/macrophages release such large amounts of cytokines remains to be explored; for example, how cytokines such as Ccl2, Ccl3, Ccl4, Ccl7, and Ccl8 transduce different signals, even in a single cell, is still unknown.
In summary, through scRNA-seq analysis, we uncovered many new aspects of the pathogenesis of IAV infection. Importantly, by sequentially analyzing the transcriptomes of infiltrated immune cells, we identified two waves of pro-inflammatory factor release, mainly from two cell clusters that had not been described previously. These clusters could be the major origin of the cytokine/chemokine storm occurring in IAV-driven pneumonia [5,35]. Therefore, these newly identified clusters should be considered important therapeutic targets for IAV-driven pneumonia, for relieving the clinical symptoms caused by the cytokine storm, for shortening the time to recovery, and for preventing severe sequelae.
Mice
C57BL/6, Pf4-cre, iDTR, and Rosa26-LSL-tdTomato mice on a C57BL/6 background were used in this study. To track Pf4+ cells, Pf4-cre mice were crossed with Rosa26-LSL-tdTomato mice to generate Pf4-tdTomato-expressing cells. To generate Pf4-cre; iDTR mice, Pf4-cre mice were crossed with the iDTR line. To induce depletion in the Pf4-cre; iDTR model, DT was injected intraperitoneally twice, at day 4 and day 6 p.i., at a dose of 50 ng/g. Pf4-cre and Rosa26-LSL-tdTomato mice were provided by the laboratory of Dr. Linheng Li at Kansas University. Mice aged 6-12 weeks were used at the start of the experiments, with littermate controls in all experiments. Mice were bred and housed under specific pathogen-free conditions at the Animal Center of Sun Yat-sen University (SYSU) Zhongshan School of Medicine (ZSSOM), with standard pellet feed and water.
Ethics statement
All animal experiments were carried out in strict accordance with the guidelines and regulations of the Laboratory Monitoring Committee of Guangdong Province of China, and were approved by the Ethics Committee on Laboratory Animal Care of Zhongshan School of Medicine (ZSSOM), Sun Yat-sen University (Assurance Number: 2017-061).
Influenza infection
Mice were anaesthetized with isoflurane and inoculated intranasally with 0.5 LD50 (50 PFU) of influenza A/PR/8/34 (H1N1) virus. The lungs were collected at the indicated time points post infection (p.i.). Virus stocks were obtained from embryonated chicken eggs 48-72 h after inoculation, and titers were determined by plaque assay on MDCK cells, as described previously [36]. Body weight and survival rates of each group were measured daily.
FACS analysis and cell sorting
Single-cell suspensions were prepared from lung, spleen, bone marrow, or blood samples. Cells were incubated with anti-Fc receptor antibodies (clone 2.4G2) and stained with antibodies on ice for 20 min before washing. For intracellular staining, cells were stained with antibodies against surface molecules, then fixed and permeabilized in Cytofix/Cytoperm buffer (BD Biosciences). Cells were stained intracellularly and then analyzed on an LSRFortessa (BD Biosciences) or sorted with a FACSAria (BD Biosciences) following the manufacturer's procedures. Data were analyzed with FlowJo software (TreeStar).
Immunofluorescence assay (IFA)
Cells were fixed with 4% paraformaldehyde (PFA) and permeabilized with 1% Triton X-100, followed by blocking with 5% goat serum in PBS and staining with primary antibodies for 1 h. After three washes, cells were stained with anti-rat AlexaFluor488 and anti-rabbit AlexaFluor594 secondary antibodies for 1 h, washed three times again, and stained with 4',6-diamidino-2-phenylindole (DAPI) reagent (Invitrogen). All procedures were performed at room temperature.
Quantitative real-time PCR (qRT-PCR)
Erythroblasts, granulocytes, or M1 macrophages were isolated, and total RNA was extracted from each cell type for qRT-PCR. cDNA was reverse-transcribed with oligo(dT) and random hexamers using the PrimeScript RT reagent kit (Takara). Real-time PCR was performed using SYBR Green (Bio-Rad) on a qTOWER2.0 (Analytik Jena AG). Relative expression was determined by normalization to the housekeeping gene β-actin with Bio-Rad CFX Manager software.
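Although the CFX Manager software performs the normalization internally, relative expression normalized to a housekeeping gene is conventionally computed with the 2^(-ΔΔCt) method; the sketch below assumes that method, and all Ct values are illustrative rather than taken from the paper.

```python
# Relative expression by the 2^(-ΔΔCt) method, assuming normalization to
# beta-actin as described in the text. All Ct values are hypothetical.
def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """Fold change of a target gene versus a reference (e.g. uninfected) sample."""
    d_ct_sample = ct_target - ct_actin        # normalize to housekeeping gene
    d_ct_ref = ct_target_ref - ct_actin_ref   # same normalization in reference
    dd_ct = d_ct_sample - d_ct_ref
    return 2.0 ** (-dd_ct)

# Example: a chemokine in infected vs. uninfected lung (hypothetical Ct values).
fold = relative_expression(22.0, 18.0, 26.0, 18.0)
print(fold)  # 16.0: a 16-fold increase in the infected sample
```

A ΔΔCt of -4 corresponds to four fewer amplification cycles needed after normalization, i.e. 2^4 = 16-fold higher expression.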
Immunohistochemistry (IHC)
The lungs were fixed in 2% paraformaldehyde for 24 h and embedded in paraffin. After freezing, the paraffin blocks were sectioned into 5 μm slices, adhered onto glass slides, and fixed in ice-cold acetone. Sections were pretreated with Image-iT FX Signal Enhancer (Thermo Fisher Scientific) and blocked with 5% goat serum in PBS. Sections were then stained with anti-Ccl2 (rabbit, Bioss) primary antibodies, followed by anti-rat AlexaFluor488 and anti-rabbit AlexaFluor594 secondary antibodies, and DAPI.
ELISA assay
Mouse Ccl2 and Ccl8 ELISA kits were purchased from R&D. Bronchoalveolar lavage (BAL) fluid from the lungs of Pf4-cre-; iDTR mice and Pf4-cre+; iDTR mice was harvested, and Ccl2 or Ccl8 concentrations were measured in 96-well plates following the manufacturer's instructions.
Single cell RNA-seq
(1) Single cell collection and cDNA amplification. Single-cell capture was performed using a Chromium Controller instrument (10x Genomics), a highly repeatable, efficient, and stable solution for cell characterization and gene expression profiling of thousands to millions of cells (https://www.10xgenomics.com/solutions/single-cell/). Single cells were collected from the lungs of mice (three mice per group) uninfected (day 0) or infected with A/PR/8/34 (H1N1) virus at five time points: day 1, day 3, day 5, day 7, and day 12 p.i. The lung tissue was dissected and homogenized using a lung dissociation kit (Miltenyi Biotec). Following dissociation, single-cell suspensions were filtered through a 70 μm nylon mesh filter (BD Biosciences) into PBS supplemented with 0.2 mM EDTA (pH 8) and 0.04% bovine serum albumin (BSA). Red blood cells were lysed by hypotonic lysis. Fresh cells from the lung were harvested, washed with 1× PBS, and resuspended at 1 × 10^6 cells per mL in 1× PBS containing 0.04% BSA to minimize cell loss and aggregation, following the protocol recommended by 10x Genomics. Cell viability of the samples was analyzed using trypan blue exclusion staining to ensure more
than 90% of live cells. Cellular suspensions were loaded onto the Chromium Controller instrument to generate single-cell gel bead-in-emulsions (GEMs) with Chromium Single Cell 3' Reagent v2 kits (10x Genomics), containing a pool of ~750,000 barcodes to separately index the transcriptome of each cell. Thousands of individual cells were isolated into droplets together with gel beads coated with unique primers bearing 10x cell barcodes, unique molecular identifiers (UMIs), and poly(dT) sequences. According to the Single Cell 3' Reagent Kit protocol, GEM reverse transcription was performed in a Veriti 96-well thermal cycler (Thermo Fisher Scientific). After RT, GEMs were broken and the barcoded single-strand cDNA was cleaned up with DynaBeads MyOne Silane Beads (Thermo Fisher Scientific) and a SPRIselect Reagent Kit (Beckman Coulter). Global amplification of cDNA was performed in the Veriti 96-well thermal cycler, and the amplified cDNA product was cleaned up with the SPRIselect Reagent Kit.
(2) Library construction and sequencing. The indexed sequencing libraries were constructed using the reagents in the Chromium Single Cell 3' Library v2 Kit for (a) fragmentation, end repair, and A-tailing; (b) size selection with SPRIselect beads; (c) adaptor ligation; (d) post-ligation cleanup with SPRIselect beads; and (e) sample index PCR and final cleanup with SPRIselect beads. The final single cell 3' library comprises standard Illumina paired-end constructs which begin and end with the P5 and P7 primers used in Illumina bridge amplification. The barcoded sequencing libraries were quantified on an Agilent 2100 Bioanalyzer System using a High Sensitivity DNA chip (Agilent) and by quantitative PCR using a KAPA Library Quantification Kit (KAPA Biosystems). Finally, the sequencing libraries were loaded onto a HiSeq2500 (Illumina) with a custom paired-end sequencing mode (26 bp for read 1 and 98 bp for read 2) to obtain a sequencing depth of ~50,000 reads per cell.
scRNA-seq bioinformatics analysis
(1) Initial quality control. The single-cell sequencing files (base call files) were processed using the Cell Ranger Single-Cell Software Suite (v2.0) for quality control, sample demultiplexing, barcode processing, and single-cell 3' gene counting [37]. The raw base call files of each sample were first demultiplexed into fastq data using the bcl2fastq conversion software. Quality control of the fastq data was performed using FastQC software, and the data were aligned to the Nucleotide Sequence Database (https://www.ncbi.nlm.nih.gov/genbank/) using the basic local alignment search tool (BLAST) to avoid data distortion caused by experimental contamination from other species, especially bacterial infection or contamination. After initial quality control, sequences with low-quality barcodes and UMIs were removed. We obtained about 862.7 million clean reads for the mouse transcriptomes of 16,424 cells, achieving >50,000 mean reads per cell. More than 98% of the clean reads had high quality scores at the Q30 level (an error probability for base calling of 0.1%) in the bases of the barcodes and UMIs. The sequencing saturation of each sample was above 80%, and 15,237~16,505 mouse genes were detected across the six single-cell RNA-seq libraries.
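The sequencing saturation reported here can be illustrated with the definition commonly attributed to Cell Ranger: one minus the ratio of unique (cell barcode, UMI, gene) combinations to total confidently mapped reads. The read tuples below are hypothetical; real pipelines compute this from the alignment output.

```python
# Sequencing saturation sketch, assuming the commonly used definition
# 1 - (n_unique_cb_umi_gene / n_confidently_mapped_reads).
def sequencing_saturation(reads):
    """reads: iterable of (cell_barcode, umi, gene) tuples, one per mapped read."""
    unique = {tuple(r) for r in reads}
    return 1.0 - len(unique) / len(reads)

# Hypothetical reads: 5 mapped reads collapsing to 3 unique molecules.
reads = [
    ("AAAC", "UMI1", "Pf4"),
    ("AAAC", "UMI1", "Pf4"),   # duplicate read of the same molecule
    ("AAAC", "UMI2", "Ccl2"),
    ("TTTG", "UMI1", "Ccl8"),
    ("TTTG", "UMI1", "Ccl8"),  # duplicate
]
print(f"{sequencing_saturation(reads):.2f}")  # 0.40: 3 unique molecules from 5 reads
```

A saturation above 80%, as reported for these libraries, means most additional reads would re-sample molecules already seen.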
(2) Alignment, UMI counting, and multi-library aggregation. The fastq data were aligned to the UCSC mouse reference genome (mm10) using STAR with default parameters. For counting of the UMI tags, the CellRanger count algorithm was used to generate single-cell gene counts for each library, which provides stable and accurate clustering solutions for 10x Genomics scRNA-seq data [26]. Only confidently mapped, non-PCR-duplicate reads with valid barcodes and UMIs were used to generate the gene-barcode matrix. For quantitative identification of intracellular viral segmented mRNAs, to track IAV-infected cells in the lung at single-cell resolution, the scRNA-seq data of the six lung samples from mice uninfected (day 0) or infected with A/PR/8/34 (PR8, H1N1) virus at five time points (day 1, day 3, day 5, day 7, and day 12) were reanalyzed using the CellRanger count algorithm based on the union of the mm10 and PR8 (txid211044, NCBI) reference genomes. For comparison of the scRNA-seq data among different libraries, the gene-cell-barcode matrix of each sample was normalized by equalizing the read depth between libraries for further merging, using the CellRanger aggregate procedure; this was confirmed using the Seurat integrated analysis method [38]. Reads from higher-depth libraries were subsampled until all libraries had an equal number of confidently mapped reads per cell. The gene-cell-barcode matrices from the six samples were concatenated, log-transformed, and filtered based on the number of genes detected per cell. Any cell with fewer than 200 genes or more than 30% mitochondrial UMI counts was filtered out, and only genes with at least one UMI count detected in at least one cell were used for further analysis, which was performed using CellRanger R version 2.0.0 and Seurat suite version 2.0.0.
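The cell-level filter described above (fewer than 200 detected genes, or more than 30% mitochondrial UMI counts) can be sketched on a toy count matrix. The matrix, and the choice of 13 rows standing in for mt- genes, are hypothetical.

```python
import numpy as np

# Toy genes-by-cells UMI count matrix with two deliberately failing cells.
rng = np.random.default_rng(1)
n_genes, n_cells = 500, 6
counts = rng.poisson(1.0, (n_genes, n_cells))
counts[200:, 0] = 0          # cell 0: too few detected genes
counts[:13, 1] = 50          # cell 1: inflated mitochondrial counts
mito = np.zeros(n_genes, dtype=bool)
mito[:13] = True             # pretend the first 13 genes are mt- genes

# QC metrics per cell, matching the thresholds in the text.
genes_per_cell = (counts > 0).sum(axis=0)
pct_mito = counts[mito].sum(axis=0) / counts.sum(axis=0)
keep = (genes_per_cell >= 200) & (pct_mito <= 0.30)

filtered = counts[:, keep]
print(f"kept {int(keep.sum())} of {n_cells} cells")  # kept 4 of 6 cells
```

Cell 0 fails the gene-count threshold and cell 1 fails the mitochondrial-fraction threshold, so only the four healthy cells survive, mirroring the Seurat-style filtering step.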
(3) Clustering, differential expression, and visualization. For clustering the cells, principal component analysis (PCA) was run on the normalized, filtered gene-barcode matrix to reduce the number of feature (gene) dimensions. The top 15 principal components (PCs) were selected and passed to t-distributed stochastic neighbor embedding (tSNE) [38] for clustering visualization in a two-dimensional space. Graph-based clustering was then run to group together cells with similar expression profiles, building a sparse nearest-neighbor graph without pre-specifying the number of clusters. The cells were grouped into 18 unsupervised clusters according to their differential expression profiles of hallmark genes. To identify genes enriched in a specific cluster, the mean expression of each gene was calculated across all cells in the cluster. Each gene from the cluster was then compared to the median expression of the same gene in cells from all other clusters, and the log2 fold-change of each differentially expressed gene was calculated. For hierarchical clustering, pairwise Pearson correlations between clusters were calculated based on the mean expression of each gene across all cells in the cluster, and the log2 fold-changes of differentially expressed genes were visualized as a heatmap with MEV software (http://www.tm4.org/). The graphical representation of specific gene expression on tSNE plots was implemented using Loupe Cell Browser software and Cell Ranger R.
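The marker statistic described above (mean expression in the cluster of interest versus the median of the same gene across cells in all other clusters, reported as a log2 fold-change) can be sketched as follows. The expression matrix, cluster labels, and the planted marker gene are synthetic.

```python
import numpy as np

# Synthetic genes-by-cells expression matrix with one planted marker gene.
rng = np.random.default_rng(2)
expr = rng.gamma(2.0, 1.0, (4, 300))   # 4 genes x 300 cells
labels = rng.integers(0, 3, 300)       # cluster id per cell (0, 1, 2)
expr[0, labels == 2] += 8.0            # gene 0 is a marker of cluster 2

def log2_fc(expr, labels, cluster):
    """Per-gene log2(mean in cluster / median across cells of other clusters)."""
    eps = 1e-9  # pseudocount to avoid division by or log of zero
    in_c = labels == cluster
    mean_in = expr[:, in_c].mean(axis=1)
    med_out = np.median(expr[:, ~in_c], axis=1)
    return np.log2((mean_in + eps) / (med_out + eps))

fc = log2_fc(expr, labels, cluster=2)
print("top marker gene index:", int(fc.argmax()))
```

The planted gene dominates the ranking, illustrating how per-cluster marker genes such as Pf4 in C8 are scored.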
(4) Single-cell trajectory analysis, PCA analysis, and Gene Ontology enrichment. Single-cell trajectory analysis was performed using the Monocle package to construct single-cell trajectories in pseudotime based on the differentially expressed genes among related single cells. The transcriptional profile data of macrophages, eosinophils, and basophils were retrieved from the NIH SRA database under accession code SRP040656 (https://www.ncbi.nlm.nih.gov/sra/). After z-score normalization, these transcriptional profiles, together with those of the specific cell clusters in our study, were used for pairwise Pearson correlation and PCA analysis implemented in the R language (http://www.r-project.org) to demonstrate the phylogeny of specific clusters. Functional pathways representative of each gene signature were analyzed for enrichment in gene categories from the Gene Ontology Biological Processes (GO-BP) database (Gene Ontology Consortium) using DAVID Bioinformatics Resources [39].
The ligand and receptor interaction maps
To visualize the ligand and receptor (LR) interactions of the immune cells in the lung after IAV infection, the published dataset of ligand and receptor pairs [24] was used as the reference. LR genes with low average UMI counts (<0.1 average UMI counts in each cluster, with each sample normalized by equalizing the read depth) were filtered out. We built an interaction circular graph by connecting edges between literature-supported ligand and receptor pairs, generated with the Circos package (http://circos.ca/). We computed the log-transformed UMI counts, and highly expressed ligand and receptor interactions were highlighted in red in the graph using the Circos package.
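The edge-construction step can be sketched as follows: a literature-supported pair contributes a cluster-to-cluster edge only when both the ligand and the receptor pass the 0.1 mean-UMI threshold in their respective clusters. The gene list, pair list, and mean UMI values below are illustrative, not from the paper's dataset.

```python
# Hypothetical per-cluster mean UMI counts for a few LR genes.
mean_umi = {  # gene -> {cluster: mean UMI count}
    "Ccl2":  {"C8": 2.5,  "C5": 0.8},
    "Ccr2":  {"C8": 0.05, "C5": 1.2},
    "Cxcl2": {"C8": 0.02, "C5": 0.01},
    "Cxcr2": {"C8": 0.3,  "C5": 0.4},
}
lr_pairs = [("Ccl2", "Ccr2"), ("Cxcl2", "Cxcr2")]  # literature-supported pairs

THRESHOLD = 0.1  # minimum mean UMI count per cluster, as in the text
edges = []
for ligand, receptor in lr_pairs:
    for c_from, lig_umi in mean_umi[ligand].items():
        for c_to, rec_umi in mean_umi[receptor].items():
            # keep the edge only when both ends pass the expression filter
            if lig_umi >= THRESHOLD and rec_umi >= THRESHOLD:
                edges.append((c_from, ligand, c_to, receptor))

for e in edges:
    print(e)  # two Ccl2 -> Ccr2 edges into C5; Cxcl2 is filtered out everywhere
```

The surviving edge list is what a tool like Circos would then draw as the interaction map, with edge weight taken from the log-transformed UMI counts.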
Statistical analysis
Data were analyzed using GraphPad Prism 6.0 software (La Jolla, CA, USA). The two-tailed Student's t-test was used to determine statistical significance between two experimental groups or for multiple comparisons. Data were considered significant at *P < 0.05, **P < 0.01, and ***P < 0.001.
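The significance labels used throughout the figures follow a simple threshold mapping, which can be written as:

```python
# Map a p-value to the significance labels used in the figures
# (*P < 0.05, **P < 0.01, ***P < 0.001); the p-values below are hypothetical.
def stars(p):
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"  # not significant

for p in (0.0004, 0.004, 0.04, 0.4):
    print(p, stars(p))
```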
Supporting information S1 Table
The Brazilian woman: from colonial photography to contemporary Portuguese photography
This study carries out an initial analysis of how the image of the Brazilian woman is shaped by a discourse that is historically constructed and reinforced by colonial photography. This visuality has endured through the ages and represents a form of contemporary colonialism, as it is characterized by an identity reductionism disguised as a global ideology. The persistence of these discourses is analyzed through a critical reading of the work of André Cepeda and Miguel Valle de Figueiredo, Portuguese photographers who have produced photographic work about Brazilian women. In these images, the construction of a visual concept of Brazilian women reveals underlying statements supported by the photographers' perceptions and experiences, as well as by generalized beliefs. It is thus concluded that the image of Brazilian women as portrayed by these photographers is covered by renewed colonizing processes, in which the Brazilian woman's image is linked to the idea of an available and sensual body, imbued with the concept of a colonial body that still persists in contemporary imagery.
Introduction
The emergence of photography in the 19th century brought with it the assumption of an indisputable truth about the facts represented. Its status as a purely descriptive medium allowed the understanding that, when registering the "real" world, it does so in the most objective way, showing something as it "really" is. This leads remembrance to an understanding that the facts presented in a picture are as they really looked. Memory and the photographic image thus become intertwined; one seems not to work without the other.
However, neither photograph nor memory is sufficient to confer absolute credibility on a given fact. Like memory, photography selects parts of an event, and can deceive, manipulate, or reinforce ideologies by characterizing a subject in a specific way. Nevertheless, photography still constitutes one of the best tools for recalling processes (Le Goff, 2003).
The photograph, while allowing remembrance of what is seen in it, also allows its author to construct a stigmatized image of people and places, dictated by the maker's underlying intentions. Even when photography is used as a technical document, it also enables the creation and diffusion of meaning. Through this process, an identity definition of the Other is unveiled. In the colonial era, photography was responsible for imprinting racist notions which often represented a false depiction of events, subjects, and historical circumstances (Roberts, 1988).
It is known that gender relations went hand in hand with photography, which assigned an inferior role to non-European women through the display of an image of a naked, available body presented to the colonizer. This "male gaze" in image making marked the colonial age, especially when it began to register civilizations and individuals apart from European ways as inferior beings, wild and disabled. Such use of photography created the concept of colonial photography (Edwards, 2008): one that registers the colonizer's view and places him in a position of superiority over those captured by his camera, unveiler of the "unknown and barbaric" world.
Thus, photography became, from its invention onwards, a medium for the diffusion of cultural and gender asymmetry. Pictures populated the collective imagination of European people, who came to know the world through images made by traveling photographers. Because of this same memory element that pervades photography, colonialist ideas are still repeated in contemporary images, seen mainly in the replication of patriarchal and imperialist stereotypes, ideas confronted by feminism and post-colonialism.
The almost total absence of women as social agents in the colonial era is reflected in this representation of the female body as seen by the male gaze. There was constant surveillance of manners and appearance, and therefore the image of women sought to reflect their own "good manners". This idea of modesty was not imposed by men only; it was also inculcated in the minds of women, to the point of developing a self-monitoring awareness, taught from childhood on (Berger, 1972).
Therefore, one must understand that the male gaze is more than "the way a man looks at" something; it is also a place of meaning formation. According to Laura Mulvey (1989), a woman can also have a masculine "gaze" while observing another woman in a picture, reviving a lost aspect of her sexuality in doing so. The woman represented through an objectified view in film seems to provide the woman-spectator with a place of male gaze and pleasure. Thus, the female body-object gives her access to an "aesthetic curiosity" (Mulvey, 1996) that counteracts a sexist scopophilia.
The whole corpus of women's imagery is held as "not a history of illustration, but as a story in itself" (Higonnet, 1992, p. 140). In these images, representation is seen through "rules of language function that reveal or distort what is held to be true about the category of women" (Butler, 1997, p. 18). With awareness of the visual image's normative function, we seek to go beyond the surface, away from the contamination of common sense and old ideas that settles like stains on the visual image of Brazilian women.
This reflection aims to propose a brief analysis of the construction of a visual representation of the Brazilian woman as viewed by Portuguese photography. To accomplish this purpose, the paper takes a theoretical view of the image of the woman's body, as well as a historical look at the image of Brazilian women as portrayed in colonial photographs and Portuguese newspapers. On that basis, an analysis is proposed of art projects by two contemporary Portuguese photographers, Miguel Valle de Figueiredo and André Cepeda. Both have been to Brazil, and Brazilian women are portrayed in their works.
The image of woman's body: objectification and fetishism
The body, directly or indirectly, displays many extremely significant issues: notions of race, concepts of beauty, sexuality, beliefs and notions of morality, as well as the distinction between the "wild" and the civilized. For William Ewing (1996), pictures that take the human body as their subject are political, since they are used to control or influence opinions and actions. This type of image achieves greater impact on the social imaginary than television images. Pictures are the raw material of what gets carved into collective memory as the identity of the other.
When a body is displayed in an erotic way, its viewing depends on how such bodies are socially classified. Washerwomen, since they worked in open spaces, were considered women with a lost sexuality (Henning, 1996). Even today, erotic photography operates under a classification of its themes organized into recognizable types. For its part, the advertising industry has represented the woman's body using several degrees of explicit nudity or sexual activity in order to eroticize the female body and turn it into an object for the male gaze. For Michele Henning (1996), this is what is called the objectification of the female body.
This concept is especially relevant when it comes to photography, which adds a second layer of objectification by turning people into mere objects for viewing. For Solomon-Godeau (1991, p. 237), "the most insidious and instrumental forms of domination, subjugation, and objectification are produced by mainstream images of women rather than juridically criminal or obscene ones". For the author, the history of photography itself is intertwined with the social history of women, given that photography carries in its tradition a voyeuristic or fetishistic trait when portraying women.
The word fetish comes from the Latin facere, meaning to make or to build. It was first used in the 15th century by Portuguese settlers and merchants in reference to the African reverence for religious amulets and idols, with a direct reference to witchcraft practices. Fetishism would then be the act of worshiping a fetish (Hirschfeld, 1982), of attributing a magical property to the fetish object. This includes Christian iconolatry, which attributes supernatural powers to saints who could miraculously manifest effects on the physical level.
In psychoanalysis, an object becomes a fetish when it is the focus of a sexual desire, often associated with women, since the fetishist idealizes articles associated with them, such as shoes and lipstick. For Freud, in the Three Essays on the Theory of Sexuality, written in 1905, this is an aberration, almost a condition, as it replaces "normal" sex. The fetish is "a substitute for the penis in a woman (in a mother) in which the little boy once believed and - for reasons familiar to us - does not want to give up (...) because if a woman was castrated, then the possession of his own penis would be in danger" (Freud, 1974, p. 180). According to the father of psychoanalysis, the choice of the fetish object depends not on similarity to the penis but on the moment of fracture or trauma in early childhood when the boy realizes that the mother does not have a penis. It is at this moment that the first object seen becomes his fetish, derived from his own castration anxiety.
The relevance of this aspect of Freudian theory to the study of photography is that it serves as a tool to explore how visual images can objectify and fragment the woman's body, a phenomenon that happens in a totally different way from the viewing of the male body. There is, however, a problem: fetishism is based on male castration and therefore qualifies only as a male-related issue, ignoring that it may arise in other genders. The female body is then shown as the vessel for fetish desires fostered by males.
In this sense, Christian Metz (1985) relies on Freud to state that photography and the fetish carry the same contradiction and uniqueness. While the picture arrests time and allows us to carry a visual image with us, fetishism freezes a moment and fixes it in the fetishist's memory. The fetish suggests that photography summons the dead and at the same time keeps one's memory in the past. Its temporal simultaneity and material dimension give the photographic object its fetish character.
The film critic Laura Mulvey also uses psychoanalysis to establish a profound critique of women's visual image, especially in the cinema. In this context, psychoanalytic theory was used to unravel how the "unconscious of patriarchal society had structured film form" (1975, p. 6). In Visual Pleasure and Narrative Cinema (1975), she speaks of the existence of a male point of view manifested in the visual arts and literature. This "male gaze" can be seen in the camera's constant use of close-up framing to show body details, hence fragmenting the woman in the mind of the viewer. Drawing on the theory of fetishism, the author speaks of women's objectification from the moment the woman comes to be represented as spectacle. In this sense, the (heterosexual) man constitutes the look itself, and the image to be viewed is a woman. These roles are wrapped up in the castration complex, where the woman is seen as the lacking or deprived part, the sexual difference. To escape castration anxiety, the man places the woman in an undervalued position as punishment (voyeurism) or replaces the female figure with a fetish (object of desire).
Later, however, the author sees fetishism not as belonging to a dominant sexual look but as a culturally dominant way of seeing the world. In the article "Afterthoughts on 'Visual Pleasure and Narrative Cinema' inspired by King Vidor's Duel in the Sun (1946)", Mulvey (1989) updated her line of thought with the inclusion of two other elements: the woman as spectator and the woman in the role of female protagonist. In the spectator's place, a woman mirrors the "male gaze", which is nothing more than a stance in the world, a proposed way of seeing through a masculinized version of the viewer's place. The woman takes a manly place to revive the lost aspect of her sexuality, the castration itself, using this proposed form of gaze and its pleasure. From the moment she adopts masculinity as her viewpoint, she is no longer passive.
In search of a perspective beyond a binary or simple male and female opposition, Mulvey develops this theory further in Fetishism and Curiosity (1996). For the filmmaker, the woman as spectator performs a function similar to Pandora's when opening the box. Curiosity exerts fascination with the image and is therefore shown as a source of knowledge. Along that line of reasoning, she develops the idea of an "aesthetic curiosity" to counter the male gaze that fetishizes the image, binding Pandora's curious look to the box. She transforms a myth whose misogynistic meaning blamed the woman for all the evil of the world into a curiosity that has political dimensions in the interpretation of images. "While curiosity is a compulsive desire to see and to know, to investigate something secret, fetishism is born out of a refusal to see, a refusal to accept the difference the female body represents for the male" (1996, p. 64).
When proposing an epistemophilia as resistance to sexist scopophilia, Mulvey (1996) still points to the need to modulate the argument itself to allow a more satisfactory relationship between fetishism and curiosity. Therefore, to reflect on the image of women is to know that it is composed of a complex web of meanings acquired over time, that its representation has political meanings, and that its reception is located under the "male gaze" or even beyond it, in a look broken into male and female sides.
Photography's role in the construction of the identity of the "other"
According to Juan Naranjo, in his introduction to the book Fotografía, antropología y colonialismo (2006), advances in image printing technology from the early 19th century onward allowed a considerable growth in the circulation of printed images compared with previous centuries. The proliferation of optical devices in both public and private spheres changed social habits and introduced changes in the forms of reception and distribution of information. In the second half of the 19th century alone, a widespread "visual industry" of incredible iconographic density was created.
The Brazilian woman: from the colonial photography to contemporary Portuguese photography. Lorena Travassos

Photography started to play a key role in cultural changes, especially once images were placed next to printed words, given the photograph's ability to erase the boundaries between reality and its representation. It was exactly this capacity for illusory mimesis between an object and its image, together with its capacity for multiplication, that made photography the visual medium of widest social penetration. Advances in photographic processes made possible the emergence of a photographic industry, opening venues for the commercialization of cheap, large-scale photographs such as the cartes de visite. By acquiring widespread acceptance, photographs started an extensive process of democratization of visual information, since the acquisition of a picture replaced direct live experience with the virtual observation of people and landscapes of distant locations.
Due to the expansion of the photographic industry and the increased consumption of photographs, many companies extended their offer and inventoried the world, sending photographers everywhere to document what they saw, as if driven by an urge. At the same time, numerous photographic studios opened in distant sites to fulfill a dual function: to photograph the local bourgeoisie, settlers, missionaries and sailors, and to record the human types arriving in the main cities and ports. The goal of the latter kind of production was the acquisition of these images by travelers and tourists.
That great circulation of photographs made the image of the "other" "familiar", both to the scientific class and to the bourgeoisie of the era. Although some researchers, such as anthropologists, used colonial pictures for analysis at the expense of field research, other scholars challenged the veracity of these images, pointing out that they were staged like cartes de visite, since they had primarily a commercial purpose. The dissenting scholars asserted that these pictures followed predetermined formal standards so that the information would be easier to read or compare and, therefore, would rarely serve as the basis of serious scientific studies.
The photographs taken in the European colonial period had generically invented themes, and no effort was made at the time to identify the photographer or the event in depth. Therefore, one must approach colonial-era photographs critically, an important challenge for those who use photographic analysis in scientific research today.
The contamination of the ways of life of the portrayed non-European tribes by European standards of behavior, which often led to the total destruction of these cultures, is another factor that makes commercial colonial shoots unusable as anthropological research material, since that cultural contamination was itself a direct result of the rapid process of colonial expansion in the 19th century. Much of this genre of photography reproduced all kinds of fantasies related to Orientalism and other exotica. Thus, these photographs were used to create stereotyped identities meant only to satisfy European Romantic consumers.
There are also strong gender relations that permeate photography of the colonial era, with the attribution of an inferior role to women, especially those of non-European origin. Photos of half-naked or even naked women, regardless of race and of the colonized country in question, are always present in colonial visuality. Such a phenomenon can be explained, according to Filipa Vicente (2014, p. 22), as "resulting from a posture of domination in relation to the visible - in relation to what can become visible - as well as the male hegemony in the colonial space." Among the ethical problems that these images pose, Vicente (2014, p. 26), faced with Vasco Araújo's sculptural work Botany (Figure 1), asks: "but if she were his woman, wife, white, in a Portuguese village and not in Africa, would the Portuguese soldier let himself be portrayed just like that by anyone?" For Stuart Hall, in Cultural Identity and Diaspora (1996), representation practices always involve positions from which we write or speak. The characterization of the "other" contributed to the creation of categories of religious, racial, sexual and gender content articulated in different ways. This superficial exercise of identifying differences played a key role in Western visual culture. By creating the "other" as a disqualified being, inferior to the West's power and knowledge, Western production established itself as a form of hegemonic knowledge. Thus, it used and hijacked the very existence and culture of the other in the name of its own superiority.
This dialectical world view, which opposes colonizer and colonized and assumes the settler's culture to be superior to the latter's, besides fabricating a relationship of subordination of the "other", deeply imprinted the idea of inferiority on the mind of the "other", so that the latter would consider himself unable to combat this whole logic (Barradas, 2009). Photography was often responsible for building acceptance of such authoritarian power over the photographed subject, a power that also controlled the production and distribution of images.
This asymmetry of power led to the conclusion that black people, especially women, were sub-human or animal. Images showing a woman in eroticized positions, in apparent nudity and sexual availability to the eyes of a white settler, placed her not in an immoral world but in an amoral one, because her very existence was rooted apart from the standards required by colonizing morality.
In the context of the colonial relationship, says Maria Baptista (2013), "it is necessary that the black people shut up, have no face, identity or memory" (p. 284) and "thus they can barely exist to the white people's eyes, for they have to be objects of conquest and ordering" (p. 285). Photography in this context was used as a form of appropriation of the bodies, memories and identities of the Other, representing him outside the historical process and time, as an uncivilized being.
The image of Brazilian women as seen by the Portuguese gaze
The invention of the "Brazilian" cliché, that is, the creation of a visual type that could be translated for a certain group of individuals as the original inhabitant of the "New World", is a product of the history of Portuguese emigration to Brazil. Besides representing a wild barbarian, the term "Brazilian" was also used for the Portuguese emigrant returning from Brazil. According to Jorge Fernandes Alves (2004), the lack of opportunities in a Portuguese economy marked by farming and stagnant growth assigned to Brazil the possibility of a better future, since to "Emigrate meant to meet aspirations built in confrontation with environment and its social representations which appeared as dominant, supported by the case of real and close characters" (Alves, 2004, p. 195).
In Brazil, many photographers focused on the port regions where slaves and authorities disembarked. Among the Portuguese photographers in Brazil, it was the Azorean José Christiano Junior who owned the largest collection of photographs of slaves by 1860. With a collection of 77 pictures, he offered his customers "a varied collection of customs and types of black people, the very own thing suitable for anyone who retires from Europe" (Gorender, 1987, p. xxxi). While the author photographed slaves in the exercise of their functions (Figure 2), which shows an interest in classifying Brazilian individuals, there were also photos presenting black women as exposed bodies available to the eye of the photographer and the buyer. Women were shown naked and objectified, which relates to the way slaves were examined in detail in the markets. As pointed out by Freitas (2011, p. 65), female slaves were targets of lust for their lords and were subjected to all sorts of actions in the sexual sphere, since they were "perceived as mere objects" on whom the lords "gave vent to sexual impulses". Black people of the colonial era constituted over half the population of Rio de Janeiro, then capital of the Portuguese empire, a "contingent so expressive that chroniclers of the period came to compare the Rio landscape to the cities of the African coast" (Lissovsky & Azevedo, 1987, p. xxii). Christiano Júnior included "types of black people" in his cartes de visite: black individuals were portrayed from the front and in profile to show facial features, tribal marks and clothing, highlighting the characteristics that defined one's ethnic group and/or profession (a technique known as Bertillonage).
These photographs, according to Lissovsky and Azevedo (1987), were directed at a public that had been isolated from the world until 1808, the year the Brazilian ports were opened to international trade, concomitant with the installation of the Portuguese Court in Brazil. It is a set of images that evokes human types and crafts; these are basically pictures of the alien and the foreign. It is important to highlight how the meaning of the carte de visite shifted: the image of a man of means could become his business card, but when it presented a black person the picture functioned as a postcard of Brazil. While the former describes a dignified individual, the latter describes a picturesque and generic character (Cunha, 1987).
It is important to note that a sense of virility was always associated with black people, the result of a hypersexualization inherited from the colonial period. This prerogative takes the gaze at sex as a means to authenticate an imprisonment "in geography and skin color" (Pine, 2004, p. 67). The hypersexualized black individual in the picture is stripped of his nature as a human being to make way for an animal, a fetish.
According to Luciana Pontes (2004), who draws on fieldwork carried out in Lisbon on the portrayal of women in the media, "the recent intensification in the late 1990s of the Brazilian immigration further complicated mutual identity processes in a context in which are created and / or enhanced old representations" (p. 236). In these representations, the author points to a process of essentialization and exoticization of Brazilian national identity, in addition to the sexualization of its women. This issue, as seen, follows the very formation of Brazil and the use of colonial photography to represent the "other".
It can be seen that the sexualization of Brazilian women in contemporary images repeats several standards set in the era of colonial photography. The sexualization of Brazilian immigrant women arises both from the immigrant condition and from coming from a country with a colonial and slavery past. There is an overlap of social markers of exclusion (colonialism, sexism and racism) which only reinforces the colonial/subordinate and sexualized position.
In Portuguese journalism, the relationship between the image of Brazilian women and prostitution was propagated more intensely from the case known as the "Mothers of Bragança" in 2003 (Figures 3 and 4). As described by Time magazine, the case began as a protest carried out by Portuguese women against the presence of Brazilian women who were in Bragança to work in brothels. The wives decided to unite and oust the individuals they held to be "destructive" of their families. This case contributes to the present day to a general association between Brazilian women and prostitution. The event resulted in the closure of such houses, the arrest of some women and the repatriation of undocumented Brazilians. For the Portuguese press, those actions were necessary to ensure that women of "easy sexuality" would not be allowed to invade the private space reserved for the family. For Gomes (2012), the Portuguese press has been instrumental in that association of Brazilian immigrants with prostitution. An example can be seen in the controversial cover of issue 565 of the Portuguese weekly magazine Focus (Os segredos da mulher brasileira, 2010) (Figure 5). On the cover, a woman in a bikini serves as background to the headlines: "The secrets of the Brazilian woman: He absolutely loves it, she hates it", "2,216 marriages with Portuguese men in 2009 alone" and "The ten commandments used to seduce men". In the story, Brazilian women are defined as coming from "Vera Cruz", the name Brazil received early in the colonial era, an explicit appeal to the imaginary of that time. In addition, the Brazilian woman is represented in a fragmented manner, with part of her body exposed as a sexual object.
The Brazilian woman, seen under the stigma of a "colonial body", will always be the bearer of an available body, seen as an object and understood as a constant "threat" to the Portuguese family. This image of an available and sexualized body touches all Brazilian women, regardless of their function in society or level of education. Differences in social class and education seem to influence vulnerability to the stigma. Women with little education and low income, working in domestic or customer service activities, are targets of greater prejudice. The capacity for organized reaction against that prejudice is also lower in the most vulnerable group, so they end up accepting and internalizing the idea of belonging to an inferior culture (Gomes, 2012). Currently, tourism businesses and advertising carry great weight in the spread of the image of an exotic Brazil, presenting mixed-race women with naked bodies as an attraction in tour packages. This has changed somewhat, says Gomes (2012), as the agency responsible for Brazilian tourism and the Portuguese press are currently deconstructing the imaginary of the eroticized mulatto woman, constructing another imaginary of Brazil through the presentation of cultural elements instead of exposed bodies. This is due in part to pressure exerted by Brazilian social movements in Brazil and Portugal.
The image of Brazilian women in contemporary Portuguese photography: André Cepeda and Miguel Valle de Figueiredo
The existence and action of the individual in his reality, as it presents itself, is conditioned by the different ways of looking at the world, interpreting it and possessing it. It is through the double distance between the image and what it represents, and between the image and the being who looks at it, that both meaning and loss are found: the construction of the meaning of what is represented, through a discourse articulated by culture itself, and the loss of the object/subject in its very existence, caused by the opacity of its own representation. Thus, the construction of visual images is understood as a process of recognition that settles one's sense of belonging or strangeness and one's relationship with reality.
While the images broadcast by daily informational and communicational outlets cause "a stultifying mass mediatization" (Guattari, 1992, p. 16) of a large number of individuals, a significant portion of the images made in the context of contemporary art and design aim to promote experiences that generate paradoxes with regard to the current aestheticized reality. The experience of this paradox can point to the construction of a critical stance towards the agency procedures laid down in the current globalizing cultural system: "this poetic-existential catalysis, as we will find in operation within scriptural discourses, vocal, musical or plastic, engages almost synchronously an enunciative recrystallization of its creator, the interpreter and the work of the art connoisseur. Its effectiveness lies primarily in its ability to promote active, procedural ruptures, within the field of meaning and its denotative semiotic structured fabrics" (Guattari, 1992, p. 31). The poetic construction of the artistic object, according to Guattari, has the power to deconstruct generalizing statements, since it offers the viewer a distortion of the system of meanings consolidated in and by current globalized culture, expanding individuals' sensitive possibilities. Such an object, even in its triviality, displays otherness, since contemporary artistic aesthetics constitutes itself, in the view of Ferry (2003, p. 31), as an extension of the artist himself, "a kind of business card especially designed", presenting itself as "so many 'little perceptual worlds' that no longer represent the world, but the state of vital forces of its creator" (Ferry, 2003, p. 32). Thus the familiar image is contiguous with strangeness: it reveals not just a discourse about what is seen as strange, a discourse created collectively by its context as formerly done, but also offers the creation of an intersubjectivity through the possibility of dialogue with the world created in a peculiar way by the artist.
Given such prospects, we seek to analyze the visual imaginary created about the "Brazilian woman" in images produced by André Cepeda and Miguel Valle de Figueiredo. The choice of these two photographers aims at an approximation of how the Brazilian woman is still perceived in Portuguese photography, given the colonial relations once maintained with Brazil. In this article, our interest is to reflect on the references found in those images and assigned as the identity of Brazilian women, in order to see how this influences the recognition of Brazil in contemporary times, in this specific case through the eyes of these photographers.
It has been noted that, according to postmodern reflections on photography, each image references others, constructing parallels, diachronies, synchronicities and dialectical forms, building a web of significances beyond the author's intention. In this perspective, "the photographs were seen as signs that acquired its value from its insertion in the midst of a broader system of social and cultural encodings" (Cotton, 2013, p. 191). This is what is meant to be seen here through the works presented: pieces of a wide historical, social and political fabric which emerge from different relationships, including colonial ones.
André Cepeda, Stan Getz Street (2014)
André Cepeda (born in Coimbra, 1976) lives and works in Lisbon. Since 2005 he has been working continuously on the contemporary Portuguese landscape as his subject, particularly the landscape of the Porto region. According to the photographer, he uses a large format camera (4 x 5 inches) because it constitutes a more precise tool allowing a more accurate technique. To obtain those results the equipment requires a slow process of work, which determines his method: a long and close observation of things that allows him to connect with and/or relate to the object or landscape he wants to photograph.
The photographer states on his website that he is interested in building new ways of looking at the reality and space presented to him. Essentially, there is in his work a quest for spaces and moments that have been rejected, suggesting a certain suspension. His interest, therefore, is based on the feeling that compels him to create an image and report its space, trying to forget its history and the original context of its reception. Thus, his sole interest is to work on the light, space and time of a scene. In this way, he feels freer to create new contexts for the images, as if this almost sculptural treatment restored a dignity that had been denied to the object or landscape. For the photographer, these images become an occasion for broader reflection on the way we build our cultural, social and political identities.
The selected work, made in the city of São Paulo, Brazil, took three months of reflective gaze and artistic pathways that led the artist to a different and peculiar city. It is a rather sculptural work, reporting on the spaces selected by the photographer. The images mix the artist's flânerie experience, presenting streets, passersby, landscapes and reflections of the city space in his lens, aiming to record his gaze along the path taken. This experience resulted in the book Stan Getz Street (2014), which also features portraits, mostly of naked women (Figures 6, 7, 8, 9). These are women of peculiar bodies, building the idea of Brazil's ethnic diversity by displaying "the diversity of skin tones". The author states that in his photos he used live models accustomed to posing at the Faculty of Arts of the Fundação Armando Alvares Penteado (FAAP) in São Paulo.
In this photo essay, the photographer uses a larger number of female nudes than in his previous work. Facing the pictures of this Brazilian project, it is impossible not to find colonial ethnographic references and even references to classical painting. The author said in an interview that the women are presented without clothes following the traditions of the history of art.

Figures 6, 7, 8 & 9: André Cepeda, Stan Getz Street, 2014. Source: http://www.andrecepeda.com/projects/rua-stan-getz/

As a reference to classical painting, we take as an example the picture in which a naked black woman (Figure 8) reclines on a bed (the reclining nude is, as is well known, quite traditional in painting). Here one might be tempted to repeat the claim that this woman, by acting out the classic pose of the painting tradition, displays herself as an erotic object. Her stare, aimed at both the photographer and the viewer, makes her doubly objectified (Ewing, 1996). But the woman in question, who works as a live model for painters, deliberately uses bodily performance in the picture to represent a classic pose that is often used with various symbolic assignments.
In this context, "to recognize oneself in a portrait (and in a mirror) one imitates the image one imagines the other sees" (Phelan, 1993, p. 36). The pose itself is a theatrical attitude that provides an image already taken "from a set of standards, which is a piece of the perception of one's social self" (Fabris, 2004, pp. 35-36). The portrait, taken as a representation of what the other sees, ends up representing the "male gaze" at the woman, enacted by the woman herself. As in Manet's Olympia (1863), the picture comes to be formed by folded meanings between what someone is and what one should look like. To generate another image of or for oneself, the picture becomes a sort of simulacrum. In this game, the woman's self-image reflects the male point of view, the place that determines what her pose should be and, therefore, her own representation.
Regarding the ethnographic echoes in his work, we highlight here the picture showing a naked black woman posed in profile and not looking at the camera (Figure 7). Cepeda undertakes, like the colonial photographers, to represent the wide variety of skin gradations in this small inventory of women he met in Brazil. According to his website, he tries to forget both the story and the original context of reception of his subjects. It can be seen that, amid the author's desire to forget Brazil-Portugal colonial history as well as the heritage of women's representation, he ends up representing the Brazilian woman with triviality, apart from the social and political concerns that may be associated with the exploited body.
Yet the whole book imposes on the viewer's judgment a single portrait of a male individual (Figure 10), who is fully clothed. Following Cotton (2013, p. 191), the image acquires its value "from its insertion in the midst of a broader system of social and cultural encodings". The female nudes cause a strangeness in the viewer when contrasted with a serious, dressed man. In the book Stan Getz Street (2014), there are no explanatory texts revealing any patriarchal slant in the images. However, as a counterpoint to the photographer's discourse (by replicating the representation of women used in the painting tradition, he exposes a sexist and careless look at the cultural and historical context of the individuals photographed), his photo essay ends up displaying visual images of Brazilian women that strengthen the colonial stereotype fought by feminism and post-colonialism.
Miguel Valle de Figueiredo, Brazil (2007-2008)
Miguel Valle de Figueiredo was born in Santa Comba Dão, in the district of Viseu, Portugal. He has been a professional photographer since 1986, with works in the industrial, engineering/architecture, and editorial fields. In 1994 he co-founded the magazine Around the World, a publication aimed at presenting possible travel routes, carrying out reports in more than 50 countries. Miguel has been to Brazil about 30 times, two of them on holiday. The author claims to know Brazil better than many Brazilians. In 1997, he won the Fuji European Press Award in the category of In-depth Stories, with one of his photos taken in the state of Ceará, Brazil.
In a conversation with the photographer, he points out that his forays into Brazil are the result of his work for tourist publications, and for this reason many of his pictures are no exception to the main iconography attributed to the tropical country of beautiful landscapes, homeland of the Girl from Ipanema. But in his speech, Miguel Valle de Figueiredo explains that this myth about the Brazilian woman, created by Brazilians themselves, does not really exist, because across the country's expanse every Brazilian woman is singular, with her own peculiarities of walk, speech, and action.
However, there are not many photographs of women in his work; the "exotic" landscape emerges instead as the predominant subject of the online portfolio presented on his Flickr page. The author, nevertheless, photographed the peculiarities of a continental Brazil tinted by inequality. It seems that when he brings us the specificity of small Brazilian villages and their people, as seen in the northeast region, the author accentuates this inequality and opens a wide view onto features of the country not as generous as those advertised by Caminha's letter at the time of the Portuguese conquest. That same conquest of a paradisiacal nature generated, in fact, many "Brazils". The Brazil shown in his award-winning photographs reinforces the idea of a mestizo country and presents the paradoxes and contrasts in the ways of life of the individuals and societies that make up Brazil as a nation.
Given the practical advertising nature of his pictures, the display of women comes along with the landscape as an exotic representation of the site. There is an exotic and sensual beauty supposed to belong to Brazilian women, a legacy of the colonial vision imposed on black and indigenous people, who were seen as polygamous and incestuous (Vainfas, 1997). In his commercial work, it fell to the photographer to replay The Girl from Ipanema, Tom Jobim's song responsible for idealizing the women of Rio de Janeiro (Figure 11). As is never absent from the clichéd representation of Brazil in travel photography, nature appears as the habitat that shelters the woman, half naked, wild, like a Medusa who seems to mesmerize the men she meets (Figure 12). In saying that he cannot "establish what is Brazilian, while object of portraiture," the photographer claims that such an affirmation "is not the same thing as saying that" he could not "photograph the many Brazilians, as 'the' Brazilian." He also refers to the various types that exist in Brazil, because there is no way to translate "two hundred million people with so much variety", and concludes: "The racial logic in Brazil is very difficult to map photographically".
Miguel Valle's registers, whose photographic propositions relate to the advertising gaze, seek to induce consumption by offering consumers an experience within the aesthetic world of "artist capitalism" (Lipovetsky & Serroy, 2015, p. 62). As much as one can perceive a diversity of identities in Brazil, what is hidden there is also the commodity status of culture and of subjects' identities, which appears to strengthen stereotypes such as the "Brazilian woman", sold as an attractive package in tours around the world.
His choice is not to objectively propose an identity or to define who the "Brazilian woman" is, but to present a stereotypical identity of her: a woman of accessible manners, connected to the myths that are part of the history of Brazil.
Final considerations
According to Marilena Chauí (1995, p. 34), looking is an activity, since it is developed by and depends on each individual's experience, but it is also exercised in a passivity anchored in the discursive structures that engender the world. Such passive behavior can be related to the consumption of new imagery forms of what we may call a contemporary colonial aesthetics, whose discourses and images are articulated, spread, and disseminated by current media outlets. These outlets both elect the standardization imposed by generalizing systems and can also shape and formalize the understanding of a particular culture and of the individuals of that same culture.
The construction process of women's visual image reveals discourses supported by perceptions and experiences, but it is also supported by the attribution of generalizing values about the supposed inferiority of their race and gender. So, to speak about the contemporary Brazilian woman is also to talk about issues of race and a colonial view that overlaps the gender issue.
Based on this assessment, it can be stated that André Cepeda and Miguel Valle de Figueiredo revealed generalizing discourses about Brazil. Figueiredo, who has been to the country more than 30 times, portrayed both the land and the Brazilian people in a fully commercial way. His interest is founded on a purely commercial basis: to produce picturesque images accompanied by the female body. In his pictures, he exposes women as part of nature, like exotic animals, and, consciously or not, also favors a sex trade through the images.
For his part, Cepeda returns to a way of cataloguing female types. The choice of female nudes, according to the author, refers to the classical tradition of representing women in art and to his refusal to photograph people of his own sex. If an artistic aesthetic is an extension of the artist himself (Ferry, 2003), the carte de visite work made by the photographer reveals his general reflection on Brazil. In using the artifice of pose and models, the author represents the Brazilian woman without regard to the historical and cultural context that shapes the understanding of a present-day Brazilian woman. This form of representation without judgement or reflection ends up making us believe that the world constructed by the author in his work repeats old concepts in representing the "Brazilian" woman. After all, if the body is political, by adopting a neutral posture in situations of injustice one risks representing the oppressive side.
It is concluded that the peculiarities of the current understanding of the image of the Brazilian woman's body, specifically the one formed by the Portuguese gaze in the analyzed images, show multiple layers of new colonizing processes in which the Brazilian woman's visual image is associated with sensuality as much as with sexual availability, filled with the understanding of a colonial body that still persists in the contemporary imagination and stands as a stain on the Brazilian woman's image.
Figures 3 and 4: Cover of Time magazine that dubs Bragança "Europe's New Red Light District", and a newspaper clipping showing the arrest of Brazilian women in the Portuguese press.
Figure 5: Cover of the Focus magazine. Source: Os segredos da mulher brasileira, 2010.
|
v3-fos-license
|
2022-05-21T06:23:26.686Z
|
2022-05-10T00:00:00.000
|
248917781
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "8e40d702a75df4a437bb59a8cc0b48be297b223f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1069",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "3c70c49fc97b2d54e9b1766fc2eca89154c9e0ef",
"year": 2022
}
|
pes2o/s2orc
|
The role of hyperbaric oxygen therapy in Fournier’s Gangrene: A systematic review and meta-analysis of observational studies
ABSTRACT Purpose: Management of Fournier's Gangrene (FG) includes broad-spectrum antibiotics with adequate surgical debridement, which should be performed within the first 24 hours of onset. However, this treatment may cause significant loss of tissue and may delay healing in the presence of ischemia. Hyperbaric oxygen therapy (HBOT) has been proposed as an adjunctive therapy to assist the healing process, but its benefit is still debated. Therefore, this systematic review and meta-analysis aimed to evaluate the effect of HBOT as an adjunct therapy for FG. Materials and Methods: This study complied with the Preferred Reporting Items for Systematic Reviews and Meta-analyses protocol to obtain studies investigating the effect of HBOT on patients with FG. The search was systematically carried out on different databases, namely MEDLINE, Embase, and Scopus, based on population, intervention, control, and outcomes criteria. A total of 10 articles were retrieved for qualitative and quantitative analysis. Results: There was a significant difference in mortality, as patients with FG who received HBOT had a lower number of deaths compared to patients who received conventional therapy (Odds Ratio 0.29; 95% CI 0.12-0.69; p = 0.005). However, the mean length of stay, with a Mean Difference (MD) of -0.18 (95% CI: -7.68 to 7.33; p = 0.96), and the number of debridement procedures (MD 1.33; 95% CI: -0.58 to 3.23; p = 0.17) were not significantly different. Conclusion: HBOT can be used as an adjunct therapy to prevent an increased risk of mortality in patients with FG.
INTRODUCTION
Fournier's Gangrene (FG) is a progressive infectious disease marked by necrotizing fasciitis of the perineum and external genitalia (1,2). It is considered an emergency in urology due to its tendency to develop into a severe soft tissue infection associated with systemic sepsis; in several cases, it has even required amputation of the penis (3). The FG mortality rate ranges from 18 to 50%, with an average of 20 to 30% (4). Management of FG includes aggressive resuscitation, broad-spectrum antibiotics, and surgical debridement, which should be done within 24 hours (5). Despite this current standard therapy, FG still causes high mortality. This is possibly due to poor local blood supply in FG patients, with infection damaging the blood vessels and thereby delaying healing. Aggressive debridement, in this case, may cause significant loss of tissue, which prolongs the healing process, causing long hospital stays and a high mortality rate (6).
This problem leads to Hyperbaric Oxygen Therapy (HBOT) as an adjunctive therapy for FG. HBOT is a therapeutic option involving the inhalation of pressurized 100% oxygen in a sealed chamber (7). HBOT speeds up the healing process by increasing tissue oxygen tension and by inhibiting and killing anaerobic bacteria; it possesses a bactericidal effect in infections caused by aerobic or anaerobic bacteria. Recent studies have reported a role of HBOT in significantly decreasing mortality in Fournier's Gangrene patients (8). However, there is no consensus regarding the role of HBOT as an adjunctive therapy in FG, and whether it can be used to manage FG is still debated (4,9). Therefore, this study aims to evaluate the effect of HBOT as an adjunct therapy for FG.
MATERIALS AND METHODS
This study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) protocol. A preliminary search was performed to ensure that the PICO characteristics had not yet been investigated and to avoid duplication of meta-analyses. Literature searches were conducted through the MEDLINE, Embase, and Scopus databases. The applied keywords were specified as ("Fournier Gangrene" or "penile necrotizing fasciitis") and ("hyperbaric oxygen" or "hyperbaric oxygen therapy" or "hyperbaric oxygen treatment"). The expanded search terms are presented in Table-1. The protocol of this study was registered on PROSPERO (CRD42021283421).
Inclusion and Exclusion Criteria
Articles permitted for inclusion had to be randomized controlled trials or observational studies, written in English, comprising a minimum of two arms, and reporting the number of debridement procedures, length of stay, and mortality rates in patients with FG treated with HBOT as opposed to conventional therapy alone. Experimental trials in animals, unpublished articles, and abstract-only findings were excluded. Hyperbaric oxygen therapy (HBOT) is an adjunctive treatment in which patients inhale a 100% O2 fraction while being exposed to raised atmospheric pressure. The interventional arm was compared to standard conventional therapy without HBOT.
Data Extraction
Two independent investigators retrieved the data according to the extraction template. Any discrepancies and disagreements regarding data extraction would be discussed and decided by a third investigator as needed. The extracted items included study characteristics (authors, time of publication, number of samples, study design, inclusion and exclusion criteria, duration of follow-up); baseline characteristics of the subjects (age, type of intervention, affected anatomical region, and location of the study); and quantitative outcomes (length of stay, number of debridement procedures, and number of deaths).
Quality Assessment
The risk of research bias was assessed using the Newcastle-Ottawa Scale (NOS), covering selection, comparability, and exposure parameters. This scoring system is used to assess the risk of bias in non-randomized studies. The result of the NOS assessment is classified into three categories: a score of 0-3 indicates a low-quality study, 4-6 a medium-quality study, and 7-9 a high-quality study.
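The NOS banding described above can be expressed as a small helper function. This is an illustrative sketch only, not part of the authors' workflow; the function name is our own.

```python
def nos_category(score: int) -> str:
    """Map a Newcastle-Ottawa Scale score (0-9) to the quality band used here:
    0-3 low, 4-6 medium, 7-9 high."""
    if not 0 <= score <= 9:
        raise ValueError("NOS score must be between 0 and 9")
    if score <= 3:
        return "low"
    if score <= 6:
        return "medium"
    return "high"
```

Applied to the included studies (scores 6 to 8), this mapping yields only "medium" or "high" bands, matching the quality assessment reported below.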
Statistical Analysis
The measured endpoints were the mean number of debridement procedures, mean length of stay, and mortality rate. The dichotomous variable was analysed using the Odds Ratio (OR) with a 95% Confidence Interval (CI), with a p-value below 0.05 regarded as statistically significant. Secondary outcomes were measured as continuous variables with the Weighted Mean Difference (WMD). Heterogeneity between studies was quantified using I². Heterogeneity was considered high if I² was above 50%, in which case the random-effects model was applied for the pooled analysis; otherwise (I² < 50%), the fixed-effects model was used. Statistical analysis was performed using RevMan 5.4 for Windows and presented in the form of forest plots and descriptive narratives.
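The pooled estimates in this review were produced in RevMan. As a minimal, hypothetical sketch of the underlying computation (not the authors' code), the fixed-effect inverse-variance pooling of study log odds ratios and the I² heterogeneity statistic can be written as:

```python
import math

def pooled_or_fixed(studies):
    """Fixed-effect (inverse-variance) pooling of study odds ratios.

    `studies` holds per-study 2x2 counts:
    (events_treated, total_treated, events_control, total_control).
    Returns (pooled OR, I^2 heterogeneity as a percentage).
    """
    log_ors, weights = [], []
    for a, n1, c, n2 in studies:
        b, d = n1 - a, n2 - c                 # non-events in each arm
        log_ors.append(math.log((a * d) / (b * c)))
        var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf variance of the log OR
        weights.append(1 / var)
    pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    # Cochran's Q and Higgins' I^2 = max(0, (Q - df) / Q)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
    df = len(studies) - 1
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return math.exp(pooled), i2
```

In line with the rule stated above, a random-effects model (e.g., DerSimonian-Laird) would replace these weights whenever I² exceeds 50%.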
Systematic search results
The initial search of the study databases using the specified keywords (Table-1) yielded 454 studies. We removed 194 studies with irrelevant abstracts or titles and 230 duplicate studies. A total of 30 full-text studies were then assessed for eligibility. Finally, ten eligible studies were included in the analysis (Figure-1).
Baseline characteristics of the included studies
The characteristics of each included study are presented in Supplementary Table-1, which lists the author and year of publication, study design, description of the intervention, mean age, comorbidities, and FGSI score. All included studies were retrospective studies published between 1998 and 2021. The total number of patients analysed in this meta-analysis was 657, consisting of 268 in the HBOT group and 369 in the non-HBOT group, with the average age in each study ranging from 46.13 to 68.3 years. The intervention groups of each study were given different doses of HBOT. Only three studies mentioned the mean FGSI score, which ranged from 7.38 to 9 (10-12). Fournier's Gangrene patients presented several comorbidities such as diabetes, alcoholism, hypertension, and smoking. The assessed outcomes of this study include mortality, mean length of stay, and mean number of debridement procedures, as described in Table-2.
Risk of bias assessment
We used the NOS instrument to assess the risk of bias in this meta-analysis. The scores of the included studies ranged from 6 to 8, indicating moderate to high quality (Table-3).
Meta-analysis result on the length of stay
The forest plot analysis also evaluated the difference in length of stay between the HBOT and non-HBOT groups. The analysis of the two included studies (11,13) did not reveal any significant difference in mean length of stay between the HBOT and non-HBOT groups in FG patients (MD -0.18; 95% CI: -7.68 to 7.33; p = 0.96) (Figure-3a). The fixed-effects model was used due to the low heterogeneity between studies (p = 0.94; I² = 0%).
Meta-analysis results on the number of debridement
This meta-analysis also compared the number of debridement procedures performed in the HBOT and non-HBOT groups. The three studies (11,13,14) included in the analysis of this outcome revealed no significant difference in the mean number of debridement procedures between HBOT and non-HBOT in FG patients (MD 1.33; 95% CI: -0.58 to 3.23; p = 0.17) (Figure-3b). The random-effects model was used because the heterogeneity between studies was high (p < 0.00001; I² = 95%).
DISCUSSION
To the best of our knowledge, this is the first systematic review and meta-analysis evaluating HBOT in Fournier's Gangrene patients. Hyperbaric oxygen therapy (HBOT) is an adjunctive treatment to primary surgical debridement in cases of soft tissue infection. This treatment involves inhaling a 100% fraction of oxygen in a pressurized environment. However, the benefit of HBOT for Fournier's Gangrene (FG) is still controversial (19), and further investigation is needed before HBOT can be recommended for routine use in cases of FG. Our study demonstrated a significant result: HBOT may reduce the mortality rate in FG patients. However, an effect of HBOT on the length of stay and number of debridement procedures was not demonstrated in this study.
Several previous studies have shown that the most important interventions to control the progression of the rapidly spreading infectious process of FG involve repeated surgical debridement, broad-spectrum antibiotics, and intensive care. However, FG patients still carry a high risk of mortality
Figure 3 -a) Forest plot for the length of stay of FG patients in HBOT and non-HBOT groups, b) Forest plot for the number of debridement of FG patients in HBOT and non-HBOT groups.
and morbidity. Finding an adjunctive treatment to complement the standard treatment is crucial and may significantly benefit survival and prevent higher mortality in FG patients. This meta-analysis revealed a significantly lower mortality rate in FG patients who received adjuvant HBOT than in those who received conventional therapy (OR 0.29; 95% CI 0.12-0.69; p = 0.005), consistent with findings in several studies (10,11,20,21). A study by Anheuser et al. (2018) reported that this promising result in the HBOT group was also influenced by the good availability of hyperbaric oxygen therapy and safe patient transfer despite the patients' poor physical condition, because delaying patient transfer to surgical debridement may significantly increase the mortality rate (14). However, HBOT alone cannot replace the initial treatment of FG, which includes aggressive resuscitation, broad-spectrum antibiotic therapy, early colostomy, and adequate debridement (17).
Another study suggested that HBOT was an independent predictor of a decreased mortality rate due to Fournier's Gangrene (12). A study by Mindrup et al. (2005) had contradictory results regarding the HBOT group's mortality rate: patients who underwent HBOT had a higher mortality rate, 26.9% in the HBOT group versus 12.5% in the non-HBOT group (13). On the other hand, a study by Pizzorno et al. (1997) showed a 0% mortality rate in patients who did not undergo HBOT (22), while other studies reported only 3% and 9% mortality rates (23,24). These differences may be due to several factors that can affect mortality in the treatment of Fournier's Gangrene patients, such as surgeon experience, early administration of antibiotic therapy, intensive care, and early surgical therapy (22,25-27). Another study also reported that the infected body surface area affects survival and mortality (28). Hyperbaric oxygen therapy was considered safe because it did not delay surgical debridement or interrupt standard therapy. The length of stay did not differ significantly between the groups (MD -0.18; 95% CI: -7.68 to 7.33). Only one study reported a reduction in length of stay among patients with FG receiving HBOT (29); however, its sample received both HBOT and NPWT, making it difficult to isolate the adjunctive effect of HBOT in FG patients. According to the study by Anheuser et al., there was no difference in length of stay for patients with FG receiving HBOT (14). Another study reported a shorter length of stay along with decreased mortality rates (10). In relation to length of stay, physical disability is a significant predictor of longer hospitalization (13). This could be due to community issues, as approximately 30% of FG patients require treatment at rehabilitation centres, long-term care facilities, or local hospitals (13).
The length of stay was also influenced by the need to perform concurrent surgeries such as colostomy. Regarding the Fournier Gangrene Severity Index score, sepsis significantly influences the length of stay in FG patients. Understanding the importance of predicting length of stay may inform patient-based treatment strategies and aid decision-making in treatment choice.
Pooled analysis of the number of debridement procedures suggested no significant difference between HBOT and conventional therapy (MD 1.33; 95% CI: -0.58 to 3.23). A previous study reported that the average number of surgical debridement procedures was similar between HBOT and conventional therapy, leading to the interpretation that HBOT has no advantage in decreasing the number of debridement procedures when used as an adjuvant treatment for FG (30). A lower number of debridement procedures among controls who did not receive HBOT has also been reported. The number of required debridement procedures is an important parameter because complete recovery in FG patients may be achieved with fewer repeated debridements (11).
Based on a study reported by Mindrup et al., the cost of HBOT was not negligible, as hospital charges were significantly higher in the HBOT group (13). A study conducted in Germany stated that the availability of HBOT was relatively low. In addition, the expense of a patient treated with HBOT ranges from 8,000 to 25,000 EUR and is not covered by health insurance (14). Therefore, recommending HBOT as an adjunctive therapy requires more cost-analysis studies before it can be implemented for routine use in FG cases.
Several limitations existed in this study. Firstly, other factors that may affect the outcome could not be entirely analysed, leaving the possibility that they influenced the results. Secondly, a cost analysis could not yet be performed, as only a few included studies mentioned this aspect in relation to the intervention. Thirdly, the high heterogeneity of the included studies arose from various characteristics of the study populations, including patient comorbidities in both arms, the manner of the intervention, and the endpoint for analysis; it is therefore necessary to conduct research with a uniform design, setting, and population. Lastly, all included studies were retrospective observational studies, a design that may introduce several biases. More studies on this topic should be done, especially randomized controlled trials, to create an adequate analysis of the use of hyperbaric oxygen for Fournier's Gangrene patients.
CONCLUSION
Adjunctive hyperbaric oxygen therapy was associated with a significantly lower mortality rate compared to conventional therapy. However, an effect of HBOT on the length of stay and number of debridement procedures was not demonstrated in this study. The influence of multiple factors warrants future randomized controlled trials.
CONFLICT OF INTEREST
None declared.
|
v3-fos-license
|
2022-02-04T16:04:54.729Z
|
2022-01-31T00:00:00.000
|
246505415
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2223-7747/11/3/402/pdf",
"pdf_hash": "da319aeef89bb79bd29a6896a804fcadb9f4d897",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1072",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "38a9bec912efef12269cdd0967b19436383a412d",
"year": 2022
}
|
pes2o/s2orc
|
Phylloplane Biodiversity and Activity in the City at Different Distances from the Traffic Pollution Source
The phylloplane is an integral part of green infrastructure and interacts with plant health. Taxonomic characterization of the phylloplane with the aim of linking it to ecosystem functioning under anthropogenic pressure is not sufficient, because only active microorganisms drive biochemical processes; the activity of the phylloplane remains largely overlooked. We aimed to study the interactions among the biological characteristics of the phylloplane (taxonomic diversity, functional diversity, and activity) and the pollution grade. Leaves of Betula pendula were sampled in Moscow at increasing distances from the road. For the determination of phylloplane activity and functional diversity, a MicroResp tool was utilized. Taxonomic diversity of the phylloplane was assessed with a combination of microorganism cultivation and molecular techniques. An increase in anthropogenic load resulted in higher microbial respiration and a lower DNA amount, which could be viewed as a relative inefficiency of phylloplane functioning in comparison to less contaminated areas. Taxonomic diversity declined with road vicinity, similar to the functional diversity pattern. The content of Zn in leaf dust best explained the variation in phylloplane activity and the amount of DNA, while functional diversity was linked to variation in nutrient content. The fraction of pathogenic fungi of the phylloplane was not correlated with any of the studied elements, although it was significantly higher at the roadsides. The bacterial classes Gammaproteobacteria and Cytophagia, as well as the fungal class Dothideomycetes, were exposed to the maximal effect of distance from the highway. This study demonstrated the sensitivity of the phylloplane to road vicinity, which combines the effects of contaminants (mainly Zn, according to this study) and potentially stressful air microclimatic conditions (e.g., low relative air humidity, high temperature, and UV level).
Microbial activity and taxonomic diversity of the phylloplane could be considered as an additional tool for bioindication.
Introduction
Urban green infrastructures (GIs) contribute considerably to the quality of life in cities by provisioning important ecosystem services, e.g., microclimate cooling and dust deposition. The activity of the phylloplane, however, remains largely overlooked and has mainly been studied in isolation, for phylloplane taxa in cultures [31,32]. In this study, we investigated how the microbial activity and the taxonomic and functional diversity of the phylloplane of Betula pendula respond to air pollution by sampling trees growing at different distances from a heavy-traffic road in Moscow. Betula pendula was selected as one of the typical tree species of the study region, often dominating in the urban GI of cities and characterized by a high potential for particulate matter (PM) accumulation compared to other deciduous species [33]. Road transport is the primary source of air pollution in Moscow [34,35], making a gradient approach particularly suitable for investigating phylloplane sensitivity. We hypothesize that (1) the phylloplane of roadside trees will demonstrate signs of stress (inefficient functioning), (2) the taxonomic diversity of the phylloplane will differ among trees growing at different distances from the road, (3) the fraction of opportunistic microorganisms will increase with pollution level, and (4) the observed changes will be driven by the concentrations of particular traffic-associated pollutants.
Environmental Conditions along the Gradient
A remarkable change in daily average and especially maximal temperature along the gradient was shown, even though the sensor at the 10 m distance unexpectedly failed and its data were not available. The daily average temperature at the 2 m distance was 0.8 and 1.0 °C higher compared to the 30 and 50 m distances, respectively, whereas on the afternoon of 18 August 2020, the warmest time during the observation period, the difference in air temperature between the 2 and 50 m distances reached 5 °C (Figure 1A). Significant changes in soil properties along the gradient were also recorded, with a gradual decrease in bulk density (from 1.4 ± 0.1 g cm⁻³ to 0.9 ± 0.3 g cm⁻³) and pH (from 7.3 ± 0.1 to 4.7 ± 0.1) with distance from the road. Over-compaction and neutralization of the soil reaction due to dust deposition are indicators of anthropogenic disturbance usually reported for urban soils [36]. Indirectly, an increase in soil organic carbon content (from 3.3 ± 0.3% near the road to 6.0 ± 0.5% in the urban forest) confirms higher anthropogenic disturbance in close proximity to the road. A gradual decrease in potentially toxic element (PTE) content on the soil surface with distance from the road confirms the pollution gradient. The soil bulk content of Cu, Pb, and Zn at the 2 m distance from the road was significantly (ANOVA, p < 0.05) higher than in the other locations (Figure 1B).
For other elements (Al, Si), a clear dynamic in concentration with respect to the distance from the pollution source was not observed. The phylloplane community-level physiological profile (CLPP) for trees located at roadsides was shifted towards domination of the microbial groups consuming the most available carboxylic acids: ascorbic, citric, and oxalic ( Figure 3). The contribution of these microbial groups gradually decreased with distance from the roadside. Basal respiration of the phylloplane was 1.5-1.6 times higher at the roadside compared to the sites at the 30-50 m distance ( Figure 4A). At the same time, the amount of phylloplane DNA was lower by a factor of 2.9-6.6 for trees at the roadsides in comparison to trees located in the forest, with intermediate rates observed for trees growing at the forest edge ( Figure 4B). Microbial functional diversity, or the ability to metabolize different substrates, tended to decrease along the transect from the road to the core of the forest sites, although variation among the single replicates was considerable, so that no significant differences could be detected ( Figure 4C). While the level of total genomic DNA declined closer to the road, the number of cultivable opportunistic fungi and their portion in total fungal diversity substantially increased ( Figure 4D, Table 2). For the roadside trees, 8 species of fungi were identified at 10 m distance from the roadside, 6 species at 30 m, and only 3 species at 50 m ( Table 2). The dominant species in the cultivable fungal community of the phylloplane for roadside trees were Ciliciopodium hyalinum and P. corylophilum; at 10 m from the road, only P. corylophilum; at 30 m, only C. hyalinum; and at the farthest distance Trichoderma aureoviride was found. The portion of opportunistic fungi was more than half of the total cultivated fungal community at all sampling points, except for the points at a distance of 30 and 50 m (17-30%).
Fungi of the especially dangerous group BSL-2 appeared only at a distance of 2 and 10 m from the road (Table S1). The portion of cultivable pathogenic bacteria in all areas was less than 50% (Table S2). The diversity index for the microbial community calculated from the sequencing data increased from the roadside to the forest ( Figure 4E,F).
This indicates a negative effect of traffic on the taxonomic diversity of the microbial community, similar to that observed for functional diversity. Such changes in the diversity of fungi were more noticeable (by 11.9%) compared to those for the bacterial community (by 5.6%). This implies that the phylloplane's fungal diversity is more sensitive to pollution than that of bacteria.
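The diversity index used here is not named explicitly in this section; assuming the commonly used Shannon index H′, its calculation from an OTU count table can be sketched as follows (the count tables are hypothetical):

```python
import math

def shannon_diversity(counts):
    """Shannon diversity index H' = -sum(p_i * ln(p_i)) over nonzero OTU counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical OTU count tables: a more even community scores higher
roadside = [900, 50, 30, 20]    # dominated by a single OTU
forest = [300, 250, 250, 200]   # more evenly distributed

print(shannon_diversity(roadside) < shannon_diversity(forest))  # True
```

The index rises both with the number of OTUs and with the evenness of their abundances, which is why dominance by a few pollution-tolerant taxa at the roadside depresses it.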
Taxonomic Structure of the Phylloplane's Microbial Community
Bacteria. The analyzed samples contained at least 69,394 reads, and one sample contained 141,475 reads. All high-quality reads were rarefied to an even depth of 61,000 for all samples and binned into operational taxonomic units (OTUs) at 97% sequence identity. A total of 15 identified phyla were detected ( Figure 5). Most of the bacterial phyla (8-12) had a relative abundance above 0.1%. The phylum Proteobacteria was the most dominant (range 42.5-48.3%), followed by Bacteroidetes (12.4-18.5%), Actinobacteria (1.7-4.8%), and Cyanobacteria (1.1-2.3%). Within the Proteobacteria, most phylotypes were represented by Alphaproteobacteria, Betaproteobacteria, and Gammaproteobacteria (Table S3). Bacteroidetes were mostly represented by the classes Cytophagia, Sphingobacteriia, Flavobacteriia, and Chitinophagia. At the genus level, the most abundant bacterial genera in all studied samples were Hymenobacter, Sphingomonas, Methylobacterium, f_Oxalobacteraceae, and Pseudomonas, except the dust samples collected 10 m from the road, where Pedobacter (c_Sphingobacteriia) was also a dominant genus.
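The rarefaction step, subsampling each sample's reads without replacement to a common depth before comparing communities, can be sketched as follows; the OTU labels and counts are hypothetical:

```python
import random

def rarefy(otu_counts, depth, seed=0):
    """Randomly subsample a sample's reads (without replacement) to an even depth."""
    reads = [otu for otu, n in otu_counts.items() for _ in range(n)]
    random.Random(seed).shuffle(reads)
    rarefied = {}
    for otu in reads[:depth]:
        rarefied[otu] = rarefied.get(otu, 0) + 1
    return rarefied

sample = {"OTU_1": 40000, "OTU_2": 20000, "OTU_3": 9394}  # 69,394 reads in total
rarefied = rarefy(sample, depth=61000)
print(sum(rarefied.values()))  # 61000
```

Rarefying every sample to the depth of the shallowest one removes sequencing-effort bias, so relative abundances and diversity indices become comparable across sites.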
An extremely high abundance of Gemmatimonadetes and Firmicutes was found at the closest distance to the road ( Figure 5). The abundance distribution of most bacterial species did not relate to road distance ( Figure 6A). The number of species that reacted negatively to the traffic (increasing relative abundance from the roadside to the forest) was higher than of those that reacted positively. The Gammaproteobacteria and Cytophagia classes turned out to be sensitive to the pollution, insofar as the abundance of the majority of their species increased from the roadside to the forest ( Figure 6B), while the Betaproteobacteria and Alphaproteobacteria classes could be characterized as resistant to the anthropogenic load, the major portion of their species having the highest abundance on the trees at the roadside ( Figure 6C). It should be noted that the majority of species that reacted to the road distance were not dominant in the extracted samples of the phylloplane microbiome (relative abundance < 1%). Among species with high relative abundance (2-8%), distinct increasing and decreasing trends (by 3.6 and 1.5 times) from the roadside to the forest were shown by unkn. Pseudomonas (g) and unkn. Proteobacteria (p), respectively (Table S3).
Fungi. The analyzed samples contained at least 39,808 reads, and one sample contained 47,404. All high-quality reads were rarefied to an even depth of 28,000 for all samples and binned into OTUs at 97% sequence identity. Three of the four fungal phyla had a relative abundance above 0.1%. Ascomycota and Basidiomycota in total accounted for more than 88% of all sequencing reads (Figure 7). There were 13 classes of fungi with a relative abundance above 0.1% (Table S4). Exobasidiomycetes (p_Basidiomycota) and Dothideomycetes (p_Ascomycota) in total accounted for 49% of sequences (44-54%) with a relative abundance above 10%. The most abundant genera (relative abundance 9-23%) were Microstroma and Taphrina (2 m distance from the road), Pseudomicrostroma (10 m), Ampelomyces (30 m), and Pseudomicrostroma (50 m).
At the phylum level, no distinct distribution trend was found from the roadside to the forest ( Figure 8). As was noted for bacteria, the abundance distribution of most fungal species did not change from the roadside to the forest ( Figure 8A). At the same time, the portion of fungal species whose abundance was sensitive to the traffic effect was higher than that of resilient species. The Dothideomycetes class dominated in both groups, those with a decreasing and those with an increasing trend ( Figure 8B,C). The abundance of Exobasidiomycetes was almost exclusively characterized by an increase from the roadside to the forest, while the abundance of species belonging to the Agaricostilbomycetes class demonstrated a net decline along the gradient ( Figure 8B,C). At the species level, the most sensitive to the pollution gradient were Dothiora sorbi and Exobasidium miyabei (relative abundance increasing from the roadside to the forest by 58 and 1460 times, respectively), and the more resistant were Kondoa yuccicola and Erythrobasidium hasegawianum (relative abundance decreasing from the roadside to the forest by 8 and more than 20 times, respectively) ( Table S4).
Figure 6. Relative amount of species abundance in relation to road distance (A) and breakouts for bacteria classes with increasing (B) and decreasing (C) abundance.
The taxonomic structure of the phylloplane's bacteria community for trees located 10 m from the road differed compared to those at 2, 30, and 50 m distance ( Figure 9A). The major similarity was found between bacterial communities for the trees located at 30 and 50 m distance. The taxonomic structure of the phylloplane's fungi community for trees located 2 m from the road mostly differed from those located at 30, 10, and 50 m ( Figure 9B). Hence, the bacteria and fungi community of the phylloplane for trees located 10 m and 2 m from the road differed compared to other studied sites.
Driving Factors of Phylloplane Characteristics
Redundancy analysis (RDA) was used to illustrate variations in phylloplane characteristics (microbial activity, DNA, and pathogen amounts) among the studied sites and the relationships with the concentrations of chemical elements. The first two axes together describe 80.6% of the total variance ( Figure 10). RDA 1 was positively correlated with Mn content (r = 0.50) and negatively correlated with Zn, Ca, Cu, and Na (r = −0.78, −0.71, −0.57, and −0.39, respectively). RDA 2 was positively correlated with the K content (r = 0.62) and negatively associated with Fe and Al (r = −0.51 and −0.50, respectively). The road sites are clearly grouped on the left side of the graph according to RDA 1. According to the angles between the vectors of microbial properties and explanatory variables, it follows that the main predictor for basal respiration and DNA amount is the Zn content (positive and negative effect, respectively), for microbial functional diversity it is K and Al content (positive and negative effect), and for the amount of pathogens no clear driver was found. Detailed stepwise linear regression analysis was used to explain the patterns obtained by RDA. The prevailing portion (43% and 76%) of the explained variance in basal respiration and DNA amount was associated with the Zn content ( Figure 11A,B), and 35% and 18% of the variance, respectively, for the microbial functional diversity of the phylloplane was explained by K and Al content ( Figure 11C). For the pathogens, the variance explained by the studied elements did not reach the level of significance ( Figure 11D). The contribution of other studied elements to phylloplane characteristics was poor and nonsignificant.
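As a methodological aside, RDA amounts to a principal component analysis of the fitted values from a multivariate regression of the response matrix on the explanatory matrix. A minimal numpy sketch with hypothetical, randomly generated data (not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized data: rows = samples, columns = variables
X = rng.standard_normal((12, 3))                          # explanatory (e.g., Zn, K, Al)
Y = 0.8 * X[:, :1] + 0.1 * rng.standard_normal((12, 4))   # responses (respiration, DNA, ...)

Xc = X - X.mean(axis=0)   # center the explanatory matrix
Yc = Y - Y.mean(axis=0)   # center the response matrix

# Step 1: multivariate least-squares regression of Y on X -> fitted values
B, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
Y_fit = Xc @ B

# Step 2: PCA (via SVD) of the fitted values yields the constrained RDA axes
U, s, Vt = np.linalg.svd(Y_fit, full_matrices=False)
explained = s**2 / np.sum(Yc**2)   # fraction of total response variance per axis
site_scores = U * s                # sample ordination scores on the RDA axes

print(explained[:2].sum() <= 1.0)  # True: constrained axes explain at most all variance
```

The "80.6% of total variance on the first two axes" reported above corresponds to `explained[:2].sum()` in this formulation; correlations of element vectors with the axes give the loadings discussed in the text.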
Environmental Conditions
Evidence for increases in the concentration of potentially toxic elements (e.g., Cd, Cr, Cu, Ni, Pb, Zn) in roadside soils has been reported for Australia [37], Russia [10], China [38], the USA [39], and many European countries [40]. The burning of fossil fuels, consumption of car tires, brake wear, and engine oil are the primary sources of these elements [41]. In the particulate matter collected from the air, Ba, Bi, Cu, Sb, Sn, and Zr have been proposed specifically as traffic-related tracers [42]. Dust collected from the canopy of roadside trees in 20 countries around Europe was characterized by an increase in the content of Fe and such trace elements as Ti, Cr, Mn, Ni, Cu, Zn, Mo, Sn, and Sb [12]. In contrast, the presence in the air and in leaf dust of such elements as Na, Cl, Ca, S, Ce, Cs, La, Li, Rb, Sr, and U has been attributed to a soil resuspension process and marine and salt-mine aerosol transfer. Depending on the wind conditions, topography, and physico-chemical characteristics of the substance, the dispersion of the pollutants varies from a few meters from the road to hundreds of meters, with major deposition at a distance of 17-20 m [43,44]. In our study, among the analyzed elements on leaves, a clear negative trend with distance from the pollution source was found for Zn, Cu, Ca, and Na. Zinc enters the roadside dust with tire wear; it is added to the rubber to speed up the vulcanization process. It is used as an anti-oxidizing additive in engine oil and to prevent corrosion in galvanized car body parts, and it is also present in fuel and released with brake wear. Copper is a tracer of brake and tire wear [45]. While Zn and Cu are indicated among typical traffic contaminants, the presence of Ca and Na on roadside leaves is likely due to anthropogenic activity as well, namely the dispersion of road de-icing salts in winter [46]. An increase of soil salinity in Moscow roadside soils has been previously reported [46,47].
Roadside soil is thus subject to secondary salinization, and its resuspension and further deposition on leaf blades increase the concentration of Ca and Na in leaf dust. In recent decades, Mn-containing compounds have been added to vehicle technologies, leading to a further increase of Mn concentrations in roadside soils [37]. In our study, Mn concentrations in leaf dust instead grew steadily with increasing distance from the road. We hypothesize that the soil near the roadside has a lower Mn content compared to the forest soil, so that as the share of paved surface declines, the concentration of this element in leaf dust increases due to resuspension. Although not all selected elements exhibit a clear variation along the established transects, the obtained gradient mirrors well the distribution of pollutants between the roadside and adjacent green territories.
Alongside pollution, tree isolation and the abundance of paved surfaces change the microclimatic conditions to which the phylloplane is exposed at the roadside in comparison to the forest. In this study, the increase of air temperature, especially on clear sunny days, was found to be considerable for roadside trees. The "cool island" effect of green areas surrounding cities is a well-documented phenomenon, showing a strong correlation between forest cover and air temperature [48]. While relative humidity was not measured in this study, its variation is generally negatively coupled to air temperature, hence a gradient in relative humidity could also be expected [49]. Isolated trees are also exposed to higher UV rates, another factor that can impact the functioning of the microbial community. The UV protection factor is estimated to vary between 4 and 20 for isolated trees, reaching 100 under closed canopies [50].
Phylloplane: Sensitive Indicators to Distance from the Road
In this study, the activity of the phylloplane turned out to be sensitive to traffic-related air pollution. While the activity of the microbial community of the phylloplane has been largely overlooked, a variety of methods have been proposed to investigate the active portion of the microbiome in soil and water samples [51,52]. The Microresp method was developed for soils, to study the functional diversity of soil microorganisms and its response to variations in environmental conditions [53]. This method was subsequently adopted to study the functioning of aquatic microorganisms [52,54] and to measure pollution-induced community tolerance in waters and soils [29,52]. To our knowledge, the catabolic activity of the phylloplane and its basal respiration has never been evaluated with Microresp, in contrast to the examination of surface microbial activity in droplet cultures on polystyrene [55,56]. Because the phylloplane shares a portion of its own microbiome with soils [57,58], it could be expected to show a certain similarity in terms of microbial sensitivity to the pollution load. The specific basal respiration, or respiration per unit of biomass, of roadside soils was shown to be higher in comparison to control sites in Australia but was not related to the accumulation of any particular metal [59]. In our study, basal respiration was found to be significantly higher for the roadside phylloplane. However, the picture changes once the dimension of microbial abundance is considered. For the phylloplane, we can take into account its proxy (microbial DNA), similarly to some soil-related studies [60]; this was about five times lower in roadside trees. An increase of the maintenance requirements in the roadside phylloplane could be interpreted as unstable microbial functioning expressed through high energy consumption per microbial abundance capita in response to pollution and other environmental factors to which the roadside trees are exposed [61].
In other words, higher specific respiration indicates low C use efficiency, meaning that less C is immobilized in microbial biomass and more C is lost through respiration [62]. Among the analyzed parameters, the Zn content in leaf dust best explains the variation in basal respiration and DNA amount, creating stressful conditions for the phylloplane. A similar effect was demonstrated for soil: high PTE content increased the specific microbial respiration but decreased the microbial abundance [63,64]. Although Zn is an essential micronutrient for microbial cells, required for the stabilization of DNA, RNA, ribosome structure, and enzyme synthesis, its high concentration is toxic for the microbial community. Zn toxicity is exerted through the inhibition of proteases, acetate kinases, and coenzyme F420 [65], which leads to a decline in microbial biomass and a depletion of microbial diversity [66,67]. Other studies suggest that high concentrations of Zn lead to a decrease of microbial diversity for both fungi and bacteria [68,69]. In our investigation, Zn content showed a clear trend in line with changes of bacterial and fungal diversity and the taxonomic structure of the phylloplane, indicating a negative effect on its microbial diversity and corroborating the findings discussed above.
Microbial functional diversity also decreased at the roadsides compared to the forested sites. Its variation was explained by the content of K, a nutrient (positive correlation), and Al, a PTE (negative correlation). The functional diversity of the roadside phylloplane shifted towards the consumers of easily available substrates (carboxylic acids), whereas groups utilizing more complex aromatic acid compounds (phenolic acids) were less active there. We can suggest the formation of a PTE-tolerant microbial community on the leaves of roadside trees, characterized by a low taxonomic diversity and a high metabolic activity in utilizing specific, easily available substrates as an energy source [70]. In relation to aromatic compounds, the petrol-derived organics on the leaf surface, many studies demonstrate an increase in their concentration in the roadside environment [10,71,72]. Accordingly, we expected the roadside phylloplane to show an enhanced capacity for aromatic ring degradation developed after constant exposure to these pollutants; this, however, was not confirmed.
Among the bacteria, Pseudomonas (g) turned out to be sensitive to traffic-related air pollution; among the fungi, the species Dothiora sorbi and Exobasidium miyabei were clearly affected. Several studies have reported negative effects of PTE on fungal growth and reproduction [73,74]. In the study by [75], the presence of Zn did not affect the more represented classes and families of fungi; however, a decrease in Zn negatively affected the amount of less represented OTUs. The summed effect of multiple anthropogenic and climatological stressors, which interact under tree isolation at the roadside, can explain the sensitivity of certain species to roadside vicinity.
Beyond pollutants, roadside trees are characterized by major exposure to UV due to tree isolation, and by unfavorable atmospheric conditions (e.g., higher temperatures, as measured in this study, and low relative air humidity), which can also impact phylloplane functioning. It has been reported that microclimatic stress conditions influence plant physiology [76,77], which in turn can alter the pH level on the leaf surface [78]. As is well known, the pH of environmental components (e.g., soil, water) is a driving factor of their microbial activity and taxonomic structure [79,80]. Although we did not measure the pH level of the leaf samples, we cannot exclude a significant influence of this factor on phylloplane structure and activity along the considered gradient.
Summing up, based on our findings, the sensitive microbial indicators of the phylloplane to air pollution include total DNA amount, respiration and catabolic activity, functional and taxonomic diversity, taxonomic structure, and CLPP. The presence of a certain taxonomic group of fungi or bacteria in a trial is hardly in itself an indicator of air quality, because of the uncertain metabolic status, and hence contribution to ecological processes, of the microorganisms (active vs. dead). Furthermore, the identification can depend on the selected primers, bioinformatics, and media in the molecular biology and classic microbiological approaches.
Phylloplane: Resistance to Traffic-Related Air Pollution
In the course of evolution, microorganisms developed special adaptations to stress conditions, including PTE contamination. The extracellular barrier is an example of the most common and energetically beneficial defense mechanisms and consists of preventing the entry of metal ions into the cell [81]. In addition, there are also intracellular defense mechanisms allowing microbial cells to withstand the presence of PTE in the environment. These mechanisms differ depending on the taxonomic group of microorganisms. In bacterial strains, cell resistance to PTE is associated with the ATPase activity of their plasma membrane [82]. The melanin pigments of fungi are one of the adaptation mechanisms to environmental stress conditions, enabling the direct binding of the ions of contaminants [83]. The predominance of melanin-containing species of micromycetes in soil contaminated with PTE has been noted; their contribution can exceed 50% of the total number of species [84][85][86][87]. In this study, opportunistic bacteria and fungi of the phylloplane were present in higher abundance at roadside sites compared to the forest. For instance, Enterococcus faecalis was present only in the roadside phylloplane, evidently being associated with the vicinity of human walking paths. On the other hand, opportunism in microorganisms has been demonstrated to be linked to polyextremotolerance [88]. This is because, to successfully infect an individual, microorganisms should be capable of overcoming many protection barriers. Hence, the capability to survive multiple unfavorable factors, such as elevated temperatures, unfavorable pH, humidity, irradiation, and pollutants, also provides opportunistic possibilities [89]. The increase in the abundance of some opportunistic bacteria and fungi in roadside trees observed in this study can be explained by the resistance of these microorganisms to multiple stressors.
Summing up, air pollution evidently impacts the phylloplane taxonomic structure, creating conditions for highly competitive groups, which can present as opportunistic microorganisms.
Thus, the study demonstrated a considerable effect of traffic on the activity, taxonomic structure, and diversity of the phylloplane of Betula pendula, making it a sensitive indicator of the anthropogenic load. Despite the observed variation in some PTE (Zn, Cu, Na) and the confirmed role of the Zn content in the spatial distribution of microbial respiration activity and DNA content, we cannot exclude the effect of exposure to stressful microclimatic conditions (low relative humidity, high temperature, and UV level) on the phylloplane of isolated roadside trees compared to the "cool islands" of green zones. Future investigations should consider the contribution of each climatic factor to phylloplane functioning for different tree species in order to develop recommendations for the improvement of microclimatic conditions through traffic-zone landscaping. Particular concern is related to the increase in the fraction of potentially pathogenic species in the phylloplane of roadside trees. Consequently, care must be taken when handling leaves, for example when cleaning areas or working with crowns. It is necessary to ensure the rapid removal of foliage in order to reduce the risks of potential contact with pathogens by sensitive urban populations such as the elderly and children.
Study Site and Sampling
Moscow city is the capital of the Russian Federation and one of the largest urban areas in Europe [90]. The Moscow climate is temperate continental with a mean annual temperature of 5.8 °C and an average annual precipitation of 600 mm. Moscow is located in the southern taiga bioclimatic zone; however, natural vegetation remains mainly in natural protected areas, whereas urban landscapes are dominated by introduced species (i.e., Tilia, Populus, Acer, Castanea, Betula, etc.). Historically, industrial activities, traffic, and waste disposal have been the main sources of soil contamination by heavy metals in Moscow. In the past several decades, industrial emissions of heavy metals have been substantially reduced, but the impact of traffic remains high [34]. Cu, Zn, Pb, and Cd are the prevalent pollutants [91,92]; however, recent studies report a broader range of heavy metals in the air and soils of Moscow [35,93,94].
The negative effect of Leninsky prospect, one of the most heavily trafficked roads in Moscow, on the Betula pendula phylloplane was studied. The traffic load was estimated based on the density of the transport flow (number of cars per hour) in the morning, noon, and evening periods of summer working days. Leaves of Betula pendula were sampled along transects starting from the road and including trees growing 2, 10, 30, and 50 m from the road (Figure 12). Trees growing 2 m from the road were isolated trees without direct contact with other trees, and trees located 10 m from the road belonged to the edge of the urban forest, whereas the trees at the 30 and 50 m distance were inside the forest and were surrounded by other trees. Only healthy trees (class 1 based on a visual tree assessment [95]) belonging to the same age category were selected to reduce heterogeneity.
The tree leaves were collected on 18 August 2020 during the first part of the day (between 10 am and 1 pm), characterized by favorable conditions for plant and microbial functioning, in order also to avoid considerable differences in air temperature between the studied sites. The last rain event was registered for this part of Moscow 14 days prior to the sampling. The thermal gradient was measured along the 2nd (interim) transect by the autonomous temperature sensor iButton (DS1922) installed at 2 m height with 0.1 °C accuracy and 5-min steps during a 3-day period (including one day before and one day after sampling).
The gradient in soil pollution by PTEs was measured in situ with a portable X-ray fluorescence analyzer (pXRF, Vanta C). Screening was performed on three surface samples located within 1 m of the trunk of each sampled tree. Soil samples at these locations were collected to measure bulk density, soil organic carbon, and pH (1:5 water suspension) as additional indicators of the disturbance level. Leaves were randomly collected from different parts of the canopy at a height between 3 and 4 m above the ground in order to avoid possible disturbance by citizens and to ensure accessibility for sampling [12]. The leaves were placed in sterile bags and delivered to the laboratory, where preparation of samples for the different analytical procedures started immediately. The leaf area used for each analytical approach was determined by scanning the leaf surface and calculating the area with ImageJ software.
Chemical Analysis
The leaf samples were prepared for chemical analysis as follows: 30-40 leaves per site (surface area 96-417 cm², average 265 cm²) were placed into a 750 mL flask filled with 50 mL of deionized water. The flask with water and leaves was shaken on a lab rotator for 15 min at 200 rpm. The dust suspension was then poured into a 50 mL flask and kept at 65 °C for 72 h until the water had completely evaporated. Evaporated deionized water was used as a control. The concentrations of Al, Ca, Cu, Fe, K, Mg, Mn, Na, Pb, Si, and Zn in the samples were measured using an ICP-OES Avio 2000 (PerkinElmer, Waltham, MA, USA). These elements were chosen to represent different pollution sources: traffic-origin, natural-origin, and industrial-origin elements [42].
DNA Extraction
In total, 60-70 leaves per tree (surface area 217-317 cm², average 269 cm²) were mixed with 300 mL of sterile physiological saline solution (8.5 g L⁻¹). The obtained suspensions were filtered through Nalgene Rapid-Flow disposable filters with a 0.22 µm PES membrane (Thermo Fisher Scientific, Waltham, MA, USA) to collect the dust deposited on the leaf surface. The membrane filters were then cut into small pieces and placed in a PowerBead Pro Tube (QIAGEN, Hilden, Germany). DNA was extracted from the dust deposited on the leaf surface of all samples using the DNeasy PowerSoil Pro Kit (QIAGEN, Hilden, Germany) according to the manufacturer's protocol. DNA was quantified using a Qubit 2.0 Fluorometer (Invitrogen/Life Technologies, Carlsbad, CA, USA) and subsequently used as a template for polymerase chain reaction.
PCR Amplification, Library Preparation, and Sequencing
The PCR amplification, library preparations for next-generation sequencing, and Illumina MiSeq sequencing of the bacterial 16S and fungal ITS rRNA genes were conducted by Sequentia Biotech SL (Barcelona, Spain). The V3-V4 regions of the bacterial 16S rRNA gene sequences were amplified using universal primer pairs 341F-805R [96] including sample-specific barcodes and Illumina sequencing adaptors (Illumina Inc., San Diego, CA, USA). The amplification of the fungal ITS-region was performed using ITS1 and ITS4 primers [97]. After quantification and purification of the PCR products, the amplicon libraries for bacteria and fungi were constructed separately using the 16S RNA Metagenomic Sequencing Library Preparation protocol. Paired-end (PE, 2 × 300 nt) sequencing was performed on an Illumina MiSeq (MiSeq Reagent kit v2, Illumina Inc., San Diego, CA, USA) sequencer following the manufacturer's run protocols (Illumina Inc., San Diego, CA, USA).
Bioinformatics
Raw read sequences were quality-trimmed and adaptor sequences removed using Trimmomatic v0.32. Sequence quality was assessed with the FastQC toolkit (Babraham Bioinformatics, Cambridge, UK). During the quality check, low-quality bases and adapters were removed while preserving the longest high-quality part of each read; the minimum read length was set to 50 bp and the quality score to 20, which increases the quality and reliability of the analysis. For taxonomic profiling and quantification of the samples, the proprietary software GAIA (version 2.02, Sequentia Biotech, Spain) was used. GAIA works as follows: (1) each pair of reads is aligned against one or more reference databases, and the best alignments are extracted; (2) a lowest common ancestor (LCA) algorithm is applied to the best alignments; (3) identity and coverage thresholds are applied to the alignments; (4) the taxonomy is summarized and reported. The databases used for this analysis included the 16S and the ITS1 + ITS2 sequences obtained from the NCBI "nr" database. All sequences from each sample were clustered into operational taxonomic units (OTUs) based on sequence similarity (97% identity). Two alpha diversity metrics (Chao1 richness and Shannon diversity) were calculated on rarefied OTU tables for all samples. Beta diversity was estimated by weighted and unweighted UniFrac distances between samples [98].
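The two alpha-diversity metrics named above can be sketched directly from an OTU abundance vector. This is an illustrative sketch only: the function names and the toy OTU counts are ours, not part of the GAIA pipeline.

```python
import math

def chao1(counts):
    """Chao1 richness: S_obs + F1*(F1-1) / (2*(F2+1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def shannon(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over non-zero OTUs."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

otus = [120, 30, 5, 1, 1, 2, 0]  # toy OTU abundance vector for one sample
print(round(chao1(otus), 2))
print(round(shannon(otus), 3))
```

In practice these would be computed on the rarefied OTU tables described above, one vector per sample.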
Microbial Activity
Twenty sampled leaves (surface area 120-247 cm², average 180 cm²) per tree were placed in 750 mL flasks with 30 mL of sterile water to prepare the dust suspension for microbial activity analysis and microorganism cultivation (see below). The flasks were placed on a rotator for 30 min at 200 rpm to fully mix the dust suspension. The suspension was then distributed into two 15 mL sterile flasks representing duplicates of each sample. The suspension samples were kept at +4 °C for a maximum of 3 days prior to analysis. Microbial activity was assessed by the MicroResp technique [53]. Because the technique was developed for soil samples, sterile carbonate-free sand was enriched with the dust suspension at a ratio of 1:10, ensuring optimal moisture conditions for the analysis. The enriched sand samples were placed in a sterile 96-deep-well plate (945 µL well volume), and either water or a solution from one of four C-substrate groups was added. In particular, carboxylic acids (ascorbic, citric, oxalic), carbohydrates (D-galactose, D-fructose, D-glucose), amino acids (glycine, L-arginine, L-leucine, α-aminobutyric, L-aspartic), and phenolic acids (vanillic and syringic) were added to characterize the community-level physiological profile (CLPP); water was used to characterize basal respiration (BR). The 96-deep-well microplate with the enriched sand was tightly sealed against a 96-well microplate with a detection gel and incubated for 6 h at 25 °C. Absorbance of the detection gel was measured at a 595 nm wavelength (microplate spectrophotometer FilterMax F5, Molecular Devices, San Jose, CA, USA) before and after incubation and expressed as CO₂ production in µg C g⁻¹ h⁻¹ (Moscatelli et al., 2018). Microbial functional diversity was assessed through the Shannon-Wiener index: H = −Σ Pi × ln Pi, where Pi is the ratio of the respiration response to substrate i to the total respiration response across all studied substrates.
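The Shannon-Wiener functional diversity calculation can be illustrated with a minimal sketch. The substrate group names come from the text, but the respiration responses below are invented for illustration, not measured values.

```python
import math

# Hypothetical substrate-induced respiration responses (ug C g^-1 h^-1)
# for the four C-substrate groups used in the MicroResp assay.
responses = {
    "carboxylic acids": 4.2,
    "carbohydrates": 3.1,
    "amino acids": 2.5,
    "phenolic acids": 0.9,
}

total = sum(responses.values())
# H = -sum(P_i * ln P_i), where P_i is the share of the total
# respiration response attributable to substrate group i.
H = -sum((r / total) * math.log(r / total) for r in responses.values())
print(round(H, 3))
```

With four substrate groups, H is bounded above by ln 4 ≈ 1.386, reached when all groups elicit equal respiration responses.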
Microorganism Cultivation
Complementary to metabarcoding, the cultivable portion of phylloplane bacteria and fungi was analyzed. Due to the specificity of the primers used in molecular genetic analysis, not all species of bacteria and fungi can be detected by that method [99]. The main purpose of the plating method was to detect potentially active opportunistic species of bacteria and fungi, for which specific nutrient media and a temperature of 37 °C (human body temperature) were used.
The number of enterobacteria was determined by plating on lactose-peptone medium and Kode's medium. The number and diversity of cultivable opportunistic fungi were determined using Sabouraud agar with the addition of lactic acid (4 mL L⁻¹) [100]. The Petri dishes were incubated at 37 °C for 2-3 days (bacteria) or 7-14 days (fungi). Microscopic fungi were identified by cultural and morphological characteristics (Olympus CX41 microscope) using standard keys [101][102][103]. To characterize the community structure of cultivable opportunistic fungi, a species abundance index (%), equal to the ratio of colonies of a particular species to the total number of colonies, was used. Strains isolated as sterile mycelium were identified by analysis of the ITS1-5.8S-ITS2 region of the ribosomal genes. DNA sequencing was performed using a BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, Waltham, MA, USA), with subsequent analysis of the reaction products on an Applied Biosystems 3130l Genetic Analyzer at the Syntol Research and Production Center (Moscow). Fungal names were verified against the updated species lists in the "Species Fungorum" database (www.indexfungorum.org, last accessed 15 November 2021). The portion of opportunistic fungi, i.e., pathogens that plague vulnerable individuals with low immune status [104], was calculated from the whole cultivated fungal community. Fungi were classified as opportunistic according to the de Hoog classification [105] and divided into three groups of increasing pathogenicity to human health: BSL1, BSL2, and BSL3.
Thus, in this study, the chemical and microbiological analyses were performed by flushing samples from both the upper and lower leaf surfaces.
Statistics
The leaf area and amount of dust for each type of analysis were determined. Chemical properties were calculated per leaf surface area (cm²) and per dust mass (kg), characterizing pollutant quantity and quality, respectively [27]. The same approach was used for microbial properties. Mean plots were used to show the central tendency of the distribution along the transect from roadside to forest. Descriptive statistics were used to determine means and standard errors. Significant differences in variables between the studied sites were examined by one-way analysis of variance (ANOVA) with Tukey's multiple comparison test. Prior to the analysis, variance homogeneity was checked by Levene's test. Redundancy analysis (RDA) was used to (1) show the total variance of microbial properties across all sites and (2) test the relationships between the studied variables. The predictor variables (concentrations of the chemical elements) explaining the variance of microbial properties were assessed by stepwise linear regression with 999 permutations of residuals to test the significance level. Prior to RDA and multiple regression, both dependent and predictor variables were transformed (log base-10 transformation for BR and the portion of pathogenic fungi) to approximate a normal distribution. Additionally, the predictor variables in RDA were scaled to unit variance. Statistical analysis and visualization of experimental data were performed in R.
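The one-way ANOVA step can be sketched in pure Python (the authors worked in R; the basal-respiration values below are invented for illustration, three hypothetical replicate trees per distance class):

```python
# Pure-Python one-way ANOVA over four distance classes.
def one_way_anova(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

br = {  # distance to road (m) -> log10-transformed BR replicates (invented)
    2:  [0.42, 0.38, 0.45],
    10: [0.55, 0.60, 0.52],
    30: [0.71, 0.68, 0.74],
    50: [0.70, 0.73, 0.69],
}
F, df_b, df_w = one_way_anova(*br.values())
print(f"F({df_b},{df_w}) = {F:.1f}")
```

The F statistic would then be compared against the F distribution with (df_between, df_within) degrees of freedom; Tukey's test would follow for pairwise comparisons between distance classes.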
Supplementary Materials:
The following are available online at https://www.mdpi.com/article/ 10.3390/plants11030402/s1, Table S1: Portion of opportunistic fungi in total amount of fungi (%); Table S2: Cultivable bacteria distribution along transect from roadside to the forest (2-50 m); Table S3: The abundance (operational taxonomic units, OTUs) and relative abundance (%) of bacteria of dust collected from the leaf surface at different distances to the road (2, 10, 30, 50 m); Table S4: The abundance (operational taxonomic units, OTUs) and relative abundance (%) of fungi of dust collected from the leaf surface at different distances to the road (2, 10, 30, 50 m).
|
v3-fos-license
|
2019-04-28T13:03:19.794Z
|
2019-04-26T00:00:00.000
|
135439393
|
{
"extfieldsofstudy": [
"Business",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1371/journal.pone.0214783",
"pdf_hash": "b57c413c6d09b2676ebdc3d4c6a3300ebeed6211",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1073",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"sha1": "b57c413c6d09b2676ebdc3d4c6a3300ebeed6211",
"year": 2019
}
|
pes2o/s2orc
|
Investigating the exposure of Iranian households to catastrophic health expenditure due to the need to purchase medicines
Background Catastrophic health expenditure (CHE) is an indicator used by the World Health Organization (WHO) to assess equity in households’ payments to the health system. In this paper, we prospectively calculated the population at risk of facing catastrophic expenditure due to purchasing three selected medicines (metformin, atorvastatin and amoxicillin) in Iran. Method This study draws on the data set of the Iranian National Household Survey of 38244 households in Iran. CHE was calculated based on "capacity to pay" using different thresholds. Results 20, 16 and 3 households had to spend more than 40% of their capacity to pay on amoxicillin, atorvastatin and metformin, respectively. Lowest-priced generic (LPG) medicines were found to be more affordable than original brand (OB) medicines. The age, literacy and gender of the head of household, economic status, settlement, household size and number of breadwinners show important associations with CHE. Conclusion A long-term requirement for these specific medicines may subject Iranian households to CHE. The study provides important and specific insights for health policy makers in Iran to protect households from healthcare catastrophes.
Introduction
Financial protection against health costs is one of the objectives of health systems determined by the World Health Organization (WHO) [1]. Therefore, protecting individuals against the financial outcomes of health care, related to the risk and uncertainty about future health states, should be a consideration of any health system. Financial protection is evaluated through out-of-pocket payments, which are analyzed using two approaches: (a) catastrophic health expenditure (CHE) and (b) impoverishment [2,3]. Medical spending is "catastrophic" if it exceeds a certain proportion of "total expenditure" or "capacity to pay" [4,5], forcing households to decrease spending on basic needs [6]. Spending is "impoverishing" if it is so large that it pushes households under the poverty line [7]. Several previous studies have measured CHE. Yazdi-Feyzabadi et al. retrospectively studied trends in the impoverishing effects of out-of-pocket health expenditures [8] and CHE [9] across Iranian provinces in 2008-2014, using the data set of the Iranian National Household Survey. Nekoei-Moghadam et al. also quantified the CHE rate in Iran for 2008 using the same data set [10]. Kavousi et al. measured households' exposure to CHE using the WHO questionnaire in zone 17 of Tehran province [11]. Other studies calculated CHE for limited populations in Qazvin [12] and Torbat-Heydarieh [13], using questionnaires to gather information.
A study in Poland considered the inequality and financial burden of medicine expenditures. It investigated the impoverishing effect of medicine payments, applying two poverty lines (based on absolute and relative poverty), as well as the incidence and intensity of catastrophic medicine expenditures. Out-of-pocket medicine expenditure was found to be significantly less affordable for the retired and chronically ill [14].
Niëns and Brouwer discussed methods of measuring affordability, noting the positive and negative aspects of the retrospective, prospective and LPGW methods. They identified the prospective method as suitable for calculating the number of households at risk of CHE and illustrated each method with an example of medicine affordability [15]. In other studies, Niëns et al. applied the prospective method to quantify catastrophic expenditure [5] and impoverishment [16] due to spending on medicines.
Taking medicine is one of the main methods used to treat diseases [17], and thus affects households' exposure to CHE [5]. According to the WHO, medicines account for 20-60% of health expenditure in developing and transitional countries. Up to 90% of people in developing countries pay for medicines out of pocket, and such spending ranks second in family expenditures after food [18]. Studies have shown that medicine expenditure in Iran is rising more rapidly than other health care spending [19]. Health transitions in Iran in recent years have produced a financial burden on the health system, with the incidence of chronic diseases rising [20]. Patients with chronic health conditions usually have to pay for medicine for their entire lives, as frequent medical examinations and medicines are necessary for their health. Sometimes, due to the household's financial situation, patients may purchase medicines without referring to a physician [21]. Under Iran's insurance system, a patient buying medicine with a doctor's prescription pays just 30% of the total cost, whereas without a prescription the full cost must be paid out of pocket [22]. Additionally, when purchasing original brand (OB) medicines, despite their higher price, the insurance organization pays just 70 percent of the price of the lowest-priced generic (LPG) equivalent [23]. Difficulties in paying for medication and related expenditures impose heavy costs on households. Because no study has considered catastrophic expenditure on medicines in Iran until now, and considering the increase in chronic diseases in recent years, this study aims to be the first to prospectively compute the population at risk of facing catastrophic expenditure due to the purchase of three selected medicines (metformin, atorvastatin and amoxicillin) in Iran.
Data collection
This cross-sectional study draws on data sets of the Iranian National Household Survey, conducted by the Statistical Center of Iran, for the year 2013. The sample size for this analysis, after removing households without food expenditure data, is 38244 households, of which 18854 lived in urban and 19390 in rural areas.
For our analysis, we required four kinds of data: (a) household income or consumption expenditure, (b) price of medicines, (c) thresholds to measure the financial burden on the households, and (d) prevalence rates for diseases.
"Income" or "consumption expenditure" data are needed to calculate the "capacity to pay" (CTP) of each household. We used "household consumption expenditure" data as the basis for CTP, since it is a better proxy of welfare and its measurement is easier than that of "household income" [24].
Medicine prices were obtained from Iran's Pharmacists Association [25]. Out-of-pocket payments for medicines are typically 30% of the medicine price (when covered by insurance) plus a dispensing fee on each prescription. Three medicines were selected: (a) amoxicillin, (b) metformin and (c) atorvastatin. These were reported as the most used medicines in 2013 by the Food and Drug Administration [26]. Another reason for choosing metformin and atorvastatin is that they are the main medicines prescribed for diabetes and high blood cholesterol, which are highly prevalent in Iran (8.6% and 41.6%, respectively, among 20-79 year olds) [27,28].
The burden of chronic illness is progressively increasing in Iran. Notably, diabetes was the 9th leading cause of death in Iran in 2007 and rose to 6th in the 2017 ranking. High blood cholesterol is a risk factor for the 1st and 2nd causes of death (heart disease and stroke) and for some other chronic diseases [29]. These diseases also account for a considerable share of the global burden of disease [30]. Table 1 shows the health conditions for which these medicines are prescribed, the treatment periods and the number of units per treatment course.
Data analysis
"Food expenditure" is the part of total household expenditure spent on food; all other household spending is called "non-food expenditure". From another perspective, "subsistence expenditure" is the expenditure needed to meet basic needs (such as shelter and food) to survive in a society; all other household spending is called "non-subsistence expenditure".
We need "capacity to pay" as the denominator for calculating the rate of catastrophic health expenditure. In cases where the subsistence expenditure is more than food expenditure, CTP equals non-food expenditure, otherwise CTP equals non-subsistence expenditure [24,32].
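The capacity-to-pay rule just described can be written as a small helper. This is an illustrative sketch: the function and variable names are ours, and the numbers are toy values, not survey data.

```python
def capacity_to_pay(total_exp, food_exp, subsistence_exp):
    """Capacity to pay (CTP) as defined in the text:
    if subsistence expenditure exceeds actual food expenditure,
    CTP = non-food expenditure; otherwise CTP = non-subsistence
    expenditure."""
    if subsistence_exp > food_exp:
        return total_exp - food_exp
    return total_exp - subsistence_exp

# Toy household: total spending 100, food 30, subsistence line 25.
# Subsistence (25) does not exceed food (30), so CTP is
# non-subsistence expenditure: 100 - 25 = 75.
print(capacity_to_pay(100, 30, 25))
```

The case split protects households whose actual food spending falls below the subsistence line from having their CTP overstated.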
Note that the out-of-pocket payment (the numerator of the ratio) consists of two parts: (a) the household's health expenditure in 2013, and (b) the out-of-pocket payments for the medicines.
Accordingly, 10 scenarios were proposed (S1 Appendix). In the first three scenarios, we study households in which only one member is sick. The next three scenarios include households with two ill members, or in which one member suffers from two diseases simultaneously. In the seventh scenario, all three medicines are used in a household. In the last three scenarios, households have to purchase OB medicines. We built these scenarios in consultation with medical specialists (a cardiologist, an endocrinologist, and an infectious diseases specialist), who confirmed that the scenarios presented in the paper are plausible.
We chose four different thresholds (10%, 20%, 30% and 40%) following Wagstaff's study [33]. A household faces CHE when its share of out-of-pocket payments exceeds the preset threshold. To obtain the incidence of catastrophic medicine payments, the percentage of households exposed to CHE because of health payments in 2013 (with household health expenditure as the numerator) is subtracted from the CHE rate after paying for medicines. The resulting gap shows the percentage of CHE caused by purchasing medicines.
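The threshold flag and the medicine-attributable gap can be sketched as follows. The household records and numbers are toy values for illustration, not survey data.

```python
def che_rate(households, threshold=0.40, include_medicine=False):
    """Share of households whose out-of-pocket payments exceed a
    threshold fraction of capacity to pay (CTP). Each household is
    a dict with keys: ctp, health_oop, medicine_oop."""
    flagged = 0
    for h in households:
        oop = h["health_oop"] + (h["medicine_oop"] if include_medicine else 0)
        if oop / h["ctp"] > threshold:
            flagged += 1
    return flagged / len(households)

hh = [  # toy households
    {"ctp": 100, "health_oop": 45, "medicine_oop": 0},
    {"ctp": 100, "health_oop": 30, "medicine_oop": 15},
    {"ctp": 100, "health_oop": 10, "medicine_oop": 5},
    {"ctp": 100, "health_oop": 5,  "medicine_oop": 0},
]
baseline = che_rate(hh)                         # health spending only
with_med = che_rate(hh, include_medicine=True)  # plus medicine purchases
print(with_med - baseline)  # medicine-attributable CHE incidence
```

Here the second household crosses the 40% threshold only once medicine payments are added, so the gap between the two rates is the CHE incidence attributable to the medicines themselves.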
The last part of the study evaluates the impact of several factors on catastrophic expenditure, using a logistic regression test. The regression analysis was conducted in SPSS version 19 and was based on the scenario in which metformin and atorvastatin were purchased (40% threshold). The dependent variable was dichotomous (0 = the household did not face the 40% CHE threshold due to the purchase of metformin and atorvastatin; 1 = the household faced the 40% threshold after purchasing these two medicines).
Results
Table 2 demonstrates the household characteristics. Among the heads of households, 12% were female and 72.7% were literate. About 50.7% of the sample lived in rural areas, and about 27.6% of households had more than one member with an income. Table 3 shows the households exposed to catastrophic costs after purchasing medicines under different scenarios. First, the percentage of households already facing catastrophic costs is shown, i.e., those facing catastrophic expenditure after paying for health costs. Then, the differences (gaps) caused by the purchase of medicines, which are the main CHE figures, are estimated. The third part of the table shows the expected number of households affected, using the prevalence rates. For example, 16 households spend more than 40% of their capacity to pay on atorvastatin; at thresholds of 30%, 20% and 10%, the corresponding numbers are 32, 80 and 159, respectively. This clearly demonstrates that at lower CHE thresholds, more households are found to experience healthcare catastrophe.
The percentage of households exposed to CHE increases with the number of illnesses in a family. Assuming a threshold of 40%, the CHE rate is 0.1 percent when purchasing amoxicillin, 0.2 percent for amoxicillin and atorvastatin, and 0.4 percent for amoxicillin, atorvastatin and metformin.
The results also show that the catastrophic effects of medicines vary, especially between LPG (lowest-priced generic) and OB medicines. For example, whereas 0.1, 0.1 and 0.3 percent of households face CHE because of using atorvastatin, metformin, or both (in separate scenarios), the percentages increase to 1, 4.8 and 7.8, respectively, when households use OB medicines. Table 4 shows the association between exposure to CHE and some household characteristics. The literacy, age and gender of the head of household and the size of the household have a significant influence on CHE. The logistic test shows that households with an illiterate head are approximately 1.519 times more likely to face CHE than those with a literate head. The age of the head of household appears directly related to CHE exposure, since mean CHE increases with age. Household size is inversely associated with CHE: households with five or more members are 0.662 times as likely to be exposed to CHE as households with fewer members. Rural households face CHE 1.816 times more often than urban households. The economic status of the household is also influential: higher quintiles are more likely to face CHE than lower quintiles. Additionally, having more than one breadwinner in a household reduces the risk of CHE (odds ratio 0.852).
Discussion
A "Health Evolution Plan" was initiated in 2014 to overcome some of the main challenges of Iran's health system [34]. Decreasing the number of households at risk of CHE and reducing out-of-pocket payments are from the main aims defined for health evolution plan. To reach the goals eight packages were developed. For example urban citizens under basic health insurance coverage must pay 6 percent of total hospital bill. This percentage reduces to 3 percent if rural population and urban residents living in cities with less than 20000 population, refer to public hospitals through the referral system. People with no basic health insurance were covered free of charge. Additionally financial protection of poor patients and incurable patients was considered in another package [35]. This plan also encompassed issues related to pharmaceutical costs. For example hospitals are mandated to provide medicines for inpatients. Just sometimes patients' companions are asked to purchase medicines from hospital's drugstore. They should not be asked to provide medicines from out of the hospital. However, the main focus of the plan was to lower out-of-pocket payments for inpatients, mainly those in hospitals belonging to the Ministry of Health and Medical Education [36,37]. Nevertheless, pharmaceutical costs still impose a heavy burden on households. Unfortunately, many people procure their required medicines from pharmacies, without obtaining a physician's prescription [17,38,39]. Also, in Iran, people tend to use OB medicines, especially for serious illnesses, when recovery is slow or when their ability to pay is high. This study illustrated that, not only is medicine unaffordable for many households, but also the purchase of OB medicines increases the probability of exposure to catastrophic health expenditure several times compared to the purchase of LPG products. This finding has also been supported by Niëns et al. 
who calculated the impoverishment percentage in the Philippines after purchasing OB medicines to be 22%, compared to 7% for LPG equivalents [16]. Pharmaceutical expenditure was also identified as an influential factor in catastrophic costs by Nekoei et al., who reported the catastrophic cost rate of Iranian households to be 2.8% by retrospectively applying the health spending method for 2008 [19]. CHE surveys within provinces such as Yazd, Shiraz, Kerman and Tehran show increasing inequality in CHE rates between provinces, and the percentage of CHE in Iran increased over 2008-2014 [40].
Studies have also indicated that households subject to CHE share some features [41]. Accordingly, the present study showed that households with an illiterate head faced CHE 1.519 times more often than households with a literate head. This might be explained by the fact that literacy is associated with higher socio-economic classes, which usually have two breadwinners (both spouses work). Literacy is considered one of the social determinants of health, and its effect on health status has been confirmed elsewhere [42].
This study revealed that when the household size is five or more, catastrophic expenditure is less probable. The results of other studies in this area support this finding. Nekoei et al. showed that households with more than six members were less prone to catastrophic health expenditure [19]. Saber-Mahani et al. indicated a negative relationship between household size and catastrophic expenditure [43]. Having a large family in China is a protective factor against health costs, as identified by Li et al. [44]. This can be explained by the fact that, in Iran, older households tend to have more members and more breadwinners than younger households, whereas younger households usually include small children, making them more prone to disease and potentially less resilient economically. This study also showed a relationship between the gender of the head of household and the probability of CHE: households with a female head were at greater risk of catastrophic expenditure, as identified in a previous study [45]. Despite the high number of women holding jobs in Iran, more focus on this issue is still required. Another relevant factor is age: as the age of the household head increases, the likelihood of exposure to CHE increases [46], because with greater age the risk of suffering from various diseases rises. This effect is exacerbated by the reduction in income at higher ages (due to retirement or disability). As indicated by Yardim et al., households with disabled or senior members are at greater risk of CHE [4].
Household income was another key factor. In this study, higher quintiles were more likely to be exposed to CHE than lower quintiles. A study by Fazaeli demonstrated that high-income households pay several times more for health than low-income ones; this increases their risk of CHE, although it does not often end in health impoverishment, owing to their high income [47].
The high-income households' utilization of expensive private-sector services might be the reason. For example, a study on catastrophic dental health expenditure (CDHE) reported that the rate of CDHE was greater in high-income households [6]. However, some studies reported a negative relationship between income level and the CHE rate, owing to the vulnerability of the poor to financial risks [10,41,48], while Löfgren indicated no significant relationship between household income and CHE [45].
Rural residents were significantly at risk of CHE, a result confirmed by several studies [4,41,44]. This might result from their low income, low education levels, or delays in the diagnosis of their diseases [41].
Conclusion
This study shows that purchasing medicines can expose households to CHE, despite the lower price of medicines in Iran compared with many other countries owing to subsidies. The age, literacy, and gender of the head of household and the size of the household were among the factors found to be associated with CHE. The study offers specific insights for health policymakers in Iran on protecting households from catastrophic health costs, particularly where the long-term need for these medicines puts Iranian households at risk of experiencing CHE.
To reach Iran's target of reducing catastrophic health costs to one percent, every component of total health expenditure plays a role and has its own share. These results can be used to compare the catastrophic cost of medicines with that of other treatments or health services, and may help in designing policies that target the components generating the largest catastrophic costs.
|
v3-fos-license
|
2020-09-10T13:57:50.521Z
|
2020-09-10T00:00:00.000
|
221568899
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ejnmmires.springeropen.com/track/pdf/10.1186/s13550-020-00689-z",
"pdf_hash": "b25a950449342c0d67e55968d32a1f1fe91fbfca",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1074",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "b25a950449342c0d67e55968d32a1f1fe91fbfca",
"year": 2020
}
|
pes2o/s2orc
|
The feasibility of [18F]EF5-PET/CT to image hypoxia in ovarian tumors: a clinical study
Rationale Evaluation of the feasibility of the [18F]EF5-PET/CT scan in identifying hypoxic lesions in ovarian tumors in a prospective clinical setting. Methods Fifteen patients with a suspected malignant ovarian tumor were scanned with [18F]EF5 and [18F]FDG-PET/CT preoperatively. The distribution of [18F]EF5-uptake, total intraabdominal metabolic tumor volume (TMTV), and hypoxic subvolume (HSV) were assessed. Results [18F]EF5-PET/CT suggested hypoxia in 47% (7/15) of patients. The median HSV was 87 cm3 (31% of TMTV). [18F]EF5-uptake was detected in primary tumors and, in four patients, also in intra-abdominal metastases. The [18F]EF5-uptake in cancer tissue was low compared to physiological excretory pathways, complicating the interpretation of PET/CT images. Conclusions [18F]EF5-PET/CT is not feasible in ovarian cancer imaging in the clinical setting due to physiological intra-abdominal [18F]EF5-accumulation. However, it may be useful as a complement to FDG-PET/CT.
Introduction
Ovarian cancer (OC) is the most lethal gynecological malignancy, and the majority of patients are diagnosed at an advanced stage [1]. Although OC is initially chemosensitive, most women experience multiple and finally chemoresistant relapses. The survival odds have not markedly improved despite extensive research, and completeness of surgery is still a major prognostic factor [2].
The presence of hypoxic regions in solid tumors is associated with a poor prognosis for many cancer types [3][4][5]. Hypoxia-mediated chemoresistance is also the greatest clinical challenge in OC [6,7].
There is a considerable need for non-invasive imaging of tumor hypoxia, since it provides additional information that could be integrated into treatment strategies [8]. Such a method can improve therapeutic outcomes by predicting chemoresistance and selecting potentially treatment-resistant tumors for targeted surgery. 18F-nitroimidazolpentafluoropropylacetamide ([18F]EF5) is one of the most extensively investigated and clinically tested tracers of tissue hypoxia [9,10]. [18F]EF5 belongs to the nitroimidazole group and has considerable membrane permeability and the capability to accumulate in viable hypoxic, though not in apoptotic or necrotic, cells [11][12][13].
Since there are no systematic data evaluating the eligibility of hypoxia imaging among patients with ovarian malignancy, we conducted this prospective clinical study to evaluate the feasibility of the [18F]EF5-PET/CT scan in identifying hypoxic lesions in ovarian tumors.
Study population
This prospective non-randomized study was conducted at Turku University Hospital, Finland, between November 2017 and June 2019. Patients between 38 and 79 years of age with an ovarian tumor were included if they were not pregnant or nursing and had no history of previous malignancies.
Ethical approval was obtained from the institutional review board (18.10.2016 §443), and all subjects signed an informed consent form (ClinicalTrials.gov identifier: NCT04001023).
A whole-body contrast-enhanced [18F]FDG-PET/CT and an [18F]EF5-PET/CT of the abdomen were performed on separate days preoperatively. The PET/CT images were then evaluated by a nuclear medicine specialist and a gynecological oncologist to assess the distribution of the cancer and to determine regions of suspected hypoxia in the intraabdominal tumor load for targeted biopsies for future research.
PET/CT scanning procedure
The PET/CT studies were performed with a digital PET/CT scanner (Discovery MI; General Electric Medical Systems, Milwaukee, WI, USA), which combines a 128-slice CT with 3D PET imaging capability. The PET imaging field of view (FOV) was 70 cm in diameter and 20 cm in axial length. To obtain attenuation correction for the 511 keV photon distribution, a transmission scan was performed using a low-dose (noise index 30, automatic 3D current modulation, 10-120 mAs, and 120 kVp) CT protocol.
The patients received an intravenous injection of 370 MBq of [18F]EF5. A static emission scan was acquired 180 min after the tracer injection to cover the entire abdomen (3 bed positions, 7.5 min/bed). The patients voided prior to the scan. The sinogram data were corrected for dead time, decay, and photon attenuation and reconstructed in a 256 × 256 matrix. The scans were performed in a random order depending on the availability of [18F]EF5 and the camera. The mean interval between scans was 2 (range 1-7) days. A hypoxic voxel was defined using a threshold tumor-to-gluteus-maximus-muscle ratio (TMR) for [18F]EF5-uptake of 1.5, based on earlier experience [14]. The hypoxic subvolume (HSV) was defined as the sum volume of all lesions with TMR over 1.5.
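The hypoxia criterion above (TMR > 1.5, with HSV as the summed volume of hypoxic lesions) amounts to a simple calculation. In the sketch below, the uptake values, reference muscle uptake, and lesion volumes are hypothetical, not patient data.

```python
import numpy as np

muscle_uptake = 1.2                                  # gluteus maximus reference (assumed)
lesion_uptake = np.array([2.1, 1.5, 2.6, 1.0])       # lesion uptake values (assumed)
lesion_volume = np.array([40.0, 25.0, 60.0, 15.0])   # lesion volumes in cm^3 (assumed)

tmr = lesion_uptake / muscle_uptake  # tumor-to-muscle ratio per lesion
hypoxic = tmr > 1.5                  # threshold from earlier experience [14]
hsv = lesion_volume[hypoxic].sum()   # hypoxic subvolume
print(f"TMR: {tmr.round(2)}, HSV = {hsv} cm^3")
```

With these made-up numbers, two of the four lesions exceed the threshold and their volumes sum to the HSV.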
Statistical analyses
Statistical analyses were performed using JMP Pro 13 software from SAS. Continuous variables were compared using a Wilcoxon rank-sum test. A non-parametric Spearman rank correlation test was used to evaluate the association between SUVmax values. Two-tailed P values < 0.05 were considered statistically significant.
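The two tests named above can be reproduced with SciPy (whose `ranksums` function implements the Wilcoxon rank-sum test); the SUVmax values below are invented solely to demonstrate the calls.

```python
from scipy.stats import ranksums, spearmanr

suvmax_ef5 = [1.8, 2.1, 1.6, 2.4, 1.9, 2.2, 1.7]   # hypothetical [18F]EF5 SUVmax values
suvmax_fdg = [6.2, 7.9, 5.4, 9.1, 6.8, 8.3, 5.9]   # hypothetical [18F]FDG SUVmax values

stat, p = ranksums(suvmax_ef5, suvmax_fdg)          # Wilcoxon rank-sum: compare two groups
rho, p_rho = spearmanr(suvmax_ef5, suvmax_fdg)      # Spearman rank correlation
print(f"rank-sum p = {p:.4f}, Spearman rho = {rho:.2f}")
```

Because Spearman's test operates on ranks only, these two (deliberately co-monotone) made-up series give a rho of exactly 1.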
Patients' clinical and imaging characteristics are presented in Table 1. Our study included two patients with non-malignant tumors (patients no. 8 and 11), which showed no [18F]EF5-uptake. The physiological uptake of [18F]EF5 in the gall bladder/bile, small intestine, and urinary bladder was notably higher than in the tumors (Fig. 3).
Demonstrative EF5- and FDG-PET/CT images of a patient with advanced ovarian cancer are presented in Fig. 4.
Discussion
Hypoxia is a common phenomenon in cancer, with 50-60% of solid tumors containing hypoxic regions [15]. While hypoxia has a well-established role in promoting hematogenous metastases of cancer cells [16], hematogenous spread is rare in OC at the time of diagnosis [17]. It should also be noted that the role of hypoxia in the transcoelomic spread to the peritoneum and omentum (common in OC) has not been widely investigated. In our study, [18F]EF5-PET/CT suggested hypoxia in half of the patients, and the distribution of [18F]EF5-uptake was variable. [18F]EF5-uptake was detected mainly inside the ovarian tumor and less often in metastases. One preclinical study suggested that a hypoxic environment induces omental/peritoneal metastases [18]. Another study [19], which included two OC patients, detected EF5-uptake and severe hypoxia in peritoneal carcinosis biopsied laparoscopically promptly after the injection of EF5. Our cases with widespread peritoneal carcinosis typically had several [18F]FDG-avid areas, but only one patient had an [18F]EF5-avid peritoneal lesion.
Previous hypoxia imaging studies have been conducted mostly on solid, locally advanced tumors such as head and neck and lung cancers [3,5,14,20,21], where cancer cells rely widely on glycolysis and reduce their respiration regardless of tissue oxygenation level [22]. Unlike OC, head and neck and lung cancers are isolated tumors that are not surrounded by physiologically EF5-affine tissues. Our study was prospective, and two tumors eventually proved to be benign. Nevertheless, we consider it important to present them, especially as they showed no EF5-uptake.
Previously, two excretory paths of the highly lipophilic [18F]EF5 tracer have been demonstrated [23,24]. In the latter study, it was assumed that, owing to slow biliary excretion of the tracer, only small amounts of activity would be seen in the small intestine. However, our study revealed excessive [18F]EF5-uptake in the bile and small intestine. In contrast to supradiaphragmatic tumors, this phenomenon limits the assessment of tumors presenting weak [18F]EF5-uptake. Metastases located near the intestine, in particular, may easily be mistaken for physiological uptake and remain unnoticed.
Conclusion
Non-invasive hypoxia imaging with [18F]EF5-PET/CT is possible, but its clinical use is constrained by the weak tumor uptake of the tracer compared with the non-specific uptake in excretory organs. In OC, [18F]EF5-PET/CT could be useful as a complement to FDG-PET/CT with the intent to identify high-risk patients. The role of hypoxia in OC is being intensively studied, and [18F]EF5-PET/CT forms an attractive tool for patient stratification.
|
v3-fos-license
|
2021-09-28T01:09:54.262Z
|
2021-07-06T00:00:00.000
|
237832018
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1996-1073/14/14/4079/pdf",
"pdf_hash": "0cd3b34d2687dae9273836bd52a5e9bda34cb089",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1076",
"s2fieldsofstudy": [
"Geology"
],
"sha1": "1fd8053498c5114b6883aae8911772417e714f37",
"year": 2021
}
|
pes2o/s2orc
|
Machine Learning—A Review of Applications in Mineral Resource Estimation
: Mineral resource estimation involves the determination of the grade and tonnage of a mineral deposit based on its geological characteristics using various estimation methods. Conventional estimation methods, such as geometric and geostatistical techniques, remain the most widely used methods for resource estimation. However, recent advances in computer algorithms have allowed researchers to explore the potential of machine learning techniques in mineral resource estimation. This study presents a comprehensive review of papers that have employed machine learning to estimate mineral resources. The review covers popular machine learning techniques and their implementation and limitations. Papers that performed a comparative analysis of both conventional and machine learning techniques were also considered. The literature shows that the machine learning models can accommodate several geological parameters and effectively approximate complex nonlinear relationships among them, exhibiting superior performance over the conventional techniques.
Introduction
Mineral resources are indispensable to the sustenance of modern civilization [1,2]. They play essential roles in socioeconomic development, industrial processes, manufacturing of modern technologies, and construction of modern transportation systems [2][3][4][5]. A mineral resource is commonly defined as "a concentration of naturally occurring material in or on the Earth's crust in such form and amount that economic extraction of a commodity from the concentration is currently or potentially feasible" [6][7][8]. Evaluation of these resources (mineral resource estimation) is a crucial and challenging task in every mineral exploration and mining project, irrespective of size, commodity, and deposit type [9,10]. Mineral resource estimation is performed to determine the quantity and quality of a mineral deposit and to establish confidence in its geological interpretation, and it requires careful and detailed consideration of high spatial variability and uncertainty associated with geological formations [11]. Therefore, a reliable mineral resource estimate is critical to the success of every mining project.
Mineral resources are subdivided into inferred, indicated, and measured categories based on increasing geological confidence and knowledge, as illustrated in Figure 1 [8]. The estimation of a mineral resource is followed by a mineral reserve estimation, which is carried out to determine the tonnage and average grade of a mineral deposit that is economically and technologically feasible to mine [12]. Mineral resource estimation underlies the generation of mineral reserve estimates. A mineral reserve estimate establishes the mineable portion of a resource and forms the foundation for economic analysis of a mineral deposit as well as the future potential of an operating mine. The accuracy of a reserve estimation is essential to the quality of the geological interpretation [13,14]. It is also vital to mine planning and design, including the utility of short-term and long-term mine plans. Moreover, the estimation accuracy is key to mining decisions, such as capital allocation, operating policy, depletion rate, and depreciation charges [13,15,16]. Therefore, an accurate reserve estimation is critical to the feasibility, sustainability, and daily/future operations of a mining project. This phase of the estimation process is also referred to as ore reserve estimation and grade estimation [17]. It should be noted that mineral resource or reserve estimates are not the only factors that determine the extractability of a mineral resource; there are other deciding factors, such as economic, environmental, climatic, and social restrictions [6]. Mineral reserves are classified into probable and proved reserves. Figure 1 shows a general classification of exploration results based on the levels of confidence in geological knowledge and technical and economic considerations about the deposit as established by the Australasian Code for Reporting of Identified Mineral Resources and Ore Reserves (The JORC Code) [17].
There are two concepts underlying reserve estimation: the concept of extension, where attributes of a sample are extended to blocks to be estimated; and the concept of error estimation, where the validity of an estimation method is assessed based on the error involved [11]. The methods utilized to perform the estimate are important, as they can influence the reliability and accuracy of the estimate. Several estimation methods have been proposed and implemented in the literature. These methods are largely categorized into geometric and geostatistical estimation methods [11,12], and they are termed conventional techniques in this paper. The geometric techniques (e.g., polygonal, triangular, random stratified grids, and cross-sectional methods) are simple and require few input parameters and are often applied at the early stages of a mineral project or to verify the results of the sophisticated estimation methods [11]. However, the geostatistical (e.g., kriging, inverse distance weighting, and conditional simulation) techniques are more sophisticated. The conventional techniques have some inherent limitations. Most notably, they tend to perform poorly with highly heterogeneous datasets, overestimate or underestimate resources, and require significant manual processing [18,19]. Because of these shortfalls, currently, machine learning (ML) methods are being implemented in mineral resource estimation [20][21][22][23][24].
ML techniques are algorithms capable of learning and modeling complex nonlinear patterns in a large dataset [25,26]. Since 1993, many authors have been exploring the potential of ML in resource estimation, resulting in several research publications. Even though the implementation of artificial intelligence and autonomous technologies in the mining industry began decades ago [26,27], it was not until 1993 that ML applications in mineral resource estimation gained enormous research interest. Zhang et al. [28] noted that ML improves resource estimation in the following ways: (i) samples that are rejected in conventional resource estimates because they do not satisfy all quality control requirements can be used provided that the geological descriptions and measurements are reliable; and (ii) resource estimation block models can be constructed using fewer assays and more geology, leading to a reduction in operational costs. Additionally, the ML-based resource estimation approach is significantly cheaper and faster than conventional resource estimation [28]. In addition, ML can modernize hypothesis-testing and geological modeling, contributing to the understanding of various deposit types estimation [28]. Moreover, ML techniques can be employed to address operational challenges and improve safety in different sectors of the mining industry, including mineral prospecting and exploration, mineral evaluation, mine planning, mine scheduling, equipment selection, underground and surface equipment operation, drilling and blasting, mineral processing, and mine reclamation [26,[29][30][31].
In addition to assessing the accuracy of different ML and conventional estimation models, these papers examine how various model variables, lithology, and data partition affect model performance. Many of these papers are spread across various journals and research databases. Therefore, it is imperative that these works be consolidated, examined, and compared to form a coherent piece that can inform future research and serve as a reference for interested resource specialists. Our objective is for this paper to provide industry practitioners with up-to-date knowledge of these emerging techniques and to guide their choice of technique for different estimation tasks. It should also help researchers identify new ideas and areas requiring further scientific examination, as the review highlights some limitations and future trends of ML applications in mineral resource estimation.
The remaining part of this paper is divided into five sections. Section 2 outlines the review methodology. Section 3 discusses conventional resource estimation approaches, including geometric and geostatistical methods. Section 4 reviews relevant machine learning techniques that are used for resource estimation. Section 5 presents discussions and highlights key emerging issues that could be the focus of future studies. Section 6 covers concluding remarks.
Review Scope
We conducted an extensive literature search to identify relevant peer-reviewed publications indexed in major scientific research databases, such as Web of Science, Google Scholar, Scopus, and ScienceDirect. We used keywords to limit the search scope, including mineral resource estimation, ore grade estimation, reserve estimation, machine learning, soft computing, neural networks, mineral deposit, and support vector machines. Boolean operators (AND, OR, and NOT) and strings (e.g., 'reserve estimate') were adopted to improve the search results. Another search strategy employed was snowballing (e.g., forward and backward snowballing), where initial search results led to the discovery of more papers [32,33].
The search resulted in a plethora of journal, conference, and media publications on mineral resource estimation methods. The result was narrowed solely to peer-reviewed journals, except in a few instances where peer-reviewed conference papers and a Ph.D. dissertation were considered. Only papers published in English have been reviewed, though the search results, especially in Google Scholar, included papers published in other languages. Initial paper selection and review involved content analysis of the title, abstract, and conclusion of each paper to determine whether it fell within our review criteria. The review criteria (see Figure 2) are: the paper must be published in a peer-reviewed journal, and the content must be related to mineral exploration, especially resource estimation and machine learning. Papers satisfying these criteria were reviewed thoroughly with critical attention to the methods used, the algorithms adopted and their implementation, the findings, conclusions, and recommendations. The review covers peer-reviewed publications on machine learning applications in mineral resource, mineral reserve, ore reserve, and grade estimations. For review purposes, we assumed that these different estimation groups all deal with the assessment of the occurrence, concentration, and tonnage of a mineral in a specific geological location with varying degrees of confidence; these terms are used interchangeably in this article. This general definition affords the flexibility to consider a wide range of relevant publications relating to machine learning and mineral estimation. A conceptual framework of the review process and strategies is illustrated in Figure 2. As indicated in Figure 2, the review process starts with the analysis of search results, which is a collection of papers resulting from a keyword search in various academic databases. Based on the research area/field, the result was streamlined, discarding papers that were not related to the subject matter.
Subsequently, the remaining papers were further divided into two main categories: rejected papers and accepted papers. Each paper goes through three consecutive decision questions (language, abstract, and journal type) to determine its category. A paper must pass all three decision stages to be accepted; otherwise, it is rejected. Snowballing is applied to papers that satisfy the language and abstract stages. Following that, the accepted papers go through a detailed content review.
Review Summary
The search results included 131 publications. Of these, 51 related to machine learning applications in mineral resource estimation, while the remaining references covered conventional mineral resource estimation techniques and general knowledge about machine learning. The publication period is limited to 1993-2020; however, a few recent articles published in 2021 were also reviewed. Notable journals from which the search results were retrieved include Natural Resources Research, Mathematical Geology, Computers & Geosciences, Computational Geosciences, and the Arabian Journal of Geosciences. Figure 3 shows that the majority of the 51 ML papers reviewed have been cited multiple times, indicating recognition of these papers by scholars in this field. To assess the research trend in this field over the years, the 51 papers were categorized by year of publication. Figure 4 shows the distribution of ML-based mineral resource estimation research from 1993 to 2020. It can be observed from Figure 4 that the number of papers per year has experienced two cycles of increase and decrease since 1993, i.e., from 1993 to 2014 and from 2015 to 2019. The year 2020 appears to mark the beginning of another surge, reflecting the increasing research interest in ML techniques in the extractive industry. It is interesting to note that the artificial neural network (ANN) is the most applied technique, followed by the support vector machine (SVM); this is likely due to the strong capabilities of these techniques in pattern recognition and modeling complex systems. The remaining techniques are the ensemble super learner (Ensemble), inverse distance weighted and artificial neural network (IDW-ANN), genetic algorithms (GA), k-nearest neighbor algorithm (kNN), support vector regression (SVR), random forest (RF), relevance vector machines (RVM), Gaussian process (GP), and a new machine learning-based algorithm (GS-Pred).
These ML techniques allow the modeling of physical features in complex systems without requiring explicit mathematical representations or exhaustive experiments, unlike geostatistical methods. Thus, ML has the potential to provide a more accurate and efficient solution in estimating mineral grades. Table 1 details the type of data used for the reviewed ML techniques. The vast majority (more than 80%) were based on field data obtained from exploration drilling (drill cores) or trenching. The significant usage of field data shows the industrial applicability of the proposed ML techniques, suggesting that these techniques can be incorporated in resource estimation tools in the industry. Usage of historical, laboratory, and simulated data account for less than 20% of the implementation. Usually, demonstration using laboratory data involves the acquisition of rock sample images that are subsequently processed with machine vision algorithms. Figure 6 demonstrates the distribution of deposit types analyzed using ML techniques. Interestingly, iron ore and copper deposits are the most evaluated mineral deposits with ML techniques, followed by gold deposits. A summary of the various ML techniques applied to estimate different deposits is presented in Table 2. Table 1. Summary of implementation of machine learning techniques.
Implementation                              Count
Historical and field data                       1
Laboratory-scale data                           1
Experimental and laboratory-scale data          2
Historical data                                 3
Simulated and field data                        3
Field data                                     42

In Table 2, Ensemble is the ensemble super learner, IDW-ANN is the inverse distance weighted and artificial neural network, GA is the genetic algorithms, kNN is the k-nearest neighbor algorithm, SVM is the support vector machines, SVR is the support vector regression, RF is the random forest, RVM is the relevance vector machines, GP is the Gaussian process, and GS-Pred is a new machine learning-based algorithm proposed by Zhang et al. [28].
Conventional Resource Estimation Techniques
In this paper, we referred to conventional resource estimation techniques as geometric and geostatistical methods. These are estimation techniques that are widely applied in mineral deposit evaluation. They have been in use in the mining industry for a long time, and they form the basis of most mineral deposits being exploited today.
The geometric technique includes classical polygon methods, square blocks, rectangular uniform blocks, triangular blocks, and polygonal blocks. These methods, particularly the polygonal approach, have been employed to estimate ore for a wide spectrum of mineral deposits [34,35], and they are most useful at the initial stage of exploration when collated data is relatively small [35]. The basis of this technique is that grades are allocated a definite area of influence generated by constructing the perpendicular bisectors between adjacent samples or intersections to obtain a regional estimate [36,37]. They are conceptually simple, fast, able to decluster irregular data, and adaptable to narrow orebodies, but may not be able to model thick and nontabular orebodies [36].
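The polygonal idea can be illustrated numerically. Rather than constructing exact perpendicular-bisector polygons, this sketch approximates each sample's area of influence on a grid by nearest-neighbour assignment; all coordinates and grades are invented for illustration.

```python
import numpy as np

samples = np.array([[10, 10], [40, 15], [25, 40]], dtype=float)  # sample x, y (made up)
grades = np.array([2.0, 1.2, 3.5])                               # sample grades, g/t (made up)

# Grid over the deposit extent (1 m x 1 m cells, 50 m x 50 m area).
xs, ys = np.meshgrid(np.arange(0, 50), np.arange(0, 50))
cells = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

# Assign each cell to its nearest sample: this partition is exactly the set of
# regions bounded by the perpendicular bisectors between adjacent samples.
d2 = ((cells[:, None, :] - samples[None, :, :]) ** 2).sum(axis=2)
nearest = d2.argmin(axis=1)

# Area-weighted average grade (tonnage-weighted at constant density/thickness).
areas = np.bincount(nearest, minlength=len(samples))  # cells per polygon
avg_grade = (grades * areas).sum() / areas.sum()
print(f"areas: {areas}, average grade = {avg_grade:.2f} g/t")
```

Each sample's grade is thus extended over its whole area of influence, which is what makes the method fast but unable to capture within-polygon grade variation.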
Geostatistics is a subdivision of statistics concerned with the prediction of random variables associated with spatial and spatiotemporal datasets [38,39]. It has a set of statistical methods that are widely applied in the mining industry, especially for ore grade prediction in mining operations. Other earth science disciplines that apply geostatistics are petroleum, hydrology, and agriculture. Among the various geostatistical techniques, inverse distance weighting (IDW) and kriging are by far the most common method applied in mineral resource estimation [40][41][42].
Inverse Distance Weighting (IDW)
The IDW methods use the attributes of a known point or sample to interpolate the attributes of an unknown point or block with a weighting factor. According to Chen et al. [43], the assumption is based on the similarity principle: two points tend to have similar properties when they are closer to each other, and the farther apart they are, the weaker the similarity. IDW methods have been successfully implemented to establish ore grades for mineral deposits. Even today, some operating mines use IDW as part of their routine grade control estimation tools due to its low computational cost and easy implementation. Analysis of a survey conducted from 1997 to 1998 indicates that IDW was the most popular grade estimation technique in the Eastern Goldfields of Western Australia [44]. Dominy and Hunt [44] reported that companies are more likely to employ conventional methods than geostatistical methods in both surface and underground mining operations because some conventional methods have "shown to be historically adequate." Other reasons for preferring the conventional techniques relate to company policy or a lack of sufficient data to generate a variogram [44].
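A minimal sketch of the IDW interpolation described above. The sample coordinates and grades are made up, and the power p = 2 is the common default rather than a value prescribed by the cited studies.

```python
import numpy as np

def idw(xy_known, grades, xy_target, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at xy_target from known samples."""
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    if d.min() < eps:                  # target coincides with a sample: return its grade
        return grades[d.argmin()]
    w = 1.0 / d ** power               # nearer samples get larger weights
    return (w * grades).sum() / w.sum()

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # sample locations (made up)
g = np.array([1.0, 3.0, 5.0])                          # sample grades (made up)
print(idw(xy, g, np.array([2.0, 2.0])))                # pulled toward the nearest sample
```

The estimate at (2, 2) lies between the sample grades but closest to the grade of the nearest sample, illustrating the similarity principle.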
Yasrebi et al. [45] applied IDW based on combined variograms to generate block models in the Kahang Cu-Mo porphyry deposit in Central Iran. The result from the combined IDW and experimental variogram revealed that the enriched elemental concentrations for Cu and Mo are associated with the central, NE, and NW parts of the area. Shahbeik et al. [40] also employed IDW to establish the ore and waste boundaries for the Dardevey iron ore deposit in Iran. In certain geological settings, IDW is considered an effective alternative to kriging methods. For instance, correlation analysis of IDW and Ordinary Kriging (OK) performed by Al-Hassan and David [46] in evaluating a gold deposit indicates a strong correlation coefficient of 0.93, suggesting that IDW can be used as a good alternative. Again, when compared with OK in limestone deposits [47,48], IDW showed better results, indicating that IDW can be considered a convenient estimation method for limestone deposits.
Kriging
Kriging has gained enormous recognition in the mining industry and has proven to be a good estimator for mineral resources. Kriging is an estimator designed primarily for local estimation of block grades as a linear combination of available data in or near the block such that the estimate is unbiased with minimal error variance [11,49]. There are many variants of kriging techniques applied in the mining industry, with the most common types being: simple kriging (SK), ordinary kriging (OK), and indicator kriging (IK). Each of the variants differs in underlying assumptions regarding the local or stationary domain mean [11]. Kriging is often associated with the acronym BLUE, meaning the best linear unbiased estimator.
Many scholarly articles have reported kriging, particularly ordinary kriging, as a good estimator for ore reserve estimation [35,[49][50][51]. Daya [52] applied ordinary kriging to classify an iron ore deposit. The cross-validation results of his study showed a correlation between estimated and actual data with a correlation coefficient of 0.773. Further, Daya and Bejari [50] compared the performance of two kriging techniques, simple kriging and ordinary kriging, in copper deposits and found that while simple kriging produced a smoother result, the result obtained from ordinary kriging was more accurate. Studies by Hekmatnejad et al. [53] showed that ordinary kriging also compares well with non-linear kriging techniques (e.g., disjunctive kriging). However, in some instances, disjunctive kriging may outperform ordinary kriging [53]. Other advanced kriging techniques, such as fuzzy kriging and compositional kriging, have also shown reliable results. For example, Taboada et al. [54] modeled and zoned a quartz deposit in Sierra del Pico Sacro, Spain, into four different quality grades (silicon metal, ferrosilicon, aggregate, and kaolin) using compositional kriging. The result was considered satisfactory as it matched the geological reality of the deposit.
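As a concrete illustration of the ordinary kriging estimator described above, the following sketch solves the OK system for a single point under an assumed exponential variogram. The sample locations, grades, and variogram parameters (sill 1.0, range 15) are invented for illustration, not taken from any cited deposit.

```python
import numpy as np

def gamma(h, sill=1.0, rng=15.0):
    """Exponential variogram model (assumed parameters)."""
    return sill * (1.0 - np.exp(-3.0 * h / rng))

xy = np.array([[0.0, 0.0], [12.0, 0.0], [0.0, 12.0], [12.0, 12.0]])  # samples (made up)
z = np.array([2.1, 1.4, 3.0, 2.2])                                   # grades (made up)
x0 = np.array([5.0, 5.0])                                            # point to estimate

# Ordinary kriging system: variogram matrix bordered by ones, with a Lagrange
# multiplier enforcing that the weights sum to one (the unbiasedness condition).
n = len(xy)
H = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
A = np.ones((n + 1, n + 1))
A[:n, :n] = gamma(H)
A[n, n] = 0.0
b = np.ones(n + 1)
b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))

sol = np.linalg.solve(A, b)
weights, mu = sol[:n], sol[n]
estimate = weights @ z
print(f"weights sum to {weights.sum():.3f}, estimate = {estimate:.2f}")
```

The constraint that the weights sum to one is exactly what makes the estimator unbiased without knowing the domain mean, and minimizing the error variance under that constraint is what earns kriging the BLUE label mentioned above.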
Machine Learning Techniques
The increased interest in machine learning (ML) in recent years is reflected in the growing number of scholarly publications about its application in geoscience. ML techniques cover a broad spectrum of geoscience applications, ranging from identifying geochemical anomalies to evaluating mineral resources. ML techniques are not new in the extractive industry; however, their application has surged during the last decade due to promising results on complex problems in the industry. ML is the study and application of computer algorithms to build intelligent systems that improve automatically through experience without being explicitly programmed [25]. It is classified as a subfield of artificial intelligence (AI), which is the science and engineering of making intelligent machines. ML applies computer algorithms to analyze and learn from data and to decide or predict outcomes in various fields, depending on the structure of the data being analyzed. ML models are categorized as supervised learning, unsupervised learning, and reinforcement learning [55]. Each of these classes is further subdivided, based on the nature of the problem being solved, into corresponding learning algorithms or applications. In practice, there are many ML algorithms applied in engineering fields; however, only those commonly applied to mineral resource estimation problems are considered in this paper.
A generalized ML implementation process is presented in Figure 7. The first step in the ML model development cycle (problem definition) deals with understanding the problem, characterizing it, and eliciting the knowledge required to acquire the relevant data. The second step (data collection) involves the collection of all relevant data based on the features prescribed in the first step, followed by data preparation, pre-processing, and transformation. Feature selection deals with the automatic or manual selection of the parameters or variables that contribute most to the prediction variable or output of interest. Next, the data is divided into training, validation, and testing sets based on a predefined ratio (data partition). Typically, 60% of the data is used for training, 20% for validation, and 20% for testing. Following that, an ML model is trained, validated, and tested using the partitioned datasets (train model). Model evaluation involves the use of new sample data to re-verify the model performance. The model parameters (e.g., number of training steps, learning rate, initialization values) can be revised until a satisfactory performance is achieved, and then the model can be deployed for prediction. Prominent among the ML methods (see Figure 8) that have been employed to estimate mineral resources are artificial neural networks (ANN), support vector machines (SVM), random forests (RF), Gaussian processes (GP), and fuzzy set theory. These models have been successfully applied in evaluating different mineral deposits, including limestone, gold, and copper. The following subsections review the application of these algorithms to mineral resource estimation problems. There is extensive documentation in the literature regarding the assumptions, mathematical computation, and architecture of these techniques; thus, this paper focuses largely on their application.
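The data partition step described above can be sketched as a simple shuffled split. The 60/20/20 ratio follows the typical split mentioned in the text; the random seed is arbitrary.

```python
import numpy as np

def partition(n_samples, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle sample indices and split them into train/validation/test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(ratios[0] * n_samples)
    n_val = int(ratios[1] * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = partition(100)
print(len(train_idx), len(val_idx), len(test_idx))  # 60 20 20
```

Note that a purely random split is only a starting point; as discussed later in this paper, spatially correlated drillhole data may call for more careful partitioning.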
Artificial Neural Network (ANN)
ANN is a computational network presenting a simplified abstraction of the human brain. Conceptually, this computational network mimics the operations of biological neural networks to recognize existing relationships in a set of data and predict output values for given input values. It consists of layers of interconnected nodes, which represent artificial neurons [56]. The layers are categorized into three divisions: the input layer (receives the raw data), the hidden layers (process the data), and the output layer (delivers the processed result). The number of layers and neurons (topology) in a network determines the structure of a neural network, or network architecture. Figure 9 illustrates a basic ANN architecture comprising an input layer with three variables, a hidden layer with two neurons, and an output layer with one variable. Further, the neural network layers are transformed into a computational network made up of weights, biases (offsets), and transfer or activation functions that are responsible for data processing. Increasingly, ANNs have infiltrated many engineering and statistical applications involving clustering, regression, forecasting, signal processing, and modeling problems. The wide recognition of ANNs can be attributed to their ability to learn and model complex non-linear relationships and make generalizations after learning from previous data [57]. There are several types and modifications of ANNs in practice today for different problems in geoscience, including multilayer perceptron (MLP), radial basis function (RBF) networks, general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organizing Kohonen maps (SOM), Gaussian mixture models (GMM), and mixture density networks (MDN).
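The basic 3-2-1 architecture in Figure 9 can be illustrated with a minimal forward pass through weights, biases, and a sigmoid activation. The weight, bias, and input values below are arbitrary illustrative numbers, not a trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Forward pass: input -> hidden layer (sigmoid) -> linear output."""
    hidden = sigmoid(W1 @ x + b1)  # two hidden neurons
    return W2 @ hidden + b2        # one output (e.g., an ore grade)

# Arbitrary illustrative parameters for a 3-input, 2-hidden, 1-output network
W1 = np.array([[0.2, -0.1, 0.4],
               [0.7, 0.3, -0.5]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[0.6, 0.9]])
b2 = np.array([0.05])

x = np.array([1.0, 0.5, -0.3])  # e.g., normalized X, Y, Z coordinates
out = forward(x, W1, b1, W2, b2)
print(out)  # one scalar prediction, here approximately 0.99
```

Training would adjust the weights and biases (e.g., by back-propagation) to minimize the error between such predictions and the sampled grades.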
In mineral exploration, ANNs have been widely applied to solving complex geospatial problems, such as the prospectivity of mineral resources [58,59], mapping of mineral resource distribution [60][61][62], classification of mineral deposits [63,64], and mineral recovery [65]. Given their strong ability to manipulate large and complex data structures, reason over imprecise and fuzzy data, and provide adequate and quick responses to new information [66], ANNs are considered a robust alternative to geostatistical methods for evaluating mineral resources and reserves [67][68][69].
Generally, geological information, such as spatial locations and assay results obtained from exploration programs, forms the basis of ANN models in resource estimation. The spatial location of the borehole usually forms the input layer, while the output layer accounts for the ore grade. Wu and Zhou [37] implemented a multilayer feed-forward neural network with a sigmoid activation function to capture spatial distributions of grade based on field assays of borehole locations in a copper deposit. The network architecture comprised an input layer with two variables (borehole coordinates), two hidden layers with 28 neurons each, and an output layer with one variable (ore grade). Al-Alawi and Tawo [66] demonstrated the potential of ANN to determine suitable drilling patterns and an eventual reduction in exploration cost. They developed a multilayer feed-forward ANN-based model with a back-propagation algorithm to estimate point grades for a bauxite deposit. The model, which showed reasonable agreement with kriging, was applied to unsampled points to determine feasible areas for infill drilling. Further, Tahmasebi and Hezarkhani [70] employed a modular feed-forward neural network to estimate the grade of an iron ore deposit, with improved performance compared to ordinary kriging and conventional multilayer perceptron neural networks. In a slate deposit, Matías et al. [71] determined the quality of the slate using regularization networks (RN), multilayer perceptron (MLP), and radial basis function (RBF) networks and compared the accuracy of the models with kriging.
Studies have also gone beyond the usage of borehole locations as input variables to incorporate other relevant geological attributes, such as lithology and alteration, obtained during exploration. Ore deposits with varied grade attributes (i.e., an output layer with multiple nodes) can also be modeled with ANN. For instance, Chatterjee et al. [72] modeled a limestone deposit using a feed-forward neural network where the input layer consists of spatial location (X, Y, and Z coordinates) and lithological information, and the output layer of four grade types (silica (SiO2), alumina (Al2O3), calcium oxide (CaO), and iron oxide (Fe2O3)). The ANN model showed superior performance when compared with ordinary kriging. Kaplan and Topal [73] proposed a grade estimation technique that combines multilayer feed-forward neural network (NN) and k-nearest neighbor (kNN) models to estimate the grade distribution within a mineral deposit (see Figure 10). The models were created using lithology, alteration, and sample locations (easting, northing, and elevation) obtained from the drill hole data. The proposed approach explicitly maintains pattern recognition over the geological features and the chemical composition (mineral grade) of the data. Before the estimation of grades, rock types and alterations were predicted at unsampled locations using the kNN algorithm. The results showed that the inclusion of the geological information (lithology and alteration) as input parameters improved the model, which had a mean absolute error (MAE) of 0.507 and a coefficient of determination (R2) of 0.528, whereas the ANN model that uses only the coordinates of sample points as input yielded an MAE of 0.862 and an R2 of 0.112. More recently, fuzzy uncertainties associated with geological data are being modeled with hybrid neural-fuzzy algorithms [74][75][76][77][78] to quantify uncertainties in evaluating mineral inventory parameters.
Similar ANN-based mineral estimation models include a wavelet neural network (WNN) for a copper deposit [79], a recurrent neural network (RNN) for an iron ore deposit [80], a Kalman learning algorithm (a modified back-propagation neural network) for a lead (Pb) and zinc (Zn) deposit [81], a local linear radial basis function (LLRBF) neural network for a phosphate deposit [82], and a radial basis function (RBF) network for an offshore placer gold deposit [83,84]. Table 3 shows a summary of some ANN-based resource estimation models.
For example, a combined ANN and kNN model using geological information (lithology and alteration) and sample locations (X, Y, and Z) from 123 drill holes in a gold deposit predicted grades on a test dataset with a mean absolute error (MAE) of 0.507 and an R2 of 0.528 [73].
Support Vector Machines (SVM)
Another machine learning technique gaining significant attention in resource estimation is the support vector machine. SVMs are supervised learning models that construct a hyperplane in a high-dimensional feature space to separate different classes through non-linear mapping [85]. Given a set of training examples belonging to different categories, an SVM model constructs a separating boundary with the widest possible gap between the data categories. The model then maps new examples into the same space and predicts their category from the side of the boundary on which they fall. SVMs are becoming popular for solving classification, regression, and outlier detection problems. According to Chatterjee and Bandopadhyay [86] and Xiao-li et al. [87], SVMs function much like ANNs, but with the added advantages of achieving a global minimum and efficiently handling over-fitting problems; hence their growing prominence in mineral exploration applications. Examples of SVM applications in mineral exploration include: mineral prospectivity [88,89], mapping of potential mineralized zones [90][91][92], fluid inclusion modeling [93], identification of exploration drill hole locations [94], alteration zone separation [95], and slate deposit characterization [96].
A further application of SVM in mineral exploration is resource estimation. Das Goswami et al. [97] employed support vector regression (SVR) to model an iron ore deposit. They assessed the grade of the deposit and compared the model with Gaussian process regression and ordinary kriging. The model output (Fe grade) was determined using spatial coordinates (X, Y, and Z) and ten lithological features as input. As demonstrated by Chatterjee and Bandopadhyay [86], SVM has also proven to be sufficient for evaluating placer deposits, which are often sparsely sampled, increasing the complexity of the estimation process. Using exploration data from a platinum deposit, the authors adopted least squares support vector regression (LS-SVR) and combined neighboring borehole samples as the model input, instead of the conventional spatial coordinates, to estimate the ore grade. The improved estimation results of the model compared with conventional SVM and ordinary kriging were attributed to the compositing of neighboring samples, and further improvement was expected with an increase in neighbor samples. Similarly, Zhang et al. [98] applied least squares support vector regression to estimate the ore grade of a seafloor hydrothermal sulfide deposit. The model demonstrated superior results in comparison to inverse distance weighting (IDW), ordinary kriging (OK), and a back-propagation (BP) neural network, and showed robust predictive and generalization ability.
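A minimal SVR grade regression in the spirit of these studies can be sketched with scikit-learn, assuming that library is available. The synthetic drill-hole coordinates, the grade function, and the hyper-parameters (C, epsilon) are illustrative only; real studies would tune them by cross-validation.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic drill-hole data: (X, Y, Z) coordinates mapped to an ore grade
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 3))
grade = 2.0 + 0.01 * coords[:, 0] + 0.1 * np.sin(coords[:, 1] / 10.0)

# Illustrative hyper-parameters; in practice these would be tuned
model = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma="scale")
model.fit(coords, grade)
pred = model.predict(coords[:5])
```

The RBF kernel performs the non-linear mapping into a high-dimensional feature space, where an epsilon-insensitive linear regression is fitted.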
During exploration drilling, there are instances where drill cores cannot be recovered, resulting in missing data and incomplete core samples. This situation complicates the estimation process and may eventually render the result inaccurate. To solve this estimation problem, Zhang et al. [99] incorporated a relevance vector machine (RVM), a modified version of SVM based on a Bayesian treatment [100], together with expected squared distance (ESD) algorithms to determine the missing values and estimate the ore grade. Li et al. [87] also proposed an integrated model of self-adaptive learning-based particle swarm optimization (SLPSO) and support vector regression (SVR) for porphyry copper ore grade estimation. Table 4 presents a summary of SVM-based resource estimation models; in one slate deposit study, for example, the SVM result compared well with kriging, with the advantages of easy interpretation, better control over outliers, and greater sparsity [96].
Random Forest (RF)
Random forest is an ensemble learning technique consisting of a collection of decision trees, each trained on a random training subset, that outputs the mode of the classes in classification problems and the mean prediction in regression problems [101,102]. Each decision tree outputs a class, and the class with the most votes becomes the model's prediction. Since the final output is a combined decision, it most likely outperforms the prediction of any individual tree. The decision trees are constructed by bagging or bootstrap aggregation, meaning the samples are drawn from a subset of the training data with replacement. Figure 11 illustrates a schematic framework of RF with five samples and three variables. In Figure 11, the trees are created in an ensemble by drawing a subset of training samples with replacement, meaning that the same sample can be selected several times, while others may not be selected at all (i.e., the training phase). Then, each tree votes for a class membership, and the membership class with the maximum votes is finally selected (i.e., the classification phase). Studies have shown that RF has several desirable qualities, including handling large data sets efficiently, the ability to detect outliers and noise, the capacity to produce suitable internal estimates of the generalization error, and being computationally cheaper than bagging or boosting [101,103-105]. In addition to its extensive application in areas such as land cover classification [104], groundwater modeling [107,108], and mineral prospectivity [89,109], RF is gaining popularity in predicting the ore grade of mineral deposits. For example, Jafrasteh et al. [110] investigated the potential of an RF-based model for evaluating grades of a porphyry copper deposit and compared the model's performance with other machine learning algorithms (e.g., multilayer perceptron neural networks) and geostatistical techniques (e.g., indicator kriging and ordinary kriging).
The RF-based model was made up of hyper-parameters that were adjusted, with 500 decision trees being sufficient to ensure convergence of the model. Sheng et al. [111] combined laser-induced breakdown spectroscopy (LIBS) and random forest (RF) to classify iron ore samples with 100% prediction accuracy. In another study, O'Brien et al. [112] classified gahnite compositions of the Broken Hill deposit to determine the most prospective Pb-Zn-Ag mineralization in the Broken Hill domain, Australia. The results show that RF is a better technique for compositional discrimination in the Broken Hill domain than linear discriminant analysis. In an exploration study, Schnitzler et al. [113] used RF to assess sodium (Na) concentration in the Matagami mining district of Québec, Canada and illustrated that RF could be an efficient tool for estimating missing or unmeasured geochemical elements in an exploration database. Matin and Chelgani [114] also alluded to the potential of RF to model complex relationships in their study of the gross calorific value of coal samples from 26 US states, where it was applied with satisfactory results. Table 5 summarizes RF-based resource estimation models.
For example, an RF model using emission spectral lines of Si and Ti as input and ore class as output, trained on 300 analytical spectra acquired from 10 classes of iron ores, achieved a prediction accuracy of 97.5% with all spectral data as input and 100% for iron ore samples [111]; another RF model, trained on 533 compositions of gahnite in sulfide-bearing polymetallic rocks, classified gahnite into various schemes with misclassification rates of 1.6, 3.3, and 4.7% [112].
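A minimal sketch of RF-based grade regression, assuming scikit-learn is available; the synthetic coordinates, the depth-dependent grade model, and the number of trees are illustrative choices only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic samples: (X, Y, Z) coordinates and a noisy depth-dependent grade
rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(300, 3))
grade = 1.0 + 0.02 * coords[:, 2] + rng.normal(0.0, 0.05, 300)

# 200 trees is an arbitrary choice; more trees mainly reduce variance
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(coords, grade)
pred = rf.predict(coords[:3])
```

Each tree here is grown on a bootstrap resample of the samples, and the forest's regression output is the mean of the individual tree predictions, as described above.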
Emerging and Hybrid Algorithms
There are other machine learning techniques being applied to mineral resource estimation problems, though they are not widely used compared to the above-mentioned machine learning models. Gaussian processes [115,116], for instance, are popular in geoscience, especially for geochemical anomaly detection [117,118] and mineral prospectivity studies [119,120], but we observed in the literature that they have been applied only once to estimate the ore grade of a deposit [110]. Jafrasteh et al. [110] suggested that GP models gave the best results when the spatial inputs and covariance functions were processed using symmetric standardization (SS) and an anisotropic exponential kernel (AK), respectively. Thus, in their experimental study, they applied a GP with symmetric standardization and an anisotropic exponential kernel to evaluate the grade of a copper deposit. As expected, the GP-SS-AK algorithm performed best among the other GP algorithms, including those using the exponential kernel, the isotropic exponential kernel, symmetric standardization alone, and the anisotropic exponential kernel alone.
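An anisotropic-kernel GP in this spirit can be sketched with scikit-learn's Gaussian process regressor, using one length scale per spatial direction; the data, length scales, and noise level are illustrative, and scikit-learn's RBF kernel stands in for the exponential kernel used in the cited study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Synthetic samples with a simple linear grade trend along X
rng = np.random.default_rng(2)
coords = rng.uniform(0, 50, size=(80, 3))
grade = 0.5 + 0.03 * coords[:, 0]

# Anisotropic kernel: one length scale per spatial direction (illustrative
# values; optimizer=None keeps them fixed instead of tuning them)
kernel = RBF(length_scale=[10.0, 10.0, 10.0])
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-4,
                              normalize_y=True, optimizer=None)
gp.fit(coords, grade)
mean, std = gp.predict(coords[:3], return_std=True)
```

A practical advantage of the GP formulation is that each prediction comes with a standard deviation, giving a direct measure of local estimation uncertainty.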
Recently, researchers have been exploring the potential of incorporating machine learning techniques with other sophisticated soft computing methods (e.g., genetic algorithms, ant colony optimization, fuzzy set theory, etc.). The aim is to synthesize the advantages of these algorithms to build a single algorithm with improved accuracy for resource estimation. Examples of these models include: grade estimation of a copper deposit using self-adaptive learning-based particle swarm optimization and support vector regression (SLPSO-SVR) [87]; a multilayer perceptron neural network optimized by an invasive weed optimization algorithm to predict grades of gold and silver in a gold deposit [121]; a local linear radial basis function neural network trained with a combination of a simultaneous perturbation artificial bee colony algorithm and the back-propagation method to estimate a phosphate deposit [82]; a combination of a multilayer perceptron neural network and a genetic algorithm for an iron deposit [122]; and extreme learning machine (ELM) variants based on hard limit, sigmoid, triangular basis, sine, and radial basis activation functions applied to a gold deposit [19]. Yu et al. [123] optimized iron ore grades using a combination of stochastic simulation, an artificial neural network, and a genetic algorithm. Further, given the potential of computer vision methods, studies such as Chatterjee and Bhattacherjee [124], Patel et al. [125], Patel and Chatterjee [126], Perez et al. [127], and Zhang et al. [128] have also integrated image recognition algorithms with other machine learning methods to classify and predict ore grade. Table 6 presents a summary of some of the emerging ML techniques applied in estimating mineral resources. For example, an ELM model with X, Y, and Z coordinates as input and ore grade as output, trained on 3759 drill holes from a gold deposit, performed better with a sigmoid activation function than the other variants (R2 of 0.9193) [19]; and a machine vision and image classification model, built from 280 image features extracted from iron ore sample images captured on a belt conveyor, showed a satisfactory prediction of iron ore grade (R2 of 0.9402) [125].
Discussion and Future Directions
The continued advancement in computing power has given rise to novel and sophisticated soft computing techniques capable of handling large and complex dimensional data structures that could not be processed in the past. These soft computing techniques include machine learning coupled with big data technologies, creating a new paradigm of powerful resource estimation models. Machine learning applications have infiltrated all fields and industries, from engineering to sociology, agriculture to astronomy. Some scholars even refer to it as the new electricity, driven by data. Geoscience has also witnessed tremendous advances with the onset of ML. There is a consensus among geoscientists that ML algorithms are suitable for geospatial data that often exhibit complex and high spatial variations [129][130][131], and the algorithms can produce superior results for classification and regression problems compared with geostatistical techniques, especially when the relationship is non-linear [132][133][134][135][136][137]. Some of the ML algorithms are universal, adaptive, non-linear, robust, and efficient [129]. Another favorable characteristic of ML is that it allows the discovery of new information or a deeper understanding of existing geospatial data. This is evident in the assertion by Nwaila et al. [138] that the advent of ML made sedimentological data, which were initially collected for qualitative assessment of gold mineralization, more meaningful and contextually relevant. ML-based estimation algorithms can accommodate a combination of several geospatial parameters for grade prediction and ore classification. In addition to the spatial location of drill holes and composite grades, lithological, geochemical, and rock imagery features, which are often overlooked or not accommodated in geostatistical techniques, can be included as input variables of the model [139], resulting in an improved estimation model.
The main appealing features of ML for resource estimation, in contrast to conventional estimation techniques, are that it (i) requires less data pre-processing, (ii) handles complex non-linear relationships, (iii) is relatively cheaper and faster, and (iv) does not assume an underlying spatial distribution and handles incomplete data.
Despite the strong potential of ML and its recent increasing application in the evaluation of mineral resources, geostatistical methods remain the benchmark estimation technique in the mining industry. ML techniques are considered more of a complementary tool for validating results obtained from geostatistical methods than sole estimation techniques for mineral projects. Glacken & Snowden [36] noted that many estimation tasks are still conducted using polygonal techniques regardless of the availability of sophisticated estimation methods. The high preference for geostatistical techniques may be because they have been applied in the industry for a long period, forming the basis for many mine designs in operation today. In addition, their underlying framework of linear correlation of samples and stationarity [97,133,140] makes them less computationally expensive compared to ML techniques. Furthermore, they can be integrated with other statistical concepts, for example, conditional simulation, which is a combination of kriging and the Monte Carlo sampling method [36]. Moreover, many resource engineers have gained comprehensive experience over the years in applications of these methods with reliable results. Thus, they would be more comfortable working with techniques that are widely accepted in the industry. Stakeholders also have confidence in these techniques.
Additionally, most geostatistical algorithms have been automated in modern mineral resource software packages (e.g., Datamine, GEMS, Leapfrog, Micromine, Surpac, Vulcan, etc.) for easy data manipulation and estimation. This allows the estimators to focus more on the data preparation and result interpretation. Although geostatistical techniques are the industry's estimation standard, they possess certain inherent limitations. As mentioned earlier, these methods are generally linear estimators; thus, grades of unknown samples are determined based on second-order statistics and the use of linear correlation or the variogram, which describes the mineralization of a deposit [97]. According to Das Goswami et al. [134], second-order statistics work reasonably well with statistical processes following the Gaussian process. Nonetheless, it is difficult to model complex geological structures with high variability that do not follow the Gaussian process. Tutmez [76] also pointed out that geostatistical methods perform poorly on small data sets (i.e., with not enough data to achieve an acceptable variogram calculation). Therefore, they are not suitable for small deposits or during the initial exploration stage, where data is limited. They also require significant data pre-processing [137]. In such situations, ML techniques can be adopted.
ML techniques are known to sufficiently handle the spatial uncertainty associated with geological data [89,97,133,141] because of their ability to determine the relationship between complex and non-linear input and output variables [97]. ML techniques can learn and map inherent relationships among the data variables. Given enough geological data (such as drill hole coordinates and assay results), ML techniques can learn the relationship between input parameters (coordinates of drilled holes) and output parameters (ore grades). The trained model can then be used to estimate grades for unknown points within the same geological area. In effect, no assumption is made about factors or relationships of ore grade spatial variations, such as linearity between drilled holes [133], as in the case of geostatistical techniques. Several case studies (see Tables 3-6) have illustrated the potential of ML techniques as good estimators of ore grade for different mineral deposits; some studies have done so by comparing their performance with geostatistical techniques.
Results from studies comparing the performance of both model families are mixed. In some cases, geostatistical techniques showed better output and vice versa. In other words, there is no outright best method between the two, and they seem to complement each other, although the ML techniques generally seem to achieve higher accuracy than the geostatistical methods. When Das Goswami et al. [97] compared two ML techniques (general regression neural network (GRNN) and multilayer perceptron neural network (MLPNN)) and one geostatistical technique (ordinary kriging (OK)) in an iron deposit, they found that GRNN exhibits better generalization potential and also provides more accurate predictions than the MLPNN or OK models. MLPNN and OK were also observed to overestimate lower grades and underestimate higher grades, while GRNN showed minor variation between the predicted and actual iron grade. In the Nome gold ore reserve estimation, Dutta et al. [102] demonstrated that SVM produced better estimates compared to ANN and OK. Afeni et al. [142] re-examined grade estimates for an iron ore deposit using MLPNN and OK and observed that the OK model showed superior performance to the MLPNN model. The total resource defined by OK was about 12% lower than that of the conventional method currently used in the mine. Karami and Afzal [101] also evaluated the performance of ANN and IDW on a copper deposit and reported that the IDW method showed less variance, while ANN demonstrated high overestimation and underestimation; thus, in this instance, IDW is a better estimator than ANN. In Kaplan and Topal's [73] study, the NN and kNN model underestimated grades in the 15 ppm to 20 ppm range.
Upon close examination of sample points, they observed that the network could not ignore the effect of discontinuity of lithology in areas where mineralization is structurally controlled and a test point is located near a fault; thus, the model is more suitable for mineralization controlled by lithology than by structure.
It is worth mentioning that ML techniques are not a panacea for all resource estimation problems, as they also have limitations. Jafrasteh et al. [115] indicated that ML techniques are effective only if the training and test datasets have similar distributions. The models tend to perform poorly when the samples are far apart with increasing spatial variation. Thus, the training and test datasets must be carefully partitioned to ensure that they have the same or similar geological characteristics. This problem can be resolved by assigning a higher weight to training samples closer to the test samples [141]. Kapageridis [143] observed that the dimension of the input data influenced the performance of ML techniques, particularly ANNs applied to ore grade estimation. After examining 2D and 3D spaces with varying input configurations, ranging from two to sixteen input dimensions on different deposit types (potash, marl, phosphate, and copper), the author concluded that for ANNs, "there is no globally applicable configuration for all deposit and sampling scheme types, and each deposit and sampling scheme must be considered separately to find the best configuration applicable." The adopted input dimension can either hurt performance when the available data is small or improve it when the available data is large enough. Thus, it can be inferred that there is no one-size-fits-all ML technique for all resource estimation problems; rather, the choice of technique is contingent on the available data and deposit characteristics. Further, we observed that many ML-based grade estimation models are limited only to geological parameters (e.g., using only spatial coordinates). Erdogan Erten et al. [144] corroborate this observation, stating that "ML models provide useful tools to generate spatial estimations of geological features, but they do not consider the spatial dependence among the observations and they primarily use coordinates as predictors.
Thus, many ML models produce visible artifacts in the resulting estimates along the coordinate directions." The authors proposed using ensemble super learner (ESL) to address this weakness.
Again, ML algorithms that focus on finding the best solution for a model tend to overfit or underfit because the optimum model for the training and testing datasets may not necessarily have the best generalization ability [145]. This problem can be addressed using ensemble methods, where multiple learning algorithms are applied with different parameters, and the resulting solutions are averaged to obtain better performance than any of the constituent learning algorithms [146][147][148]. Examples of this learning approach are the Bayes optimal classifier, bootstrap aggregating (bagging), boosting, Bayesian model averaging, bucket of models, and stacking. Chatterjee et al. [148] investigated the performance of genetic algorithm- and k-means clustering-based ensemble neural networks for a lead-zinc deposit. The results showed no significant difference in model performance, which they attributed to the high coefficient of variation and skewness of the data sample. However, Tahmasebi et al. [78] pointed out that optimizing the ANN's parameters and topology could perhaps have improved the result. An earlier study conducted by Tahmasebi and Hezarkhani [149] showed better results when the ANN's parameters and topology were optimized with genetic algorithms.
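Bootstrap aggregation, one of the ensemble methods listed above, can be sketched generically: each base learner is trained on a bootstrap resample of the data, and the individual predictions are averaged. The nearest-neighbour base learner below is purely illustrative, chosen only to keep the sketch self-contained.

```python
import numpy as np

def bagging_predict(X, y, X_new, fit_predict, n_models=25, seed=0):
    """Bootstrap aggregation: train each base learner on a resample
    (drawn with replacement) and average the resulting predictions."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), size=len(y))  # bootstrap sample
        preds.append(fit_predict(X[idx], y[idx], X_new))
    return np.mean(preds, axis=0)

def nn_fit_predict(X_tr, y_tr, X_q):
    """Illustrative base learner: nearest-neighbour value lookup."""
    d = np.linalg.norm(X_q[:, None, :] - X_tr[None, :, :], axis=-1)
    return y_tr[np.argmin(d, axis=1)]

rng = np.random.default_rng(4)
X = rng.uniform(0, 10, size=(100, 2))
y = X[:, 0] + X[:, 1]
pred = bagging_predict(X, y, X[:5], nn_fit_predict)
```

Because each resample omits some samples and repeats others, averaging across the ensemble smooths out the variance of any single base learner.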
Another critical factor observed to influence the accuracy of ML-based estimation models is data partitioning [110,150]. Most studies assume random partitioning to divide the sample data into training (e.g., 60%), validation (e.g., 20%), and testing (e.g., 20%) subsets. However, due to the variation in drillhole samples (in both 2D and 3D) and the erratic distribution of geochemical anomalies, such an approach may bias model performance, as closer holes tend to exhibit more similar features than farther holes. Thus, it is important to consider lithological features and other characteristics inherent to the formation when deciding which data partition regime to adopt. Additionally, instead of using composite values, individual drillholes can be modeled along the z-axis based on core sample intervals, and the result obtained from each local drillhole model can be synthesized as input-output parameters for the global model. This would mimic the spatial distribution along the drillholes, allow the inclusion of more features, produce more realistic models, and improve performance and accuracy.
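One way to avoid the spatial leakage described above is to hold out whole spatial blocks of holes rather than individual samples. The sketch below is a hypothetical minimal version (quantile bins along the x-coordinate stand in for a proper geological or lithological blocking scheme):

```python
import numpy as np

def spatial_block_split(collars, n_blocks=4, test_block=0):
    """Partition drillholes by spatial block (here, quantile bins along x)
    so that train and test sets come from distinct areas, instead of
    randomly mixing neighbouring holes across both sets."""
    edges = np.quantile(collars[:, 0], np.linspace(0, 1, n_blocks + 1))
    block = np.clip(np.searchsorted(edges, collars[:, 0], side="right") - 1,
                    0, n_blocks - 1)
    test_mask = block == test_block
    return ~test_mask, test_mask

# Hypothetical collar coordinates (x, y) for 8 drillholes on a line
collars = np.array([[i * 10.0, 0.0] for i in range(8)])
train_mask, test_mask = spatial_block_split(collars, n_blocks=4, test_block=0)
```

Rotating `test_block` over all blocks gives a spatial cross-validation estimate of performance that is harder to inflate through neighbouring-hole similarity.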
ML is data-driven, requiring large datasets for training, testing, and validation. The enormous volume of data needed to successfully implement ML and derive meaningful results may not be readily available at the beginning of exploration programs or in greenfield projects, and acquiring extra data is time-consuming and expensive. Therefore, ML may be more appropriate for brownfield projects, operating mines, or the reassessment of decommissioned projects. However, this is not to say that ML cannot be applied to small mineral estimation problems, as certain soft computing techniques such as fuzzy set theory [74] are well suited to them. Other notable shortcomings of ML models include overfitting [97,132,143], higher computational running time [134], and the smoothing effect [132].
Considering the long usage of conventional estimation techniques, which have been tried, tested, and modified over the years to achieve reliable estimates, it would be challenging for resource estimators to abandon them for ML methods. Such a switch may result in chaos in the industry; stakeholders would doubt reserve estimates, and financial institutions would be reluctant to fund mineral projects because they may not trust the project value. In terms of resource estimation, ML techniques are not yet mature, and most applications remain in academic settings; it will take time for them to gain industry-wide recognition and acceptance. Studies should therefore focus on integrating traditional/conventional and ML techniques into hybrid algorithms to accelerate the adoption process. It is worth noting that most ANN-based models in practice assume a deterministic approach to modeling non-linear systems [151,152]. However, natural phenomena such as the occurrence of mineral resources do not follow deterministic processes, as they are characterized by stochastic properties with high uncertainty. Consequently, Kaplan and Topal [73] asserted that no matter how well a network is trained to improve prediction, independent stochastic events cannot be predicted by any neural network model. We believe, however, that more advanced ML algorithms such as stochastic neural networks [152], which account for random processes in a system, and deep learning could address the heterogeneity of geological formations in resource estimation to some extent. These novel algorithms should utilize the merits of both conventional grade estimation methods and ML techniques, and should also be able to estimate multiple ores, since there are deposits with multiple mineralization; many current ML-based mineral estimation models focus on only one commodity.
Further, the ML and hybrid algorithms should be incorporated in mining software to make ML more accessible to resource engineers. Some scholars [153,154] are already building ML-based estimation software where the user would only provide sample assay data (e.g., borehole coordinates and ore grade), and the software will analyze and predict grade for unsampled areas.
The inherent heterogeneous nature of mineral deposits, which is evident in exploration datasets with varying mineralogy, mineral content, natural fractures, lithology, and other properties, can be likened to concept drift in machine learning. Concept drift refers to unexpected changes in the underlying data distribution over time [155][156][157]. The concept suggests that as data evolve, the distribution underlying the data is likely to change, resulting in poor and degrading performance of predictive models. Thus, a model must be updated regularly, in real time, as new data are obtained. The erratic distribution of ore grades means that resource estimates are subject to change as more data are gathered during the exploration or production phase. Therefore, ML-based resource estimation models should be able to analyze emerging data in real time to reflect the current grade distribution of a deposit: the model should train, test, and validate each incoming dataset and use the result to update previous estimates. Studies have examined the problem of concept drift and proposed different approaches for detecting and handling it in several fields, including weather forecasting, smart grid analysis, spam filtering, and predictive maintenance [157][158][159][160]. Žliobaitė et al. [155] emphasized the application of Big Data management and automation tools and the need to account for the evolving nature of data collected over time. Further, they recommended moving from adaptive algorithms towards adaptive systems that would automate the full knowledge discovery process, and scaling solutions to meet the computational challenges of Big Data applications. With Big Data applications, mineral exploration data can be organized as data streams rather than static databases to enable online and real-time prediction. Zhukov et al. [157] also proposed a decision tree ensemble classification method based on the random forest (RF) algorithm for addressing concept drift in smart grid analysis. The proposed model compared well with concept drift approaches such as Online Random Forest and Accuracy Weighted Ensemble (AWE). ML techniques such as the lazy learning algorithm, incremental learning, support vector machines, C4.5 classification trees, genetic algorithms, and neural networks have also been employed to address concept drift.
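The drift-handling loop described above (predict on each incoming assay, watch the recent error, and retrain when the distribution appears to have shifted) can be sketched with a toy online estimator. The running-mean "model", the window size, the threshold, and the simulated grade stream are all illustrative stand-ins for a real drift detector and grade model.

```python
import numpy as np
from collections import deque

class DriftAwareMeanModel:
    """Toy online estimator: predicts the running mean grade and resets
    itself when recent errors suggest the grade distribution has drifted
    (a minimal stand-in for real drift-detection algorithms)."""
    def __init__(self, window=10, threshold=0.8):
        self.history = []                     # data seen since last reset
        self.errors = deque(maxlen=window)    # rolling absolute errors
        self.resets = 0
        self.threshold = threshold

    def update(self, grade):
        pred = np.mean(self.history) if self.history else grade
        self.errors.append(abs(grade - pred))
        # Drift signal: the rolling error window is full and its mean is high
        if len(self.errors) == self.errors.maxlen and np.mean(self.errors) > self.threshold:
            self.history, self.resets = [], self.resets + 1  # retrain from scratch
            self.errors.clear()
        self.history.append(grade)
        return pred

rng = np.random.default_rng(0)
model = DriftAwareMeanModel(window=10, threshold=0.8)
# Simulated assay stream whose mean grade jumps from ~2 g/t to ~6 g/t midway
stream = np.concatenate([rng.normal(2.0, 0.2, 100), rng.normal(6.0, 0.2, 100)])
for g in stream:
    model.update(g)
recovered = float(np.mean(model.history))  # estimate after adapting to the drift
```

After the jump, the rolling error spikes, the model discards its stale history, and its estimate re-converges on the new grade regime instead of averaging the two.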
Such concepts can be extended to cover other downstream activities of mineral exploration and exploitation. For instance, after ore reserve estimation, ML techniques should be able to develop block models based on the grade estimate, determine the cut-off grade, propose pit shells (with the geotechnical features of the formation included), perform project valuation [161,162], and design mining sequences and schedules for optimal extraction. All these processes can be automated as an integrated computer program, like the automation of truck haulage, requiring less human intervention and reducing error. Full implementation of ML in resource estimation, coupled with deep learning and other AI technologies such as the internet of things, drones, robotics, and blockchain, would transform mining into the mine of the future being proposed by major stakeholders in the industry.
With the deployment of Big Data management and automation systems in the mining industry, we foresee the successful implementation of advanced ML techniques such as deep learning, which requires enormous data to produce reasonable results. The availability of such datasets would help researchers apply ML techniques efficiently. ML applications in mineral resource estimation are currently focused on evaluating resources during the exploration stage, with little application post-exploration. We envisage that future studies will examine applications during the operational stage of a mine, for example, grade control evaluation and reconciliation in surface and underground mining. In addition, given the recent proliferation of ML techniques, future research needs to determine which algorithms are most robust and appropriate for resource estimation. An industry-accepted ML application standard (an "ML Rubric") could be developed whereby all algorithms are subjected to a set of selection criteria, and every algorithm must obtain a pre-defined passing score before implementation. Factors that can be considered in the ML Rubric are the project goal, nature of the data, ground conditions, alteration levels, geological settings, model interpretability, minimum sample size, minimum features, acceptable error margin, performance metric, and computing resources. Figure 12 illustrates a schematic of the ML Rubric factors, consisting of three main selection categories (algorithm attributes, project characteristics, and implementation process). Ultimately, such a standard would help harmonize ML-based models, eliminate discrepancies, and promote acceptance of ML applications not only in mineral resource estimation but in other sectors of the mining industry as well. It is important to note, however, that a major drawback of ML applications is data limitation.
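Before turning to data limitations, the rubric idea can be sketched as a simple weighted checklist. The criterion names follow the text, but the weights, ratings, and pass mark below are purely hypothetical numbers chosen for illustration:

```python
# Hypothetical weights and pass mark for an "ML Rubric"; a real standard
# would set these through industry consensus, not as fixed constants.
RUBRIC_WEIGHTS = {
    "project_goal_fit": 0.20,
    "data_quality": 0.20,
    "minimum_sample_size": 0.15,
    "model_interpretability": 0.15,
    "acceptable_error_margin": 0.15,
    "computing_resources": 0.15,
}
PASS_MARK = 0.70

def rubric_score(scores):
    """Weighted rubric score in [0, 1]; each criterion is rated 0-1."""
    return sum(RUBRIC_WEIGHTS[k] * scores[k] for k in RUBRIC_WEIGHTS)

# Illustrative ratings for a candidate algorithm on a given project
candidate = {
    "project_goal_fit": 0.9, "data_quality": 0.8,
    "minimum_sample_size": 0.7, "model_interpretability": 0.5,
    "acceptable_error_margin": 0.8, "computing_resources": 0.9,
}
score = rubric_score(candidate)
accepted = score >= PASS_MARK  # algorithm only proceeds if it passes
```

Encoding the rubric this way makes the selection reproducible: two estimators applying the same weights and ratings reach the same accept/reject decision.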
The literature shows that most ML-based mineral resource estimation models were developed using composite values obtained from a few boreholes. Though these models produced satisfactory results, their performance can be improved with access to more data. The performance and accuracy of ML techniques, including deep learning, depend heavily on a large, high-quality dataset partitioned appropriately into training, validation, and testing sets so that each set is representative of the population [31]. It is well known that the functional capabilities of artificial intelligence and ML rest upon a sufficient dataset; however, what constitutes an adequate dataset size is not well defined, as the amount of data required can vary from one project to another. In this regard, studies such as Ganguli et al. [163] have provided recommendations to address ML application challenges peculiar to the mining industry. They recommended a comprehensive understanding of the modeling process before implementation and advised caution when using soft computing tools and software products [31]. Their recommendations also included random partitioning of the data into three subsets (i.e., training, testing, and validation). Further, they suggested that the training subset should contain the highest and lowest values, and that samples should be assigned to the training subset first, followed by validation and testing, during data grouping/segmentation. Additionally, best data collection and processing protocols should be observed during model development to minimize error and ensure the dataset is good enough for its intended use.
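The partitioning recommendation above (random 60/20/20 split, but with the minimum and maximum grades forced into the training subset, which is filled first) can be sketched as follows. The grade values and split fractions are hypothetical; this is an illustration of the recommendation, not code from [163].

```python
import numpy as np

def partition_with_extremes(grades, train_frac=0.6, val_frac=0.2, seed=0):
    """Random split that first forces the minimum- and maximum-grade
    samples into the training set, then fills training, validation,
    and testing in that order, per the recommendation in the text."""
    rng = np.random.default_rng(seed)
    n = len(grades)
    forced = {int(np.argmin(grades)), int(np.argmax(grades))}
    rest = list(rng.permutation([i for i in range(n) if i not in forced]))
    n_train = max(int(n * train_frac), len(forced))
    n_val = int(n * val_frac)
    k = n_train - len(forced)                 # remaining training slots
    train = list(forced) + rest[:k]
    val = rest[k:k + n_val]
    test = rest[k + n_val:]
    return train, val, test

# Hypothetical assay grades (g/t) for ten samples
grades = np.array([0.5, 2.1, 3.3, 9.9, 1.1, 4.2, 2.8, 0.9, 5.5, 3.0])
train, val, test = partition_with_extremes(grades)
```

Forcing the extremes into training keeps the model from having to extrapolate beyond the grade range it was fitted on when it is evaluated.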
Conclusions
Mineral resource estimation is a challenging task with enormous uncertainties and, considering the erratic nature of geological formations, results from such estimates are critical to all mining operations. Whether a mineral resource is classified as a reserve, ore, or waste, and eventually becomes a mine, depends on the estimation outcome. Thus, the choice of estimation method is key in the mineral resource estimation process: selecting an unsuitable method could defeat the purpose of an exploration program and cause a loss of capital expenditure. The most common estimation methods in practice are geostatistical. Despite their weaknesses, these methods have been used to establish some of the most successful mining operations in the world.
Recent developments in computer technologies have allowed researchers to implement ML algorithms in mineral resource estimation. Results from such studies show that ML algorithms are powerful tools for solving both linear and non-linear complex geological problems. These algorithms can model complex geological data, identify relationships, classify geological features, and detect geochemical anomalies with little human intervention. The most widely applied ML algorithms in this field are ANN, SVM, and RF; given borehole data, such models can estimate ore grade values. However, despite their enormous potential in resource estimation, ML-based models also possess inherent weaknesses such as overfitting, longer computational time, and the smoothing effect.
These limitations can be addressed by combining ML models with other ML and deep learning algorithms. ML models could also be employed to augment existing estimation methods, or integrated with traditional methods to form hybrid estimation methods. Furthermore, the mathematical complexity associated with some ML algorithms, which could be a reason for their low uptake in the mining industry, can be mitigated by incorporating them into existing resource estimation software as pre-trained models. Pre-trained models would make the implementation process faster, easier, and more user-friendly: the resource engineer would not have to develop and train the algorithm from scratch but could focus on retraining the pre-trained model with a new dataset. Future research can examine the application of ML techniques for grade estimation during the post-exploration stage (e.g., grade control and reconciliation) and can also consider utilizing advanced ML methods such as stochastic neural networks and deep learning. In addition, industry standards regarding ML applications can be developed to guide algorithm selection, build confidence in ML-based resource estimates, and promote industry-wide acceptance of ML techniques.
Examining the Lead Exposure and its Effects among U.S Children
According to the United States Environmental Protection Agency (2019), lead is an element that is naturally found on the surface of the earth. Lead is also harmful to humans, especially young children. Similarly, the CDC (2021) has reported that lead occurs naturally in the environment, such as in soil, air, and water. However, it is also commonly found in older homes; this is especially true of housing in lower socioeconomic communities. Lead exposure commonly comes from old lead pipes, faucets, and plumbing fixtures. In addition to water contamination, nearly 23 million houses in the United States have lead-based paint hazards (Egan et al., 2021). Egan and his colleagues also found that more than 3.6 million households with children younger than six years old live in these lead-filled homes. More than 6 million housing units still have lead water pipes in the United States (Dignam et al., 2019).
Lead poisoning may also cause neurotoxic effects in children, resulting in long-term lower IQ, inattentiveness, and other behavioral problems (Seifu et al., 2020). In addition, according to the United States EPA (2019), lead is more dangerous to young children because it causes neurobehavioral deficits in cognition, motor abilities, and brain development.
Agent
Lead exposure is one of the most common and well-known direct and indirect environmental diseases among children in the United States. Dust in older housing and lead-based paint is one of the primary sources of lead exposure among U.S. children (Egan et al., 2021). Consumer products such as imported foods, as well as lead exposure in the workplace, are other common sources (Ettinger et al., 2019). Occupations with possible lead exposure include working with batteries, recycling, and smelting; a parent's exposure at these workplaces can affect children at home, because parents carry lead particles on their clothes, shoes, and other items they bring with them. Primarily, children get exposed to lead by ingesting contaminated dust or soil, which usually originates from lead paint (Moody et al., 2016). Drinking water is another major source of lead exposure, especially for children of low socioeconomic status, because of old infrastructure (Hanna-Attisha et al., 2016). Children can also be exposed to lead through inhalation of air pollution, lead dust, and other pollutants. Dust from lead-contaminated houses is the primary root cause of lead exposure among children.
Environmental Risk Factors
There are several persistent risk factors behind the higher prevalence of elevated blood lead levels in U.S. children. Although blood lead levels in U.S. children have decreased dramatically over the past 40 years, minority children with low socioeconomic status have higher blood lead levels than non-minority children (Egan et al., 2021). Egan and his colleagues found that Black and Hispanic children have a higher prevalence of elevated blood lead levels when compared to white children and those from higher-income families. Similarly, lead exposure disproportionately affects African American children in the U.S., who have the highest blood lead levels of all demographics (Yeter et al., 2020). According to the CDC (2021), children who live below the federal poverty level, those of lower socioeconomic status, and those who live in houses built before 1978 are more vulnerable to lead exposure. Other factors that contribute to this problem are race/ethnicity, poverty, and older housing. Moody et al. (2016) found that children of lower socioeconomic status and those in minority communities have higher rates of childhood lead exposure. Likewise, Seifu et al. (2020) reported that low socioeconomic status is a major risk factor driving elevated blood lead levels in U.S. children, and Egan et al. (2021) suggested that sociodemographic characteristics such as income level and housing also drive this higher prevalence.
Similarly, after analyzing the blood of 301 foreign-born refugee children in the U.S., Seifu et al. (2020) reported that foreign-born refugee children have higher elevated blood lead levels. The average elevated blood lead level of these children upon arrival was 9 µg/dL, with a range from 5 to 27 µg/dL (Seifu et al., 2020). The reason foreign-born children have higher blood lead levels may be the low socioeconomic status of their country of origin. Therefore, I believe socioeconomic status can be the primary cause of elevated blood lead levels among children.
Toxicity Assessment
According to the CDC (2021), there are no safe blood lead levels, and even trace amounts of lead in the blood can be harmful to children's health. Sachdeva et al. (2018) reported that lead toxicity is not well diagnosed, since patients are usually asymptomatic, and there is no known level of lead that is not toxic to the body. Even though no known blood lead cut-off point exists, blood lead levels of 5-9 µg/dL are the levels at which the CDC recommends immediate public health action. According to Sachdeva and his colleagues, 70% of lead is deposited in children's bones and teeth, from which it can later be released into the bloodstream.
Lead toxicity causes various pathophysiological effects in children, including oxidative stress, hematological changes, and effects on the kidneys and heart. The major pathological mechanism resulting from lead poisoning is oxidative stress, which occurs when the body's ability to compensate is overturned, resulting in cellular damage (Sachdeva et al., 2018).
Reactive oxygen species (ROS) such as hydrogen peroxide cause cellular damage by exhausting the antioxidant system, which detoxifies the excess ROS. Lead toxicity also causes hematological changes, such as red blood cells becoming fragile and their cell membranes being damaged, causing the cellular function to be disrupted, resulting in an imbalance of substance moving in and out of the cell (Sachdeva et al., 2018). According to Sachdeva and his colleagues, even minor lead exposure can cause chronic kidney failure, resulting in glycosuria (excess sugar in the urine) and aminoaciduria (an abnormal number of amino acids in the urine). Cardiovascular effects of lead toxicity include hypertension and other cardiovascular diseases (Sachdeva et al., 2018).
However, lead poisoning is not limited to the effects mentioned above; it can affect any organ of the body and any level of the organ systems. For example, as discussed in detail in the problem formulation, the major effects of lead poisoning on children include neurobehavioral deficits, motor impairment, low IQ, growth retardation, and inattentiveness. In addition, despite limited research, Sachdeva et al. (2018) reported that lead toxicity could even cause human cancers, implying that lead poisoning is more dangerous than previously thought.
Disease Surveillance
There are numerous methods for measuring lead in tissues such as plasma, urine, bone, and teeth. However, blood lead concentration is one of the most commonly used methods for determining lead exposure (National Research Council (U.S.), 1993). The level of lead in the blood is measured in micrograms per deciliter of blood (µg/dL). In the United States, the prevalence of lead exposure and lead poisoning has decreased; however, the number of cases remains alarming. Thirty-four states and the District of Columbia reported a total of 24,000 children with blood lead levels greater than or equal to 10 µg/dL, as well as 243,000 children with blood lead levels greater than or equal to 5 µg/dL (Raymond et al., 2014). In 2010, 24,546 children aged 1 to 5 years had blood lead levels greater than 10 µg/dL, while 19,915 children in the same age group had blood lead levels greater than or equal to 10 µg/dL (Raymond et al., 2014). Another study reported a random sample survey used to assess the relationship between lead-contaminated house dust and blood lead levels in urban children aged 12 to 31 months who had lived in the same house for at least 6 months. The children's mean blood lead level was 7.7 µg/dL. After controlling for other significant covariates, the estimated percentages of children with blood lead levels at or above 10 µg/dL at increasing dust lead levels were 4%, 15%, and 20%, respectively.
Thus, lead-contaminated house dust was a significant contributor to lead intake among urban children with low-level blood lead elevations. This means that a significant proportion of children may have blood lead levels of at least 10 µg/dL at dust lead levels significantly lower than current standards. According to Dewalt et al. (2015), lead-based paint hazards exist in 23.2 million homes in the United States. Dewalt and his colleagues also estimated that 37.1 million homes contain lead-based paint in some form.
Exposure Assessment
Lead can enter the human body through inhalation, spreading throughout the body once it reaches the lungs. It can also enter the body through ingestion, such as when a person drinks water or eats foods contaminated with lead. The time it takes for lead to be absorbed by the body, on the other hand, depends on the concentration of lead in the dust; however, it is thought that lead takes six months to raise blood lead levels and become a concern in young children. Since children often use their hands to eat, they are more likely to ingest lead from their environment.
Children inhale contaminated dust particles from lead-painted wood and from batteries being burned. Children can also be exposed to lead deposited in the soil by industrial air pollution. Most children are exposed to lead through dust contaminated with lead and paint chips from lead-based paints (Hauptman et al., 2017). Acute lead poisoning occurs when exposure to lead-contaminated particles lasts only a short period, whereas chronic lead poisoning occurs when exposure continues over a long period.
Although lead is no longer used in household products, it is still used in hunting and shooting equipment. When a hunter kills an animal, the carcasses decompose into the surrounding environment (Arnemo et al., 2016). Rainwater washes the lead in the animal carcasses into the water stream, posing a threat to the environment as a whole (Arnemo et al., 2016). The lead stored in the carcasses enters the food chain, causing lead poisoning in wildlife and agriculture (Trinogga et al., 2019). Lead can then enter the human body through contaminated agricultural products. Additionally, people who eat game-shot animals may be exposed to lead ammunition (Buenz et al., 2018). Lead can be absorbed into the leaves of plants when they are exposed to it.
Exposure Monitoring
The lead concentration in whole blood (BPB) is the primary biomarker used to determine blood lead levels (Barbosa et al., 2005); it increases as the amount of lead in the blood increases. Blood lead levels are usually regulated by the federal government.
Ecological Impacts
Lead air pollution can accumulate in the environment, causing environmental damage. When lead enters the environment, it eventually ends up in the air, water, and soil, contributing to pollution, especially in urban areas (US EPA, n.d.). According to the US EPA (2019), the major causes of ecological impacts are waste streams discharged into water bodies and mining. Effects of lead on the ecosystem, wildlife, and agriculture include reduced plant growth and reproduction as well as neurological effects in vertebrates (EPA, n.d.). Lead can thus spread and impact the environment in a variety of ways, and the best way to control this dangerous toxic element is prevention.
Community Organizations
Many community organizations deal with childhood lead poisoning across the United States. One intervention is to increase the capacity for healthy and safe housing. NCHH has improved the capacity of systems at the local, state, and federal levels to provide safe and healthy housing for these communities (NCHH, n.d.), and it has increased community members' awareness of obtaining healthy and safe housing. To obtain effective results, NCHH hires qualified scientists who conduct in-depth research and provide science-based evidence on solutions for healthy and safe housing to prevent childhood lead poisoning and other housing-related environmental health problems (NCHH). NCHH then converts this research evidence into practical experience that is utilized at the local, state, and federal levels and by other public health experts, including healthcare providers (HHS, n.d.).
One specific strategy by NCHH and the National Safe and Healthy Housing Coalition, called "Find It, Fix It, Fund It," has more than 150 growing members. This program is intended to eliminate lead poisoning hazards while providing lead surveillance and home follow-up services (NCHH, 2016).
Recommendations
Exposure to lead hazards disproportionately impacts children of color and children from low-income families, and until this aspect of lead exposure is addressed, all of the efforts being made will fall short of their ultimate goal. Problems of environmental justice, socioeconomic status, and ethnicity are core reasons why America has failed to solve this totally preventable environmental health problem (Kerpelman et al., 2020). Tens of millions of homes in the U.S. have lead paint, posing a threat of irreversible damage to children (Kerpelman et al., 2020). In addition, landlords and sellers often refuse to eliminate lead paint, or even to test for it on their properties, and government agencies fail to design and implement effective laws and enforceable regulations (Kerpelman et al., 2020). While not directly stated, the data can be used to make the case that America has ignored this totally preventable public health problem because lead poisoning affects children of color and low-income children. Therefore, the current policies and regulations are not working for children of color and children of lower socioeconomic status. In addition, Kerpelman et al. (2020) said that current state laws have failed to address this unfortunate environmental public health crisis; racial and economic discrimination can be the primary reasons why the levels of government do not resolve this easily preventable public and environmental health problem. Furthermore, Kerpelman et al. argued that childhood lead poisoning is a denial of constitutional and civil rights.
Some states have policies on childhood lead screening, but this screening is not conducted regularly; therefore, the best policy option would be designing and implementing new state laws mandating monthly childhood blood lead testing and regular home-based follow-up lead services. States should also monitor how the healthcare system performs children's blood lead screening and should provide home-based follow-up services to ensure that children are getting blood lead screening and living in healthy and safe houses. States should enact laws that eliminate lead paint from older houses, which are mostly lived in by minority and low-income children. Also, states should develop and implement policies that monitor whether contractors follow the rules and regulations of the Environmental Protection Agency. In addition, states could provide adequate parental education on environmental household interventions to reduce lead exposure, such as removal of lead dust or home remediation work (Nussbaumer-Streit et al., 2020).
Federal laws that provide oversight of state laws could be the ultimate solution to this ongoing environmental injustice. Since this is a constitutional and civil rights issue, the best solution is to pass effective federal laws and require the states to enforce them.
These laws would require the states to develop effective policies and regulations mandating monthly inspections of rental apartments and other residences occupied by communities of color and low-income communities, and would require landlords and sellers to eliminate lead hazards from their properties. Furthermore, local, state, and federal agencies should take their constitutional responsibilities seriously: they must not only enact effective policies and regulations on lead poisoning but also enforce them; otherwise, as is happening now, landlords and sellers may not remove lead hazards from their properties. Therefore, effectively enacted laws and enforced regulations are needed, and enforcement is the most crucial action, because the current state laws and regulations have failed for lack of enforcement (Kerpelman et al., 2020).
Childhood lead poisoning hazards require effective policies and regulations that are enforced; otherwise, the organizational efforts mentioned above are temporary rather than permanent solutions. The states should update their policies and make sure that childhood lead screening is available to any child regardless of economic status or location. The key types of legislation needed to prevent lead poisoning are: 1) laws that create penalties for hazardous materials; 2) laws that create incentives for landlords to remove lead paint; 3) laws that create uniform testing and monitoring services; and 4) laws that allocate funding for the removal of lead hazards. In addition, nonprofit community organizations and community leaders should step up and demand their constitutional and civil rights from their elected officials; otherwise, the suffering of children of color and children from low-income families will continue and pass to the next generations. However, community organizations depend mostly on individual and other community funders, and these limited resources are not meeting the needs of action against lead poisoning.
The communities should organize themselves and call on their elected officials and pressure them to pass effective laws and regulations against lead poisoning hazards and then enforce these laws and regulations. Also, the communities should put more pressure on the landlords, sellers, and contractors to eliminate lead poisoning hazards from their properties.
Similarly, renters should call for independent lead inspectors and demand that their units be tested for lead exposure before moving in. In addition, communities should take on their role in preventing childhood lead poisoning by adopting all necessary preventive measures. Household members should clean their houses or rental units weekly to eliminate entryway floor lead dust tracked in from outside. They should also keep shoes out of reach of children, because shoes worn outside can harbor lead-contaminated soil.
It is important to consider the needs of landlords, especially those with only a few units, and of low-income property owners. Regulations and penalties should be designed to provide resources and minimize hardship for low-income communities. In the interest of equity, any penalties should be on a sliding scale, and those who cannot afford lead abatement should be given subsidies in the form of direct payments to licensed contractors or tax subsidies. The cost of lead removal is considerable, and it is imperative that solutions for lead-poisoning prevention not become a further burden on the very communities most affected by lead contamination.
Several resources are needed to implement the proposed action, and they should come from both the state and federal governments, which should allocate budgets for it. Childhood lead poisoning is 100% preventable given sufficient resources, effective laws, and honest enforcement (CDC, 2019). If the federal government provides adequate resources to the states, and the states fund lead poisoning prevention programs, these resources will make possible localized programs that conduct both primary and secondary prevention. With sufficient resources, the proposed actions can provide primary prevention by eliminating environmental hazards before children are exposed to them. With sufficient human and financial resources, they can also expand existing local secondary prevention by ensuring that every child receives monthly blood lead testing and follow-up services.
The funding for these programs should come from fines imposed on landlords with more than 10 properties if they fail to implement abatement strategies and from landlord licensing fees. These funds can be used as both a carrot and a stick. The fines will incentivize compliance for those most able to afford abatement, while providing funds to those least able to afford it.
Additional funding could come from fines and taxes levied against corporate lead producers such as battery or lead-ammunition manufacturers. Enforcement actions and substantial fines would encourage good behavior while also providing funds to abate the lead pollution that already exists.
Conclusion
Levels of lead poisoning in the United States of America continue to fall year after year.
However, a dangerous number of minority and low-socioeconomic-status children are still exposed to dangerous levels of lead every day. Whether through infants ingesting lead paint chips, hazardous air pollutants, or lead sources brought into the home, these children are in danger.
While there have been many attempts over the years to tackle the problem of lead poisoning, especially in children, many have fallen short. A major reason is the lack of money spent on these at-risk communities.
Many laws and policies are only implemented in new construction or in areas that can afford to replace dangerous lead containing hazards. Poor and minority communities are often left without the resources to remove these hazards and even when laws exist, they are often not enforced against landlords or sellers.
Lead poisoning is a relatively simple issue to tackle: once these hazards have been removed, they are gone forever. It would take a large undertaking by local, state, and federal agencies, creating and enforcing laws and financial opportunities to remove these dangers. That cost is greatly outweighed by the reduction in childhood lead exposure it would bring for generations to come. Local, state, and federal governments would not only save millions of dollars in health care costs, but our children would be able to live and breathe without fear of being poisoned by the homes in which they are supposed to feel safe.
ESTIMATION OF SOIL MOISTURE IN THE ROOT-ZONE FROM REMOTE SENSING DATA
Field-based soil moisture measurements are cumbersome. Remote sensing techniques are therefore needed because they allow field- and landscape-scale mapping of soil moisture, depth-averaged through the root zone of existing vegetation. The objective of this study was to evaluate the accuracy of an empirical relationship for calculating soil moisture from remote sensing data for irrigated soils of the Apodi Plateau, in the Brazilian semiarid region. The empirical relationship had previously been tested for irrigated soils in Mexico, Egypt, and Pakistan, with promising results. In this study, the relationship was evaluated with experimental data collected from a cotton field. The experiment was carried out in a 5 ha area of irrigated cotton. The energy balance and evaporative fraction (Λ) were measured by the Bowen ratio method. Soil moisture (θ) data were collected using a PR2 Profile Probe (Delta-T Devices Ltd). The empirical relationship was tested using experimentally collected Λ and θ values and was applied using Λ values obtained from the Surface Energy Balance Algorithm for Land (SEBAL) and three TM Landsat 5 images. There was a close correlation between measured and estimated θ values (p<0.05, R2 = 0.84), and there were no significant differences according to the Student t-test (p<0.01). The statistical analyses showed that the empirical relationship can be applied to estimate the root-zone soil moisture of irrigated soils, i.e., when the evaporative fraction is greater than 0.45.
Index terms: standard relationship, SEBAL, energy balance, evaporative fraction, latent heat flux.
INTRODUCTION
Soil moisture is widely recognized as a key variable in numerous environmental studies related to meteorology, hydrology, and agriculture (Ahmad & Bastiaanssen, 2003; Vischel et al., 2008; Mattia et al., 2009; Kong et al., 2011). For hydrological and agricultural purposes, the estimation of soil moisture is crucial, since it controls the quantity of water available for vegetation growth (Cook et al., 2006), as well as deep aquifer recharge (Seneriviratne et al., 2006; Kjellström et al., 2007; Lam et al., 2011) and soil saturation, which controls the partitioning of rainfall between runoff and infiltration, and sediment transport (Vivoni et al., 2007; Ávila et al., 2011). In meteorology, several climate studies have indicated that surface-atmosphere energy transfer, atmospheric circulation, and precipitation are significantly affected by spatial and temporal variations of soil moisture, which controls evapotranspiration through its influence on evaporation and water availability to plants, and influences the partitioning of latent and sensible heat as well (Savenije, 1995; Grayson et al., 1997; Entekhabi et al., 1999; Cook et al., 2006). Soil moisture is also fundamental in the biogeochemical cycle of CO2, since an ecosystem can switch from a CO2 sink to a CO2 source according to soil water availability (Cabral et al., 2011).
The high spatial and temporal variability of soil moisture, caused by the heterogeneity of soil texture, topography, vegetation, and climate in the natural environment, makes soil moisture difficult to measure (Kong et al., 2011). A complete description of the spatial and temporal variability of soil moisture requires frequent and multiple three-dimensional measurements (Scott et al., 2003). Due to operational problems, such measurements become virtually unviable. However, the spatial and temporal variability of soil moisture can be determined by significantly modernized remote sensing techniques, especially those based on data obtained by (active and passive) microwave sensors or satellite images (Moran et al., 2002; Su et al., 2003; Wang et al., 2007; Vischel et al., 2008; Crow et al., 2008; Pauwels et al., 2009; Pierdicca et al., 2010). However, measurements by microwave sensors limit estimates of soil moisture to the surface layer (± 5 cm) (Ahmad & Bastiaanssen, 2003; Crosson et al., 2005). On the other hand, an empirical relationship between the evaporative fraction (Λ), defined as the ratio between latent heat flux and available energy (net radiation minus soil heat flux) (Shuttleworth et al., 1989), and soil moisture (θ) was developed by Bastiaanssen et al. (2000), based on data from two large-scale climate-hydrology studies investigating soil moisture-evaporation-biomass interactions. One was the First ISLSCP (International Satellite Land Surface Climatology Project) Field Experiment, FIFE (Sellers et al., 1992). The other was the ECHIVAL Field Experiment in Desertification-Threatened Areas, EFEDA (Bolle et al., 1993). Scott et al.
(2003) modified this relationship by standardizing θ with the saturated soil moisture (θsat), called the relative soil moisture content (Equation 1). The relative soil moisture content θ/θsat (-) ranges from 0 (totally dry soil) to 1.0 (full saturation). As proposed by Ahmad & Bastiaanssen (2003), Equation (1) is denominated the standard relationship and can be applied to a wide range of soils. Scott et al. (2003) showed that this equation, without any modification, could be directly applied to irrigated soils of the Lerma-Chapala basin in Mexico, while Ahmad & Bastiaanssen (2003) showed that the method requires no calibration and can be comprehensively applied, without soil data, to irrigated areas in the Rechna Doab region of the Indus River Basin. Mohamed et al. (2004) applied the method, without previous calibration, to studies of the spatial variability of evaporation and moisture storage in the swamps of the Upper Nile, Egypt.
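The standard relationship links the evaporative fraction to relative soil moisture through an exponential curve that saturates at Λ = 1. The exact coefficient is fitted in the cited papers; the value below is an illustrative assumption, not the published calibration, so this is only a sketch of the functional form:

```python
import math

def relative_soil_moisture(evap_fraction: float, k: float = 0.42) -> float:
    """Relative soil moisture theta/theta_sat from the evaporative fraction.

    Exponential form theta/theta_sat = exp((Lambda - 1) / k). The
    coefficient k = 0.42 is an assumed, illustrative value; the fitted
    constant comes from Bastiaanssen et al. (2000) and the follow-up
    studies cited in the text.
    """
    if not 0.0 <= evap_fraction <= 1.0:
        raise ValueError("evaporative fraction must lie in [0, 1]")
    return math.exp((evap_fraction - 1.0) / k)

def root_zone_moisture(evap_fraction: float, theta_sat: float) -> float:
    """Volumetric root-zone moisture (cm3 cm-3), scaling by theta_sat."""
    return theta_sat * relative_soil_moisture(evap_fraction)
```

At Λ = 1 the estimate equals θsat (full saturation), and it decays smoothly as Λ drops, matching the qualitative behaviour of the standard curve described above.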
The application of the standard relationship using Λ provided by remote sensing satellites resulted in spatially distributed θ values for greater depths than those covered by microwave sensor data. The spatial variability of θ became possible due to the spatial variability of Λ obtained from the energy balance of remote sensing data by algorithms such as SEBAL (Bastiaanssen et al., 1998a, b), S-SEBI (Roerink et al., 2000; Sobrino et al., 2007), and SEBS (Su, 2002). The extrapolation of θ to the plant root zone is viable since θsat values represent the moisture conditions in this layer (Scott et al., 2003; Ahmad & Bastiaanssen, 2003; Mohamed et al., 2004). This article aims to evaluate the performance of the standard relationship (Equation 1) in estimating the soil moisture of an irrigated area in the Brazilian semiarid region, and to apply the standard relationship to Λ obtained by SEBAL from TM Landsat 5 images.
MATERIALS AND METHOD
The study was carried out on the Apodi Plateau, near the state border between Rio Grande do Norte and Ceará, in the northeastern region of Brazil. The experiment was conducted at the Experimental Station of EMPARN (Agricultural Research Organization of Rio Grande do Norte), in the county of Apodi (5° 37' 37" S, 37° 49' 54" W, 130 m asl).
According to Thornthwaite (1948), the regional climate is semi-arid; mean annual precipitation is 920 mm, concentrated between March and June, and the mean air temperature ranges from 23.5 °C (August) to 28.3 °C (April). The soil of the study area was classified as a Cambisol (Embrapa, 2006) with a sandy clay loam texture according to the USDA (United States Department of Agriculture) classification (sand, silt, and clay contents of 57, 9, and 34 %, respectively). A more detailed description of the study area was published by Bezerra et al. (2012a). The soil moisture at field capacity and permanent wilting point, along with the van Genuchten-Mualem parameters (van Genuchten, 1980) representative of the cotton root zone, are presented in Table 1.
The trials were carried out in an irrigated 5.0 ha area where cotton cultivar BRS 187 8H was planted in the dry seasons of 2008 and 2009 under sprinkler irrigation. Cotton was sown in rows spaced 0.9 m apart at a within-row density of 10 plants m-1, for a total of approximately 133,000 plants ha-1. The energy balance and evaporative fraction (Λ) were estimated, and soil moisture was measured.
The energy balance of cotton was expressed by means of bulk energy and heat fluxes (Perez et al., 1999; Teixeira et al., 2007; Yunusa et al., 2011):

Rn = LE + H + G    (2)

where Rn is net radiation (W m-2), measured by a net radiometer model NR-LITE (Kipp & Zonen, Delft, The Netherlands); G is the soil heat flux (W m-2), measured 0.02 m below the surface using soil heat flux plates (model HFP01SC-L, Hukseflux Thermal Sensors, Delft, The Netherlands); and LE and H are the latent and sensible heat fluxes (W m-2), respectively. LE was derived from the energy balance equation (Equation 2) and the Bowen ratio concept (Bowen, 1926):

LE = (Rn - G) / (1 + β)    (3)

where β is the Bowen ratio, which was obtained through the following equation:
β = γ (ΔT / Δe)    (4)

where γ is the psychrometric constant, and ΔT and Δe are the gradients of temperature and actual vapor pressure, respectively, measured at two levels above the crop canopy by psychrometers with copper-constantan thermocouples (type T). Data were measured every 5 s, and averages were recorded every 20 min on a CR3000 data logger (Campbell Sci, Logan, UT, USA).

Table 1. Soil moisture parameters representative of the cotton root zone

Soil parameter       Value
θFC (cm3 cm-3)       0.32
θWP (cm3 cm-3)       0.13
θres (cm3 cm-3)      0.07
θsat (cm3 cm-3)      0.40
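As a sketch of the Bowen ratio-energy balance (BREB) bookkeeping above, the partition of available energy (Rn - G) into LE and H and the resulting evaporative fraction can be written as follows (variable names are illustrative):

```python
def bowen_ratio(gamma: float, dT: float, de: float) -> float:
    """Bowen ratio beta = gamma * dT / de, from the gradients of
    temperature (dT) and actual vapor pressure (de) measured at two
    levels above the canopy; gamma is the psychrometric constant."""
    return gamma * dT / de

def partition_energy(rn: float, g: float, beta: float):
    """Split available energy (Rn - G) into latent (LE) and sensible
    (H) heat fluxes via the Bowen ratio, and return the evaporative
    fraction Lambda = LE / (Rn - G)."""
    available = rn - g
    le = available / (1.0 + beta)   # Bowen ratio form of LE
    h = available - le              # closure of Rn = LE + H + G
    evap_fraction = le / available
    return le, h, evap_fraction
```

For example, Rn = 500 W m-2, G = 50 W m-2, and β = 0.25 give LE = 360 W m-2, H = 90 W m-2, and Λ = 0.8, inside the 0.56-0.96 range reported for the irrigated cotton field.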
The standard relationship (Equation 1) was evaluated by comparing θ and Λ measured in the trials. The θ values were estimated by the standard method (Equation 1) using Λ obtained by the BREB (Bowen Ratio-Energy Balance) method and were compared with the results of the PR2 Profile Probe. The significance of differences between these results was analyzed by the coefficient of determination (R2), the root mean square error (RMSE), and Student's t-test (p<0.01) (Wilks, 2006). The standard relationship (Equation 1) was then applied using Λ values obtained from three TM Landsat 5 images. The images, for path/row 216/064, were provided by the Brazilian Institute for Space Research (INPE) for November 01, November 17, and December 19, 2008. The steps to obtain the energy balance by SEBAL involve radiometric calibration and the calculation of albedo, thermal emissivity, surface emissivity, longwave radiation (incoming and outgoing), and finally the values of Rn, G, H, and LE at satellite overpass time. LE was calculated as a "residual" of the surface energy balance equation. These steps were described in detail by Bastiaanssen (2000), Bezerra et al. (2008), Santos et al. (2010), and Bezerra et al. (2012b).
The energy balance components provided by SEBAL were validated using data obtained by the BREB method, although SEBAL had already been validated for irrigated soils in the semi-arid region of Brazil (Bezerra et al., 2008; Folhes et al., 2009; Teixeira et al., 2009; Santos et al., 2010). An error analysis was performed using the Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) (Wilks, 2006):

MAE = (1/n) Σ |e_k - m_k|

MAPE = (100/n) Σ |e_k - m_k| / m_k

where e_k and m_k are the k-th of n pairs of estimated and measured values.
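These error statistics can be computed directly; the RMSE used earlier for the θ comparison is included as well. A minimal sketch:

```python
import math

def mae(est, meas):
    """Mean absolute error of paired estimated/measured values."""
    return sum(abs(e - m) for e, m in zip(est, meas)) / len(meas)

def mape(est, meas):
    """Mean absolute percentage error (%); measured values must be nonzero."""
    return 100.0 * sum(abs(e - m) / m for e, m in zip(est, meas)) / len(meas)

def rmse(est, meas):
    """Root mean square error of paired values."""
    return math.sqrt(sum((e - m) ** 2 for e, m in zip(est, meas)) / len(meas))
```

For example, hypothetical estimated fluxes est = [480, 370] against measured meas = [500, 360] give MAE = 15 W m-2 and MAPE ≈ 3.4 %, the same order as the validation errors reported for Rn and LE.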
RESULTS AND DISCUSSION
Figure 1 shows the relation between θ/θsat and Λ based on the field observations and the standard curve (Equation 1). The root mean square error (RMSE) between θ estimated by the standard relationship and the field data was 0.02 cm3 cm-3 for Λ ranging from 0.56 to 0.96. Ahmad & Bastiaanssen (2003) found RMSE values of 0.05 cm3 cm-3 under wheat-rice rotation in the Rechna Doab region of an irrigation system in the Indus River Basin, in Pakistan, for Λ values ranging from 0.48 to 0.94. Similar values were found by Scott et al. (2003) in irrigated soils of the Lerma-Chapala basin, in Mexico.
According to Ahmad & Bastiaanssen (2003), deviation from the standard curve may be associated with (1) instrumental errors (both for θ and Λ), (2) the different observation scales of θ and Λ, and (3) the empirical character of Equation 1. The difference between the observation scales of θ and Λ is a relevant factor: while the θ values were measured locally and are representative of a reduced area, the Λ values were derived from meteorological variables observed over a range of hundreds of meters. The fetch of the BREB method should be hundreds of meters and uniform, to provide sufficient distance to establish an equilibrium boundary layer (Allen et al., 2011).
The θ values measured and estimated by the standard curve were compared (Figure 2). There was a close correlation (p<0.05), with a coefficient of determination of 0.84, and there were no significant differences according to the Student t-test (p<0.01). Consequently, the standard curve (Equation 1) can be applied to soils of irrigated cotton on the Apodi Plateau, with values typical of irrigated soils (RMSE = 0.02 cm3 cm-3, Λ = 0.56-0.96). Under other conditions, e.g., rainfed agriculture, water stress caused by insufficient irrigation, areas of native vegetation, and/or bare soil in dry periods, considerably lower Λ values are expected, leading to uncertainties in the θ estimates. The range of errors between measured and estimated θ values is plotted in Figure 3, showing differences of -0.034 to 0.05 cm3 cm-3. The errors were predominantly > 0.010 cm3 cm-3 for Λ > 0.60 (Figure 3). In the Rechna Doab area of an irrigation system in the Indus River basin, Ahmad & Bastiaanssen (2003) observed that these errors tended to increase systematically in absolute value as Λ increases. In this study, no error tendency was observed at Λ < 0.5, since the experiment was limited to the irrigated area.
The assessment of the standard method consisted of calculating soil moisture from Λ on a pixel-by-pixel basis, using TM Landsat 5 images and SEBAL. The θsat of the root zone was obtained by the gravimetric method as 0.40 cm3 cm-3. Three clear-sky images were used, acquired 33, 49, and 81 days after cotton emergence. The images covered the experimental area of EMPARN, which consisted of irrigated fields, pasture, native vegetation, and bare soil.
The SEBAL validation (Table 2) consisted of comparing the energy fluxes estimated by SEBAL and computed by BREB to calculate Λ. The MAE was less than 20 W m-2 (Table 2), and the MAPE of Rn and LE was less than 3 %, indicating full reliability. On the other hand, the greatest uncertainty associated with SEBAL was found in the G estimates (MAPE > 20 %). This was, however, considered a minor problem by Bastiaanssen et al. (1998b), because microscale soil heat flux measurements are representative of a very small sphere of influence and are therefore incompatible with the size of one Thematic Mapper pixel. Moreover, the uncertainty decreases with increasing scale (Bastiaanssen et al., 1998b, 2000). According to Bastiaanssen et al. (2000), the error at 1 ha resolution varies from 10 to 20 %. For an area of 1000 ha, the error is reduced to 5 %, and for farmland regions of 1 million ha the error becomes negligibly small.
Figure 4 shows a scatter plot of SEBAL-estimated energy fluxes versus field measurements. The high agreement between the two approaches was evidenced by a determination coefficient of 0.99, confirming SEBAL as an appropriate tool to calculate energy fluxes at the Earth's surface on a spatial scale.
The spatial distribution of soil moisture in the root zone at the Experimental Station of EMPARN on the Apodi Plateau (Figure 5) was calculated for three dates: (a) Nov-01, (b) Nov-17, and (c) Dec-19 of 2008. Soil moisture was highest in the root zone of the irrigated cotton field (highlighted) (θ around 0.32 cm3 cm-3). The images acquired on Nov-01 and Nov-17 (Figure 5a,b) coincided with irrigation dates; note that the soil moisture of the entire cotton field was at field capacity. The image of Dec-19 (Figure 5c) did not coincide with an irrigation event, and for this reason the soil moisture in part of the cotton field was below field capacity (θ around 0.25 cm3 cm-3). Our results demonstrate that the methodology applied in this study can detect the effect of irrigation in keeping the selected area more humid than its surroundings.

CONCLUSIONS

1. The standard relationship can be applied to estimate the root-zone soil moisture of irrigated soils, since soil saturation is representative of the moisture conditions in the plant root zone.
2. Applications of the standard relationship are more reliable for irrigated soils or soils with sufficient moisture, i.e., Λ greater than 0.45.
3. For other conditions, such as bare soil, native vegetation, and pasture during dry periods, the standard relationship should be tested experimentally.
Figure 2. Scatter plot of θ values measured and estimated by the standard method.
Figure 5. Map of soil moisture in the root zone on (a) Nov-01, (b) Nov-17 and (c) Dec-19 of 2008 at the Experimental Station of EMPARN, Apodi Plateau (cotton field indicated by the dotted square).
|
Evaluating the Effects of Flooding Stress during Multiple Growth Stages in Soybean
Flooding is a growing concern for soybean (Glycine max [L.] Merr.) production worldwide due to the sensitivity of most cultivars grown today to flood stress. Flooding can stunt plant growth and limit yield, causing significant economic loss. One sustainable approach to improving performance under flood stress is to develop flood-tolerant soybean cultivars. This study was conducted to evaluate soybean genotypes for their response to flood stress at three critical growth stages of production: germination, early vegetative growth (V1 and V4), and early reproductive growth (R1). The results demonstrated that stress imposed by flooding significantly affected soybean yield at each growth stage studied. The average germination rate over the various treatments ranged from 95% to 46%. Despite the poor germination rates after the extended flood treatments, the flood-tolerant genotypes maintained a germination rate of >80% after 8 h of flooding, while the germination rate of the susceptible genotypes was significantly lower, ranging from 58% to 63%. Imposing flood stress at the V1 and V4 growth stages also resulted in significant differences between tolerant and susceptible genotypes. Genotypes with the highest level of flood tolerance continually outperformed the susceptible genotypes, with an average 30% decrease in foliar damage based on visual scoring and a 10% increase in biomass. The yield of the tolerant genotypes was also, on average, 25% higher than that of the susceptible genotypes. These results suggest that breeding for flood tolerance in soybean can increase resiliency during crucial growth stages and increase yield under flood conditions. In addition, the genotypes developed in this research can be used as breeding stock to make further improvements to flood tolerance in soybean.
Introduction
The United States (US) is consistently among the top three largest soybean-producing countries in the world [1]. Between 1961 and 2018, soybean production within the US increased by 570%, from 18.97 million tons to 123.66 million tons [2]. As such, soybeans are one of the main cash crops in the US, with approximately 83.1 million acres planted in 2020, including 1.6 million acres in North Carolina [3].
Flooding is just one of many natural occurrences with which producers must contend. Over the last 40 years, flooding alone has cost the US an estimated USD 161.6 billion in damages [4]. A study performed in 2019 concluded that between 2001 and 2016 more than 20 million hectares of soybeans in the US were lost to damage caused by excess field moisture and flooding [5].
Excess water can reduce the yield of many crops, as standing water and water-logged soils deprive plants of the light, oxygen, and carbon dioxide required for growth [6]. Symptoms of flood stress in soybeans range from reduced nitrogen fixation within the root nodules, reduced net photosynthesis, and down-regulation of photosynthesis- and chlorophyll-synthesis-related genes, to chlorosis and necrosis of the leaves, defoliation, stunting, and, most severe, plant death [7-12].
In areas prone to flooding, the yield loss from this environmental stressor can be just as detrimental as drought. The coastal plain region of North Carolina is the largest soybean-producing region of the state, and yield there is often hampered by flooding. The soil of this region comprises a Portsmouth fine sandy loam with approximately 3.2% organic matter and is situated on a high water table, making standing water at some point during the growing season a common event. In other regions of the nation, such as the Mississippi Delta, flooding during the early vegetative growth stages can cause a 25% reduction in yield [13-17].
The growth stage at which flooding occurs can have a significant impact on the plant's ability to respond and adapt to stress. Germination is a crucial stage, as it establishes the plant stand and, ultimately, the yield potential. A study conducted by Wu et al. [17] evaluated germination under various flooding treatments and concluded that flooding significantly affects germination rates regardless of genotype, flood tolerance, or yield potential. However, if flooding occurs after germination and the establishment of plant stands, the cultivar does play a more significant role in stress response and yield [18].
Scott et al. [18] demonstrated that plants exposed to temporary flooding during the early vegetative stages were more likely to produce a higher yield than plants exposed to flooding during the reproductive stages. Those exposed during the vegetative stages were able to recover much of the nitrogen (N) and potassium (K) lost three weeks post-flooding; however, recovery never reached non-flooded control levels. As a result, there was still a yield loss compared to the control plots, but it was not as significant as at the reproductive stages. The total yields for the V1, V4, and R1 flood trials were 88%, 83%, and 44% of the control yield, respectively [18].
Yield is also negatively impacted by the duration of flooding. Extended flooding at the V1 and V4 growth stages has been shown to significantly suppress root growth [9]. In the same study, the growth stage at which plants were exposed to extended flooding also had a significant impact on root nodulation: nodulation was completely inhibited under flooding at the V1 stage and never recovered, whereas plants exposed at V4 could resume nodulation after the flooding was removed [19]. Sallam and Scott [19] also observed that extended flooding at early vegetative stages caused stem cracking and reduced plant height. While the V4 trial experienced only 33.3% stem cracking compared to the 50% observed for V1, overall the cracking at V4 was more severe than at V1.
Flooding can be unpredictable and occurs at different times throughout the growing season. Its impact on yield depends on the growth stage at which it occurs and on how long the fields remain under flood conditions. This study aimed to evaluate the performance of newly developed genotypes at various growth stages and durations of flooding in the North Carolina coastal plain region and to identify breeding lines that may be beneficial in developing soybean cultivars with improved flood tolerance. Previous studies have focused on modeling [10], field screening at only one or two growth stages [9,14,18-20], or germination [17]. However, to the knowledge of the authors, no study has evaluated more than two stages of development in the field. In addition, many of the genotypes identified as exhibiting flood tolerance are derived 12.5-50% by pedigree from exotic plant introductions and have not been previously reported.
Experiment I and II: Flooding Response at the V4 and R1 Growth Stages
In 2019, 2020, and 2021, two field studies were planted at the Tidewater Research Station (TRS) near Plymouth, NC (35° 51' 52.9" N, 76° 39' 25.9" W). This location consists of a Portsmouth fine sandy loam with approximately 3.2% organic matter, a flat landscape, and a high water table, all ideal for implementing flood treatments. Experiment I consisted of 55 genotypes ranging in maturity from maturity group (MG) VI to VII. The genotype selections were based on previous flood stress observations made across multiple years and testing environments. In 2019 and 2020, the 55 selected genotypes were planted in a single row measuring 3 m in length with a row spacing of 7.62 cm.
Experiment II was grown in 2020 and 2021, with 15 genotypes evaluated, of which 5, 5, and 4, respectively, had previously been identified as tolerant, moderately tolerant, and susceptible to flood stress (Table 1). Of the 15 genotypes evaluated, 12 are derived from >12.5% wild soybean (Glycine soja Sieb. and Zucc.) by pedigree. In 2020, each genotype was grown in 3-row plots measuring 6.1 m in length with a row spacing of 7.62 cm. All data were collected from the center row. In 2021, each genotype was grown in 4-row plots measuring 3 m in length with a row spacing of 5.08 cm. Each genotype was planted in a randomized complete block design (RCBD) with four replications for each experiment. Berms were constructed around each experiment to control the flooding.
† Visual ratings are provided on a 0-to-9 scale: 0 = no damage, 1-2 = slight yellowing, 3-4 = minor yellowing, 5 = moderate yellowing and canopy defoliation, 6-7 = extensive yellowing and defoliation, 7-8 = severe chlorosis, and 9 = >95% severe chlorosis and plant death. ‡ Flooding was induced at the V4 vegetative growth stage, identified when the fourth trifoliate leaf unfolds. § Flooding was induced at the R1 reproductive growth stage, identified as when flowering starts.
The plots were each subjected to 4-6 cm of standing water for approximately 7 days. Visual ratings on a scale of 0-9 were recorded 7 d and 14 d after the flood was released, a rating of 0 indicating no visual symptoms and 9 indicating that ≥95% of plants were dead. Ratings of 1, 3, 5, and 7 indicated no damage, slight yellowing of leaves, moderate yellowing and defoliation of the canopy, and extreme yellowing and defoliation, respectively. Ratings were recorded at the vegetative (V) 4 stage and reproductive (R) 1 stage. A plant was considered at the V4 stage when the 4th trifoliate leaves were fully developed; the R1 growth stage was defined as when flowering began at any node on the main stem.
Experiment III: Early Development Evaluation under Flooding Stress
In 2021, an additional study was conducted at the TRS near Plymouth, NC, in which eight blocks were planted. Berms were constructed around each block using a 3-point inverted disc plow mounted to a tractor. Each berm measured 0.75 m in height and 1 m in width. Within each block, each genotype was planted in a randomized complete block design (RCBD) with four replications. A total of 100 seeds were planted for each genotype in a single row measuring 3 m in length with a row spacing of 5.08 cm. Four of the blocks were used to measure flood response at germination and the remaining four to measure response at the V1 growth stage. In the same field, two control tests were grown, with no flooding and no berm construction.
To evaluate response at germination, the berms were flooded to 4-6 cm of standing water three days after planting. Four flooding durations were imposed at germination: 8 h, 16 h, 24 h, and 36 h. Germination rates were recorded 14 days (d) after the release of flooding and were determined by counting the number of emerged seedlings from the 100 seeds that were planted.
To evaluate the effects of flood stress at the V1 growth stage, 15 genotypes were planted within the four remaining blocks with surrounding berms. Plants were considered to be at the V1 growth stage when the leaves at the unifoliate node were fully developed. Upon reaching this stage, the berms were flooded to 4-6 cm above the base of the plants and the levels maintained for 3 d, 6 d, and 10 d. Genotypes were evaluated and plant height (cm) was recorded upon reaching the V5 growth stage. Plant height was defined as the distance from ground level to the apical meristem. Flood scores were also taken at the V4 and R1 growth stages using the same 0-9 visual rating scale described previously. Dry biomass at the R1 stage was then determined from 15 plants in each row.
Statistical Analysis
Flooding treatments and genotypes were considered fixed effects; all other effects were treated as random. Statistical analyses were performed with SAS version 9.4 (SAS Institute Inc., Cary, NC, USA). Analysis of variance (ANOVA) and least squares means (LSMEANS) were computed using PROC MIXED. Fisher's protected least significant difference (LSD) was used to identify significant differences among treatments at p ≤ 0.05. Pearson correlation coefficients were calculated on an entry-means basis to assess the relationships among yield, maturity, seed size, and visual flood ratings for flooded and non-flooded treatments.
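The analyses above were run in SAS; purely as an illustration, the entry-means Pearson correlation can be sketched in Python. The flood-score and yield values below are invented for the example, not the study's data:

```python
# Hypothetical entry means for six genotypes: visual flood score at V4
# (0-9 scale) and seed yield under flooding (kg/ha). Invented values.
from scipy import stats

v4_score = [3.4, 4.1, 4.6, 5.0, 5.8, 6.5]
yield_kg = [2479, 2150, 2076, 1928, 1458, 1250]

# Pearson correlation on an entry-means basis
r, p = stats.pearsonr(v4_score, yield_kg)
print(f"r = {r:.2f}, p = {p:.4f}")
```

A strongly negative r, as with the r = −0.79 reported in Table 5, indicates that higher visual damage scores track lower yields.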
Results
In the 2021 field trial, the 15 genotypes had a mean V4 flood score 0.8 lower than that of 2020 (Tables 3 and 4). The mean R1 flood score for 2020 was 0.3 lower than in 2021. NC-Dunphy showed less stress injury in 2021 at the V4 stage, with a rating of only 3.9 compared to 6.0 in 2020. N10-792, which had one of the lowest V4 ratings in 2020 (4.5), performed even better in 2021 (3.8). In 2021, on average, each genotype performed more poorly at the R1 stage than at V4, with mean ratings of 5.0 at V4 and 5.5 at R1. The genotype N05-7380 showed the least difference between the two years, with 2020 V4 and R1 ratings of 4.7 and 4.6 compared to 4.3 and 4.6 in 2021. Statistically, N10-792 had the greatest yield (2479 kg/ha) and, numerically, the lowest flood score (3.4) under flooded conditions in 2021 (Table 4). In addition, N10-792 had a seed yield (4844 kg/ha) that was statistically similar to that of the highest-yielding genotype, N8002 (4912 kg/ha), under non-flooded field conditions. In comparison, N11-7620 had the lowest recorded yield (1250 kg/ha) under flooded conditions and the highest recorded flood score at R1. N11-7595 also exhibited a flood score of 6.5 at R1 and had the numerically second-lowest yield under flooded conditions (1458 kg/ha). N11-7433 yielded 2150 kg/ha under flooded conditions, statistically similar to N05-7380 (1928 kg/ha), N11-352 (1908 kg/ha), and NC-Dunphy (2076 kg/ha). Under non-flooded conditions, however, N11-7433 (3615 kg/ha) yielded statistically less than N05-7380 (4871 kg/ha) and NC-Dunphy (4307 kg/ha).
Pearson correlation coefficients among five parameters, including yield, maturity, seed size, and visual flood ratings for flooded and non-flooded treatments, are reported in Table 5. Most notably, there were strong negative correlations between the visual ratings taken at the flooded V4 growth stage and flooded yield (r = −0.79, p < 0.01) and between the visual ratings at the flooded R1 growth stage and flooded yield (r = −0.75, p < 0.01). This validates the visual scores used to rate plots for flood tolerance, because a higher score indicates that more flood damage was observed. A strong positive correlation (r = 0.89, p < 0.01) was also observed between the visual ratings taken at the flooded V4 and flooded R1 growth stages, suggesting that most genotypes exhibit flood tolerance when flooding occurs at multiple growth stages.
† Yield was reported as kg/ha. ‡ Maturity was determined as days after October 1, where October 1 = 1. § Seed size was reported as g per 100 seeds.
The biomass of each genotype under the four flood treatments is recorded in Table 7. Within the control group, N05-7380 and N8002 had significantly higher biomass than the other genotypes tested. N8002 had the largest recorded biomass (32.2 g) and was statistically similar to N05-7380 (31.2 g). The biomass of each of the 15 genotypes decreased with each extended treatment (0 d, 3 d, 6 d, and 10 d). N05-7380 maintained the most biomass of the genotypes tested, with a mean loss of 23%. The flood-tolerant checks N8002 and NC-Dunphy had mean biomass losses of 29% and 26%, respectively. Genotype N07-15307 had the greatest mean loss of biomass, 35%.
Discussion
Previous research has shown that the growth stage at which soybeans are exposed to flooding has a significant impact on the severity of damage to plant growth and development [18,21]. In addition, a model developed to project the response of soybean to future climate scenarios showed that intense rain events had a greater negative impact on yield than a 25% increase in rainfall distributed over 1-3 months [22]. This further emphasizes the need to improve flood tolerance in soybean.
In this study, the mean flooding score was numerically higher, indicating more severe damage, at R1 than at V4 in 2021. However, in 2020, the opposite was observed. Previous research has shown that soybeans are more sensitive to flooding at the R growth stages than at the V growth stages [23]. While the level of tolerance at any particular growth stage is important, the authors conclude that the overall performance of a genotype across growth stages is the best criterion for selecting tolerant genotypes. This is further supported by the strong positive Pearson correlation between the flooded V4 and R1 ratings reported in this study.
Because it is unknown when flooding may occur during a growing season, it is desirable to select genotypes with broad tolerance to flooding across multiple growth stages. Flood-tolerance QTLs have previously been identified in exotic PI lines [20], and in this study, genotypes with exotic PI pedigrees that exhibited flood tolerance at germination and at the V4 and R1 growth stages were identified. The PIs used to develop the genotypes found to be tolerant here have not previously been evaluated for flood tolerance and will be investigated in the future to identify new QTL for flood tolerance in soybean. While PI 471938 has only been described as moderately flood tolerant, one of the tolerant experimental lines, N05-7380, and a tolerant cultivar, N8002, are each 25% derived from PI 471938 by pedigree. Other experimental lines with a similar exotic pedigree may also show promise for increased flood tolerance.
No genotype had a germination rate ≥80% when exposed to flood treatments longer than 8 h. However, the genotypes N05-7380, N10-792, N11-10295, and NC-Dunphy did exhibit germination rates >80% after 8 h of flooding; 80% is the minimum germination rate for certified soybean seed set by the Association of Official Seed Certifying Agencies (AOSCA). Maintaining a germination rate of at least 80% has been shown to produce a plant stand sufficient for maximum yield potential [24].
Genotype N05-7380 consistently performed well and had the largest percentage of germination (59%) under the most severe flood treatment (32 h). While well below the minimum requirement of 80%, this demonstrates some promise as a breeding line for improved flood tolerance at germination. N05-7380 also had the largest recorded biomass of the experimental lines tested under 0 d of flooding and continued to have the largest recorded biomass under each of the flood treatments, resulting in a mean biomass loss of only 23% after 10 days of flooding. This was statistically significant when compared with the biomass of all tested genotypes except N10-792 and N8002 (Table 7). N05-7380 statistically outperformed NC-Dunphy, a tolerant check, by maintaining 8% more yield and 3% more biomass (Tables 4 and 7). N05-7380 also had the second-largest recorded control yield, 4871 kg/ha. The control yield of N8002, a tolerant check, was not significantly greater than that of N05-7380; however, the yield of N05-7380 under flooding was significantly greater than that of N8002 under flooding (Table 4).
N10-792, a genotype that also performed well in the germination trial, had a mean biomass loss of only 26%. N10-792 had the highest yield under flooded conditions, low visual scores, and high yield under non-flooded conditions. This performance makes N10-792 an excellent line for breeding programs seeking to increase flood tolerance while maintaining high yields under non-flooded conditions. The development of flood-tolerant cultivars has recently been reported as an unfeasible approach to improving flood tolerance in soybean [12]. Thus, the results of this study are very promising and contribute greatly to improving soybean performance under flood stress.
Conclusions
The response of N05-7380 to flooding was the most consistent across the three experiments conducted. Its low visual stress ratings, high germination rate after 8 h of flooding, and yield under flooding were equal to or greater than those of several of the cultivars used as flood-tolerant checks. Other experimental lines identified as flood tolerant also demonstrated increased performance under multiple flood treatments at various growth stages compared with the susceptible genotypes. As such, N05-7380, N10-792, and other tolerant genotypes show promise for use as breeding lines in the future development of flood-tolerant cultivars by both public and private breeders. Breeding for flood tolerance is complex and requires identifying diverse genotypes across a wide range of maturity groups. Soybean maturity group classification indicates the growing region best suited to a particular genotype and is based on the period from planting to maturity, because soybean is a photoperiod-sensitive crop. The maturity and exotic pedigrees of the genotypes identified in this study offer new germplasm not previously reported.
Author Contributions: E.F. and B.F. conceived the work and contributed to the concept, design, and data collection. B.F. contributed to the statistical analysis of the data. E.F. wrote the manuscript. R.P., J.D. and C.S. supervised, improved, revised, and reviewed the article. All authors have read and agreed to the published version of the manuscript.
Funding: We would like to thank the United Soybean Board (2220-172-0154) and the North Carolina Soybean Producers Association (1109-2021-2271) for providing financial support. We also thank the support staff of the Soybean and Nitrogen Fixation Unit and the Tidewater Research Station for their assistance with field management. Mention of trade names or commercial products in this publication is solely for the purpose of providing information and does not imply recommendation or endorsement by the USDA. The USDA is an equal opportunity provider and employer.
Table 1 .
Visual ratings of flood stress injury to fifty-five genotypes evaluated in Plymouth, NC at the Tidewater Research Station in 2019 and 2020. Flooding was imposed at the V4 and R1 growth stages. Individual ratings were reported for each growth stage and combined across growth stages.
Table 2 .
Descriptive characteristics of fifteen soybean genotypes evaluated for flood stress response in 2020 and 2021 at the Tidewater Research Station in Plymouth, NC.
† A dot (.) indicates that no exotic germplasm was used in the development of this line.
Table 3 .
LS means of 15 genotypes in flooded and non-flooded (control) treatments at Plymouth, NC in 2020. The flooding treatment consisted of 4-6 cm of water above ground level for ~7 days. Yield data were not collected in 2020 due to excess rainfall during harvest.
Table 4 .
LS means of 15 genotypes in flooded and non-flooded (control) treatments at Plymouth, NC in 2021. The flooding treatment consisted of 4-6 cm of water above ground level for ~7 days. † Flooding was induced at the V4 vegetative growth stage, identified when the fourth trifoliate leaf unfolds. ‡ Flooding was induced at the R1 reproductive growth stage, identified when flowering starts. § Maturity was determined as days after October 1, where October 1 = 1.
Table 5 .
Pearson correlation coefficients for 15 genotypes in flooded and non-flooded (control) treatments at Plymouth, NC in 2021 for yield, maturity, seed size, and visual flood scores.
Table 6 .
Seed germination rate means of 15 genotypes under four flooding durations and a non-flooded control at Plymouth, NC in 2021.
Table 7 .
Biomass (g) recorded for 15 genotypes evaluated under flood stress for 3 to 10 days and a control test without flooding (0 days) at Plymouth, NC in 2021.
Stability evaluation of quail egg powder obtained by freeze-drying
This study aimed to produce quail egg powder by freeze-drying and to evaluate its stability in different types of flexible packages (low-density polyethylene, polypropylene and pigmented polypropylene) at high relative humidity (approximately 81%) and 25 °C during 59 days of storage. The packages were evaluated for water vapor permeability, and the freeze-dried egg was characterized for bulk density and hygroscopicity (initial time), and for moisture, water activity, pH and color (until the end of storage). The GAB, BET and Peleg sorption isotherm models were fitted to the experimental data to predict the monolayer moisture content of the powdered eggs. The freeze-dried quail eggs presented slight oscillation in color coordinates, a reduction in pH, and increases in moisture content and water activity during storage for all packages used. No evaluated packaging was sufficiently effective as a moisture barrier. The GAB and BET models fitted the experimental data better, and the estimated monolayer moisture values were 0.0333 and 0.0227 g H2O/g solids, respectively. The powdered quail egg has industrial potential; however, it is susceptible to significant changes throughout storage when exposed to high relative humidity in the tested packages. Commercially, as this product can be sold in regions with different temperatures and relative humidity, it is essential to consider the use of preservatives or anti-wetting agents.
Introduction
Quails are small birds (100-200 g in adulthood) belonging to the order Galliformes and family Phasianidae; the genus Coturnix is the most used for captive breeding worldwide, especially Coturnix coturnix japonica (Japanese quail), the most common quail for egg production. The Japanese quail requires a relatively small breeding area, reaches sexual maturity early, has a high egg-laying rate (about 300 eggs during its reproductive period), is resistant to disease, and is easy to handle. Therefore, its breeding can be an excellent income-generating opportunity (Chełmońska et al., 2008; Shanaway, 1994; Thélie et al., 2019).
The quail egg plays an important role in human nutrition due to its high nutritional potential, being a source of proteins, amino acids, minerals, and vitamins. Its composition is similar to that of the chicken egg (Arthur & Bejaei, 2017; United States Department of Agriculture [USDA], 2021a; USDA, 2021b); however, the production and consumption of quail eggs worldwide are still very low compared to chicken eggs (Food and Agriculture Organization of the United Nations [FAO], 2021).
The quail egg weighs an average of 9 g, approximately 1/5 the weight of a medium chicken egg (USDA, 2021a;USDA, 2021b). Therefore, in several recipes the use of the quail egg becomes inconvenient. In addition to the small size, the greater fragility of the quail eggshell (Sun et al., 2019) is also an inconvenience, resulting in significant losses along the production chain.
Processing can add value to the quail egg, offering convenience to consumers, promoting a longer shelf life (Arthur & Bejaei, 2017) and avoiding losses. A good example is dehydration of the quail egg by freeze-drying, a mild method because it does not use high temperatures. In this process, the previously frozen food is subjected to a vacuum, and the water present changes directly from the solid state to vapor (Jayaraman & Gupta, 2006; Lechevalier et al., 2013; Sokhansanj & Jayas, 2006). In addition to choosing a suitable dehydration method, it is essential that the powder produced is correctly stored so that it remains stable throughout the storage period. Therefore, the choice of packaging is essential, since it must protect the food from external factors such as moisture and oxygen.
Considering that quail breeding can be a good source of income, and given the small world production of quail eggs compared to chicken eggs, there should be incentives for quail breeding and for processing its eggs, which have great nutritional and industrial potential. However, there is a lack of literature on the production and characterization of quail egg powder. Further studies on the processing, stability and packaging of powdered quail eggs should therefore be developed, promoting a new world trend.
Several studies have already been developed to evaluate the interference of packaging on the stability of powdered foods, such as apple peel (Henríquez et al., 2013), macaúba palm (Oliveira et al., 2015) and sweetened yogurt (Seth et al., 2018), throughout storage. Moura et al. (2008) evaluated the effects of different packages on the internal quality of fresh Japanese quail eggs. However, concerning quail egg powder, there is still no publication with this same purpose.
Research, Society and Development, v. 10, n. 14, e184101420930, 2021 (CC BY 4.0) | ISSN 2525-3409 | DOI: http://dx.doi.org/10.33448/rsd-v10i14.20930
This study aimed to produce quail egg powder by freeze-drying, to characterize and evaluate its stability in different types of flexible packages (low-density polyethylene, polypropylene and pigmented polypropylene) in high relative humidity (approximately 81%), during 59 days.
Methodology
The present study consists of experimental, applied and quantitative research (Gil, 2008;Köche, 2011;Pereira et al., 2018). Methodological support was provided by the third, fourth and fifth authors. All experiments were carried out in the laboratories of the Food Engineering course, at the Federal University of Uberlândia, campus Patos de Minas, MG, Brazil. As this is a quantitative research, the experimental data were treated statistically, as detailed in item 2.8.
Obtaining and preparing eggs
The Japanese quail eggs (Coturnix coturnix japonica) were obtained in the local market. For the production of powdered quail eggs, first, they were broken, homogenized and filtered, to remove fragments of shells, chalazae, and membranes. After homogenization, the samples were pasteurized in an Ultratermostatic Bath SL -152/18 (Solab, Piracicaba, SP), at a temperature of 60 ºC for 3.5 min.
Moisture of liquid egg
The moisture of the liquid quail egg was determined by the gravimetric method, using an oven with forced air circulation, model Q314M252 (Quimis, Diadema, SP), at 60 ºC for 72 h (adapted from Vilela et al., 2016). Three replicates and three repetitions were performed.
Freeze-drying
After pasteurization, the samples were frozen in an Ultra Freezer CL 520-86V (ColdLab, Piracicaba, SP) at -80 ºC for a minimum period of 12 h. In sequence, the samples were freeze-dried in a freeze dryer L101-Liotop® (Liobras, São Carlos, SP) for 24 h, and then, defragmented using a domestic mixer to obtain the powder.
Powder characterization after freeze-drying
The egg powder was characterized for moisture, bulk density, and hygroscopicity. The moisture was determined by the gravimetric method, as previously mentioned (item 2.2), until constant weight. The bulk density was determined as the ratio of mass to volume, expressed in g/cm³. The hygroscopicity was measured according to Tonon et al. (2008) for the unpackaged egg powder and for egg powder packaged in LDPE, PP and PP-P packages. The samples were weighed, placed in a desiccator at 25 ºC and a relative humidity (RH) of approximately 75%, conditioned by a saturated NaCl solution, and weighed again after 7 days.
The hygroscopicity of the powders was expressed as the percentage of water adsorbed per mass of dry solids. All of these analyses were performed in three repetitions.
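As a worked example, hygroscopicity expressed this way can be computed as below. The sample masses are assumed for illustration; only the 1.56% initial moisture comes from this study:

```python
# Hygroscopicity as g of water adsorbed per 100 g of dry solids after
# 7 days at ~75% RH and 25 C. Masses are hypothetical.
m_initial = 1.000   # g of powder placed in the desiccator (assumed)
moisture = 0.0156   # initial moisture fraction (1.56%, from this study)
m_after = 1.052     # g after 7 days of exposure (assumed)

dry_solids = m_initial * (1 - moisture)
hygroscopicity = 100 * (m_after - m_initial) / dry_solids
print(f"{hygroscopicity:.1f} g H2O/100 g dry solids")
```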
Water vapor permeability of packages
The LDPE, PP, and PP-P packages were evaluated for water vapor permeability (WVP) following Ortiz et al. (2017), with modifications. The packages were placed in stainless steel capsules, sealed with rubber rings and screws, with an internal relative humidity of 2% conditioned by the presence of dry CaCl2. The whole set was stored in a desiccator at 75% RH, conditioned by a saturated NaCl solution at 25 °C. The capsules were weighed for 7 days at 24 h intervals. For this analysis, two repetitions were performed. WVP was calculated using Equation (1).
WVP = (K × t) / [A × P × (RH1 − RH2)] (1)

where WVP = water vapor permeability (g·μm/(m²·h·Pa)); K = water vapor permeation rate (g/h); t = thickness (μm); A = permeation area (m²); P = water vapor pressure at 25 °C (Pa); RH1 = equilibrium relative humidity/water activity outside the capsule; RH2 = equilibrium relative humidity/water activity inside the capsule. The permeation rate (K) was found from linear regression of the water gain data plotted against time, corresponding to the slope of the line.
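A minimal numerical sketch of this calculation follows. The capsule dimensions and weight-gain series are assumed for illustration; only the 75%/2% humidities and the 25 °C condition come from the text:

```python
import numpy as np

# Weighing times (h) and cumulative water gain (g) of one capsule (assumed)
hours = np.array([0, 24, 48, 72, 96, 120, 144, 168])
gain_g = np.array([0.0, 0.011, 0.023, 0.034, 0.046, 0.057, 0.069, 0.080])

# K: permeation rate (g/h), the slope of the linear regression of gain vs time
K, _ = np.polyfit(hours, gain_g, 1)

t = 50.0      # film thickness, um (assumed)
A = 0.005     # permeation area, m^2 (assumed)
P = 3169.0    # saturation vapor pressure of water at 25 C, Pa
RH1, RH2 = 0.75, 0.02   # RH outside / inside the capsule

WVP = (K * t) / (A * P * (RH1 - RH2))   # g.um/(m^2.h.Pa)
print(f"K = {K:.2e} g/h, WVP = {WVP:.4f} g.um/(m^2.h.Pa)")
```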
Powder characterization during storage
Powders were evaluated during storage for moisture, water activity, pH and color. For this, samples of 3 g were placed in the different packages. The LDPE and PP packages were heat-sealed in a sealer model M-300T (Barbi, Itu, SP), while the PP-P package, which could not be sealed at high temperature, already had an adhesive layer for sealing. All were stored for 59 days in a BOD TE-371 chamber (Tecnal, Piracicaba, SP) at 25 °C with 80.99 ± 0.28% RH, obtained with a saturated (NH₄)₂SO₄ solution (Greenspan, 1977).
Water activity and moisture analysis
The water activity was obtained by direct reading on AquaLab LITE equipment (Decagon Devices, São José dos Campos, SP). The moisture was determined by the gravimetric method, as previously mentioned (item 2.2). Three repetitions were performed for both analyses.
pH analysis
The pH was determined with a previously calibrated pH meter model mPA-210 (MS Tecnopon, Piracicaba, SP), after diluting the powder in distilled water in a proportion of 1:10 (w/v) (Instituto Adolfo Lutz, 2008). Three repetitions were performed.
Color profile
The color of the powders was determined with a Konica Minolta CR-400 colorimeter (Konica Minolta, Ramsey, NJ) with a viewing angle of 0° and illuminant C, previously calibrated. The colors, measured on the CIE L*a*b* scale, were expressed in terms of luminosity (L* = 0: black and L* = 100: white) and chromaticity (−a* = green and +a* = red; −b* = blue and +b* = yellow). The global color change (ΔE*) was calculated to express the overall color difference of the samples during the storage period (in relation to day 0), defined by Equation 2:

ΔE* = [(ΔL*)² + (Δa*)² + (Δb*)²]^(1/2) (2)

Three repetitions were performed.
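Assuming Equation 2 is the standard CIELAB color difference, ΔE* between a stored sample and day 0 can be computed as follows (the colorimeter readings below are invented for illustration):

```python
import math

def delta_e(lab_ref, lab_sample):
    """Global color change between two CIE L*a*b* readings:
    dE* = sqrt(dL*^2 + da*^2 + db*^2)."""
    return math.sqrt(sum((r - s) ** 2 for r, s in zip(lab_ref, lab_sample)))

# Hypothetical readings (L*, a*, b*) at day 0 and day 59
day0 = (85.0, 2.1, 30.5)
day59 = (82.4, 3.0, 32.6)
print(round(delta_e(day0, day59), 3))
```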
Sorption isotherms
The sorption isotherms were determined by the static gravimetric method at 25 °C. Eight saturated saline solutions were prepared: LiCl, CH3COOK, K2CO3, Mg(NO3)2, KI, NaCl, KCl and BaCl2, for approximate relative humidity values of 11.3%, 22.6%, 43.2%, 52.9%, 68.9%, 75.3%, 84.3% and 90.2%, respectively (Dufour et al., 1996; Greenspan, 1977). For each relative humidity, 1 g of freeze-dried quail egg was used. The samples were weighed at regular intervals until they reached equilibrium. The mathematical models of GAB, BET, and Peleg, shown in Table 1, were fitted to the experimental data. The fitting routine was run in the software Matlab R2013a® (Mathworks Inc., USA). For each saline solution, three repetitions were performed.

Table 1. Mathematical models used for fitting the experimental data for freeze-dried quail egg.
GAB: X = (Xm·C·K·aw) / [(1 − K·aw)(1 − K·aw + C·K·aw)]
BET: X = (Xm·C·aw) / [(1 − aw)(1 + (C − 1)·aw)]
Peleg: X = K1·aw^n1 + K2·aw^n2

X is the equilibrium water content (dry basis), Xm is the monolayer water content (dry basis), aw is water activity, C and K are model constants, and K1, K2, n1 and n2 are model parameters estimated using non-linear regression in the Matlab software. Source: Modified from Conceição et al. (2016).
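In place of the Matlab routine used in the study, the model-fitting step can be sketched with SciPy. The GAB form used here is the standard one, and the moisture data are invented for illustration (only the eight water activities come from the text):

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, Xm, C, K):
    """Standard GAB isotherm: equilibrium moisture X (dry basis) vs aw."""
    return (Xm * C * K * aw) / ((1 - K * aw) * (1 - K * aw + C * K * aw))

# Water activities of the eight saturated salt solutions (from the text)
aw = np.array([0.113, 0.226, 0.432, 0.529, 0.689, 0.753, 0.843, 0.902])
# Hypothetical equilibrium moisture contents, g H2O/g solids (invented)
X = np.array([0.021, 0.032, 0.049, 0.059, 0.086, 0.104, 0.144, 0.191])

popt, _ = curve_fit(gab, aw, X, p0=[0.03, 10.0, 0.9], maxfev=10000)
Xm, C, K = popt
print(f"Xm = {Xm:.4f} g H2O/g solids, C = {C:.2f}, K = {K:.3f}")
```

The fitted Xm is the monolayer moisture content; for comparison, the study reports 0.0333 g H2O/g solids from the GAB model.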
Statistical analysis
The influence of the storage time and of the packaging material on the quail egg powder was evaluated through analysis of variance (ANOVA) and a mean comparison test (Tukey) at the 95% confidence level, using the Statistica 7.0 software (StatSoft Inc., Tulsa, OK, USA). The results are presented as mean ± standard deviation (SD).
Moisture of liquid and powdered egg
The liquid quail egg moisture obtained in the present study was 73.12% (± 0.91), close to the 74.35% given in the quail egg composition table of the USDA (2021a). After freeze-drying, the egg presented a moisture of 1.56% (± 0.78), close to the 2% obtained by Velioğlu (2019) for freeze-dried whole quail egg. Koç et al. (2011a) produced powdered whole chicken egg by spray-drying at an inlet temperature of 171.8 ºC and reported 2.02% moisture after drying.
Bulk density
Bulk density is directly associated with transport and storage capacity, as it expresses the volume occupied per unit mass of powder (Fitzpatrick, 2013). The bulk density of the powdered quail egg was 0.299 g/cm³ (± 0.009), close to the value of 0.305 g/cm³ obtained by Koç et al. (2011a) for spray-dried chicken egg powder. Table 2 shows the results of water vapor permeability of the packages and powder hygroscopicity.
Moisture gain by a hygroscopic powder can considerably damage its technological properties (Henríquez et al., 2013). Therefore, powdered foods must be stored in packages that offer proper protection against the passage of water.
All packages acted as a barrier, significantly reducing water absorption, with no difference among the materials used. Despite the lower water vapor permeability of the PP-P package compared to the others, this was not sufficient to promote a significant difference in hygroscopicity. Koç et al. (2011a) found a hygroscopicity value of approximately 5% for unpackaged powdered whole chicken egg produced by spray-drying and exposed for 90 minutes to an atmosphere of 75.3% RH at 25 ºC. According to these authors, the highest absorption of moisture occurred in this interval.
Water activity and moisture analysis
Tables 3 and 4 show the results of water activity and moisture, respectively, for powdered quail eggs in the three packages during storage. Throughout the storage period, there was a significant increase in the water activity and moisture of the powders for all packages, limiting their shelf lives. A maximum moisture of 5% (w/w) and a water activity of up to 0.3 are general quality requirements for dried foods intended for commercialization, since under these conditions microbial growth and undesirable reactions are avoided (Henríquez et al., 2013; Sangamithra et al., 2014). The water activity and moisture values were already above these limits on the 7th and 17th days of storage, respectively. Therefore, despite being water barriers, the packages used were not sufficient to fully guarantee the stability of the powdered egg during storage under the established conditions of relative humidity and temperature.
It is also important to note that the quail eggs did not receive any additives to help extend their shelf life. Furthermore, because the powder is rich in nutrients and has low moisture and water activity, the potential gradient for moisture adsorption is high when it is subjected to a high RH, as in this study. Commercially, as this product can be sold in regions with different temperatures and RH, it is essential to consider the use of preservatives or anti-wetting agents.
In some periods, it was possible to notice that the type of packaging influenced the moisture gain by the powders.
However, on the last day of storage, there was no significant difference between the moisture values of the different samples.

pH analysis

Table 5 shows the pH values for powdered quail eggs in the three packages during storage. Powdered eggs packaged in the three types of packages showed a significant reduction in pH during storage. However, the packaging material did not affect the pH profiles of the eggs, except on the 52nd day, when the pH of the egg packaged in LDPE was higher than the others.
One of the reasons for this drop in pH during storage, which was accompanied by moisture gain in the powders, may have been the action of lipolytic enzymes, which promote the release of free fatty acids and phosphoric acid, and of proteolytic enzymes, with the release of amino acids (Lieu et al., 1978). Other possibilities are the transformation of basic amines into less basic structures and the formation of acids from the degradation of sugars in Maillard reactions (Beck et al., 1990; Martins et al., 2001).
Although there are no recent studies evaluating the pH of egg powder over storage, a behavior similar to that found in this study was reported by Lieu et al. (1978), who observed a reduction in the pH of powdered chicken egg (initial moisture of 5%), vacuum packed in PVC plastic bags wrapped in aluminum foil, from 8.6 to 7.7 over 6 months of storage at 23.9 ºC. The storage relative humidity was not specified by the authors.

Color profile

Table 6 shows the color coordinates for powdered quail eggs in the three packages during storage. Although some variations of L*, a* and b* were observed over the storage period (there was no significant variation of b* for powdered eggs packaged in LDPE), there was no significant difference between the first and last days of analysis concerning the color coordinates. The packages did not influence the values of L* and b*, while for a* there was interference from the packaging only on the 45th day, the value being higher for the egg packaged in PP than in the others. Chudy et al. (2015) reported an oscillatory profile in the luminosity (L*) of powdered eggs stored in the dark for 24 months at 20 ºC and a maximum RH of 75%, packaged in polyester and polyethylene bags.
There are several possible explanations for the color change of the powdered egg during the storage period. One of them is the Maillard reaction, which occurs between reducing sugars, even at low concentrations, and proteins, promoting the non-enzymatic browning of the egg. Another reaction associated with egg browning during storage is lipid oxidation (Chudy et al., 2015). With this darkening, therefore, the value of L* decreases.
In an earlier study, a relationship was observed between the reduction of luminosity in hydrolyzed egg white and moisture gain. According to the authors, this can be explained by the increased mobility of the components, with the moisture gain accelerating the browning reaction rate. Other points to consider are the change in light reflection promoted by the absorption of water by the powder particles and the degradation of carotenoids during storage. Wenzel et al. (2011) produced powdered egg yolk by freeze-drying and spray-drying and stored it in vacuum-sealed plastic packages, in the dark, for 26 weeks. At 20 ºC there was a significant reduction in the lutein and zeaxanthin contents, the main carotenoids present in the egg, especially in the first 4 weeks. These authors also reported isomerization of carotenoids, which may explain changes in a* and b* values. It is therefore difficult to attribute changes in color coordinates during storage to a single cause.
The values obtained for the global color change of freeze-dried eggs on the 59th day relative to day 0 were 3.531, 3.528, and 4.437 for the PP-P, LDPE, and PP packages, respectively, with no significant difference among them. The pigmented layer of the PP-P package therefore did not protect the powdered egg against color change.
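The global color change values discussed here are the Euclidean distance between two sets of L*a*b* coordinates (the CIE76 ΔE* formula). A minimal sketch, using hypothetical coordinates rather than the paper's measured values:

```python
import math

def delta_e(lab_before, lab_after):
    """CIE76 global color change between two (L*, a*, b*) readings."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(lab_before, lab_after)))

# Hypothetical readings for illustration only (not measured values)
change = delta_e((85.0, 2.0, 20.0), (82.0, 3.5, 21.5))
```

A ΔE* around 3-5, as reported for the three packages, is generally taken as a visually perceptible but moderate change.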
In a study by Koç et al. (2011b), powdered whole chicken egg produced by spray-drying showed a global color change of 1.617 after storage for 180 days in an aluminum-laminated polyethylene package at 20 ºC and 50% RH. Compared with the values obtained in this study, the smaller global color change may have been due to the lower RH and lower storage temperature, and may also indicate a greater barrier to water vapor and oxygen in the packaging used by those authors. Another factor to consider is that spray-drying occurs at high temperatures, unlike freeze-drying, which affects the final color of the powder and consequently the color change throughout storage.
Sorption isotherms
The sorption isotherms describe the behavior of water binding to dried foods and are of great importance to estimate and even promote an increase in the shelf life of these products (Seth et al., 2018).
The isotherm obtained for the freeze-dried quail egg showed sigmoidal type II behavior, according to the Brunauer classification (Rahman, 2009), as can be seen in Figure 1. The same behavior was reported by Koç et al. (2012) for powdered chicken egg produced by spray-drying. The GAB and BET models best fit the experimental data, presenting the highest values of R² and adjusted R² and the lowest RMSE, as can be seen in Table 7. According to the GAB and BET models, the monolayer moisture contents (Xm) for the freeze-dried quail egg were 0.0333 g H2O/g dry solids (~3.3 g H2O/100 g powder) and 0.0227 g H2O/g dry solids (~2.2 g H2O/100 g powder), respectively. Monolayer moisture refers to the amount of water strongly adsorbed on the particles' surface in the food. It indicates the ideal moisture for a maximum shelf life, being very important for the stability of the product (Tonon et al., 2009). Based on the Xm values found for the two best-fitting models, the moisture values of the powders during storage were much higher than they should have been for the product to reach its maximum shelf life, especially in the last weeks.
Rao & Labuza (2012) obtained a good fit of the GAB model for spray-dried hen egg white (mean absolute percentage error < 5%), with an Xm value of 0.062 g H2O/g dry solids. Koç et al. (2012) reported values of 0.062 g H2O/g dry solids (R² = 0.982 and RMSE = 1.494%) and 0.145 g H2O/g dry solids (R² = 0.997 and RMSE = 2.747%) for the BET and GAB fits, respectively, for spray-dried chicken egg.
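The GAB model and the goodness-of-fit criteria (RMSE, R²) used to rank the isotherm models can be sketched as below. The water-activity/moisture data points are hypothetical illustrations; only the Xm value (0.0333 g H2O/g dry solids) comes from the results above, and the C and K parameters are placeholders:

```python
import math

def gab(aw, xm, c, k):
    """GAB sorption model: equilibrium moisture (g H2O/g dry solids) vs water activity."""
    return xm * c * k * aw / ((1.0 - k * aw) * (1.0 - k * aw + c * k * aw))

def goodness_of_fit(observed, predicted):
    """RMSE and R^2, the criteria used to compare the isotherm models."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return math.sqrt(ss_res / n), 1.0 - ss_res / ss_tot

# Hypothetical sorption data and placeholder C, K parameters, for illustration
aw = [0.11, 0.33, 0.53, 0.75, 0.84]
x_obs = [0.018, 0.032, 0.048, 0.095, 0.150]
x_pred = [gab(a, 0.0333, 10.0, 0.9) for a in aw]
rmse, r2 = goodness_of_fit(x_obs, x_pred)
```

In practice the three GAB parameters would be fitted to measured equilibrium data (e.g. by nonlinear least squares) before computing RMSE and R².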
Conclusion
The powdered quail egg has industrial potential, mainly due to its nutritional value, and its processing should be further studied. This study showed that, despite the good stability in the visual evaluation of freeze-dried quail eggs, the powder is susceptible to changes during storage when exposed to high relative humidity and packaged in bags with a low water barrier. To promote more stable characteristics during storage of freeze-dried quail eggs at high relative humidity, thicker packages or other strategies (such as additives) should be used together to extend the shelf life of this nutritionally rich product. For future research, the authors suggest evaluating the stability (including microbiological quality) of powdered quail eggs obtained by freeze-drying, as well as by other drying methods, stored in packages with higher barriers to water permeability and under different conditions of relative humidity and temperature, in order to define packages that promote a longer shelf life for powdered quail eggs under different experimental conditions.
A Polypeptide-DNA Hybrid with Selective Linking Capability Applied to Single Molecule Nano-Mechanical Measurements Using Optical Tweezers
Many applications in biosensing, biomaterial engineering and single molecule biophysics require multiple non-covalent linkages between DNA, protein molecules, and surfaces that are specific yet strong. Here, we present a novel method to join protein and dsDNA molecules at their ends in an efficient, rapid and specific manner, based on the recently developed linkage between the protein StrepTactin (STN) and the peptide StrepTag II (ST). We introduce a two-step approach, in which we first construct a hybrid between DNA and a tandem of two ST peptides (tST). In a second step, this hybrid is linked to polystyrene bead surfaces and Maltose Binding Protein (MBP) using STN. Furthermore, we show the STN-tST linkage is more stable against forces applied by optical tweezers than the commonly used biotin-Streptavidin (STV) linkage. It can be used in conjunction with Neutravidin (NTV)-biotin linkages to form DNA tethers that can sustain applied forces above 65 pN for tens of minutes in a quarter of the cases. The method is general and can be applied to construct other surface-DNA and protein-DNA hybrids. The reversibility, high mechanical stability and specificity provided by this linking procedure make it highly suitable for single molecule mechanical studies, as well as biosensing and lab-on-chip applications.
Introduction
Many experiments involving the manipulation of nucleic acids and proteins require multiple strong linkages that can be established in-situ, and that can be used together and thus must be specific. For certain applications the molecules involved are immobilized on surfaces, either because the experimental setup requires fixing and controlling the position of the molecular ends or because the molecular phenomenon is measured using surface-sensitive techniques [1,2]. One example of an experiment demanding such supramolecular structures at surfaces is the binding of liposome-ssDNA hybrids to surface-immobilized DNA in order to detect single nucleotide polymorphisms using total internal reflection fluorescence (TIRF) microscopy [3]. Another example is the large-scale positioning of self-assembled functional DNA nanoarrays on surfaces [4], which have been used to construct arrays of quantum dots, proteins, and DNA targets. Supramolecular constructs that link micron-sized beads have been used to engineer molecular wires and to guide the assembly of nano- and microstructures [5][6][7][8]. Metal wires have been fabricated by depositing metals on multi-protein and DNA constructs connecting the surfaces of two electrodes [9,10].
Single molecule techniques such as optical tweezers have enabled the kinetic and thermodynamic characterization of DNA and protein molecules, as well as their interaction [11][12][13][14].
In these methods, the two ends of the molecule of interest are typically manipulated by linking them to surfaces, either directly or via molecular handles. Here, molecular linkages are preferably established in-situ while still being able to sustain large forces over long timescales. Different classes of linkages have been used: antibody-antigen linkages [13], the family of Streptavidin (STV)-biotin linkages [13][14][15], covalent disulfide linkages [14] and covalently binding proteins (HaloTag [16] or SNAP-tag [17]). Each has its own strengths and drawbacks. Antibody-antigen interactions are specific and diverse, but their affinities are affected by buffer conditions, pH and temperature, which limits the experimental conditions that can be explored. Examples are Myc-AntiMyc and Dig-AntiDig. Moreover, many commercially available antibodies are polyclonal, causing variability in the force that the linkage can sustain. The Dig-AntiDig connection can be mechanically stable, and has therefore been used extensively to link DNA to surfaces [13,14]. However, this system is less suitable for interfacing to proteins, as Digoxigenin is a steroid compound [18]; it is also prone to oxidation and can thus deteriorate over time [19]. Disulfide bonds are very strong but involve long preparation times (e.g. 24-48 hr for DNA-protein coupling [20]), and the molecules of interest must be resistant to redox reactions, which limits their applicability.
The biotin-STV interaction is one of the most broadly used, as it is strong and efficiently established. STV is one of the most stable proteins, showing high resistance to temperature, urea, guanidine, and proteases [21]. This is in contrast to linkages such as HaloTag or SNAP-tag that unfold, aggregate and encourage nonspecific binding under these harsh conditions [15]. In the presence of SDS, Streptavidin begins to break up into monomers only at temperatures above 60 °C [22]. Because of the usefulness of biotin-STV interactions, efforts have been made to engineer variants and further optimize this system. Avidin is a glycosylated and positively charged protein (at neutral pH) which usually appears as a tetrameric biotin-binding molecule. Neutravidin (NTV) is a deglycosylated form of Avidin which was developed to decrease non-specific interactions [23]. It has recently been reported that Traptavidin, a mutant of STV, dissociates biotin more than tenfold slower, has increased mechanical strength and improved thermostability [15].
StrepTactin (STN) is another, recently engineered version of STV which has high affinity for biotin and in particular for its peptide ligand (Kd ≈ 1 µM), named StrepTag II (ST), which is 8 amino acids long (WSHPQFEK) [24]. STN has a tetrameric structure that provides four binding sites for ST. Additionally, the binding can be reversed by adding Desthiobiotin, which can in turn be removed by washing or dialysis. This feature has made the system popular for the purification and detection of proteins by affinity chromatography [24]. Interestingly, STN does have affinity for biotin [24], and ST can bind STV (Kd ≈ 72 µM) at the same surface pocket where biotin is complexed [25], while ST cannot bind Avidin (AV) [25,26]. Because the biotin binding pockets in NTV and AV have similar surface structures, one may expect that NTV, like AV, is unable to bind ST. It has been reported that the binding affinity of ST for STV can be further increased to nanomolar levels by using multiple tandem STs [27]. It has also been shown that in protein purification, having multiple tandem STs improves the binding affinity to STN [24]. ST can be cleaved enzymatically, and the ST-STN interaction is resistant to reducing agents (DTT and mercaptoethanol), denaturing agents (urea 1 M), chelating agents (EDTA 50 mM) and detergents (SDS 0.1% and Triton X100 2%). ST is proteolytically stable, biologically inert and does not interfere with membrane translocation or protein folding [24]. The strength of the STN-ST linkage has recently been studied by Atomic Force Microscopy [28,29], in which a single ST was fused to a protein and STN was anchored to a surface via PEG-based [29] or long protein-based [28] handles. The linkage showed average dissociation forces of 40 and 60 pN at pulling rates of 337 and 200 nm/s, respectively [28,29]. It is unclear what the dissociation force is for STN immobilized directly on the surface, or for multiple STs binding to a single STN.
The properties of the ST-STN linkage show promise for use in optical tweezers experiments and biomaterial engineering. These applications typically require multiple linkages that are specific and strong, which ST-STN can potentially deliver. One challenge is to construct polypeptide-DNA hybrids, which would be required for such an approach. Oligonucleotides (6-16 mers) conjugated to a tripeptide have been used for PCR amplification to successfully construct hybrids of DNA with short polypeptides [30]. The feasibility of synthesizing oligonucleotides conjugated to long polypeptides, and using them to amplify DNA segments, remains unclear.
We present a straightforward method to efficiently construct end-joined molecular hybrids in a manner that is mechanically stable and specific. To increase the stability [24,27], our method uses a tandem two STs (tST)-STN linkage to couple two molecules A and B, where both A and B can potentially be either DNA or protein of arbitrary size. Here we demonstrate the coupling of Maltose Binding Protein to a 920 nm long dsDNA. We find that DNA molecules can be coupled well to the surface via tST-STN linkage. The linkage is more stable against applied force than the biotin-STV linkage and can be used in conjunction with biotin-NTV to stably tether DNA and to construct protein-DNA hybrids.
Materials and Methods
Design and synthesis of the oligo-peptides
A tandem arrangement of two STs (tST: WSHPQFEKWSHPQFEK) was chemically synthesized and linked to the primer (5′ GTC TCG CGC GTT TCG GTG ATG ACG GTG 3′) at its 5′ end via a linker (-Cys-SMCC-C6) (BioSynthesis Inc.). The product was purified by HPLC and characterized by mass spectrometry (Applied Biosystems Voyager System 2051).
Synthesis of dsDNA-tST
The 2553 bp DNA handles were generated by PCR using Taq DNA polymerase and pUC19 plasmid DNA (New England BioLabs) as template. 500 ng of handles were generated at a time using a 50 µl PCR reaction. The two types of handles (with and without biotin) were generated using the above oligo-peptide as a forward primer together with the primer 5′ TA6GTA6CCGCTCATGAGAC 3′ as a reverse primer (6 is biotin-dT for biotinylated DNA and ''T'' for non-biotinylated DNA). Polymerase chain reaction reagents for each 50 µl reaction volume included: 1 unit of Taq polymerase (New England BioLabs), 5 µl of 10x PCR buffer (New England BioLabs), 10 pmol of the forward primer and 10 pmol of the reverse primer, 5 µl of 2 mM dNTPs (Fermentas), and 50 ng of the plasmid DNA. The PCR profile was as follows: 1 min at 94 °C; 30 cycles of 30 s at 94 °C, 60 s at 52 °C and 3 min at 72 °C; finally followed by 10 min at 72 °C and a 4 °C soak.
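As a sanity check on scheduling, the thermocycling program above can be totalled; this sketch only encodes the step times quoted in the text and ignores ramp times between temperatures:

```python
# PCR program from the text (times in seconds): 1 min at 94 °C;
# 30 cycles of 30 s / 60 s / 3 min; then 10 min at 72 °C before the 4 °C soak.
initial_denaturation = 60
cycle_steps = [30, 60, 180]      # denature, anneal, extend
n_cycles = 30
final_extension = 600

total_s = initial_denaturation + n_cycles * sum(cycle_steps) + final_extension
total_min = total_s / 60         # ramp times ignored
print(total_min)                 # 146.0 minutes
```

The comparatively long annealing and extension steps (1 and 3 min per cycle) are what the Results section identifies as necessary for efficient amplification with the oligo-peptide primer.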
Expression and purification of Maltose Binding Protein (MBP)
Two repeats of the sequence encoding the ST (tST) were introduced by PCR at the C-terminus of the MBP sequence using plasmid pNN226 as a template. The inserts were ligated via HindIII/NdeI restriction sites into the pET3 vector to generate the expression plasmid. The correctness of the newly made vector was confirmed by double-strand DNA sequencing. Escherichia coli strain BL21.1 was used to express the MBP construct. The cells were grown at 37 °C in LB medium containing 100 µg/ml ampicillin to OD600 ≈ 0.6-0.7. After induction with 0.5 mM IPTG the cells were further incubated overnight at room temperature and harvested by centrifugation at 5000 rpm, 4 °C for 30 min. The cells were resuspended in cold MBP buffer (20 mM Tris-HCl (pH 7.4), 200 mM NaCl, 1 mM EDTA, 10 mM DTT) with 1x protease inhibitor. Lysozyme (Sigma Aldrich) was added to a final concentration of 1 mg/ml and the mixture was kept on ice for 20 min. The cells were lysed by repeated freezing (in liquid nitrogen until fully frozen) and thawing (at 37 °C). A small spatula tip of DNase I was added to the lysate and the mixture was kept on ice for 20 min. Freezing and thawing were repeated until the cloudy suspension became translucent. The extract was clarified by centrifugation (at 15000 rpm, 4 °C for 20 min). The tST-MBP hybrid was purified from the crude cell extract using amylose resin affinity chromatography (New England BioLabs). The clarified extract (10 ml) was transferred to a fresh amylose resin column (1 ml bead volume) and rocked gently at 4 °C for 2 hr. Unbound material was then removed by centrifugation (at 2000 rpm, 4 °C for 1 min). The resin was washed 3x with cold MBP buffer. The protein was eluted from the resin with 2.5 ml elution buffer (MBP buffer, 10 mM maltose).
Bead preparation
Carboxylated polystyrene beads (Polysciences Inc.) were covalently linked to protein (STN, NTV, STV or AntiDig) via a carbodiimide reaction (PolyLink Protein Coupling Kit, Polysciences Inc.). Briefly, 25 µl of 1% (w/v) 1.87 µm diameter carboxylated polystyrene microspheres were washed twice by pelleting at 13,200 rpm (for 10 min) in a microcentrifuge tube and resuspending in coupling buffer (400 µl in the first wash and 170 µl in the second) (PolyLink Protein Coupling Kit, Polysciences Inc.). Then 20 µl of freshly prepared EDCA solution (20 mg/ml; prepared by dissolving 1 mg EDCA in 50 µl coupling buffer) was added to the microparticle suspension and mixed gently end-over-end. After that, 20 µg of the desired protein (STN, NTV, STV or AntiDig) was added and the mixture was incubated for 1 hr at room temperature with gentle mixing. The mixture was then washed two times in 400 µl storage buffer. Protein-coated beads were stored in 400 µl storage buffer at 4 °C until use.
DNA-coated microspheres were made by mixing ~70 ng of dsDNA molecules and 1 µl of protein-coated beads in 10 µl HMK buffer (50 mM Hepes, pH 7.6, 100 mM KCl, 5 mM MgCl2). After 30 minutes of incubation on a rotary mixer (4 °C), the beads were diluted in 400 µl HMK buffer for use in optical tweezers experiments.
Optical tweezers experiments
The optical tweezers setup has been described elsewhere [13,31]. Detection of forces on the trapped bead was performed using back focal plane interferometry. Forces were recorded at 50 Hz. Trap stiffness and sensitivity were determined to be 169 ± 24 pN/µm and 2.74 ± 0.24 V/µm, respectively. A piezo-nanopositioning stage (Physik Instrumente) was used to move the sample cell and micropipette at a speed of 50 nm/s.
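The two calibration numbers convert a back-focal-plane detector voltage into a force: the sensitivity maps voltage to bead displacement, and the stiffness maps displacement to force via Hooke's law. A minimal sketch, assuming a linear detector response about the trap centre (the function name is ours, not the setup's software):

```python
# Calibration values quoted above (nominal values; uncertainties omitted)
TRAP_STIFFNESS_PN_PER_UM = 169.0   # pN per um of bead displacement
SENSITIVITY_V_PER_UM = 2.74        # detector volts per um of bead displacement

def force_pn(detector_volts):
    """Detector voltage -> bead displacement -> trap force (Hooke's law), in pN."""
    displacement_um = detector_volts / SENSITIVITY_V_PER_UM
    return TRAP_STIFFNESS_PN_PER_UM * displacement_um
```

At full sensitivity, a 1 V signal corresponds to about 0.36 µm of displacement and roughly 62 pN of force, so the 65 pN overstretching plateau sits comfortably within the linear range assumed here.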
The beads were trapped in a flow chamber consisting of three parallel streams in laminar flow: one containing STN-coated beads, one containing NTV-coated beads with the DNA construct, and a central buffer channel in which the measurements were conducted. The structure of the resulting molecular tether is schematically depicted in Figure 2a.
Polypeptide-DNA hybrids
To construct DNA molecules linked to polypeptide (tST-DNA), we used a primer covalently linked to the polypeptide. The PCR conditions were optimized to efficiently amplify the DNA from the template plasmid. By using gradient PCR, and testing several polymerases (Taq Polymerase and Phusion) and different PCR conditions, we found that comparatively long annealing and extension time (1 and 3 min per cycle respectively) allowed efficient amplification, resulting in a final yield of about 500 ng. The resulting construct was then characterized by agarose gel electrophoresis ( Figure 1b) and later tested with an optical tweezers assay (Figure 2a).
Polypeptide-protein hybrids
To synthesize a protein-polypeptide hybrid, we chose Maltose Binding Protein (MBP) as our model protein. MBP has a variety of applications in biotechnology and biological research, and is widely used to prototype a variety of biosensing platforms [32]. It is also a model protein for folding and export studies and is commonly used as a fusion partner in protein biochemistry [13,33].
The tST-MBP hybrid was constructed as described before. The hybrid was then tested by SDS-PAGE (Figure 1c), which showed a molecular weight between 37 and 50 kDa, corresponding well to the molecular weight of tST-MBP (~42.5 kDa).
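As a rough cross-check of that estimate, the mass added by the tST tag itself can be computed from its sequence using textbook average residue masses (the mass table below is standard reference data, not taken from this paper):

```python
# Average residue masses (Da) for the amino acids occurring in tST
RESIDUE_MASS = {"W": 186.21, "S": 87.08, "H": 137.14, "P": 97.12,
                "Q": 128.13, "F": 147.18, "E": 129.12, "K": 128.17}
WATER = 18.02  # one water added per peptide chain

def peptide_mass(seq):
    """Approximate average molecular mass of a linear peptide, in Da."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

tst = "WSHPQFEKWSHPQFEK"      # tandem StrepTag II from the Methods
mass_kda = peptide_mass(tst) / 1000
```

The tag contributes only about 2.1 kDa, so nearly all of the ~42.5 kDa band is accounted for by MBP itself.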
Protein-DNA hybrids
Here we aimed to optimize the specific formation of a hybrid between MBP and DNA using ST-STN linkages (MBP-tST-STN-tST-DNA). The tetrameric structure of STN provides four binding sites for STs, which in principle could allow the formation of MBP-DNA complexes with different stoichiometries. It has been shown for the STV family that a 1:1 stoichiometry can be achieved by using excess amounts of ligand (e.g. biotin) or receptor (e.g. STV or AV) [34,35]. To make this construct we first mixed STN (1 mg/ml) and tST-MBP (3 mg/ml) in a 10:1 ratio. Unbound STN was removed by amylose column purification. tST-MBP bound to the amylose column was then eluted with maltose. SDS-PAGE (Figure 1d) showed two bands for the eluted sample, one corresponding to tST-MBP and one to STN, showing that STN had successfully been bound to MBP. The previously constructed tST-DNA was then mixed with a large excess of the MBP-tST-STN hybrid (>30-fold molar excess) in order to favour binding of a single DNA molecule to each MBP. Agarose gel analysis showed a band distinctly above tST-DNA, consistent with the formation of a MBP-tST-STN-tST-DNA hybrid (Figure 1e). As expected, the MBP-tST-STN-tST-DNA hybrid shows significantly reduced mobility compared to tST-DNA due to its larger size and higher molecular weight. The successful formation of the complex hybrid also confirms the chemical structure of the constituent hybrids synthesized in the previous steps and the specificity of the linkages involved (Figures 1e and S1).
Binding specificity
In many experiments, several distinct specific linkages are typically required. For instance, when molecules are tethered between two beads in optical tweezers, each end is often attached with a different linkage. If the binding in these linkages were not specific, both ends would bind to the same bead. Here we consider the two linkages tST-STN and biotin-NTV. To test whether NTV binds specifically to biotin and not to tST, NTV-coated beads were incubated either with tST-DNA or with biotin-DNA. After 30 min, the beads were removed by centrifugation and the supernatants were loaded onto an agarose gel (Figure 1f). The results showed that biotinylated DNA bound the beads efficiently, as no DNA could be detected in the supernatant. In contrast, all of the input tST-DNA remained in the supernatant, showing no affinity for the beads. These results indicate that NTV binds selectively to biotin and not to tST, which is a central requirement for efficiently tethering tST-DNA-biotin constructs between STN- and NTV-coated beads.
Mechanical stability
To measure the mechanical stability of the linkage between tST and surface-bound STN, we pulled on a single synthesized DNA-tST-STN hybrid using optical tweezers. First, we immobilized tST-DNA-biotin constructs on NTV-coated beads by incubation, forming the biotin-NTV linkage while keeping the tST end free (Figure 2a). The NTV beads were titrated with varying amounts of tST-DNA-biotin so that only a few DNA constructs were linked to each bead. Next, the tST-STN linkage to beads coated with STN was established in-situ. Pulling curves showed overstretching at 65 pN, which indicated the presence of a single tether, and showed that the tST-STN linkage was able to sustain such forces without breaking (Figure 2b). The measured DNA stretching curves did not display additional steps that might have arisen from STN unfolding or its detachment from the surface.
Next, we performed a quantitative comparison of the mechanical stability of the tST-DNA-biotin and the biotin-DNA-Dig constructs. The latter is often used in optical tweezers studies in conjunction with STV- and AntiDig-coated beads [14,20]. Note that in general, NTV-coated beads have advantages over STV-coated beads, given the higher affinity of NTV for biotin [23]. To compare the STN and Dig linkages, we performed pulling experiments on (NTV)biotin-DNA-Dig(AntiDig) and (STN)tST-DNA-biotin(NTV) constructs, where the names in brackets indicate the two beads.
We considered a tether was established when the connections could sustain 20 pN. Connections that broke below 20 pN were disregarded (a maximum of 20% of tethers broke below 20 pN). The constructs were then stretched and relaxed multiple times with a displacement speed of 50 nm/sec to just beyond the DNA overstretching regime at about 65 pN, until the connection broke (N = 111 for the tST construct, N = 230 for the Dig constructs). We monitored the fraction of tethers able to sustain DNA overstretching, and distinguished first and subsequent pulls. Overall, we found quite similar results for the two constructs, with about 80% of the tethers able to sustain overstretching ( Figure 2c). These data suggest that the tST-STN linkage has similar stability against applied force as incubated Dig-AntiDig in the first pull.
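With sample sizes of 111 and 230 tethers, the ~80% survival fractions carry a quantifiable statistical uncertainty. A sketch of a Wilson score interval for such a proportion follows; the exact survived count (89 of 111, i.e. ~80%) is our assumption for illustration, since the text quotes only the percentage and N:

```python
import math

def survival_fraction(survived, total, z=1.96):
    """Observed fraction plus a Wilson score 95% confidence interval."""
    p = survived / total
    denom = 1.0 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return p, centre - half, centre + half

# Hypothetical count consistent with "about 80%" of N = 111 tST tethers
p, lo, hi = survival_fraction(89, 111)
```

With N on the order of 100-200, the interval spans roughly ±7 percentage points, wide enough that the two constructs' survival fractions are statistically indistinguishable, consistent with the "quite similar results" stated above.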
The stretching experiments indicated a number of additional points. For instance, for the tST-STN construct, subsequent pulls show a slight increase in the fraction of times the tether survives overstretching (Figure 2c, from 77% to 87%). A possible explanation for this increase could be the proposed bimodality of the ST-STN interaction [28]. The origin of this bimodality is believed to lie in the interaction of a single ST with a single or multiple sites on STN, where the latter is supposed to be somewhat less stable. Next, we performed additional experiments on (STV)biotin-DNA-Dig(AntiDig). These constructs showed an ability to sustain overstretching only in 40% of the cases, about half of what was found when using NTV and AntiDig beads. Thus, the biotin-STV linkage was significantly less stable than the biotin-NTV linkage, consistent with the significantly lower equilibrium binding constant for biotin-NTV [23].
When comparing all three bead-tether-bead constructs, additional observations can be made. First, the biotin-STV linkage makes the third of these constructs weaker than the first two. Thus, the biotin-STV linkage is less stable against applied force than the tST-STN linkage. The comparison also suggests that the biotin-STV linkage is less stable than the Dig-AntiDig linkage, as the latter contributes to the second construct that is very stable. This finding may be surprising, as biotin-STV is considered to be among the most stable linkages. To address this issue we hypothesised that the way in which the linkage is established could be important to stability in these experiments. Linkages can either form by incubation in bulk, during which there is a lot of time (order hour) and the molecules have many degrees of freedom. Linkages can also be formed in-situ within the tweezers apparatus by bringing the beads together, during which there is less time and fewer degrees of freedom. The former could yield more stable linkages than the latter.
To test this, we performed experiments where the Dig-AntiDig connection was formed in-situ, and contrasted this with the earlier results where this connection was formed by bulk incubation. In this experiment, Dig-DNA-biotin molecules were incubated with NTV-coated beads, and the Dig-AntiDig connection was formed in-situ within the tweezers. Compared to the bulk-incubated Dig connection, the results indeed showed a significant reduction in the fraction of tethers that survived overstretching: a 34% reduction in the first pull and a 25% reduction in the second pull (Figure S2). To further investigate this issue we measured the time at which tethers broke during sustained overstretching. For incubated Dig-AntiDig linkages, 7% of tethers broke in less than a second, while for in-situ established Dig-AntiDig linkages, 67% of tethers broke within that time (Figure S3). A similar unbinding time has been reported for in-situ formed Dig-AntiDig connections in which the DNA molecules were bound to STV-coated beads [36]. Thus, the Dig-AntiDig connection is significantly weaker when established in-situ. The type of AntiDig antibodies used may also affect stability. Polyclonal AntiDig antibodies are often used in single molecule pulling experiments [20], which could well introduce significant variability in stability. The rupture force for a monoclonal AntiDig antibody was reported to be less than 20 pN at the pulling rate used in our study [37]. With incubation there may be a bias towards stronger Dig-AntiDig junctions. Importantly, in the experiments on the (STN)tST-DNA-biotin(NTV) construct (Figure 2c), the tST-STN linkage was formed in-situ, showing that this linkage is not only stable but can also be formed rapidly.
Finally, a construct consisting of (STN)biotin-DNA-Dig(AntiDig) also sustained overstretching in about 40% of the cases in the first pull. In these experiments, none of the tethers could sustain 65 pN in the second pull. The observed binding of STN to biotin also illustrates the limitations of the specificity in this system: ST shows stable binding specifically to STN and not to NTV, but biotin binds stably both to STN and NTV, though more so to the latter. However, our protocol shows that these limitations can typically be overcome in practice, by first establishing the connection to biotin, which is less specific, and only then forming the connection to tST, which is specific.
In order to probe the difference in stability between biotin-DNA-Dig and tST-DNA-biotin tethers more exhaustively, we tested the ability to sustain high forces for long periods of time. In this experiment, tethers were first stretched to 65 pN, and those that survived the first pull were kept under a constant force of ~60 pN until they broke. Figure 3a illustrates a case of a (STN)tST-DNA-biotin(NTV) handle that survived this load for an hour. In the second pull, the handle was stretched to 60 pN (in less than 1 min) and kept under force feedback for 60 min without breaking. Next, it was relaxed (Figure 3a, between 60:00 and 60:30 min:sec) and showed a characteristic cycle of DNA overstretching (Figure 3a, between 60:30 and 61:30 min:sec). The tether broke after an additional 22 pulling cycles. Importantly, the fraction of strong tethers resisting more than 10 min at 60 pN in the second pull was found to be significantly higher with tST-STN than with Dig-AntiDig (Figure 3b). Thus, the tST-DNA-biotin handle is able to withstand high forces for longer than the biotin-DNA-Dig handle.
Conclusions
We have presented a simple procedure to specifically attach a protein to a DNA molecule, using STN-tST linkages. The method is rapid and straightforward, and can be established in-situ within biologically relevant buffers. Binding of the DNA-tST construct to surface immobilized STN shows high mechanical stability, and can readily tolerate forces as high as 65 pN for tens of minutes. The engineered linkage can be used as a reliable linker for optical tweezers studies of proteins and nucleic acids, both in constant pulling rate and force modes [38][39][40].
The motivation to use STN to end-join two molecules was based on previously reported high rupture forces (40 pN and 60 pN) [28]. We found that the average rupture force of the ST-STN linkage studied here was beyond the 65 pN overstretching transition, which may be due to the dual ST repeats or to other experimental differences. The specificity, stability, and rapid in-situ formation of the STN-tST complex allow it to be used in combination with other well-established linkages that can also be stably formed in-situ, such as NTV-biotin. Dig-AntiDig linkages of similar stability can be formed, but they require bulk incubation. Thus, the choice of linkage depends on the precise application and formation possibilities. We find that tST-STN is more stable against applied force than the commonly used biotin-STV linkage. Moreover, we show that tST-STN can be used for surface attachments as well as for linkage between DNA and protein molecules, which has not been achieved for Dig-AntiDig linkages. Because of the high stability of STN, this complex could potentially also be used over a broad thermal range and in harsh conditions.
We have shown that constructing tST-DNA hybrids is straightforward using PCR amplification, making our method suitable for broad applications. For single-molecule studies, the presented approach could be applied in combination with other peptide-DNA hybrids. For example, a halo tag-DNA hybrid could be constructed as a handle and linked covalently to halogenase-coated beads. Similarly, a peptide substrate of ubiquitin ligase could be used to generate a peptide-DNA hybrid and then be linked to a protein ligase-coated bead. The reversibility of the ST-STN reaction, using desthiobiotin [24], will also make the ST-STN linkage highly suitable for biologically inspired soft matter systems, where reversibility could open up new possibilities.

Figure S1 The specificity of tST-STN interactions. (a) 1% agarose gel showing that the protein-DNA hybrid (shown in Figure 1e) does not form in the absence of ST. Unlabeled DNA (25 ng) was mixed with a large excess of unlabeled MBP (3 mg) and STN (1 mg). The mixtures were incubated for 1 hour at 4 °C and then loaded into the 1% agarose gel. In contrast to tST-DNA, unlabeled DNA does not bind STN. In lane 2, the band appears exactly where the DNA band appears in lane 1, indicating that DNA and STN do not form a complex. A gel analysis of the mixture of tST-MBP and DNA also results in a band at the same location as DNA alone. This experiment confirms that specific tST-STN interactions are required for the formation of the hybrid shown in Figure 1e. (b) SDS-PAGE analysis illustrates that STN does not bind to MBP in the absence of ST (Experiment A) and cannot be eluted from amylose resin by maltose (Experiment B). Unlabeled MBP (0.15 mg) was added to STN (0.25 mg) and the mixture was incubated for 1 hour at 4 °C (Experiment A). The complex was then combined with amylose resin (for 2 hours at 4 °C) and the resin was subsequently washed with maltose.
SDS-PAGE showed an STN band in the master mixture (MBP+STN) (1) and in the supernatant sample (2), but no band was detected for the eluted sample at the same location (3). This confirms that the MBP-STN complex does not form in the absence of the ST linkage. This process was repeated with the pure STN solution (Experiment B). The result shows that STN molecules that bind the column nonspecifically cannot be eluted by maltose. Overall, these control experiments indicate that the eluted STN molecules in Figure 1d were linked via tST to MBP, and confirm the chemical structure of the synthesized MBP-tST-STN complex. (TIF)

Figure S2 Fraction of (NTV)biotin-DNA-Dig(AntiDig) tethers that resisted 60 pN in the first and second pull, compared between different methods of Dig-AntiDig establishment. Connections can form either by incubation in bulk or in-situ within the tweezers apparatus by bringing the beads together. In blue bars, Dig-DNA-biotin molecules were incubated with NTV-coated beads, and the Dig-AntiDig connection was formed in-situ. In purple bars, Dig-DNA-biotin molecules were incubated with AntiDig-coated beads, and the biotin-NTV connection was formed in-situ. The statistics show a reduction in the fraction of surviving tethers (both in the first and second pull) when the Dig-AntiDig linkage was formed in-situ. (TIF)

Figure S3 Histogram of unbinding times of tethered (NTV)biotin-DNA-Dig(AntiDig) held at overstretching, compared between different methods of Dig-AntiDig establishment. Linkages can form either by incubation in bulk or in-situ within the tweezers. In blue bars, Dig-DNA-biotin molecules were incubated with NTV-coated beads, and the Dig-AntiDig connection was formed in-situ. In purple bars, Dig-DNA-biotin molecules were incubated with AntiDig-coated beads, and the biotin-NTV connection was formed in-situ.
The statistics show that most of the in-situ-formed Dig-AntiDig connections broke immediately (blue bars), while only a few of the Dig-AntiDig linkages formed by incubation (purple bars) broke within that time.
Development of Privacy Features on Anecdata.org, a Free Citizen Science Platform for Collecting Datasets for Climate Change and Related Projects
The Anecdata website and its corresponding mobile app provide unique features to meet the needs of a wide variety of diverse citizen science projects from across the world. The platform has been developed with the help of continuous feedback from community partners, project leaders, and website users and currently hosts more than 200 projects. Over 8,000 registered users have contributed more than 30,000 images and over 50,000 observations since the platform became open to the public in 2014. From its inception, one of the core tenets of Anecdata’s mission has been to make data from citizen science projects freely accessible to project participants and the general public, and in the platform’s first few years, it followed a completely open data access model. As the platform has grown, hosting ever more projects, we have found that this model does not meet all project needs, especially where endangered species, property access rights, participant safety in the field, and personal privacy are concerned. We first introduced features for data and user privacy as part of “All About Arsenic,” a National Institutes of Health (NIH)/National Institute of General Medical Sciences (NIGMS) Science Education Partnership Award (SEPA)-funded project at MDI Biological Laboratory, which engages middle and high school teachers and students from schools across Maine and New Hampshire in sampling their home well water for analysis of arsenic and other heavy metals. In order to host this project on Anecdata, we developed features for spatial privacy or “geoprivacy” to conceal the coordinates of samplers’ homes, partial data redaction tools we call “private fields” to withhold certain sample registration questions from public datasets, and “participant anonymity” to conceal which user account uploaded an observation. We describe the impetus for the creation of these features, challenges we encountered, and our technical approach. 
While these features were originally developed for the purposes of a public health and science literacy project, they are now available to all project leaders setting up projects on Anecdata.org and have been adopted by a number of projects, including Mass Audubon’s Eastern Meadowlark Survey, South Carolina Aquarium’s SeaRise, and Coastal Signs of the Seasons (SOS) Monitoring projects.
INTRODUCTION Citizen Science and Evolution of Anecdata
Citizen science, or the involvement of citizens in scientific research, is an effective strategy for expanding capacity for science and fostering the use of science in decision-making about complex problems (Wals et al., 2014;Dillon et al., 2016). Anecdata.org is an online platform developed at the MDI Biological Laboratory's Community Environmental Health Lab for the collection of observational data from citizen scientists that is uniquely designed to enable project leaders and participants to utilize their data to enact change (Disney et al., 2017).
The development of Anecdata started in 2014 to provide a data management system for several citizen science projects run by the Community Environmental Health Lab, ranging from bay monitoring to seagrass restoration. Until then, the projects used a combination of Microsoft Excel sheets and Access databases to store data, which became very prone to errors as projects scaled up and made it difficult for the administrative and research team to effectively share data with collaborators and community members in a timely fashion.
Over the years, Anecdata evolved as an online platform for citizen science data collection, aggregation, and analysis through continuous feedback and suggestions from community partners who reached out to our team to host their projects. The development of the platform has followed an Agile software development methodology as defined in the Agile Manifesto, where features are developed by prioritizing and valuing "individuals and interactions over process and tools, working software over comprehensive documentation, customer collaboration over contract negotiations, and responding to change over following a plan" (Hazzan and Dubinsky, 2014).
Today, Anecdata is freely available to citizen groups and community partners around the world. As of publication, it is home to more than 200 projects, where more than 8,000 registered users have contributed over 30,000 images and more than 50,000 observations. Anecdata also continues to serve as a key platform for projects at the Community Environmental Health Lab, especially "All About Arsenic," a 5-year National Institutes of Health (NIH)/National Institute of General Medical Sciences (NIGMS) Science Education Partnership Award (SEPA)-funded project that focuses on building data literacy among middle and high school students while engaging them in sampling their home well water for arsenic and other contaminants and sharing their findings within their local and regional communities. This is the project that provided the impetus for the development of data privacy features on Anecdata.
Privacy Features on Anecdata
Managing a large repository of online citizen science datasets opens many avenues for developing best practices for citizen science digital data management, including ensuring privacy of certain data types. A high-level overview of data management in citizen science includes individual research topics such as data acquisition, data quality, data infrastructure, data security, data governance, data documentation, data access, data services, and data integration (Bowser et al., 2020).
Many projects on Anecdata have informed the development of new functionalities on the platform. "All About Arsenic" 1 is the first project where we systematically developed three such features: (1) "geoprivacy," so that sample site coordinates could be obscured; (2) "private fields," so that certain data fields could be concealed from public view; and (3) "participant anonymity," so that the identity of the person who originally registered a sample is not revealed. These features, defined in Table 1, subsequently became available for all projects on Anecdata. Although most data are available for the public to view and download, fields that have been marked as private are only available to project administrators.
Privacy features are critical components of many citizen science projects where protecting the privacy and security of individual participants is essential. Incorporating these features in the design and development of the citizen science platform allows project leaders to support their project participants in making informed and safe decisions about their personally identifiable information (Bowser et al., 2014).
There are multiple reasons why "All About Arsenic" project participants want their personal information obscured. The potential health impacts of arsenic exposure raise issues of medical privacy. In addition, high levels of arsenic in well water could affect property values. A study on the effect of elevated arsenic levels in well water on home values in two Maine towns showed no significant negative impact after 2 years (Boyle et al., 2010). However, a later survey of private well owners in Maine revealed the belief that mitigating arsenic in well water would increase the value of their homes (Flanagan et al., 2015). The relationship between well water quality and negative impact on home values has been documented in other parts of the nation as well (Guignet et al., 2016).
The new privacy features stemming from the "All About Arsenic" project are now available and accessible to all projects on Anecdata and provide vital functionality for groups that are crowdsourcing a wide variety of information that requires data privacy. For these projects, datasets (which are considered privileged) can be downloaded by project administrators but not the general public.
One of the first projects to adopt new data privacy features after they were introduced on the Anecdata site was Coastal Signs of the Seasons (SOS) Monitoring. This program is an offshoot of a New England-wide phenology program that engages citizen scientists in observing 19 upland and coastal indicator species with two main objectives. The first is to characterize the biological effects of climate change through the collection of phenology data and the second is to empower citizens to become a part of the solution to climate change by participating in research comparing the current timing of life cycle events for individual species with historically documented events such as leaf-out, flowering, and gamete production (Stancioff et al., 2017). Other climate change and related projects on Anecdata soon followed suit and adopted privacy features.
While individual projects may have policies that adhere to laws and ethical standards (Guerrini et al., 2018), technology platforms such as Anecdata have a role to play in promoting ethics in citizen science by building in features that provide options and support for privacy controls at both the individual and project levels (Bowser et al., 2014).
As we enter an era where citizen science and open science receive greater recognition, we can celebrate that information is more freely available to everyone for use in advocacy, to promote environmental improvements, to enhance human health, to protect wildlife, and more. At the same time, there are concerns about data quality, stewardship, privacy, security, and control (Bowser et al., 2020), particularly in the case of data that relate to human health (Majumder and McGuire, 2020). Anecdata is in the company of several citizen science platforms that have aimed to achieve a balance between unrestricted public access to data and levels of privacy for project leaders and data contributors, such as CitSci.org (Wang et al., 2015;Lynn et al., 2019), Open Humans (Tzovaras et al., 2019), and iNaturalist (Bowser et al., 2014).
Anecdata supports location and user privacy features and provides the option for any additional data fields to be kept private. In this paper, we present our "All About Arsenic" project as a case study in data privacy and relate it to an early adopter of data privacy features on Anecdata.org, SOS Monitoring.
Anecdata Technology Stack Description
Anecdata is an online platform composed of a server-side data management system, a public Web interface, and mobile apps for iOS and Android. Anecdata was designed to manage and publicly share our project data at the Community Environmental Health Lab. It is freely available for others to use for projects that serve the public good. While originally envisioned for use with environmental and conservation data, it is now being used by project leaders and participants to collect and share a range of dataset types, including public health and city planning.
Both the website and the mobile app exchange data with the Anecdata server using the same application programming interface (API) endpoints, which send and receive structured data such as lists of observations, chat messages, or user profiles. The Anecdata server is written in PHP using the CakePHP framework and uses the MariaDB relational database for data storage. The Anecdata website and mobile app are both developed in TypeScript using the Angular framework. The mobile app additionally uses the Ionic framework to provide a native user experience and interface with the device's hardware. By using Angular across all platforms (both mobile and website), the shared code reduces the overall development time when introducing new features. All features developed for one project can be easily replicated across and made available to all projects on the platform.
Data Collection Schema on Anecdata
The sequence of steps for setting up a new citizen science project or getting involved in an existing project on Anecdata is depicted in Figure 1. For everyone, the first step involves creating a user account with an email address and password. A date of birth is captured during user account registration to ensure that all users are at least 13 years of age, as US federal law requires for online platforms that collect personally identifiable information.
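Under the hood, such an age gate is a small calculation at registration time. A minimal sketch in TypeScript (function and constant names are illustrative; the paper does not show Anecdata's actual implementation):

```typescript
// Sketch of an age gate at account registration (illustrative names,
// not Anecdata's actual code). US law requires users of platforms that
// collect personally identifiable information to be at least 13.
const MINIMUM_AGE_YEARS = 13;

function isOldEnough(dateOfBirth: Date, today: Date = new Date()): boolean {
  // Age in whole years, accounting for whether the birthday has
  // occurred yet in the current calendar year.
  let age = today.getFullYear() - dateOfBirth.getFullYear();
  const birthdayPassed =
    today.getMonth() > dateOfBirth.getMonth() ||
    (today.getMonth() === dateOfBirth.getMonth() &&
      today.getDate() >= dateOfBirth.getDate());
  if (!birthdayPassed) {
    age -= 1;
  }
  return age >= MINIMUM_AGE_YEARS;
}
```

Note that JavaScript `Date` months are zero-based, so `new Date(2008, 0, 1)` is January 1, 2008.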
Projects, in the context of Anecdata, are pages that have been created by one or more project administrators with the purpose of gathering observations to fill a data need. Data are shared with these projects by participants in the form of observations. Project administrators use the project designer tool to enter information about their project's goals, protocols, and other essential information for project participants. This generates a custom project page from an established project page template (Figure 2). The data schema of a project can be customized to suit the needs of the project using the "datasheet" designer tool.
The "datasheet" designer tool provides an interface for creating a list (rows and columns) of named datasheet fields that participants use to enter data and offers multiple base data inputs that project administrators can choose, including text inputs, numbers, yes/no checkboxes, controlled-vocabulary dropdown menus, date, time, and geospatial coordinates. The datasheet designer also offers templates for common use cases such as litter cleanups, animal observations, water quality monitoring, and collecting biological specimens in the field.
The structure of the Anecdata datasheet system allows for the entry of two categories of data:
1. Parent fields, which are fields that pertain directly to all data on the datasheet. In the case of the "All About Arsenic" project, examples of these fields include the name of the student's school, the name of the legal guardian, and well type.
2. Child fields, which are repeating blocks of questions that allow the participant to log multiple entries. In the "All About Arsenic" project, students may submit multiple water samples (pre- and post-filtration, or from different locations in the house, such as the kitchen sink and outside faucet), and the child fields on the datasheet pertain to an individual water sample. Examples of these fields include the sample vial ID number, where the sample was collected in the home, the type of filtration system used (if any), and additional comments.
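The parent/child structure described above can be sketched as a pair of types: one set of fields per observation, plus a repeating block per water sample. All type and field names here are illustrative, not taken from Anecdata's codebase:

```typescript
// Illustrative model of an observation with parent fields (one set per
// observation) and child fields (a repeating block per water sample).
interface ParentFields {
  school: string;
  legalGuardian: string;
  wellType: string;
}

interface ChildFields {
  vialId: string;
  collectionPoint: string;   // e.g. "kitchen sink", "outside faucet"
  filtrationSystem?: string; // optional: type of filtration, if any
  comments?: string;
}

interface Observation {
  parent: ParentFields;
  samples: ChildFields[]; // a participant may register several samples
}

// A participant registering pre- and post-filtration samples:
const example: Observation = {
  parent: {
    school: "Example High School",
    legalGuardian: "A. Parent",
    wellType: "drilled",
  },
  samples: [
    { vialId: "AA-0001", collectionPoint: "kitchen sink" },
    {
      vialId: "AA-0002",
      collectionPoint: "kitchen sink",
      filtrationSystem: "reverse osmosis",
    },
  ],
};
```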
Every time participants visit an Anecdata project page and begin a new observation, they are presented with a blank datasheet with the data fields that project administrators have designed. After all data have been entered and saved, the observation becomes publicly visible on the Anecdata website.
The "All About Arsenic" project workflow provides project participants with the option to share their private data with a state agency; in Maine, well water analysis and associated metadata are shared with the Maine Center for Disease Control (CDC), and in New Hampshire, they are shared with the New Hampshire Department of Environmental Services (DES). Before entering any data into the "All About Arsenic" data form on Anecdata, the project participants encounter a disclosure question that requires them to provide or deny permission for the sharing of their private data (exact latitude and longitude, parent and student first name and last name, and home address).
Participants fill out the datasheet to register the spatial coordinates of where their sample was collected, indicate whether the sample was filtered, and share other related metadata (Table 2). The well water samples are brought to school and then shipped by teachers to the Community Environmental Health Lab where the labels on each tube are cross-checked with teacher log sheets and sample registrations on Anecdata. Cross-checked batches of well water samples from one or more classrooms are sent to the Trace Element Analysis Core (TEAC) at Dartmouth College for analysis of 14 variables including antimony, arsenic, barium, beryllium, cadmium, chromium, copper, iron, lead, manganese, nickel, selenium, thallium, and uranium.
Datasets for each batch are returned from TEAC to the Community Environmental Health Lab in Excel file format. Using a unique uploader feature on Anecdata, we align the analytic results with the metadata in the "All About Arsenic" project. Teachers alert students when sample results are ready for viewing. Parents and students use a sample lookup tool on the "All About Arsenic" project website 2 to retrieve their well water test results. When they enter their sample number, a pop-up display informs them of whether their sample data are available; if results are available, the user is automatically redirected to the Anecdata observation page for their well water test results. We added a data validation feature to the "All About Arsenic" project, which displays the maximum contaminant level (MCL) for each analyte next to the result, highlighting samples that are below the EPA MCL in green and those that are at or above the MCL in red. The complete dataset for each sample can be downloaded as a PDF so that each family has a record of its individual water sample results and associated metadata. These customized features were developed for the "All About Arsenic" project and are now available as options for other related projects on Anecdata.
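The green/red highlighting rule is a simple threshold comparison of each result against the corresponding analyte's MCL. A minimal sketch, with an illustrative placeholder MCL value:

```typescript
// Sketch of the result-highlighting rule described above: values below
// the EPA maximum contaminant level (MCL) are shown in green, values at
// or above it in red. The MCL below is an illustrative placeholder.
type Highlight = "green" | "red";

function highlightForResult(value: number, mcl: number): Highlight {
  // "At or above the MCL" is flagged red, so the comparison is strict.
  return value < mcl ? "green" : "red";
}

// Example: arsenic, using a placeholder MCL of 0.010 mg/L.
const arsenicMcl = 0.010;
```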
Development of Data Privacy Features on Anecdata
From a data management and privacy standpoint, the implementation of the "All About Arsenic" project posed several challenges because at the time, Anecdata had an open model whereby all observations were visibly linked to the participant who shared them. We recognized that for the purposes of this project, we needed to protect the locations of participants' homes as well as make sure the identities of sample registrants were protected. We developed a way to obscure this information while retaining the ID of the original observer, so they can update their sample registration later if needed.
We addressed the issue of participant privacy by obscuring the account that registered a sample and questions on the sample registration that would require personally identifiable information. By obscuring this information, effectively making it inaccessible upon public download, we anticipated that more individuals would feel comfortable about participating in the "All About Arsenic" project or other projects with similar data privacy needs.
Development of the Anonymity Feature
In order to make observations anonymous, the first step was to add a Boolean variable to the project's settings, called anonymize, which defaults to false in all projects unless otherwise selected by a project administrator. The Anecdata software checks this variable when saving new observations:
1. When anonymize is false, it stores the ID of the currently logged-in user with the observation data as it normally would.
2. When anonymize is true, a special account called @Anonymous is displayed as the creator of the observation (Figure 3). We also add a record to a table with two values, post_id (the ID of the observation) and user_id (the ID of the currently logged-in user). Data from this table are never displayed directly from any of Anecdata's API endpoints.
When retrieving an observation from the Anecdata API, we set an additional edit variable in the payload returned by the server that informs the user interface whether to display an Edit button that the user can use to correct any mistakes they may have made. For every observation displayed to the user, the Anecdata server-side software checks multiple conditions and sets edit accordingly (Table 3).
The benefit of this approach is that instead of needing to filter observations every time they are read from the database to ensure that the link to the originating user's account is removed, we simply never store the link at all in the standard table of observations and only refer to the original table when we need to check access permissions for the purposes of editing an observation.
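The anonymity scheme described above — never storing the author on the public observation row, and consulting a separate lookup table only for permission checks — can be sketched as follows. All names are illustrative and the database tables are modeled as in-memory arrays; this is not Anecdata's actual (PHP) code:

```typescript
// Sketch of the anonymity logic: when a project has anonymize enabled,
// the observation row never stores the author; the link is kept only in
// a separate lookup table consulted for edit-permission checks.
const ANONYMOUS_USER_ID = 0; // rendered as "@Anonymous" in the UI

interface ObservationRow { id: number; userId: number; data: string; }
interface AnonymousLink { postId: number; userId: number; }

const observations: ObservationRow[] = [];
const anonymousLinks: AnonymousLink[] = []; // never exposed via the API

function saveObservation(
  id: number, authorId: number, data: string, projectAnonymize: boolean
): void {
  if (projectAnonymize) {
    observations.push({ id, userId: ANONYMOUS_USER_ID, data });
    anonymousLinks.push({ postId: id, userId: authorId });
  } else {
    observations.push({ id, userId: authorId, data });
  }
}

// Edit permission: the original author may still edit their observation.
function canEdit(postId: number, requesterId: number): boolean {
  const row = observations.find((o) => o.id === postId);
  if (!row) return false;
  if (row.userId === requesterId) return true; // non-anonymous case
  return anonymousLinks.some(
    (l) => l.postId === postId && l.userId === requesterId
  );
}
```

The design choice matches the paper's point: the public row never needs filtering on read, because the author link is simply absent from it.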
Development of Spatial Privacy Feature
Our approach to spatial privacy, also known as geoprivacy, is similar to the spatial privacy model used by iNaturalist for the protection of endangered species (iNaturalist, 2019). While the exact coordinates of observations are available to project administrators, the publicly available coordinates are obscured by adding a random floating-point number between -0.1 and 0.1 to the latitude and longitude (Table 4). This random number is stored when the observation is saved and not generated each time the observation is read from the database, thereby preventing users from guessing where an observation is by refreshing the page repeatedly to deduce the exact coordinates.
The first step in implementing this feature was adding a new Boolean switch on projects, geoprivacy, which defaults to false unless otherwise chosen by a project administrator.
All Anecdata observations are located spatially using lat and lng decimal columns to store latitude and longitude in the database. We added two new columns, private_lat and private_lng, to store the exact unobscured coordinates of every observation.
We then added a function to the Anecdata server-side software that checks when saving an observation whether the corresponding project's geoprivacy is true or false (Figure 4).
The result of this is that all observations in the project have publicly displayed latitudes and longitudes that are offset by up to ±0.1 degrees from their actual location. These can be thought of as "boxes of uncertainty" on a map (Figure 5).
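A minimal sketch of this save-time obscuring step (illustrative names; Anecdata's actual server-side implementation is in PHP and is not shown in the paper):

```typescript
// Sketch of the geoprivacy scheme: a random offset in (-0.1, 0.1)
// degrees is drawn once, at save time, and stored alongside the exact
// coordinates (illustrative names).
interface StoredObservation {
  privateLat: number; // exact, visible to project administrators only
  privateLng: number;
  lat: number;        // obscured, publicly visible
  lng: number;
}

function randomOffset(): number {
  return Math.random() * 0.2 - 0.1; // uniform in (-0.1, 0.1)
}

function saveWithGeoprivacy(lat: number, lng: number): StoredObservation {
  // The offset is persisted at save time, so repeated reads return the
  // same obscured point and cannot be averaged to recover the original.
  return {
    privateLat: lat,
    privateLng: lng,
    lat: lat + randomOffset(),
    lng: lng + randomOffset(),
  };
}
```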
Development of the Private Fields Feature
A key privacy concern in the sample registration process for the "All About Arsenic" project is protecting the identities of participants. We needed to collect the names and home addresses of participants and keep these data private while keeping other aspects of their sample registrations, such as sample number, well type, and sampling date, public.
To implement this, we added an additional column to our table of datasheet template fields called private. In order to prevent a data breach, fields that have been marked as private are not saved to the standard fields table that all other data are saved in, but rather a separate table that is not normally accessed while viewing and analyzing data.
When a user or project administrator edits an observation, or when a project administrator downloads a privileged dataset, the Anecdata software checks the user's privileges before running a separate data query that loads all the private fields from the separate table and displays them on the data entry form or in the export CSV file as if they were any other column. This approach is similar to the one we use for anonymity; instead of marking data as private and actively removing them every time observations are accessed, we store them in a separate table and only include them when the data endpoint explicitly needs them and we have ascertained that the user has access privileges.
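The split-table pattern for private fields can be sketched as follows (illustrative names; the real implementation uses database tables rather than in-memory maps):

```typescript
// Sketch of the private-fields approach: fields flagged as private are
// written to a separate store and merged back in only after an explicit
// privilege check (illustrative names, not Anecdata's actual code).
interface FieldDef { name: string; private: boolean; }

const publicStore = new Map<string, Record<string, string>>();
const privateStore = new Map<string, Record<string, string>>();

function saveFields(
  obsId: string, defs: FieldDef[], values: Record<string, string>
): void {
  const pub: Record<string, string> = {};
  const priv: Record<string, string> = {};
  for (const def of defs) {
    if (!(def.name in values)) continue;
    (def.private ? priv : pub)[def.name] = values[def.name];
  }
  publicStore.set(obsId, pub);
  privateStore.set(obsId, priv);
}

function loadFields(obsId: string, isAdmin: boolean): Record<string, string> {
  const pub = publicStore.get(obsId) ?? {};
  // Private fields are only queried when the caller is privileged.
  const priv = isAdmin ? privateStore.get(obsId) ?? {} : {};
  return { ...pub, ...priv };
}
```

As with anonymity, the public read path never has to redact anything, because private values are simply never written to the store it reads from.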
After privacy features were developed and made available to all project administrators on Anecdata, numerous projects began to adopt them. To understand how these features were helpful, we emailed a feedback survey to all project administrators who had signed up to receive updates from our team, in line with Agile principles of providing a sustainable means for users of privacy features to reflect on how they could be made more effective and efficient (Hazzan and Dubinsky, 2014). The following three questions were asked of 200 project administrators:
1. Can you comment on how privacy features such as geoprivacy and/or anonymity have been helpful in your work?
2. How satisfied are you with the current privacy features on a scale of 1-5? (1 = low, 5 = high)
3. What can we do to improve the privacy features on Anecdata?
RESULTS
While we designed privacy features with our "All About Arsenic" project's needs in mind, many other projects on Anecdata are now using these same features. Since privacy features were introduced with the "All About Arsenic" project in 2018, 22 additional projects have begun using one or more privacy features (Table 5). Of these projects, 10 are using private fields, 15 are using geoprivacy, and five are using the participant anonymity feature.
Climate change-related projects using private fields include "MaMA (Monitoring and Managing Ash) Monitoring Plots Network" 3 in which participants monitor ash trees on an annual basis to determine mortality due to the invasive insect, emerald ash borer, and "Great Green Crab Hunt," 4 which involves monitoring coastal New England habitats for the invasive green crabs. Projects using geoprivacy to obscure the exact coordinates of observations include Mass Audubon's "Eastern Meadowlark Survey," 5 which collects observations of meadowlark presence and absence at 434 sites across Massachusetts, and the University of Maine's "Coastal SOS Monitoring" project, 6 which collects phenology data on rockweed as an important climate change indicator along the coast of Maine.
We note that participant anonymity is not used as frequently as the other privacy features, accounting for only five of the 23 projects on Anecdata that are utilizing these features. Three of the five are school-based, such as our "All About Arsenic" 7 project, which engages secondary school students in collecting private well water samples for analysis of arsenic and other contaminants, the "Dartmouth Dragonfly Mercury Project," 8 which involves students in collecting dragonfly larvae from streams for mercury analysis, and "NASA's Lower the Boom" project, 9 which enlists high school students in collecting measurements of background noise samples to determine how quiet supersonic jetliners would have to be in order to not cause a disturbance when flying across the continental US. In other projects, such as Mass Audubon's "Barn & Cliff Swallow Nesting Sites" project, 10 which asks local birders to identify farm buildings and other structures that may be used by nesting swallows, locations could be deduced even with the geoprivacy feature in place. Given that some project participants might identify their own farm buildings, participant anonymity is as necessary as geoprivacy in order to protect the location of these nesting sites.
Feedback on Privacy Features
We requested feedback from over 200 project administrators, 22 of whom (aside from our own "All About Arsenic" project) are currently using privacy features. We wanted to know how helpful the features were, the level of satisfaction with the features, and suggestions for improving the features. We received feedback from 11 project administrators over a 2-week period. Based on our analysis of project leader feedback on privacy features, we learned that these features are useful for reasons that we did not necessarily anticipate. We also learned about barriers and challenges for Anecdata users. Of the 11 respondents, seven currently use privacy features, two do not use the features because they are too restrictive, one does not use the features for reasons that were not stated, and one has intentions to use features in the future. One respondent commented, "I think the fact that you are asking is pretty stellar." Three themes emerged from the feedback. These themes relate to protection of endangered species and their habitats, privacy of students involved in school-based projects and other project participants, and maintenance of property owner privacy and rights. Figure 6 depicts how a suite of complementary privacy features can help to address multiple concerns across multiple projects with common reasons for wanting to preserve data privacy.
Flexibility for Privacy Settings
We learned from project administrator feedback that more flexibility is needed in privacy controls. Several project leaders indicated that they would like a higher level of control in project settings that allow them to set the degree to which location data are obscured: "For our measurements, it would be good if indeed you wouldn't see the actual house or garden where a measurement was taken but the current rounding of the GPS coordinates is too much. If it would be possible to choose a certain level of geoprivacy and the coordinates could for example also be rounded to two decimals that would be better." "We use Anecdata for our precipitation measurements... However, we realized that there could be some privacy issues. Activating the geoprivacy feature doesn't help in our case, since precipitation can change over small spatial units. Long story [short], it would be very handy to have the option of geoprivacy with different rounding options." In addition, a respondent suggested using avatars or nicknames instead of names as an alternative to having "anonymous" as the default designation in the participant privacy feature. This could also be useful if only some people need or would want to have their names obscured on the project page.
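The behavior these project leaders are asking for amounts to rounding coordinates to a project-configurable precision. A minimal sketch of such a control follows; the function name and defaults are illustrative, not part of the actual Anecdata API:

```python
def obscure_coordinates(lat, lon, decimals=2):
    """Round coordinates to a project-configurable precision.

    Illustrative helper, not the Anecdata implementation. decimals=2
    keeps roughly 1 km of precision at mid-latitudes; decimals=1
    obscures location to about 11 km.
    """
    return round(lat, decimals), round(lon, decimals)

# A precipitation project might keep two decimals, while a
# sensitive-species project rounds more aggressively:
print(obscure_coordinates(44.38562, -68.20814, decimals=2))  # (44.39, -68.21)
print(obscure_coordinates(44.38562, -68.20814, decimals=1))  # (44.4, -68.2)
```

The difference between one and two decimal places (roughly 11 km versus 1 km) is exactly why a single fixed rounding level cannot serve both precipitation projects and sensitive-species projects.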
Communicating Data Privacy Features to Project Participants
A conversation with the "Coastal SOS Monitoring" project administrator informed us of the process used to make property owners and data collectors aware of the importance of data privacy, especially as it relates to location of sampling sites on private shorelines. Project leaders or participants inform property owners that their site location is not shared and that no one can access the participant data portal without permission from a project coordinator. This gives many coastal property owners a sense of security that their site location will be obscured and kept confidential by the Anecdata system. One concern for coastal private property owners who give permission for volunteers to access the shoreline adjacent to their property is that other people will then view their property as open and accessible to the general public. Information about data privacy is provided to participants in both their in-person and online trainings. While "Coastal SOS Monitoring" project data are shared with scientists studying climate change as it relates to coastal ecology, site locations are not revealed. Privacy features can address different kinds of issues that come up related to private property. Based on feedback from project leaders using the Anecdata platform, it is clear that a formal usability study on privacy across this broad range of projects will help us to better understand why data privacy features are being used and how they can meet the growing needs for data privacy by various citizen science projects.
Technological Solutions to Human Errors
Early in the "All About Arsenic" project, non-obfuscated latitude and longitude data were inadvertently uploaded to our private arsenic platform on the Tuva data literacy website. During the time the actual coordinate data were accessible, it would have been possible for a student or other project member to use the mapping feature on the Tuva website and determine the well water quality status at points on named streets and possibly deduce the homeowner's identity. However, since there are no property lines on Tuva maps, it was unlikely that points could be correlated with individual households. Nonetheless, this made it clear to us that we needed to address this potential for error.
In order to address this, we updated the standard CSV downloader used by all projects on Anecdata to include a toggle switch for administrators that lets them switch between downloading their publicly available dataset and their privileged dataset. In order to help prevent the inadvertent sharing of datasets after they have been downloaded, privileged dataset downloads have their filenames prefixed with "admin" and the headers of all private columns are prefixed with "private".
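The two-dataset download with its "admin" and "private" prefixes can be sketched as follows. This is a simplified stand-in, not Anecdata's actual exporter; the function name, filename scheme, and column handling are assumptions for illustration:

```python
import csv
import io

def export_csv(rows, private_columns, privileged=False, name="observations"):
    """Write a project dataset to CSV.

    Public exports drop private columns entirely; privileged exports
    keep them but flag the filename with an "admin" prefix and each
    private header with a "private" prefix, mirroring the safeguards
    described in the text. (Sketch only -- names are illustrative.)
    """
    filename = f"admin_{name}.csv" if privileged else f"{name}.csv"
    if privileged:
        keep = list(rows[0])
        headers = [f"private_{h}" if h in private_columns else h for h in keep]
    else:
        keep = [h for h in rows[0] if h not in private_columns]
        headers = keep
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(headers)
    for row in rows:
        writer.writerow([row[k] for k in keep])
    return filename, buf.getvalue()

rows = [{"site": "A1", "arsenic_ppb": 12.4, "latitude": 44.38562}]
fname, data = export_csv(rows, {"latitude"})
# public export: latitude column is absent entirely
fname, data = export_csv(rows, {"latitude"}, privileged=True)
# privileged export: filename starts with "admin", header reads "private_latitude"
```

The prefixes do not prevent sharing, but they make a privileged file visually unmistakable, which is the point: the failure mode being guarded against is an administrator accidentally uploading the wrong file.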
DISCUSSION
We recognize the role that technical platforms play in ensuring that citizen science projects are undertaken in a responsible and ethical fashion that ensures privacy and/or anonymity of participants, permission from participants for disclosure of data in private fields, and location privacy where necessary. When these features are made available, project leaders setting up projects on these platforms can be guided toward more ethical projects by virtue of these available options.
"All About Arsenic" is an example of how metadata privacy can be achieved in an otherwise public-facing project. By combining geospatial, anonymity, and private field features in this project, with an option for providing permission for full disclosure of all project data, we have made it possible for this emerging citizen science dataset to effect change at the community level, protect public health, and inform public health policy. We have anecdotal reports of families installing well water filtration systems to deal with high arsenic levels in their drinking water. We are planning a follow-up study with all participants to determine actions taken in response to receiving well water test results and receiving informational materials and/or attending community outreach events hosted by students involved in the project.
In analyzing the data from those participants who gave permission to share their private data with the Maine CDC or New Hampshire DES, we noted that a higher percentage of people who did not provide permission to share their data did not know the source of their drinking water as compared to those who did provide permission to share their data (Figure 7). We are interested in pursuing the link between participant confidence in their data reporting and their willingness to have their private data shared. There may be information or features on Anecdata that could be provided to project participants that would increase their confidence in their data reporting and sharing.
Additional features that were developed for Anecdata resulted from addressing challenges related to the "All About Arsenic" project, such as ways to safely export data for use on other platforms like Tuva without disclosing information in privileged datasets. Though these features were created for the "All About Arsenic" project, all current and future projects have access to them as well.
Power of Public Data
Data collected by citizen scientists have power to effect change when there is broad access to the data (Garbarino and Mason, 2016). Researchers can download the data and use it to guide their own research. In one example, a researcher at Maine Medical Center used the "All About Arsenic" dataset when they could not find the information that they needed in the Maine CDC's Environmental Tracking Network dataset. In particular, the lead data in the well water dataset informed this researcher about the scope of lead problems across Maine and New Hampshire, and findings were incorporated into a grant application. In another example, staff from the South Carolina Aquarium were able to use data collected as part of their "Litter-Free Digital Journal" project to testify to the city council in Folly Beach, South Carolina, leading to a ban on Styrofoam and single-use plastic bags.
Future Directions
Anecdata is committed to offering features to ensure that citizen scientists have access to the data that they submit and that they can act on it when necessary. In an expansion of our thinking about "closing the citizen science data loop" (Disney et al., 2017), we plan to add improved data visualization, mapping, and communication features and a civic action toolkit to Anecdata to facilitate the use of data for improving public health, addressing issues like climate change, and informing public policy. Along these lines, we plan to add new types of spatial data collection (line and polygon) and new ways to interact with and map project data. Adding tools and new Web map functionality will allow project leaders and users to study the patterns of their project directly in the project without needing external software or accounts. We believe that this work is important, as maps can be strong communication tools especially for visual learners and communicators.
As we develop features for Anecdata, one of the key concerns is to ensure that their Agile development happens in a manner to support the privacy needs of all stakeholders involved in citizen science projects. The data access, visualization, and communication needs of the project administrators, the citizen science participants, and the general public need to be properly researched to ensure the right balance of privacy features for individual stakeholders across projects.
Our vision is for Anecdata to provide the tools needed to assist users with engaging throughout the citizen science project cycle (Figure 8) not only in data collection and visualization but also in communicating with each other to make data-driven decisions and participate in civic action that leads to impactful and lasting change.
Although privacy is clearly an important feature for many projects, as evidenced by the rapid adoption of new privacy features by projects on Anecdata, project leaders should consider the extent to which data need to remain private. There will always be tension between data privacy and openness (Anhalt-Depies et al., 2019). The question emerges: what is the motivation for privacy of particular data types, and in what instances does it really confer any benefit to the parties involved, the place where data are being collected, or the species being documented? In the case of climate change, there is a lot at stake for the future of landscapes, habitats, and species. In trying to protect species by obscuring their location, for example, specific areas of concern (such as those impacted by flooding) may not be addressed. In these types of instances, the need for privacy must be balanced with the need for openness of data.
Our collective experience with the development of privacy features has led us to explore ways to promote scientific data management and stewardship through adherence to principles of findability, accessibility, interoperability, and reusability (FAIR) (Wilkinson et al., 2016). Along these lines, we have facilitated collaborative efforts by the Anecdata community to provide translation of Anecdata into multiple languages to improve its accessibility across diverse geographic locations worldwide (Figure 9). Anecdata also has a "CSV Data uploader" feature available for project administrators that allows them to format and upload legacy data (from old datasets such as Excel sheets and databases) directly into Anecdata and make them interoperable and reusable with existing datasets. Anecdata provides APIs to researchers upon request that allow them to easily access and reuse anonymized datasets across projects.
Additional research and development efforts are currently underway to ensure that we enhance findability (search), accessibility, interoperability, and reusability of datasets across all projects on Anecdata, while ensuring that "private fields" and sensitive data (like personal information and geolocation) are only accessible to project administrators (or organizations) running citizen science projects on Anecdata.
Even though some projects may choose to make data open to the public for moving data to action, adequate information and support in terms of privacy, safety, and security of sensitive information must be provided to project administrators at regular intervals to further the Agile development of the Anecdata platform to meet privacy needs of various projects.
The development of privacy features for the "All About Arsenic" project set the stage for other projects to use privacy features across various local contexts and in support of different needs. Our journey into the development of privacy features showcases a genuine need for investment of time and effort into a usability study to help improve privacy features on Anecdata, which we plan to implement as a "next step" for Anecdata. We anticipate that continued development and refinement of key privacy features will be essential to supporting the diverse projects currently on Anecdata and those that will use Anecdata in the future. By providing an array of refined options for data privacy, Anecdata may be able to serve as a platform for a myriad of data collection projects that would benefit from but otherwise not be amenable to a citizen science approach.
FIGURE 1 |.
How Anecdata works. Anecdata is a citizen science platform that welcomes new projects, some of which are open to new participants joining. Anyone can download non-private data for analysis and interpretation, share the data with others, and use the data to plan actions aimed at effecting change at any societal level.
Condition | Can the user edit the observation?
The user is an administrator in the observation's project | Yes
The user created the observation (the observation's user_id is the same as the logged-in user's ID) | Yes
The user created the observation anonymously (there is a record in anonymous_post_owners with a post_id matching the observation's ID and a user_id matching the logged-in user's ID) | Yes
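The conditions above can be sketched as a single permission check. The data structures here are illustrative stand-ins for Anecdata's database records, not its actual schema:

```python
def can_edit_observation(user_id, observation, anonymous_post_owners):
    """Return True if the user may edit the observation.

    Mirrors the three conditions in the table above; the dict layout
    is a hypothetical stand-in for Anecdata's real records.
    """
    if user_id in observation["project_admin_ids"]:
        return True  # the user administers the observation's project
    if observation["user_id"] == user_id:
        return True  # the user authored the observation while logged in
    # The user authored the observation anonymously; ownership is linked
    # through the anonymous_post_owners records rather than the post itself.
    return any(
        rec["post_id"] == observation["id"] and rec["user_id"] == user_id
        for rec in anonymous_post_owners
    )

obs = {"id": 7, "user_id": 3, "project_admin_ids": [1]}
owners = [{"post_id": 7, "user_id": 5}]
# admins, logged-in authors, and anonymous authors may all edit;
# unrelated users may not.
```

Routing anonymous ownership through a separate linking table is what lets the public-facing observation carry no user identity at all while still letting its author return and edit it.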
Influence of chemical degradation and abrasion on surface properties of nanorestorative materials
Aim: The aim of this in vitro study was to investigate the synergistic effect of chemical degradation (erosion) and three-body abrasion (mechanical degradation) on the surface roughness (Ra) and hardness (KHN) of two nanorestorative materials and two conventional materials. Methods: Disc-shaped specimens (5 mm in diameter, 2 mm thick) of Filtek Z350 TM and TPH Spectrum TM composites and Ketac Nano TM and Vitremer TM light-curing glass ionomer cements, nanomaterials and conventional materials were prepared according to the manufacturer’s instructions. After 24 h, polishing procedures were performed and initial measurements of Ra and KHN were taken in all specimens. The specimens were divided into 12 groups (n = 10) according to material and storage media: artificial saliva, orange juice, and Coca-Cola®. After 30 days of storage, the specimens were submitted to mechanical degradation and re-evaluated for Ra and KHN. Data were tested for significant differences by repeated-measure three-way ANOVA and Tukey’s tests (p<0.05). Results: Erosion and abrasion wear significantly decreased hardness of all materials. Only Filtek Z350 roughness, however, was not affected by erosion and abrasion. All materials showed a significant increase in surface roughness after erosion and abrasion, except for Filtek Z350. After chemical and mechanical degradation, the KHN of all samples had decreased significantly. After mechanical degradation, the acidic drinks (Coca-Cola® and orange juice) were more aggressive than artificial saliva to all materials. Conclusions: A synergistic effect was observed by the increase in roughness for all materials, except for Filtek Z350; hardness values decrease for all materials, regardless of whether they were nanofilled or not. The RMGICs were more susceptible to degradation than the composites, considering both hardness and roughness surface parameters.
Introduction
The application of nanotechnology to dental materials was introduced in the past few decades. In addition to improved optical properties, nanomaterials present better mechanical behavior 1, since the nanometric particle size allows incorporating a greater amount of filler load in the restorative materials 2. Nanofillers and nanofiller "clusters" are combined to improve mechanical properties, such as three-body wear resistance. The nanofiller components also provide superior aesthetics and excellent polishing, with higher gloss and smoother surfaces than other resin-modified glass ionomers (RMGICs), while offering fluoride release similar to that of a conventional RMGIC.
A new RMGIC has recently been introduced for operative dentistry: Ketac Nano. This material contains nanofillers and clusters of nano-sized zirconia/silica that result in a highly packed filler composition. It is important to compare this material to a traditional RMGIC and a nanocomposite in order to establish whether the nano-ionomer shows a behavior similar to that of ionomeric and composite materials, thus predicting its mechanical and chemical properties.
Although it is possible to improve a material's physical properties by incorporating nanofillers into restorative materials, it should be considered that restorative materials are constantly subject to thermal, mechanical, and chemical challenges in the oral environment. De Paula et al. 3 (2011) found that nanotechnology incorporated into restorative materials is important for superior resistance to biomechanical degradation.
Those challenges can negatively influence material properties by causing degradation of the resin matrix, which affects the degradation of resin composites and glass-ionomer restorative materials 4. Soft drinks may contain several different types of acid that contribute to their low pH value 5. A study reported by Jensdottir et al. 6 (2004) found carbonated drinks, especially carbonated cola drinks, to be associated with erosion. An in vitro study has shown, however, that fruit juices may also be erosive, due to their high titratability 7. The erosive attack can induce matrix and filler degradation of restorative materials, and also potentially jeopardize the clinical performance of these materials 8. Not only can erosive attack jeopardize restorative material surfaces, but the abrasion produced by oral hygiene methods can also adversely affect the surface characteristics of restorative materials 1. This process may interfere with both health and aesthetics, since rough surfaces may predispose teeth to biofilm accumulation. De Paula et al. 3 (2011) have found that nanomaterials, when exposed to a cumulative effect of biofilm/abrasion, show superior resistance to biomechanical degradation in comparison with conventional restorative materials. It may therefore be hypothesized that toothbrush abrasion and erosion caused by an acidic diet have a synergic effect on the substance loss of dental materials.

Table 1. Materials tested in this study.
In this way, restorative materials are in a constant process of degradation in the oral cavity, and nanotechnology has been investigated for its possible application to these materials as a way to minimize the cumulative deleterious effects of this process. The aim of this in vitro study was to investigate the synergistic effect of chemical degradation (erosion) and three-body abrasion (mechanical degradation) on the surface roughness (Ra) and hardness (KHN) of two nanomaterials and two conventional materials.
Specimen Preparation and Initial Analysis
Four different types of tooth-colored restorative materials were tested in this study (Table 1): two RMGICs (Vitremer and Ketac Nano, 3M ESPE, St. Paul, MN, USA) and two composites: Filtek Z350 (3M ESPE) and TPH Spectrum (Dentsply, Caulk, USA). Thirty specimens of each material were manipulated according to the manufacturer's instructions. Materials were inserted into plastic molds with internal dimensions of 5 mm diameter and 2 mm thickness. The top surface of the filled mold was covered with a polyester strip and pressed flat with a glass slab. The top surface of all materials was cured according to the manufacturer's cure times using an Elipar Trilight curing light unit (3M ESPE), with a mean light intensity of about 800 mW/cm².
Braz J Oral Sci. 14(2): 100-105

All specimens were maintained at 100% relative humidity and 37 °C for 24 h. Then, the surfaces were wet-polished with a sequence of waterproofed silicon carbide papers (600-, 1200-, and 2000-grit) and ultrasonically cleaned (Ultrasonic Cleaner, model USC1400, Unique Co, São Paulo, SP, Brazil) in distilled water for 10 minutes to remove polishing debris. The specimens were randomly distributed into 12 groups (n = 10), according to material and storage medium: artificial saliva (control), orange juice (Minute Maid, Coca-Cola), and Coca-Cola® (Table 2).
Before erosion testing, specimens were analyzed for surface roughness and Knoop hardness. For surface roughness testing, the specimens were analyzed using a Surfcorder SE1700 instrument (Kosaka Corp, Tokyo, Japan), with a cutoff length of 0.25 mm, at a tracing speed of 0.1 mm/s. The mean surface roughness value (Ra, µm) of each specimen was obtained from three successive measurements of the center of each disk in different directions (total length analyzed: 3.750 mm) 9. Then, hardness tests were carried out with a Knoop indenter (Shimadzu, Tokyo, Japan) under a 50 g load and a 15 s dwell time. Three readings were taken for each specimen, and the mean KHN was calculated.
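For reference, the Knoop hardness number is computed from the indenter load and the length of the impression's long diagonal using the standard Knoop geometry constant. A short sketch follows; the diagonal lengths are illustrative, not measured values from this study:

```python
def knoop_hardness(load_gf, diagonal_um):
    """Knoop hardness number: KHN = 14.229 * P / d^2,
    with load P in kgf and long diagonal d in mm
    (14.229 is the standard Knoop indenter geometry constant).
    """
    p_kgf = load_gf / 1000.0
    d_mm = diagonal_um / 1000.0
    return 14.229 * p_kgf / d_mm ** 2

# Mean of three indentations per specimen, as in the study; the
# diagonal lengths below are made-up illustrative values.
readings = [knoop_hardness(50, d) for d in (108.0, 110.5, 109.2)]
mean_khn = sum(readings) / len(readings)
```

Because the diagonal enters as d squared, a small softening of the surface (a longer impression) produces a disproportionately large drop in KHN, which is why Knoop hardness is a sensitive probe of chemical degradation.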
Erosion -Storage in acidic drinks
All specimens were immersed individually in 4 mL of a storage solution: Coca-Cola® (pH 2.49), orange juice (pH 3.23), or artificial saliva (pH 7.00), for 30 days. The solutions were changed weekly and pH-tested with a portable pH meter (Orion Model 420A, Analyzer, São Paulo, SP, Brazil). In all cases, the pH electrodes were calibrated immediately before use with standard buffer solutions at pH 4.0 and 7.0. At the end of the storage period, the specimens were ultrasonically washed for 10 min.
Three-body Abrasion Test
After erosion, the tooth-brushing test was performed on all specimens at 250 cycles/min, for 30,000 cycles, with a 200 g load. Colgate Total dentifrice (Colgate Palmolive Co., São Bernardo do Campo, São Paulo, Brazil) diluted in distilled water (1:2) was used as the abrasive third body. The specimens were ultrasonically washed for 10 min, then dried and re-evaluated for roughness and hardness. Surface roughness readings were made on each specimen perpendicular to the brushing movement 10.
Statistical analysis
Data were evaluated with PROC LAB from SAS to check the equality of variances and confirm a normal distribution. Hardness and roughness data were submitted to repeated-measures three-way ANOVA and Tukey's test at a significance level of 5%.
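The full analysis (a repeated-measures three-way ANOVA with Tukey's post hoc test) was run in SAS. As a minimal, standard-library illustration of the between-groups logic underlying such tests, the one-way F statistic compares the variance between group means to the variance within groups; the hardness readings below are made-up, not the study's data:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group mean square divided
    by within-group mean square. Simplified sketch only -- the study's
    actual model had three factors and a repeated measure.
    """
    grand = mean(x for g in groups for x in g)
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative (made-up) hardness readings per storage solution:
saliva = [62.1, 60.8, 63.4, 61.7, 62.9]
juice = [51.3, 49.8, 52.6, 50.4, 51.9]
cola = [48.7, 47.2, 49.9, 48.1, 48.8]
f_stat = one_way_anova_f(saliva, juice, cola)
```

A large F indicates that the group means differ by more than within-group scatter would explain; Tukey's test then identifies which specific pairs of groups differ.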
Results
Regarding roughness, there was a significant interaction between the factors "materials" and "erosion/abrasion effect" (p<0.0001), and also between "storage solution" and "erosion effect" (p<0.0001). There was a significant difference among the three factors (p<0.0001). No significant interaction was observed between "materials" and "storage solution" (p=0.2372). The means and standard deviations of surface roughness of each material after the erosive/abrasive challenge are presented in Table 3.
Regardless of the storage solution, both composites (Filtek Z350 and TPH Spectrum) presented similar roughness values (p>0.05) and significantly lower roughness values than the glass ionomer cements, both before and after the erosive challenge/abrasion. There was no significant difference in roughness values between Ketac Nano and Vitremer in any storage condition (p>0.05). In addition, when the different storage solutions were compared for each material after the erosive challenge and abrasion, no statistically significant difference in surface roughness was observed for the TPH composite. However, orange juice was more aggressive than artificial saliva for Filtek Z350, Ketac Nano, and Vitremer, increasing the surface roughness. In all cases, however, the cumulative effect of the erosive challenge plus abrasion roughened the specimens of all materials, except for the Filtek Z350 surface.
Table 4 shows the means and standard deviations of the Knoop hardness of each material after the erosive/abrasive challenge. There was a significant interaction among the three factors (p=0.0062). There was no significant interaction between the factors "materials" and "storage solution" (p=0.6294), or between "materials" and "erosion/abrasion effect".
Before the erosion/abrasion challenge, both composites (Filtek Z350 and TPH Spectrum) presented values similar to or significantly higher than the RMGICs, which in turn presented values similar to each other. Regarding the erosion/abrasion effects on each material's surface, exposure to any of the storage solutions produced significantly lower hardness values for all materials tested. The storage solution also influenced the materials: the acidic drinks (Coca-Cola® and orange juice) were more aggressive than artificial saliva to all materials. In addition, the composites presented significantly higher hardness values than the ionomeric materials after chemical/abrasion degradation.
Discussion
Wear of a dental material involves various processes, such as abrasion and erosion. On exposure to dental biofilm acids, food-simulating constituents, and enzymes, resin-based restorative materials can be softened. Consumption of certain beverages, such as coffee, tea, soft drinks, fruit juices, and alcoholic beverages, may affect the aesthetics and physical properties of composite resins 11. Usually, after consuming beverages and foods, people brush their teeth to prevent caries development, exerting mechanical forces on the enamel/restorative material surface 12. The wear resistance of composites and RMGICs is greatly influenced by the size and shape of the filler particles. According to De Paula et al. 3 (2011) and de Fúcio et al. 13 (2012), the greater the size of the filler particles, the greater the amount of material lost.

Table 3. Surface roughness mean (standard deviation in parentheses) (µm) of restorative materials submitted to erosion/abrasion challenge.

Table 4. Knoop hardness mean (standard deviation in parentheses) (KHN) of restorative materials submitted to erosion/abrasion challenge.

Capital letters indicate comparison among storage solutions (horizontal). Lowercase letters demonstrate comparison among materials (vertical) within each storage solution and each erosion condition (before or after). Asterisks represent a statistically significant difference between erosion effects (before and after). Groups denoted by the same letter/symbol represent no significant difference (p > 0.05).
This study evaluated the cumulative effect of erosion and abrasion on composites and RMGICs. Higher roughness values were observed for the RMGICs than for the composite resins before the erosion/abrasion challenge. The differences observed at baseline among the materials' mean surface roughness are mainly related to differences in filler particle size, shape, volume, and distribution, and to their interaction with the organic matrix, allowing better polishing characteristics for the composites 14. Those results may also have arisen from the handling of the RMGICs: since they come in a powder:liquid or paste:paste formulation, air can be trapped in the material structure, resulting in surface bubbles and exposure of porosities after finishing/polishing procedures.
Similar roughness values between the nanofilled and conventional materials were observed before the erosion/abrasion challenge, for both the composite and RMGIC groups. Cavalcante et al. 15 (2009) have demonstrated, however, that nanofilled composites present lower roughness values and better polishing characteristics than hybrid composites, thanks to the presence of nanofillers. Most likely, the resinous matrix of the materials used in this study was not totally removed by the initial finishing/polishing procedures, leaving a matrix layer over the fillers.
The erosive/abrasive challenge affected the surface roughness of TPH Spectrum, but no statistically significant difference in surface roughness was observed for the TPH composite across storage solutions. The ethoxylated version of Bis-GMA (Bis-EMA) present in the TPH Spectrum matrix probably contributed to its hydrolytic and biochemical stability, owing to the hydrophobicity of this monomer. Yap et al. 16 (2000) also showed that the surface roughness of a Bis-EMA-based composite is not affected by acidic beverages. Bis-EMA shows decreased flexibility and increased hydrophobicity due to the elimination of the hydroxyl groups, when compared with composites formulated with Bis-GMA 17. Hence, the reduction in water uptake may be partially responsible for the chemical stability of composites that contain Bis-EMA.
For the other materials (Filtek Z350, Ketac Nano, and Vitremer), orange juice resulted in higher surface roughness values than did saliva and Coca-Cola®, indicating that the solutions produced different effects on the materials. Two ways to quantify the acid content of a beverage are pH and total or titratable acidity. Barbour and Shellis 18 (2007) have shown that fruit juices may also be potentially erosive, because of their high content of titratable acid; the higher the titratable acidity, the greater the erosion effects. Coca-Cola® contains phosphoric acid, which has low titratability, and has been shown to contain almost no carboxylic acid.
Only the Filtek Z350 specimens retained similar roughness values before and after the erosion and abrasion challenge. The biomechanical degradation resistance of the nanocomposite Filtek Z350 is basically related to its chemical composition. With regard to filler particles, this material is formulated with a combination of nanosized particles and nanocluster formulations 18. The higher filler loading with smaller particle size reduces the interstitial spacing, which effectively protects the softer matrix, reduces the incidence of filler exfoliation, and enhances the material's overall resistance to abrasion 19. When the nanocomposite undergoes toothbrush abrasion, only nanosized particles are plucked away, leaving the surfaces with defects smaller than the wavelength of visible light 1.
Another parameter used in this study to measure the surface changes caused by erosion/abrasion was Knoop hardness. According to the present results, both composites (Filtek Z350 and TPH Spectrum) presented higher hardness values than the RMGICs before and after the erosion/abrasion challenge. The different constitution of the organic matrices and the higher filler loading could explain the behavior of these materials. In addition, the initial hardness characteristics were not affected by the presence of nanofillers in the different materials studied.
After erosion/abrasion, all materials showed a significant reduction in hardness for all storage solutions. This reduction appears to originate from hydrolysis 20 . According to Sarkar 21 (2000), corrosive wear begins with water absorption: water diffuses internally through the resin matrix, filler interfaces, pores, and other defects, a process accelerated by the solution's low pH. Moreover, the RMGICs showed a greater loss of hardness than the resin composites after erosion/abrasion. Thus, the chemical degradation rates of different materials depend on their hydrolytic stabilities, which are mainly related to the resin matrix. As the resin matrix of composites is known to absorb only a small percentage of water 22 , composites were more degradation-resistant than hydrophilic materials such as RMGICs 23 . In addition, the storage solutions may promote dissolution near the glass particles, which could result from dissolution of the siliceous hydrogel layer of RMGICs 24 . On the other hand, the acid could also attack the resin (to a lesser extent), softening the methacrylate-based polymers, possibly by leaching comonomers such as triethylene glycol dimethacrylate (TEGDMA), and thus decreasing the surface hardness of these materials 4 . This process is further emphasized by the abrasion challenge 15 . Abrasion commonly takes place through gradual removal of the softened organic material. This removal eventually leaves the fillers unsupported and susceptible to exfoliation 25 , which may have played a part in reducing the hardness of all the materials.
It can be concluded that, depending on the chemical composition of the material and the storage medium, a synergistic effect can be observed as an increase in roughness for all materials except Filtek Z350; hardness values decreased for all materials, regardless of whether they were nanofilled. RMGICs are more susceptible to degradation than composites, in both hardness and surface roughness parameters.
This study showed that restorative materials might undergo degradation when exposed to acidic solutions and abrasive wear.However, an in vitro study presents some limitations, and thus in vivo studies should be performed to confirm these results in the oral environment.
|
v3-fos-license
|
2020-10-06T13:36:15.505Z
|
2020-09-30T00:00:00.000
|
222166332
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2077-0383/9/10/3175/pdf",
"pdf_hash": "16ae7fe926afb60b43b8280e0cacff44c83ab908",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1096",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "ef87b35596de7a687884f3ded253056c737f2dd4",
"year": 2020
}
|
pes2o/s2orc
|
Falls from Height. Analysis of Predictors of Death in a Single-Center Retrospective Study
Falls from height (FFH) represent a distinct form of blunt trauma in urban areas. This study aimed to identify independent predictors of in-hospital mortality after accidental or intentional falls in different age groups. We conducted a retrospective study of all patients consecutively admitted after a fall in eight years, recording mechanism, intentionality, height of fall, age, site, classification of injuries, and outcome. We built multivariate regression models to identify independent predictors of mortality. A total of 948 patients with 82 deaths were observed. Among the accidental falls, mortality was 5.2%, whereas intentional jumpers showed a mortality of 20.4%. The death rate was higher for increasing heights, age >65, suicidal attempts, and injuries with AIS ≥3 (Abbreviated Injury Scale). Older patients reported a higher in-hospital mortality rate. Multivariate analysis identified height of fall, dynamic and severe head and chest injuries as independent predictors of mortality in the young adults’ group (18–65 years). For patients aged more than 65 years, the only risk factor independently related to death was severe head injuries. Our data demonstrate that in people older than 65, the height of fall may not represent a predictor of death.
Introduction
Falls from height (FFH) represent a distinct form of blunt trauma in which rapid vertical deceleration determines injuries to the victim's body [1]. Falls account for almost 10% of all trauma admitted to the Emergency Department, mostly accidental (85%) [2] and in part intentional [3], the so-called "jumpers". Outcomes and injury patterns depend on multiple environmental factors like height of fall, speed at impact, landing surface as well as victim's characteristics such as age, position at impact, comorbidities, and intentionality of fall. As reported in the literature, suicidal attempts induce specific patterns of injuries [4] compared to accidental falls.
Various evidence available in the literature has already shown how increasing age and height of the fall correlate with worse outcomes [5,6]. The main aim of this study was to identify independent predictors of mortality in the study population and different age subgroups. As a secondary endpoint, the in-hospital mortality of patients of different age groups was evaluated.
Materials and Methods
The present study is a retrospective analysis of prospectively collected data of all trauma patients consecutively admitted after falling from height at ASST (Azienda Socio Sanitaria Territoriale) Niguarda Hospital, a level I trauma center in Milan, Italy, from October 2010 to December 2018. All details about trauma patients managed at Niguarda Hospital are collected in the Niguarda trauma registry, which is kept constantly updated by a Trauma Team consultant and is revised annually by the Head of Department. The institution of the trauma registry was approved by the Ethical Committee Milano Area 3 (record no. 534-102018). Given the retrospective nature of this study, specific board approval was not required.
Demographic data, Injury Severity Score (ISS), height of fall, intentionality of fall, and survival outcome, as stated by the Atlanta Trauma Registry Workshop guidelines [7], were retrieved from the registry. The height of fall was recorded in meters (m) and was determined by directly asking patients and bystanders or from pre-hospital emergency medical staff (EMS) reports. Ground-level falls were excluded from the analysis. Three different age groups were considered: pediatric age and teenagers (0-17 years), adults, and the elderly (≥65 years). We collected data on injuries of four anatomical sites according to the Abbreviated Injury Scale (AIS, 1998 version): head/face, chest, abdomen, and extremities. Critical head, chest, and abdominal injuries, defined by AIS ≥ 3, were identified. Injuries of the extremities (long bones, pelvis) with AIS ≥ 2 were analyzed to also include the typical injuries related to low-height falls. All cases with missing data, as well as patients dead on the scene, were excluded from further analysis.
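The grouping and injury-severity flags above can be sketched in a few lines. Note that the text leaves the adult age range blank, so the boundary used here (adults below the ≥65 elderly cutoff) is an assumption for illustration only:

```python
# Sketch of the study's patient classification. The adult upper bound is an
# assumption inferred from the elderly cutoff (>=65); the source leaves the
# adult range unspecified.
def age_group(age: int) -> str:
    """Assign a patient to one of the three study age groups."""
    if age <= 17:
        return "pediatric/teenager (0-17)"
    if age < 65:          # assumed boundary; elderly group starts at 65
        return "adult"
    return "elderly (>=65)"

def is_critical(ais: int, threshold: int = 3) -> bool:
    """Flag a critical injury: AIS at or above the threshold
    (3 for head/chest/abdomen, 2 for extremity injuries)."""
    return ais >= threshold
```

For example, a 42-year-old with a chest AIS of 3 would be classified as an adult with a critical chest injury, while an extremity AIS of 2 would be flagged only with the lower threshold.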
Data were recorded in a computerized spreadsheet (Microsoft Excel 2016; Microsoft Corporation, Redmond, WA, USA) and analyzed with statistical software (IBM SPSS Statistics for Windows, Version 25.0; IBM Corp., Armonk, NY, USA). Categorical variables were explored with the χ² test or Fisher's exact test when appropriate. The distribution of continuous variables was assessed with normality tests and, since no variables had a normal distribution, differences among groups were evaluated with the Mann-Whitney test for independent samples.
In order to identify independent predictors of mortality in different age groups, multivariate logistic regression models were built, providing adjusted odds ratios and 95% CI (confidence intervals). All variables not contributing to the model (overfitting) were removed one by one at each step based on the Nagelkerke R² value. Multicollinearity was preventively assessed by examining the variance inflation factor (VIF). The models' goodness of fit was explored with the Hosmer-Lemeshow test.
A two tailed p-value < 0.05 was considered statistically significant for all tests.
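The workflow just described (VIF screening of candidate covariates, then logistic regression yielding adjusted odds ratios) can be sketched on synthetic data. Everything below is hypothetical: the variable names and effect sizes are invented, since the registry data are not public, and the fit uses a plain Newton-Raphson implementation rather than the SPSS routine used in the study:

```python
# Minimal sketch of a VIF screen followed by logistic regression,
# on synthetic data only (hypothetical covariates and effects).
import numpy as np

rng = np.random.default_rng(0)
n = 948
height_m = rng.gamma(2.0, 2.0, n)                   # fall height (m)
intentional = rng.integers(0, 2, n).astype(float)   # jumper vs faller
head_ais3 = rng.integers(0, 2, n).astype(float)     # severe head injury
chest_ais3 = rng.integers(0, 2, n).astype(float)    # severe chest injury
X = np.column_stack([np.ones(n), height_m, intentional, head_ais3, chest_ais3])
true_beta = np.array([-4.0, 0.15, 1.0, 2.0, 1.0])   # invented effects
died = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

def vif(X, j):
    """VIF of column j: 1 / (1 - R^2) of regressing it on the other columns."""
    others = np.delete(X, j, axis=1)
    coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid = X[:, j] - others @ coef
    r2 = 1 - resid.var() / X[:, j].var()
    return 1 / (1 - r2)

vifs = [vif(X, j) for j in range(1, X.shape[1])]  # screen before fitting

# Newton-Raphson fit of the logistic model
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (died - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

odds_ratios = np.exp(beta[1:])  # adjusted OR per covariate
```

Covariates with a VIF above the study's 2.5 cutoff would be dropped before fitting, which is how ISS was excluded in the paper.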
Results
Data of 948 patients were retrieved from the Trauma Registry. The median age was 42 years (interquartile range, IQR 22-59). Six hundred eighty-two patients were male (71.9%) and 266 female (28.1%). The intentionality of the fall was obtained in all cases: the fall was accidental in 733 cases (77.3%) and a suicide attempt in 215 cases (22.7%). The height of the fall was obtained or calculated for all patients, with a median of 3 m (IQR 2-6). The median height was 2 m (IQR 2-4) for accidental falls ("fallers") and 7 m (IQR 5-9) for intentional jumps ("jumpers") (p < 0.001) (Table 1). The hospital mortality was 5.2% (38/733) in fallers and 20.5% (44/215) in jumpers (p < 0.001), with an overall death rate of 8.6%. Table 1 summarizes the demographic and trauma-related characteristics of the study population. The comparison between fallers and jumpers showed a significantly greater proportion of females, higher ISS, and more chest AIS ≥ 3, abdomen AIS ≥ 3, and extremities AIS ≥ 2 injuries among jumpers, as well as a greater median height of the fall.
As shown in Figure 1, the trend of deaths progressively rises for increasing clusters of heights. Detailed results of comparisons between survivors and non-survivors are reported in Table 2.
A significantly higher proportion of deaths was recorded in the over 65 years group. Moreover, among non-survivors, we detected a greater height of the fall, an increased proportion of ASA 3-4 patients, a higher number of intentional jumpers, a higher rate of AIS ≥ 3 head, AIS ≥ 3 chest, AIS ≥ 3 abdomen, and AIS ≥ 2 extremity injuries. Similarly, we detected a significantly greater proportion of patients with ISS > 15 among non-survivors.
Variance inflation factor analysis pointed out significant multicollinearity (VIF > 2.5) between ISS and the other variables meant to be entered in the multivariate models; therefore, ISS was excluded.
Height of the fall, ASA score, intentionality of the fall, and head, chest, abdomen, and extremities injuries with AIS ≥ 3 were selected to be entered in the regression models. Age was a further variable selected for the general population model.
Multivariate analysis failed to identify any independent predictor of death in the pediatric population. Intentionality, height of the fall, and severe head and chest injuries emerged as independent predictors of death in the general population and young adults' models. Furthermore, in the general population, age was identified as an independent predictor of mortality, with a 4% increase in the risk per year of age (p < 0.001; odds ratio, OR: 1.04; 95% CI: 1.03-1.06). Severe head injury was the only independent risk factor for mortality in the elderly group. Detailed results are reported in Tables 3 and 4.
All models were characterized by adequate goodness of fit. The overall predictive ability of all models was good, with remarkable results for the ones built for the general population (91.0%) and the young adults' group (93.0%).
Discussion
This study showed in a large population that death after a fall can be independently predicted in the general population by age, the height of fall, intentionality, and severe chest and head injuries. In the elderly, the height of the fall cannot be considered as an independent predictor of death.
In Western countries, FFH represents one of the leading causes of admission to the Emergency Department [8], accounting for about 10% of all trauma patients [2]. This type of trauma represents the most common mechanism of self-inflicted injury [3], especially in the presence of specific underlying conditions such as psychiatric disease, drug, or alcohol abuse with a subsequent altered mental status. In urban areas, FFH represents the most frequent method of suicide attempt [9]. Conversely, in developing countries this event is generally accidental, with predominant involvement of workers in the setting of building constructions [10] and in pediatric patients during play activities in spring and summer seasons [2,4,9]. Specific biomechanical parameters in vertical deceleration trauma such as height of the fall, age of the victim, type of ground surface [11], and intentionality represent variables that lead to defined patterns of injuries and different outcomes [4,12,13]. Nevertheless, a precise reconstruction of the events and the gathering of accurate information can be challenging when eyewitnesses or clear signs on the scene are lacking. For these reasons, in our registry, only data reported by patients, bystanders, or pre-hospital emergency medical staff (EMS) were considered truly reliable.
In our study sample, we observed 82 deaths with an overall death rate of 8.6%. The study published by Goodacre et al. described a mortality rate of 1.4% [12], while our results were more similar to those reported by Liu C.C. et al., with a mortality rate of 22.7% [14] and Velmahos et al., who reported a mortality rate of 9.6% among fallers from more than 6 m [15].
In the univariate analysis, we detected significant differences in the proportions/distributions of most of the variables taken into account. The height of the fall has been identified as a risk factor for an unfavorable prognosis, analogously to other authors [8,10,15]; Alizo et al. reported a 90% probability of death for falls exceeding 21 m (seven building stories) [16]. Our results were consistent with the literature, with a median height of 6 m for non-survivors compared to 3 m for survivors (p < 0.001). As already reported by other studies [3,4,17], our analysis also showed a significantly different distribution of mortality between accidental fallers (38/733, 5.2%) and intentional jumpers (44/215, 20.4%) after vertical deceleration trauma (p < 0.001), in both the adult and elderly age groups. In contrast to the higher rate of suicidal falls among elderly women reported in previous work by our group [18], the present study pointed out a predominance of fallers among young adult males and a slight predominance of elderly men among jumpers (p < 0.001). As shown in Table 1, we recorded a higher percentage of critical (AIS ≥ 3) chest and abdominal injuries and of AIS ≥ 2 extremity injuries among jumpers (p < 0.001). Conversely, considering cephalic injuries, there were no differences between the two groups. Moreover, major trauma (ISS > 15) prevailed among the jumpers, confirming that greater heights are preferred by jumpers, resulting in more severe injuries and worse survival outcomes.
We observed a linear growth in the death rate with increasing height of fall, due to the higher energy dissipated at impact. The only exception was represented by elderly jumpers, for whom our analysis failed to demonstrate this kind of proportional increase. Furthermore, looking at both the young adults and elderly groups, we did not observe any lethal deceleration trauma among intentional jumpers for heights < 3 m (results not shown). As already reported by other authors [19], intentional jumpers from lower heights generally adopted a feet-first pattern of landing.
On the other hand, in the case of an accidental fall from the same height cluster, the orientation of the body during airtime cannot be controlled, leading to worse injuries. Indeed, as already reported by others [3,4,6,14], the first body region hitting the ground depends on body orientation during the time of flight, height, and intentionality of the fall. Suicidal attempts generally show a "feet-first" pattern of landing [4,15], whereas head injuries occur in unintentional falls with no control of the body during the fall. Our results show the unlikelihood of a critical head injury occurring after low-level jumps (Table 1), resulting in low mortality rates among jumpers compared to fallers below 6 m. We failed to demonstrate a higher predominance of lethal head injuries in either group due to the small number of deaths among jumpers.
As reported by Demetriades [5], mortality is considerably higher in elderly patients (>65 years old), probably due to increased rigidity of anatomical structures and consequent worse dissipation of kinetic energy. This result was corroborated by our survival analysis using Fisher's exact test. These data confirm the idea that more aged patients are at increased risk of unfavorable prognosis [5], especially during the first hours, requiring the Trauma Team to keep high attention to evolving injuries while managing these frail patients.
To identify independent predictors of an adverse survival outcome, we built four multivariate regression models. The general population model pointed out a 4% and a 16% increase in the risk of death per year of age and per meter of height, respectively. Suicidal attempts accounted for a nearly 3-fold increased risk of mortality. Patients sustaining severe head and chest injuries had a 7.6-fold and a 2.7-fold increased risk of death, respectively. For the pediatric population, logistic regression analysis failed to detect any risk factor related to mortality. In the young adults' subgroup, all the aforementioned variables (except age) included in the general population model were confirmed as independent predictors of death. Of note, severe head injuries were related to a nearly 9-fold risk of death. Our results demonstrated that patients aged more than 65 years sustaining severe head injuries are at risk of worse survival outcomes regardless of the height and intentionality of the fall. This interesting finding, in contrast with those reported by some authors [5], demonstrates that age alone is a clear warning for the full activation of an experienced trauma team.
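Because the per-unit odds ratios compound multiplicatively, the reported figures (OR 1.04 per year of age, OR 1.16 per meter of height) imply, for example:

```python
# Compounding the per-unit odds ratios reported in the text.
or_per_year = 1.04    # risk of death per year of age
or_per_meter = 1.16   # risk of death per meter of fall height

# Odds multiplier for a 20-year age difference
age_effect = or_per_year ** 20      # ~2.19-fold odds

# Odds multiplier for a 6 m fall versus a 3 m fall (3 extra meters)
height_effect = or_per_meter ** 3   # ~1.56-fold odds
```

So a 20-year age gap roughly doubles the odds of death in this model, and each additional three meters of fall raises the odds by more than half.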
Nevertheless, to better interpret the apparently contradictory findings regarding the height of the fall in the general and >65-year-old models, one consideration should be made. For elderly patients, despite the relevant proportion of deaths, the average height of the fall was considerably lower than in the adult population. This can explain why multiple regression failed to identify the height of the fall as an independent predictor of death in the elderly population.
Liu C.C. et al. identified only severe head injury (AIS ≥ 4) as an independent predictor of mortality, whereas chest injuries did not seem to play any role in survival outcomes [14]. Interestingly, the odds ratios per meter of height decreased as age increased; this result can be explained by two considerations. The first age group had only four deaths (2% of the group), with a mean fall height of 3.55 m. Deaths among young adults numbered 43 (7.3% of the group), and the average height was 5 m. Finally, 35 patients died among the over-65-year-olds (21.3% of the group), with an average height of 3.48 m. Therefore, especially considering the young adults and elderly groups (which accounted for 95% of total deaths), it is reasonable to think that the ORs decreased due to the drop in the height of the fall.
To our knowledge, our study represents one of the largest single-center representations of vertical deceleration injuries at a level 1 trauma center in Europe with the review of the data of 948 patients collected in a standardized registry during an 8-year time interval. Moreover, our results seem to be comparable to those reported by other studies in the literature, especially considering the height of the fall and elderly age as independent predictors of mortality. Finally, our results were supported by a strict statistical methodology that led to the realization of multivariate models with a remarkable predictive ability.
This study has several limitations, first of all its retrospective nature. Furthermore, it did not take into account out-of-hospital mortality, which accounts for a significant proportion of deaths after FFH [17]. Moreover, the limited size of the pediatric population and the even smaller number of deaths precluded any chance of identifying independent predictors of mortality in this age group. Finally, in suicidal attempts, it was extremely hard to determine psychiatric history or substance abuse before the fall.
Conclusions
The present study demonstrated that age, height of the fall, suicidal attempt, and severe head and chest injuries represent independent predictors of mortality. Considering different age clusters, we noticed that for elderly patients (over 65 years), the height of the fall, even though correlated with a greater proportion of deaths, may not represent a predictor of mortality.
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2013-10-09T00:00:00.000
|
6083547
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12012-013-9231-1.pdf",
"pdf_hash": "8c364b5559c4bd2eceb6f6220e93167b22de1ef8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1097",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "8c364b5559c4bd2eceb6f6220e93167b22de1ef8",
"year": 2013
}
|
pes2o/s2orc
|
Improvement of Functional Recovery of Donor Heart Following Cold Static Storage with Doxycycline Cardioplegia
Injury to the donor heart during cold preservation has a negative impact on graft survival before transplantation. This study aims to examine whether doxycycline, known as an MMP-2 inhibitor, has a positive effect on donor heart preservation via its antioxidant action when added to standard preservation solution. Hearts were obtained from 3-month-old male Wistar rats and randomly divided into three groups: hearts stored for 1 h at 4 °C (1) with doxycycline preservation solution (DOX cardioplegia) with low Ca2+; (2) with standard cardioplegia with low Ca2+; and (3) unstored hearts. All hearts were perfused in working mode, arrested at 37 °C, removed from the perfusion system, reattached in Langendorff perfusion system, and converted to working mode for 1 h. At the end of the storage period, hearts preserved in DOX cardioplegia had significantly less weight gain than those preserved in the standard cardioplegia. DOX cardioplegia-induced preservation resulted in significantly higher heart rates and better recovery quality during reperfusion in aortic flow compared to the standard cardioplegia group. Recovery in the left ventricular function and Lambeth Convention Arrhythmia scores during 1 h reperfusion were also significantly better in the DOX cardioplegia group. Biochemical data showed that DOX cardioplegia prevented an increase in MMP-2 activity and blocked apoptosis through increased activity of the pro-survival kinase Akt in the donor heart homogenates. DOX cardioplegia also led to a balanced oxidant/antioxidant level in the heart homogenates. This is the first study to report that cardioplegia solution containing doxycycline provides better cardioprotection via the preservation of heart function, through its role in controlling cellular redox status during static cold storage.
Introduction
Cardiac transplantation is a life-saving procedure for patients with severe heart failure. However, its clinical application remains limited due to the lack of donor heart availability [1], and the method currently used for donor heart preservation, static cold storage (+4 °C), allows a very short preservation time of only 4-6 h outside the body [2]. These limitations have triggered a search for improved methods of preservation that could allow prolonged storage of donor hearts. Although continuous machine perfusion of donor hearts has been proposed as an alternative to cold static storage, multicentered clinical investigations showed continuous machine perfusion to be an expensive technique, and, due to its small market size, there is little commercial interest in developing its devices.
Continuous perfusion of harvested hearts with oxygen and metabolic substrates was reported to help maintain myocardial integrity during organ transport, therefore providing better support in preservation [3][4][5]. While there has been ample research into improving the quality of donor hearts and prolonging the preservation time, most previous studies had conflicting results due to the use of small animals with significantly different anatomies and physiologies from those of humans. Current preservation protocols use hypothermic arrest and simple storage, using a variety of crystalloid-based cardioplegic and preservation solutions [6,7]. These techniques limit organ procurement and safe storage time to 4-6 h. A recent study showed marked improvements in donor heart function after 8 h of cold static storage, using normokalemic, adenosine, lidocaine, melatonin, and insulin preservation solution for the isolated rat heart [8].
Cold storage is a simple, inexpensive, and reliable technique for preserving donor hearts during the ex vivo transport period [9]. However, several obstacles limit better preservation of donor hearts during the preservation interval [10]. Longer arrest times easily lead to donor heart damage and early graft dysfunction [11]. Furthermore, there is a well-established risk of primary graft dysfunction when using hearts from extended-criteria donors. Depending on the duration of the ischemic period, ATP consumption, ion homeostasis, and free radical-mediated reperfusion injury also affect postoperative myocardial dysfunction [12]. Therefore, strategies for improved preservation are necessary, particularly for more effective long-term preservation of organs.
During the last decade, research focused on a group of enzymes known as matrix metalloproteinases (MMPs), which are important mediators in cardiovascular pathologies associated with enhanced oxidative stress. The MMPs are synthesized in a latent form and are activated by proteolytic or conformational changes similar to those induced by oxidative stress [13]. MMPs have also been shown to play significant intracellular roles, including the degradation of extracellular matrix components and long-term tissue remodeling [14]. In isolated perfused heart studies, MMP inhibition was shown to reduce ischemia/reperfusion (I/R)-induced troponin I-degradation and significantly improve the recovery of mechanical function [15].
The tetracycline-class antibiotics have a distinct additional pharmacological property, independent of their antibacterial action, in relation to MMPs: doxycycline (DOX), a member of the tetracycline family of antibiotics, has been shown to inhibit both the expression and activity of MMP-2 [13] and to preserve cardiac function against I/R injury in the heart [15]. In addition, recent reports further indicate that DOX directly inhibits cysteine protease activity, and indirectly inhibits serine protease activity through the inhibition of MMP-mediated degradation of endogenous serine protease inhibitors [13].
We have previously demonstrated that in vivo DOX treatment of diabetic rats preserved both cardiac and aortic functions due to its antioxidant-like action [16]. Therefore, in the present study, we hypothesized that cold static storage of the donor heart with DOX cardioplegia may prevent I/R-induced injuries, and thus preserve cardiac function, by prolonging the preservation period. We used an isolated perfused heart model, in which hearts were perfused in the working state and preserved in modified Krebs-Henseleit solution for 1 h with either DOX preservation solution or standard preservation solution at +4 °C. This is the first study to report that a cardioplegia solution containing DOX provides better cardioprotection via the preservation of heart function, through its role in controlling cellular redox status as well as by blocking apoptosis through increased activity of the pro-survival kinase Akt in donor heart homogenates during static cold storage.
Experimental Animals
All animals were handled in accordance with the Guide for the Care and Use of Laboratory Animals (National Institutes of Health, Bethesda, MD). The protocol of the study was approved by the Local Ethics Committee on Animal Experiments of Ankara University (Approval no. 2009- ).
Hearts of 3-month-old Wistar male rats weighing 250-300 g were used. Rats were housed under a 12-h/12-h light/dark cycle with food and water provided ad libitum during the experimental protocol.
Isolated Heart Storage
Isolated hearts were stored in modified Krebs-Henseleit buffer with low CaCl2 (0.5 mmol/L), with or without the MMP inhibitor doxycycline (DOX, 100 μmol/L), gassed with a mixture of 95% O2 and 5% CO2. Prior to the perfusion protocol, isolated hearts were preserved in the modified Krebs-Henseleit solution with iced packages for 1 h, with either DOX preservation solution or standard preservation solution at +4 °C.
Langendorff Perfusion of Isolated Hearts
Spontaneously beating hearts were perfused via their aortas at a constant pressure of 60 mmHg with Krebs-Henseleit buffer at 37 °C after the storage periods. A water-filled latex balloon connected to a pressure transducer was inserted into the left ventricle through an incision in the left atrium and through the mitral valve, and the balloon volume was adjusted to achieve a stable end-diastolic pressure (8-12 mmHg). Heart rate (HR), arrhythmias (analyzed according to the Lambeth Convention arrhythmia scores), and left ventricular developed pressure (LVDP) were monitored on a polygraph. Coronary flow was measured with an in-line ultrasonic flow probe (Transonic Systems, Inc.) positioned proximal to the perfusion cannula. Weight gain of the hearts during preservation was assessed by weighing before and after the storage period. The hearts were maintained at a steady state of coronary flow. All hearts were stored at -80 °C until protein analysis, performed after the electrophysiological procedure.
Preparation of Heart Homogenates
Frozen hearts were crushed at liquid nitrogen temperature and then homogenized in 50 mmol/L Tris-HCl (pH 7.4) containing 3.1 mmol/L sucrose, 1 mmol/L DTT, 10 μg/mL leupeptin, 10 μg/mL soybean trypsin inhibitor, 2 μg/mL aprotinin, and 0.1% Triton X-100. The homogenates were centrifuged at 10,000×g at 4 °C for 10 min. The supernatants were collected as cytosolic fractions, stored at -80 °C, and then used to measure MMP-2 (matrix metalloproteinase-2), phospho-Akt, Akt, Bcl-2 (an apoptosis inhibitor), and Bax (an apoptosis promoter) protein levels. Protein contents of the homogenates were analyzed using the Bradford protein assay (Bio-Rad), with bovine serum albumin as the protein standard.
Gelatin Zymography
Gelatin zymography to measure MMP activity was performed as described previously [16]. Non-reduced proteins from tissue homogenates (20 µg) were loaded onto an 8% polyacrylamide gel containing gelatin. Gelatinolytic activities were detected as transparent bands against the background of Coomassie blue-stained gelatin, with cell culture medium from the LTK8 fibroblast cell line serving as a positive control. To quantify MMP-2 activity, zymograms were imaged with a Raytest camera attached to a computer running AIDA software (Germany). Band intensities were then quantified using SigmaGel (Jandel) and reported normalized with respect to their controls.
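The normalization steps here (band intensity relative to the unstored control, and the activity-to-protein ratio later shown in Fig. 4c) are simple arithmetic. The sketch below uses invented densitometry values, not the paper's measurements; the group names are shorthand for the three experimental groups.

```python
# Hypothetical densitometry readings (arbitrary units) for the 72-kDa
# MMP-2 band and its protein blot in the three groups; values invented.
activity = {"unstored": 1520.0, "standard": 2740.0, "dox": 1610.0}
protein  = {"unstored":  980.0, "standard": 1050.0, "dox": 1010.0}

# Band intensities reported normalized with respect to the unstored control.
norm_activity = {g: v / activity["unstored"] for g, v in activity.items()}

# Activity-to-protein ratio, again relative to the unstored control
# (the quantity plotted in Fig. 4c).
ratio = {g: (activity[g] / protein[g]) /
            (activity["unstored"] / protein["unstored"])
         for g in activity}
```

With these illustrative numbers, the standard-storage group shows an elevated activity/protein ratio while the DOX group stays close to 1.0, matching the qualitative pattern the paper reports.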
Western Blotting
Protein expression levels of MMP-2, phospho-Akt, Akt, Bcl-2, and Bax were determined by Western blot analysis. Equal amounts of protein from tissue homogenates were loaded and separated on 8% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) gels under reducing conditions. After electrophoresis (150 V, 3 h, 20 °C), samples were electroblotted onto a PVDF membrane by wet transfer in Towbin buffer (25 V, 2 h). β-Actin served as the loading control for the MMP-2 blots. Immunoreactive protein bands were visualized using an ECL Plus detection system.
Measurement of Total Oxidant and Total Antioxidant Status in Heart Homogenates
Total oxidant and total antioxidant status in the heart homogenates were measured using commercial kits (Rel Assay Diagnostics). The total oxidant status (TOS) measurement is based on the oxidation of the ferrous ion-o-dianisidine complex to ferric ion by the oxidants present in the samples. The ferric ion forms a colored complex with xylenol orange in an acidic medium. The color intensity, measured spectrophotometrically, is related to the total amount of oxidant molecules present in the samples. The assay is calibrated with hydrogen peroxide (H2O2), and the results are expressed as µmol H2O2 equivalent per liter. The total antioxidant status (TAS) assay is based on the oxidation of reduced 2,2′-azino-bis(3-ethylbenzthiazoline-6-sulfonic acid) (ABTS) to the ABTS•+ radical cation by H2O2 in an acidic medium (30 mmol/L acetate buffer, pH 3.6). In this buffer, the concentrated (deep green) ABTS•+ remains stable for a long time. When it is diluted with a more concentrated acetate buffer at a higher pH (0.4 mol/L acetate buffer, pH 5.8), the color develops spontaneously and is then slowly bleached. Antioxidants present in the sample accelerate the bleaching rate in proportion to their concentrations. This reaction can be monitored spectrophotometrically, and the bleaching rate is inversely related to the total antioxidant status (TAS) of the samples. The reaction is calibrated with a Trolox standard (an analog of vitamin E), and results are expressed as mmol Trolox equivalent per liter.
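Both assays ultimately reduce to reading a sample's absorbance change against a linear calibration curve. The sketch below shows that conversion for the TAS readout; the standard-curve absorbances and the sample reading are invented for illustration, not kit values.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for a standard curve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Calibration: ABTS bleaching (delta absorbance) vs Trolox standards (mmol/L).
trolox_std = [0.0, 0.5, 1.0, 1.5, 2.0]
delta_abs  = [0.02, 0.21, 0.40, 0.59, 0.78]   # hypothetical readings
slope, intercept = linear_fit(trolox_std, delta_abs)

def tas(sample_delta_abs):
    """Convert a sample's bleaching to mmol Trolox equivalent per liter."""
    return (sample_delta_abs - intercept) / slope

sample_tas = tas(0.45)   # hypothetical sample reading
```

The TOS readout works the same way, with an H2O2 standard curve and results in µmol H2O2 equivalent per liter.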
Statistical Analysis
Data are expressed as mean ± SEM. Statistical analysis (GraphPad Prism) was performed using the Wilcoxon matched-pairs signed-rank test, one-way ANOVA, or the Mann-Whitney U test, as appropriate. A p value of <0.05 was considered statistically significant.
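For two small independent groups, the Mann-Whitney U statistic used above can be computed by direct pairwise comparison. A minimal sketch (the group values are hypothetical, not the study's measurements):

```python
def mann_whitney_u(a, b):
    """U statistic via pairwise comparisons; ties contribute 0.5 each.

    The smaller of U and n_a*n_b - U is returned, as is conventional
    when looking the statistic up in exact tables.
    """
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return min(u, len(a) * len(b) - u)

# Hypothetical LVDP recoveries (% of baseline) for two groups of n = 5.
standard_group = [42.0, 45.0, 39.0, 44.0, 41.0]
dox_group      = [78.0, 82.0, 75.0, 80.0, 79.0]

u_stat = mann_whitney_u(standard_group, dox_group)
```

With complete separation of the two samples, U = 0, the most extreme value; for n = 5 per group this corresponds to an exact two-tailed p of 2/252 ≈ 0.008.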
DOX Preservation Solution Preserves Heart Function During Cold Static Storage
To investigate the effect of the cold static storage method on myocardial edema, we recorded and compared the weights of hearts preserved in the DOX preservation solution (DOX cardioplegia group) with those of hearts preserved in the standard preservation solution. As shown in Fig. 1a, the average heart weights following 1-h cold static storage were significantly lower in the DOX cardioplegia group.
Recovery of aortic flow was monitored by measuring the aortic pressure of Langendorff-perfused hearts during the 1-h reperfusion period, following 1-h cold static storage with standard or DOX preservation solution. Figure 1b shows the time course of aortic pressure recovery during the 1-h reperfusion period. Cold static storage with the standard preservation solution induced an approximately 50% decrease in the measured aortic pressure, and this depressed pressure persisted throughout the 1-h reperfusion period, whereas the aortic flow of the DOX cardioplegia group recovered significantly after the first 30 min of reperfusion. Figure 2 shows the left ventricular developed pressure (LVDP; 2a) and the rates of pressure change (±dP/dt max; 2b) at baseline (the value before cold static storage) and during reperfusion of donor hearts. The recoveries in both LVDP and its derivatives (±dP/dt max) were significantly improved in the DOX cardioplegia group compared with the standard preservation solution group.
Effects of DOX Preservation Solution on Recovery of Heart Rates and Lambeth Convention Arrhythmia Scores
The time course of the spontaneous heart rates of donor hearts during the 1-h reperfusion period following 1-h cold static storage with or without DOX preservation solution is given in Fig. 3a. The baseline values (control values without any storage) of the heart rates of donor hearts ranged from 320 to 360 beats/min. As shown in Fig. 3a, the recovery in heart rate during the 1-h reperfusion period was better in the DOX cardioplegia group than in the standard preservation solution group, even though the time courses of the two groups overlapped during the first 30 min of reperfusion. Figure 3b shows the Lambeth Convention Arrhythmia (LCA) scores during the 1-h reperfusion period (presented as the value for every 10 min). The LCA scores of donor hearts were recorded using a bipolar ECG with two electrodes. The hearts beat spontaneously during reperfusion to mimic donor hearts. During the 1-h reperfusion period, the LCA scores were monitored and presented as mean (±SEM). Compared with the unstored group (data not shown), the LCA scores of both stored groups increased significantly during the reperfusion period; however, the LCA scores of the DOX cardioplegia group were significantly better than those of the standard preservation solution group. Zymography of heart homogenates after 1-h cold static storage revealed marked gelatinolytic MMP-2 activity at 72 kDa (Fig. 4a, upper part) compared with that of unstored hearts. Although Western blot analysis of the heart homogenates revealed a slight trend toward increased MMP-2 protein content after 1-h cold static storage either with or without DOX preservation solution, the differences from the unstored group were not statistically significant (Fig. 4b). As shown in Fig. 4c, whereas the ratio of MMP-2 activity to its protein level was significantly higher in the standard preservation solution group than in the unstored group, it was normalized by cold static storage using DOX preservation solution.
Doxycycline Reverses Storage-Induced Impairment of Survival Pathways During Cold Static Storage of Donor Hearts
To test a possible positive contribution of the DOX preservation solution to the apoptosis signaling pathway during the 1-h cold static storage of donor hearts, we first measured the phosphorylation level of Akt (pAkt) relative to its total protein level in the homogenates. As shown in Fig. 5a, whereas the ratio of pAkt to Akt in donor heart homogenates kept in the standard preservation solution was markedly lower than that of the unstored hearts, it was fully preserved in the DOX cardioplegia group.
We also examined the Bcl-2/Bax ratio, a marker of apoptosis and another indicator of the survival pathway. As shown in Fig. 5b, DOX preservation solution did not have a significant effect on the increased Bcl-2/Bax ratio measured in donor hearts following 1-h cold static storage.
DOX Preservation Solution During Cold Static Storage of Donor Hearts Preserves Myocardial Total Antioxidant Capacity
A well-established way to assess oxidative stress in any tissue is to measure both total oxidant status (TOS) and total antioxidant status (TAS) [16]. To explore the balance between TOS and TAS in donor hearts after 1-h cold static storage, we measured their levels in the heart homogenates of all three groups (values are mean ± SEM for n = 7-8 rats/protocol; differences between DOX cardioplegia and standard cardioplegia were tested at p < 0.05 by one-way ANOVA). TOS was higher and TAS lower in donor hearts after 1-h cold static storage with the standard preservation solution than in both the unstored and the DOX cardioplegia groups (Fig. 6a, b, respectively). This suggests that using DOX preservation solution during cold static storage of donor hearts can preserve the balance between TOS and TAS in the donor myocardium.
Discussion
The present study demonstrated that a donor heart preservation solution containing doxycycline (DOX cardioplegia) provides much better cardioprotection than standard cardioplegia during 1-h reperfusion following 1 h of cold static storage. The better cardioprotection was evidenced by normalized left ventricular function and Lambeth Convention Arrhythmia scores, and was further supported by normalized MMP-2 activity and a partial blockage of apoptosis through increased activity of the pro-survival kinase Akt in donor heart homogenates. The composition of the perfusion solution is one of the key factors for the success of cold static preservation, which is still the most widely used technique for preserving donor hearts [17]. However, despite extensive research, the optimal preservation solution is yet to be defined; indeed, Demmy et al. [18] documented the wide variety of heart preservation solutions in use in the USA alone. The use of suboptimal solutions, which fail to minimize certain important functional alterations in the donor heart, may lead to cardiac allograft dysfunction [19].
Oxidative stress-associated alterations in several intracellular pathways have been implicated in the pathophysiology of severe donor heart damage during reperfusion. As shown in earlier studies, a fundamental pathway for cardiac damage during reperfusion involves marked increases in superoxide anion, hydrogen peroxide, and possibly singlet oxygen production [20]. In line with these findings, Bretschneider's solution, developed as a cardioplegic solution for routine cardiac surgery, was demonstrated to effectively reduce energy requirements and prevent reperfusion damage owing to its role as a strong scavenger of hydroxyl radicals and reactive oxygen species (ROS), leading to improved myocardial protection via control of cellular oxidative stress levels [21,22]. Preventing, or at least controlling, increases in the oxidative stress levels of donor hearts during cold storage therefore seems to be a crucial part of heart transplantation in cardiac surgery. The present study reports effectively improved myocardial protection during reperfusion when donor hearts are stored in a cardioplegia containing DOX (DOX cardioplegia) for 1 h at +4 °C.
Previous research has indicated the need for a greater understanding of the role of Akt, one of the mechanisms essential to surgical and other clinical sciences, in the protection of heart preparations during ischemia-reperfusion [23]. Our present data demonstrate that DOX preservation solution plays an important role in the apoptosis signaling pathway by increasing the phosphorylation level of the pro-survival kinase Akt during 1-h cold storage of donor hearts. At the same time, however, the dramatically increased Bcl-2/Bax ratio in the same homogenates was not preserved. The Akt kinase regulates cellular proliferation and survival, including inhibition of the transcriptional functions of Forkhead box-O transcription factors (FoxOs); phosphorylation of FoxOs by Akt contributes to cell survival, growth, and proliferation. It should be noted that heart failure continues to be one of the most important causes of morbidity and mortality, owing to increased cell death and the limited capacity for myocyte renewal. Akt is considered the central regulator of cardiomyocyte survival after severe in vivo and in vitro ischemic lesions [24]. Akt activation (phosphorylation) suppresses apoptosis induced by hypoxia in a variety of cellular models, including ventricular myocytes [25], and reduces apoptosis and infarct size in hearts [26]. Bax, a pro-apoptotic protein, and Bcl-2, an anti-apoptotic protein, participate in the intrinsic pathway of apoptosis with opposite roles. Whereas Bax activation results from an increase in mitochondrial permeability, Bcl-2 levels are increased by growth factors and other survival signals. Akt has also been shown to have a critical role in activating the transcription factor cAMP response element-binding protein, a positive regulator of Bcl-2 expression [27]. The ratio between pro- and anti-apoptotic factors is widely considered to be the main trigger initiating the apoptotic pathway.
Our data show a dramatic increase in the Bcl-2/Bax ratio in donor hearts stored with DOX preservation solution during 1-h cold storage (values are mean ± SEM, n = 5-6 homogenates/group/protocol; significant differences, p < 0.05, versus the CON and Stand-C groups were assessed by one-way ANOVA), and this does not fully support the above idea. However, cardioprotection with this preservation solution was achieved at least in part through an increase in Akt activation (phosphorylation). Further studies are needed to clarify precisely how a DOX cardioplegia may protect a donor heart during cold static storage.
The increased production of ROS and the resulting oxidative stress, which may arise from impairment of several intracellular signal transduction cascades, can modulate MMPs in several cell types [28]. Concurrently, a decrease in endothelial NO availability has been reported to induce a significant increase in MMP activity [29]. Doxycycline, a member of the tetracycline family of antibiotics, not only has antimicrobial actions but also inhibits connective tissue breakdown [30]. At low doses, doxycycline has been demonstrated to exert anti-inflammatory and antioxidant activity [31,32]. Accordingly, doxycycline has also been shown to inhibit NO production and to protect tissues against doxorubicin-induced oxidative stress and apoptosis in a mouse model [33,34]. Moreover, minocycline, a semisynthetic tetracycline derivative, showed a marked protective effect against oxidative stress-induced injury owing to its antioxidant properties as a free radical scavenger [31,32]. Studies with doxycycline have further demonstrated attenuation of protein aggregation in cardiomyocytes, improved survival in a mouse model of cardiac proteinopathy, and the effectiveness of MMP inhibition as a therapeutic intervention in acute pulmonary thromboembolism [35,36]. Our present data with doxycycline are in line with these previously published data on its preventive action in different pathological heart models.
Matrix metalloproteinases are a family of proteases best known for their capacity to proteolyse several proteins of the extracellular matrix. Their increased activity contributes to the pathogenesis of several cardiovascular diseases, including ischemia/reperfusion injury in the heart [15,37]. MMP-2, in particular, is now also considered an important intracellular protease, able to proteolyse specific intracellular proteins in cardiac muscle cells and thus reduce contractile function [36]. Doxycycline has frequently been used as an MMP inhibitor independent of its antimicrobial property. In the present study, when doxycycline was added to the heart preservation solution during cold static storage of donor hearts, we observed significantly better recovery of donor heart function during the reperfusion period compared with the standard perfusion solution. Furthermore, we obtained a balanced oxidant/antioxidant ratio and normalized MMP-2 activity in heart homogenates stored with DOX cardioplegia. Our biochemical data therefore indicate that this cardioprotection with doxycycline may arise not only from its MMP-2-inhibitory action but also from its strong antioxidant action [16].
A common problem with cold storage preservation has been myocardial edema formation during reperfusion, which drives graft dysfunction and leads to failure. Buttler et al. [38] investigated the relationship between edema and cardiac dysfunction by inducing ischemia versus edema alone in isolated cardiomyocytes and Langendorff-perfused hearts. Edema-induced dysfunction was mild both in the cellular preparation and at the whole-organ level, which suggested a need for reappraisal of edema-mediated dysfunction after cardiac surgery in patients. In our study, a significant reduction in myocardial edema was observed in hearts preserved with DOX cardioplegia during cold static storage. In line with our findings, Fert-Bober et al. [39] also showed that MMP inhibitors prevented edema formation by reducing damage to the endothelial barrier function of the cells.
Ventricular fibrillation is a serious ischemia-reperfusion-induced complication. Although the short action potential duration of the rat heart might seem a disadvantage, the rate of ischemia-induced ventricular fibrillation in rats is generally high [40,41]. The incidence of arrhythmia and ventricular fibrillation usually increases after heart transplantation. In our study, mimicking a heart transplantation preservation model, the atrioventricular nodes of the hearts were left intact and the hearts continued beating spontaneously during perfusion. During the 60-min reperfusion period, the incidence and duration of arrhythmia were significantly better in the DOX cardioplegia group than in the standard preservation solution group. These are the first available data on the relationship between MMP activation and arrhythmia incidence in different heart preparations, and they suggest the need for further research into doxycycline's potential use as an antiarrhythmic drug in cardiac dysfunction in general. Indeed, using new chemical agents to improve cardioprotection in donor heart transplantation is a well-recognized strategy in cardiac surgery. An ideal preservation solution should provide prolonged, safe, and predictable preservation of donor organs. Supporting this view, Yang and Yu [42] achieved prolonged donor heart preservation with pinacidil, owing to its cardioprotection with better energy preservation and improved myocardial recovery after deep hypothermia and prolonged ischemic storage.
Limitations
The present study involves an experimental design performed under in vitro conditions at the organ and tissue level. Further research will be necessary to evaluate the effects on whole cardiac function under in vivo conditions, in terms of the dosage and duration of doxycycline use.
Conclusion
In conclusion, our study suggests that doxycycline is a good candidate for inclusion in the heart preservation solution used during cold static storage of donor hearts before transplantation. Doxycycline appears to contribute to a reduction in oxidants, thereby controlling oxidative stress levels in the stored hearts. Our findings point to its therapeutic potential, partly due to this antioxidant action, for protecting the myocardium against oxidative stress-induced damage. We believe doxycycline may play a strategic role in improving cardioprotection during reperfusion following ischemia, thereby contributing to the prevention of heart injury, which carries a high risk of mortality and morbidity in transplanted subjects [43,44]. The underlying mechanism of cardioprotection with doxycycline in donor heart preservation with cold static storage requires further investigation.
Conflict of interest No potential conflicts of interest relevant to this article were reported.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Research priorities for grassland science: the need of long term integrated experiments networks
Grasslands have to be considered not only as a means of providing food for domestic herbivores but also as an important biome of the terrestrial biosphere. This function of grasslands as an active component of our environment requires specific studies on the role and impact of this ecosystem on soil erosion and soil quality, the quality and quantity of water resources, atmospheric composition and greenhouse gas emission or sequestration, and biodiversity dynamics at different scales from field plot to landscape. All these functions have to be evaluated in conjunction with the function of providing animal products for an increasing human population, so that multifunctionality becomes a new paradigm for grassland science. Environmental and biodiversity outputs require long-term studies, because of the long-term retroactive processes within soil, vegetation, and micro-organism communities in response to changes in management programmes. Grassland science therefore needs to carry out long-term integrated experimentation to study all the environmental outputs and ecological services associated with grassland management systems.
Introduction
The grassland biome covers 36% of the earth's surface (Shantz, 1954), approximately equivalent to the forest area and the area of arable cultivation. For research objectives it is important to consider two types of grasslands: (i) climatically determined grasslands in areas where water availability is not sufficient to support forests (Lauenroth, 1979), and (ii) anthropogenically determined grasslands located in most temperate regions, where the potential vegetation is forest and herbaceous vegetation is maintained by domestic herbivore exploitation. Within this second type of grassland it is possible to distinguish long-term naturalized grasslands from cultivated grasslands in cropping areas. The first is composed of semi-natural vegetation maintained in semi-equilibrium by given management systems such as pasture and hay making, while the second is based on sown grassland in rotation with arable cropping systems. All these types of grasslands play an important role in the dynamics of atmosphere, hydrosphere, and soil interactions, driving global changes and environmental hazards, or adjusting to them. Grasslands therefore play a vital role in the structure and functioning of the overall landscape. They also contribute to agronomic, social, environmental, and economic activities at national, regional, and catchment scales.
Nowadays, the objective of grassland science cannot only be to contribute to food production by optimizing vegetation-animal interactions in order to provide feed resources for domestic herbivores. Other objectives, such as the contribution of grassland agro-ecosystems to, and their response to, global warming and other environmental problems at global or local scale, and the roles of grassland areas in landscape functioning in terms of the quantity and quality of water resources and of biodiversity dynamics, also have to be clearly identified in research programmes. Developments in agricultural technology in the second half of the 20th century resulted in impressive improvements in the technical efficiency of food production in specific sectors, but also contributed to serious declines in the stability of the world's land, water, environmental, and biological resources (Dahlberg, 1979; van der Meer & van der Putten, 1995). At the same time, there has been much less positive impact on the productivity of small-scale mixed farming systems in developing countries (Serageldin, 2001). At the world scale, as well as at regional or local scales, it is therefore necessary for grassland research to consider simultaneously different and contradictory objectives, such as providing increasing quantities of food to an increasing human population while at the same time reducing environmental hazards and increasing ecological services.
Grassland as an agro-ecosystem
Grassland ecosystems are composed of inseparable and interactive components: (i) a vegetation community or communities together with a varied population of herbivores, (ii) the physical and chemical components of the soil, (iii) a diverse soil microbial community and microfauna.
All three components of the ecosystem interact through feedback loops operating at different time scales. They are subjected to herbivory (above and below ground) and mineral recycling through herbivore ingestion and excretion or through additions of fertilizers and manures. Vegetation dynamics depend on the C, N, and other mineral cycles, but changes in vegetation in turn modify biogeochemical processes. Many such systems contain legumes, which contribute to N cycling by biologically fixing N2.
These ecosystems are important for the quality of the atmosphere and hydrosphere, for biodiversity, and as a major part of the ecology of the landscape. Grassland ecosystems have to be managed with multi-purpose objectives corresponding to the different functions assigned to grasslands: environment, biodiversity, landscape ecology, and agricultural production with socio-economic outputs (Dahlberg, 1979; 1986).
In most European countries, large land areas were formerly used in mixed farming systems where grasslands and crops interacted intimately, either spatially, through exchanges of animal feed and returns of organic matter, or temporally, through ley-crop rotations. Such systems provided sustainable farming in agronomic terms, by ensuring long-term preservation of soil quality, and in economic terms, by ensuring diversified incomes for farmers. The requirement to increase food production, because of a growing world population and political instability, greatly increased the emphasis on production from grasslands (Wahlen, 1952; Whyte, 1960), and the increased use of fertilisers and agrochemicals, associated in some areas with the expansion of silage maize cultivation, reduced the importance of grassland in mixed farming systems. Emphasis on technical efficiency led to increasing specialisation in research and production, with spectacular effects on the productivity of grassland systems (Humphreys, 1997), but resulted in the progressive de-coupling of research on alternative or complementary land uses, on the production and conservation elements of grassland use, and even of soil, plant, and animal research in grassland studies (Wilkins, 2000). The resulting simplification and intensification of agricultural practice gave rise to problems of pollution, waste disposal, and reduced biodiversity in the countryside. These are now major elements of concern in land-use research.
In large parts of Europe, the intensification of cropping and dairy systems, associated with high product prices, has led to a specialisation of farms, with the most fertile cultivated lands devoted to cereal cropping and less fertile regions with smaller farm units devoted to intensive animal production. Each of these two systems now faces serious environmental problems, and this specialisation of land use at the landscape level cannot be maintained. Because the specialisation of farming itself may be inescapable, new strategies of mixed land use, through the association of farm units specialised respectively in crop and animal production, have to be promoted at landscape and regional levels in order to maintain a patchwork of cropping and grassland systems contributing to a more sustainable agriculture. New scientific questions thus emerge, concerning: (i) the roles of grassland at the landscape level for ecological, agronomic, and environmental objectives, (ii) the definition of types of grassland vegetation and management systems for optimising these roles, and (iii) the appropriate siting of grasslands within landscapes or catchments. These questions have never been addressed in the past, when research studies were concentrated on either the field or the farm, but not at the landscape or territory level.
The concept of multi-functionality of agriculture provides a new framework for all disciplines in the sector of grassland research (Hervieu, 2002).Scientific objectives, methods of investigation and models have to be reconsidered with the aim of producing an integrative approach at a range of scales (field plot, farm, landscape, territory…) where the different functions can be evaluated.The multiple functions of grassland also demand a genuinely inter-disciplinary approach to research.To achieve such objectives it is necessary to produce integrated knowledge, new concepts and new tools at the different levels of organisation of grassland agro-ecosystems: (i) the field plot where the basic biogeochemical processes are acting, (ii) the farming system where coherent management procedures are combined, (iii) the landscape where multi-functionality, interactions between different land uses and overall impact can be evaluated and (iv) the region/ nation state where socio-economic and political factors become important.Ormerod et al. 
(2003) emphasise the important role of ecologists in dealing with major agro-environmental questions, but the issues are much broader than this.
The role of grassland for regulating biogeochemical cycles, environmental fluxes and biodiversity at local scale.
The world's grasslands play an important role in regulation of the C cycle by storing ca. 15% of the global organic C (Tate & Ross, 1997). The mean annual primary production of grassland is similar to that of forest (Körner, 1999), and given that more than two thirds of annual grassland biomass production is allocated to below-ground structures, the accumulation of deep soil organic matter layers makes an important contribution to C sequestration in most grassland ecosystems (Körner, 2002). The role of grassland in regulating the N cycle is more complex because grazing animals recover, on average, only 7% of the N supply (Grignani & Laidlaw, 2002). Nitrogen fluxes in grassland systems contribute directly to three environmental concerns: nitrate leaching, volatilisation of ammonia, and nitrous oxide emissions. Research relating operational management decisions in grassland systems directly to their consequences for environmental fluxes has produced new methods and new concepts (Jarvis, 1999), but has not yet permitted an integrated view of the dynamics of the grassland ecosystem because: (i) the individual fluxes (nitrate leaching, N2O and NH3 emission, CO2 sequestration or emission…) have too often been studied separately despite their great interdependency, and (ii) the characteristic turnover times of the different processes involved within the system are not well known, despite recent advances in stable isotope methodologies (Murphy et al., 2003). Most of these fluxes are related to soil organic matter (SOM) dynamics through the soil microbial activities which couple the different processes. Moreover, the residence times of C and N within the different chemical and physical compartments of SOM are relatively long, varying from 10 to more than 100 years (Balesdent & Mariotti, 1996). In consequence, some of the environmental outputs observed today could be the delayed consequences of changes in land use and management that occurred several years or decades ago. Similarly, changing land use and management systems to restore the environment and biodiversity requires more precise information on the time responses of the whole system: vegetation, animals, soil and microbial communities.
The key scientific knowledge required is a functional and biochemical identification of the different fractions of the soil organic matter. To understand the process of SOM stabilisation, it is necessary to characterise the main biochemical compounds interacting with the soil matrix, and how C and N are protected from microbial activities. Questions on the storage of C in grassland soils and the equilibrium between C sequestration and C mineralisation are of high relevance for global change issues (Soussana et al., 2004). It is difficult to compare CO2 sinks in grassland and forest biomes, because there is little information on the long-term dynamics of SOM in grasslands. Modelling is the only tool for predicting the long-term evolution of C in soils, but a realistic and mechanistic representation of the dynamics of the SOM system requires deeper investment in characterising the different functional compartments by their molecular signature (Poirier et al., 2000; Rumpel et al., 2004). There is also little information on C sequestration in deep soil horizons. The great majority of studies concern only the storage of C within the 0-30 cm soil layer, but, as demonstrated by Rumpel et al. (2002), 50% of C can be stored within deeper horizons, with a higher residence time than for C stored in upper layers where microbial activity is high. This demonstrates that our knowledge of SOM and C-N cycles in grassland soil is fragmented. Given the importance of grasslands as one of the largest biomes on the planet, an increase in basic knowledge of long-term SOM dynamics must be a high priority for grassland science.
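The delayed response described above — environmental outputs today reflecting management decisions made decades ago — can be illustrated with a minimal two-pool soil carbon model. This is a common textbook simplification, not a model from the literature cited here; all pool sizes, turnover times and input rates below are hypothetical, chosen only to be consistent with the 10- to more-than-100-year residence times mentioned in the text.

```python
# Hypothetical two-pool SOM sketch: an "active" pool with a short residence
# time feeds a "slow" pool with a long one via first-order decay.
# Units are arbitrary (think t C/ha for stocks, t C/ha/yr for inputs).

def simulate(years, input_rate, active=10.0, slow=50.0,
             tau_active=10.0, tau_slow=100.0, transfer=0.3):
    """Return annual total C stocks plus the final pool sizes."""
    stocks = []
    for _ in range(years):
        decayed = active / tau_active               # first-order loss from the active pool
        active += input_rate - decayed              # litter input minus decay
        slow += transfer * decayed - slow / tau_slow  # a fraction is stabilised in the slow pool
        stocks.append(active + slow)
    return stocks, active, slow

# 50 years under a given litter input, then 50 more years after inputs are
# halved (standing in for a change in grassland management):
baseline, a, s = simulate(50, input_rate=1.0)
reduced, _, _ = simulate(50, input_rate=0.5, active=a, slow=s)
# At the end of the second 50 years, total C is still drifting towards its
# new equilibrium: the stocks measured "today" carry the memory of the old regime.
```

Because the slow pool turns over on a roughly 100-year timescale, halving inputs at year 50 leaves stocks far from their new equilibrium even at year 100 — which is precisely the argument the text makes for long-term monitoring and for characterising SOM compartments individually.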
Predicting plant species and community responses to grazing management is difficult (Herben & Huber-Sannwald, 2002). Diversity of grassland vegetation has long been described in terms of species number and botanical composition. Species-specific differences in life cycle phenology and growth form explain part of this diversity (Sackville-Hamilton & Harper, 1989). More recently, attempts have been made to explain the diversity of grassland vegetation by functional traits (Lavorel & Garnier, 2002), either at leaf level or at root level. The advantage of such a functional approach over the botanical approach is that it links vegetation diversity to the different functions the plants play within the ecosystem: primary production, litter quality and decomposition, competition for light and for soil resources, and interactions with herbivores. So different species, independently of their position within the phylogeny, may have similar functional traits, either in terms of ecosystem effects or in terms of response to change, allowing them to be grouped into similar functional groups. The interactions between plants and soil micro-organisms may critically affect plant growth, plant-plant interactions and, potentially, plant community composition and structure (Clay, 1990). In turn, changes in plant community composition could affect microbial communities and hence soil organic matter dynamics, with more or less long-term feedback (McCaig et al., 1999). So the analysis of vegetation dynamics in terms of functional groups appears to be a very powerful tool for linking changes in vegetation with biogeochemical cycles and vice versa, and also for studying the effects of herbivore behaviour on vegetation dynamics and vice versa. Spatial variations in patterns of grazing and excretion, and spatio-temporal changes in abiotic and biotic environments, affect the spatial pattern and structure of vegetation (Watkinson & Ormerod, 2001; Garcia et al., 2004). So studies of vegetation dynamics need to include information on temporal and spatial heterogeneity, which is a dynamic component of the grazing ecosystem.
The need for integrated long term experiments
For all the reasons discussed above, it appears necessary to develop integrated inter-disciplinary programmes based on networks of long-term experiments to complement and extend process-based research. With this approach:
• a wide range of grassland ecosystems (climate, soil, vegetation) would be investigated;
• contrasting managements would be applied as perturbations for long-term observation of divergent evolutionary trajectories;
• the time course of relevant state variables for vegetation, soil and populations of organisms would be monitored;
• plant-herbivore interactions and their consequences for biogeochemical cycles and vegetation dynamics would be analysed;
• environmental fluxes from and to the atmosphere and to the hydrosphere would be regularly estimated and related to changes in state variables in relation to global changes;
• key processes and interactions between compartments of the systems would be evaluated.
Such a network should be considered as a large-scale field laboratory providing fundamental, mechanistic information on system function and resilience. To do this will require key core measurements such as: (i) identifying and characterising the compartments of the soil organic matter that play a key role; (ii) quantifying key internal fluxes and monitoring, at the boundaries of the system, fluxes towards the atmosphere and hydrosphere; (iii) investigating the functional role of plant, microbial and soil fauna diversity, with the aim of characterising the response of the whole system to the disturbance regime induced by contrasting management over the long term; and (iv) simulating the long-term evolution of the system under climate change scenarios. The outcome of such a research programme must be a common integrated database and information system allowing exchange and communication between research teams of different disciplines. Moreover, theoretical frameworks and simulation models should be developed to explain the evolution of each of the main agro-ecosystems in relation to management. The information and knowledge gained from this research will underpin scenario simulations to evaluate environmental hazards and impacts on functional biodiversity resulting from a wide range of contrasting land use and management systems. Ultimately, the scientific and technological outcomes would be of great value in informing local, national and international policy makers of the options available for developing sustainable farming and land use systems which are both environmentally benign and economically viable.
The objective should therefore be to set up a network of long-term experimental platforms with woodlands, permanent grasslands, mixed grassland-cropping systems and arable cropping systems. Sites which are in transition, e.g. from long-term arable to permanent grassland or from grassland to woodland, should also be included. The network would comprise, predominantly, existing long-term experiments but may require the establishment of a limited number of new experiments. Each of these experimental platforms should have its own scientific objectives, experimental design, and monitoring and measurement technology according to its local site-specific conditions. Nevertheless, a common core of investigations and methods should be established.
Whatever the agro-ecosystem, soil organic matter dynamics at each site will play a central role in regulating the different environmental fluxes. Moreover, SOM represents the memory of the system, because this component of the soil has a long residence time (from 10 to more than 100 years). Therefore, it is necessary to identify the different compartments or pools within SOM and to characterise their functional properties using common protocols and analytical techniques. The different experimental platforms should be sampled regularly in order to analyse the dynamics of change in SOM quantity and quality, and the changes in microbial communities and their activities, under controlled and known conditions. A collection of core samples should be archived, such that further analyses can be made as new scientific questions arise and as new analytical techniques are developed. The experimental plots must be large enough to (i) minimise the transfer of materials between treatments, (ii) allow periodic sampling of soil, (iii) allow micro-plots or sampling areas to be established within existing treatments in order to test hypotheses on short-term process dynamics and to compare different soil-vegetation systems whose state variables may have changed significantly according to the treatment, and (iv) allow for later division of plots to test alternative treatments and/or management strategies.
Thus, this network of research platforms should become an attractive resource for research teams specialised in different disciplines to carry out their own research within a context where the historical management and evolution of the whole system have been carefully monitored. A pluri-disciplinary, or even inter-disciplinary, approach to the functioning of the agro-ecosystems would thus become possible, and interactions between the different research teams and between the different experimental sites would be encouraged. Such a network in Europe should greatly improve scientific exchange and the stimulation of scientifically sound new concepts between the different countries, and should also enhance postgraduate development in a more international context. Such a network in Europe could ultimately become a partner of the Long Term Ecological Research network previously established in the USA, and could then play an important scientific role at the global level.
The outcome of such research should be common integrated databases and theoretical frameworks to explain the long-term evolution of grassland systems subjected to contrasting management programmes. A modelling approach would be necessary for integrating the information and knowledge gained from this research. This would underpin scenario simulations to evaluate environmental and agricultural issues in terms of the risks or benefits resulting from a range of management systems. The scientific and technological outcomes will be crucial in informing local, national and international policy makers on the role, impact and management of grassland systems for sustainable land use.
Grassland at farm and landscape scale: environmental and ecological analysis and socio-economic perspectives
The approach discussed above is built on process-based research, and it cannot take into account all the diversity of management programmes to which grassland areas are subjected. Moreover, the scale of investigation is the field plot, so other spatial scales, such as the farm, landscape or catchment, where other processes operate, are not taken into account. This "What if…?" approach is not sufficient for conceiving and evaluating new management programmes dealing with environmental, biodiversity and socio-economic issues at larger scales. It is necessary to complement this kind of research by developing an integrated network at the level of both animal production systems and mixed grazing and cropping systems, to investigate the role of different land-management programmes in resource conservation and agricultural production. The question we have to ask at this level of investigation is "What is necessary for…?" That is the reverse of the preceding question. Answering it for a series of different case studies requires the creation of new generic methodologies for conceiving and evaluating new agro-ecosystems which can satisfy multiple objectives. It is then necessary to account for the human and societal dimensions of the system being managed, and not only for its natural processes. The achievement of compromise between the different goals (animal production, socio-economic benefits, environmental risk, biodiversity conservation, or other public goods such as the amenity of landscape) depends on the different stakeholders and social groups involved, and several scenarios have to be investigated to provide decision support to policy makers. This suggests the need for a modelling approach for the simulation and evaluation of virtual systems, in order to explore a large spectrum of possibilities and to select optimum compromises between contradictory objectives. At this level of investigation, grasslands have to be considered as components of a land use system. This may be a pure grassland area, like the savanna in Africa, the Pampa and Campos in South America or the steppe in Asia, but in most other areas of the world grassland is only a part of the land cover at the landscape scale, and this ecosystem interacts spatially or temporally either with forest or shrub vegetation or with cropping areas. In these situations landscapes have to be explicitly described as a mosaic of soil cover with spatially distributed management programmes. The functioning of such a territorial entity can be analysed through:
• the integrated environmental fluxes at the boundaries of the system to the atmosphere and hydrosphere, taking into account the spatial interactions between the different agro-ecosystems;
• biodiversity at different levels of organisation within the territory, and for different populations or communities of plants, insects, birds and mammals (Wallis De Vries, 2002);
• the landscape value from a cultural and heritage point of view (Mormont, 2002).
The intensification of agriculture in North-West Europe, with the disappearance of most of the grasslands from cereal production regions, was accompanied by the extinction of a large number of plants, insects, birds and micro-mammals. Grasslands represent an important habitat and a source of food for several protected bird species (Inchausti & Bretagnolle, 2005; Bretagnolle & Inchausti, 2005). Such a role of grassland in biodiversity conservation can only be analysed at the landscape level, and must necessarily take into account the socio-economic forces which determine the land use system and its evolution.
Such an integrated and multidisciplinary approach should help to define options in land use and management systems to be promoted at regional scale for optimising the multifunctionality of agriculture. However, this approach does not take into account the potential for socio-economic resistance to change, which needs to be identified in order to allow land use and management systems to evolve in the direction of a more sustainable agriculture and rural economy.
For this purpose the landscape has to be considered as an assembly of farmers and farm units which respond individually to their own socio-economic constraints in relation to their own goals. Each farmer has good reasons for doing what they do on their farm, and for not doing what environmentalists want them to do. These reasons have to be analysed and studied if we want to change agriculture in a more sustainable direction. Multifunctionality of agriculture has to meet human demands for rural development, and socio-economic researchers have to identify and make explicit the contradictions between the socio-economic goals of farmers and the environmental or ecological goals of the other components of society. The whole process would be aided by progress in methodologies for putting a financial value on the environmental and social aspects of multifunctionality. This would facilitate the identification of optimised land use and management systems, the targeting of policy for sustainable agriculture, and the determination of the role of grassland in such an objective.
Conclusion
At the world level, grasslands have to be considered a biome as important as forest for the global environment of the earth in relation to global changes. Grassland must therefore be considered not only as a means of producing food for domestic herbivores, to meet the increasing food demand of an increasing human population, but also as a functional component of the biosphere, regulating the biogeochemical cycles and the dynamics of biodiversity. Multifunctionality is a new paradigm for grassland science. It requires multi-disciplinary research and multi-scale approaches. While animal production related to grassland management is a relatively short-term issue, environmental and biodiversity objectives require long-term observation and experimentation to provide significant results. Grassland science therefore needs to organise networks of long-term multidisciplinary experimental platforms at national or international scale, in order to monitor and compute all the agronomic, environmental and ecological outputs. Such a network requires a more integrated organisation of grassland research at both national and international levels.
Research on the Contemporary Japanese Architectural Creation and its Special Aesthetical Taste of Nationality
Contemporary Japanese architecture has achieved great success and has been widely accepted by the whole architectural field. What the driving force behind contemporary Japanese architectural creation is deserves careful thought. In this study, some famous Japanese architects, including Toyo Ito, Tadao Ando, Arata Isozaki, Kengo Kuma, Kazuyo Sejima and Ryue Nishizawa, and their architectural works, are analyzed and studied from the angle of nationality. These architects keep up with the step of world architecture and always pursue the aesthetical taste of nationality. They proposed special architectural theories that profoundly reflect the nationality of the Japanese and greatly influence the architectural forms, architectural space and architectural aesthetics of contemporary Japanese architecture. In architectural form, they pursue simplicity, pureness, lightness and fineness; in architectural space and behavior, they pursue the shadow of the "ma" space and natural existence; in architectural aesthetics, they pursue the beauty of animism, dreariness and substance sadness. Through joint analysis of architectural examples and the theory of nationality, this study proposes that contemporary Japanese architectural creation is directly derived from profound traditional culture and firm nationality, and that nationality is the cornerstone of Japanese architecture and always promotes its progress.
INTRODUCTION
Contemporary Japanese architecture, which has a strong characteristic of the aesthetical taste of nationality, has achieved great success and has been widely accepted by the whole architectural field. From the Meiji Restoration to the end of the twentieth century, Japanese architects have kept up with the step of world architecture and, for generations, have always pursued the aesthetical taste of nationality. At the beginning of this century, few architects in the Japanese architectural field are interested in the decoration of architecture or in historical building styles. The pure imitation style is also hardly touched, but the aesthetical taste of nationality is widespread.
Architectural nationality is derived from traditional cultural nationality, and it also has the basic value trend of maintaining and developing the national culture. In the face of the impact of foreign culture, nationality can efficiently resolve the conflict through purposeful choice and then renew the traditional culture by changing the national culture into new forms. Therefore, nationality has the characteristics of tradition and typicality. Looking at human cultural history, nationalism often appears when a national culture faces a survival crisis against strong foreign heterogeneous cultures; it is an instinctive stress response rather than rational thinking, so it shows a non-conscious thought tendency. In architecture, it is unworkable to regard any one architectural form as eternal. With the development of the age, people take new views on ways of life and aesthetic taste, and then create new national forms derived from traditional thinking. Therefore nationality also has the characteristics of continuity and development.
Contemporary Japanese architectural nationality is a new concept, put forward from the angle of Japanese nationality and its constraint on Japanese architecture. It is closely related to natural factors, structural factors, social factors and psychological factors in Japanese architecture, and it enables Japanese architecture always to return to the instinctive and non-conscious behavior of its original national culture when facing the shock of strong foreign cultures. It does not exclude the rational part of foreign cultures; on the contrary, it brings much of it into its own use, while the special thing is that it always returns to its original culture after integrating the rational part of foreign cultures, forming new matters with the characteristics of "original Japanese culture". Owing to the identification with, attachment to, and sense of belonging to its nation and culture, the nationality of Japan has a strong grassroots basis.
Researching Japanese architecture and national culture, it can be seen that the success of contemporary Japanese architecture is closely associated with the nationality of the Japanese. The nationality of Japanese architecture can be found in many aspects, such as its architectural forms, architectural space, architectural aesthetics and so on.
NATIONALITY IN CONTEMPORARY JAPANESE ARCHITECTURE FORM
Everything has two sides, and so does nationality: it has both positive and negative aspects. During humans' construction activities, nationality has direct guiding significance for the original and extrinsic forms of architecture.
Simplicity and pureness of architectural configuration: In the history of Japanese architectural development, buildings have long been designed in the forms of rectangles, squares, circles and other basic geometric figures, which is related to Japanese people's emotion toward nature and the inheritance of Buddhism, for they think pure geometric shapes can unscramble the constituting principle of the universe. Take the Five-ringed Tower in Sukhavati Temple as an example: it is named the "Five-ringed" Tower because the five rings represent the Five Elements of the universe; to manifest them, the shapes of a cube, sphere, pyramid, crescent and ellipse were piled up from bottom to top to form the tower, as shown in Fig. 1. This idea lies deep in the creative emotion of every Japanese architect (Arata, 2010). Tadao Ando's architectural forms always use this kind of pure basic geometric figure, always using the arc, such an elementary geometry, when a round form is required, and showing no interest in the free styles created with modern steels, as he said: "I only choose simple circles and squares as my architectural shapes" (Ando, 1987). When mentioning "spirit over form", he said: "make correspondence with the infinity of spirit to the inanition of form," by integrating rich, complicated and changeable space into simple geometries. He thinks this makes it more likely to keep the pureness of architecture, cut all distractions and give it a pure stereoscopic impression, and that contemporary construction materials, because of their simplicity, can better highlight the significance of architecture when used with pure basic geometries. Ando wrote: "my main purpose is to create architecture with both abstractness and concretization by giving pure geometries mazy representations" (Wang and Zhang, 1999). Thus, we can see these two elements in all his works, such as the Museum of Literature, Galleria Akka, the Garden of Fine Art, the Azuma House and so on. The famous architect has created diverse architectural forms inside simple geometries. Such a condensation of architecture into pure geometry not only meets Japanese architectural thinking, but also tallies with the tradition of the modern West; for this reason, the buildings meet the aesthetic standards of the modern West while owning Japanese characteristics.
Fig. 1: Five-ringed Tower of Sukhavati Temple
Fig. 2: "Zhouzu" and "Zaozuo"
Lightness and fineness of architectural structure:
Compared with other countries' buildings of the same age, ancient Japanese architecture is finer and more consistent in structure. The ancient Japanese believed that buildings had souls and, just as the ancient Chinese did, they put many ancient literary ideas into buildings, most of them exquisite amatory poems. For instance, in "The Tale of Genji" the palace of the moon, the place for enjoying the moon, came from the story of Yuegui (laurel) of ancient China. Under the guidance of such a fine, exquisite emotion, Japanese architectural shapes tended to be fine and pretty. The architectural structure of ancient Japan was the wood structure commonly adopted in the East; such a structure in Japan was divided into "zhouzu" and "zaozuo": the former refers to structural units such as the pole, girder, mudsill and joist, while the latter means composing elements such as the parvis, window, shed, bedroom and grid, as shown in Fig. 2. Those wood units are finer and more consistent than the Chinese wood elements of the same period, and differ greatly from Western masonry buildings in being fine, light, pure and consistent, natural and delicate rather than splendid and imposing (Itou, 2008).
In comparison with other nations' architects, Japanese architects are more careful in handling the surfaces of buildings and are better at replacing three-dimensional entities with complicated surfaces, making these surfaces people's major impression of the architecture. DIOR Omotesando and the 21st Century Museum of Contemporary Art, designed by Kazuyo Sejima and Ryue Nishizawa, exemplify this. DIOR Omotesando used the form of veiling to express this idea. It is a seven-floor cuboid building, with glass outside walls attached with translucent acrylic panels, which were processed into a folded shape as outside wall material so as to utilize the irradiation of light and enable visitors to see the building as if it were enwrapped in gauze. The glass curtain walls on its surface were layered with wire lines, but they give people a feeling of homogeneity and smoothness under the influence of sunshine, the environment and other factors, and make the building look light and floating, transparent and flowing like a crystal cube, as shown in Fig. 3. In this way Kazuyo Sejima and Ryue Nishizawa have annotated the fine structure of Japanese architecture.
NATIONALITY IN CONTEMPORARY ARCHITECTURE SPACE
Japanese space cognition has a stronger sensibility than that of Western people, including features such as temporality, umbrageousness and fuzziness, which are also found in contemporary architectural space.
Shadow of architectural space: Tanizaki, one of the major writers of modern Japanese literature, wrote in his essay In Praise of Shadows: "Our ancestors reluctantly lived in shadowy houses, but they have gradually found beauty in the shadows… Actually, the beauty of a Japanese house is totally determined by the degree of its shadow." Such a traditional cognition of shadow and the chaos of space among the Japanese people is obviously shown in the contemporary architectural field. For example, Arata Isozaki's "ma" space theory and Fumihiko Maki's "oku" space theory have both mentioned this viewpoint.
As a student of Kenzo Tange and being affected by him, Arata Isozaki has a strong interest in traditional Japanese architectural space and has committed himself to its exploration. He uses the traditional Japanese means of black and shadowy space to achieve a sense of distance between humans and buildings, and to make observers know them incompletely so as to fully exert their imagination. A building can have many possibilities resulting from different persons' different feelings about space, out of their personal experience, knowledge and memory. Everyone knows "ma" as a feeling and experience, but no one can express and explain it in language or theories; this is precisely Arata Isozaki's "ma" space theory.
In 1978, the Japanese Culture Special Feature Exhibition was held in the Paris Decorative Art Gallery, and in it Arata Isozaki's "ma" space theory was realized. It stands in contrast to the absolute, homogeneous time of modern science, which flows straight from a starting point to an ending point and extends to focal places in the X, Y and Z directions, calculated completely according to modern scientific standards. In the exhibition, he combined the traditional Japanese recognition of space with contemporary architectural aims and created the show "Ma: Time and Space of Japan", as shown in Fig. 4.
The Shanghai Zendai Himalayan Art Center, the new work of Arata Isozaki completed in 2006, is the central manifestation of his architectural thought. It is a complex composed of many cubes, including an art center, artists' creative studios, guestrooms, public space and commercial facilities. As a whole, it adopts the layout of the traditional Chinese nine-courtyard to realize space penetration. Its core is the art center at the centre and a multifunctional theatre that can hold over 2000 persons; a garden in the air is designed to make the above-ground and underground squares form a cubic public space. For the center of the building, the architect uses the method of evolutionary structural optimization, and the condition formed by the enveloping space embodies the "ma" space theory, as shown in Fig. 5. The design could quickly be accepted by the Chinese because it complies with the space cognition of people from East Asian countries. The cube containing the guestrooms sits on a platform over thirty metres high; the artistic pictographs on the bottom outer wall are embedded into the grids, and such a design expresses tradition in an abstract way and gives the space a sense of shadow. The art center is divided into two extremely different parts: a glittering and translucent cube and a natural, organic, anomalous part. Furthermore, their space conditions are different too: one is a public commercial space and the other is a closed office space. The place blending the two parts reflects Arata Isozaki's "ma" space theory, because the juxtaposition of spaces with vividly different features is its expression.
Nature of architectural environment: Japanese nationality regards the environment as "nature", in contrast with the construction civilization of the modern West. This idea was advanced by the Japanese political scientist Masao Maruyama in his work "A Study on Japan's Political Thought" and later applied to national architectural styles by the Japanese architect Ryuichi Hamaguchi, who thinks that "nature" is what is formed naturally according to the environment or to people's subjective cognition of the trends of things. This is the Japanese people's intrinsic value of life; it stresses the emotional power of human action on the environment, and the possibilities for subjective human behavior that the environment affords. Traditionally, the Japanese like to imagine an architectural space as an artistic conception (such as a tearoom); in such an artistic conception people will have a kind of behavior (the realization of life) which comes from the artistic conception, and certainly vice versa. The higher the level is, the broader and more profound humans' spatial behavior is.
Regarding the soul of traditional Japanese architectural space, Ryuichi Hamaguchi pointed out in his essay that the concept of Japanese architecture tends toward space and behavior. How, then, is the idea that nature exists in space and behavior expressed in traditional Japanese architecture? The ancient Japanese believed that architectural space was given by the natural environment; there was no clear boundary between space and environment, and the two needed exchange and communication. In Nijo Castle in Kyoto, the walls of the traditional buildings are not load-bearing and can be opened, and a transitional space lies between inside and outside, as shown in Fig. 6. It plays an important role in mediating nature, admitting sunshine, tempering the monsoon winds, and keeping off the rain (Kenji, 2011). When all the shoji screens are open, sunshine, wind, and the outside landscape enter the room; humans and the natural environment are filled with feeling, while the building itself seems to disappear, as shown in Fig. 7. The identity of this transitional space is determined by human behavior: when humans and nature interact, it is an extension of the interior floor; when the screens are closed, it is an outer gallery. Furthermore, the interior of a traditional Japanese building has no fixed partitions; folding screens and shoji screens divide it into movable spaces whose forms can change in accordance with human behavior. In conclusion, traditional Japanese buildings are founded on human-centered space.
In 1986, the architect Toyo Ito's own house, "The Silvery Cottage," was built in Tokyo as a modern interpretation of Japan's traditional spatial concept. The building is partly two stories. On the first floor, the center is an atrium; to its right are the washroom and study, to the south the bedroom, and to the north the dining room and living room, with a tearoom at their right and a storeroom at the northernmost point. The baby's room is on the second floor. The whole building centers on the atrium, the ambiguous zone between building and nature, through which one can reach any room. It seems simple but is actually rich; walking through it, one feels the openness of the space. "The Silvery Cottage" differs from ordinary houses because it exhibits the traditional open, temporary, and floating space of Japan. The architect deliberately weakened the definiteness of the space and lets it flow between inside and outside; he also used transparent or latticed double-acting doors as partitions to create "the false" and "the true" between the building's interior and exterior. From this design we can see that Toyo Ito's architectural creations express the national spatial concept of the Japanese; especially in his grasp of spatial behavior and flow, he pursues the realm of traditional Japanese space.
NATIONALITY IN CONTEMPORARY JAPANESE ARCHITECTURAL AESTHETICS
The Japanese national ideal embodies the distinctive philosophical outlook of East Asia, and "the discourse of Buddhism" is its supreme realm. Japan also developed Zen thought, which holds that "impermanence" is the highest level of existence in the universe and that it corresponds to the physical forms of the real world. The basis of existence is "nothingness," yet "nothingness" is not the end of spiritual cultivation: "In thinking beyond 'nothing,' the spirit gains its highest exaltation, which surpasses 'nothing' and guides the spirit toward absolute existence. Absolute existence means that things exist outside the relative world" (Shuji, 2011). Japan's Zen thought reveals all things and their truth to ordinary people and is reflected in every aspect of Japanese society, above all in the Japanese people's unique aesthetics of the arts, including architectural aesthetics. The ideal of Japan is its pursuit of nature.
Desolation and the sadness of things in architectural aesthetics:
The thought of "impermanence" appears in every field of Japanese society, and especially in its traditional buildings and courtyards, which express Zen in the form of "impermanence"; the best example is Japan's dry landscape garden. "No pool, no running water, only some standing stones: this is the 'dry landscape'," runs the definition given in "Gardening." Such a courtyard differs from all others: it has no water and no trees, and only stones, sand, and moss are used to create dynamic effects and a visionary world out of concrete things. It uses "emptiness" to represent "nothingness," enters the infinite from the finite, and abandons the self to reach the realm of anatta.
Another representative example of the dry landscape is Ryoanji in Kyoto, built in the 15th century. It is an oblong courtyard of 330 m² containing 15 stones of different shapes and sizes, together with a limitless sea symbolized by white sand and a thick forest symbolized by moss, as shown in Fig. 8. Within it, one can imagine a vast natural world.
Under the influence of the thought of "impermanence," contemporary Japanese architectural works always give people a feeling of "illusion" and "meditation" (Ido, 1991). Japanese architects have also put forward architectural theories grounded in this thinking, such as Kengo Kuma's "decomposition." Kuma once mentioned that this creative thought came from Japan's garden culture. A good manifestation of his "decomposition" is the Kiro-San Observatory he designed in 1994, in which he adopted a new method of inversion to express the "quietness" of the architecture. Located on a hilltop of an island in the Seto Inland Sea, the observatory consists of platforms set off by green thickets and connected by a narrow slit, so that visitors below the mountain cannot see the building. Kuma holds that an observatory should be built for visitors to view the landscape, whereas in reality most observatories occupy high ground and become architecture to be seen, often simply standing in the natural environment. In his design, to "dissolve" the observatory, he broke the architecture apart and buried it in the ground at the hilltop, setting an out-of-order building within a turbid environment, where it disappears by diffusing into the disordered context around it. This is the "particle architecture" he later advocated: a building is broken down, and the broken particles gain greater freedom through recombination, as shown in Fig. 9.
"Natural beauty" as the architectural aesthetic ideal: "Natural beauty is the foundation and main body of the Japanese awareness of beauty." Japan is rich in natural scenery, and the Japanese have enjoyed the beauty of nature from the beginning, with deep love and special feeling for it. Their cognition of beauty originated in nature, so natural beauty became the archetype of every kind of beauty in Japanese culture. They do not merely observe nature but feel it with emotion and imagination, raising it to the level of virtue and sentiment. In architecture, this cognition of natural beauty dates from Japan's earliest times.
At the 1992 World Exposition in Seville, the Japan Pavilion designed by the Japanese architect Tadao Ando presented the traditional Japanese thought of natural beauty as a typical case, combining traditional wooden architectural space with modern timber-framing craft, as shown in Fig. 10. The building is a four-story wooden structure; many wooden walls, posts, and beams are used, with a system of glued-laminated timber beams as the load-bearing structure. Visitors enter the building through a bridge-shaped arch of special traditional charm leading to an 11 m high viewing platform; the entrance of the pavilion is a large, open, multilayer space, a huge porch. In this building the designer strongly expresses his desire for harmony between humans and the natural environment. The wooden building carries no coat of paint, and all materials show their original appearance; the Teflon canopy admits sunlight directly to save energy, and the soft light and the wooden architecture set each other off, reminding people of traditional Japanese residential space. The front and back of the building are curved walls made of wooden battens, giving prominence to the natural materials while displaying the formal beauty of modern architecture. The whole building not only strongly expresses the traditional Japanese thought of natural beauty but also embodies the ecological concepts of modern architecture; with it the designer showed the world the charm of traditional Japanese aesthetics in a transformed form.
Contemporary Japanese architectural aesthetics is an ecological aesthetics that combines traditional natural aesthetics with Western aesthetics (Wu, 1997). As Tadao Ando says: "Nature is not the original nature, but the ordered nature generalized from nature, or the disordered nature arranged by humans: artificial nature!" The nature Ando refers to is not greening in the general sense but an artificial or architectural nature. He considers greening merely a means of beautifying reality and regards it as crude simply to take gardening and the seasonal change of garden plants as symbols. An abstracted nature of light, water, and wind appears when materials and an architecture based on geometry are introduced together. Onishi Yoshinori likewise points out that the experience of beauty contains two basic parts, a "natural part" and an "artistic part," and that the essential structure of art lies in the relation that integrates the two. These are the voices of the field of contemporary Japanese architectural aesthetics; from them it can be seen that natural ecology is the root of contemporary Japanese architectural aesthetics.
CONCLUSION
The success of contemporary Japanese architecture is the result of the hard study of generations of Japanese architects and derives directly from the Japanese national spirit. Throughout the evolution of the traditional architecture, the architectural forms bearing Japanese aesthetic taste were never ruled out, even under the impact of more advanced cultures. In the last century, when Modernism swept across the world, this aesthetic taste was used in Japan to reinterpret Eastern architectural culture, and its driving force came from the nationality of the Japanese. The aesthetic taste of nationality is universal in contemporary Japanese architecture and is its representative characteristic. Through a joint analysis of architectural examples and the theory of nationality, this study proposes that the creative thinking of Japanese architecture possesses a profound traditional culture and a firm nationality; it is a manifold nonlinear system that has made the transition from simple imitation to ideological conviction. Although contemporary architecture is characterized by pluralism, the nationality of the Japanese has always shaped its development, and the aesthetic taste of nationality is the spiritual foundation of the development of Japanese architecture.
Fig. 3: DIOR Omotesando. Kazuyo Sejima and Ryue Nishizawa have both expressed this idea. In the 21st Century Museum of Contemporary Art, the two architects chose 360-degree transparent, open glass curtain walls, making the form extremely light, whereas DIOR Omotesando used the form of veiling. It is a seven-story cuboid building whose glass outer walls are backed by translucent acrylic panels, processed into a folded shape as the outer wall material so as to exploit the play of light and let visitors see the building as if it were wrapped in gauze. The glass curtain walls are layered with wire lines, yet under the influence of sunshine, the surroundings, and other factors they give a feeling of homogeneity and smoothness, making the building look light and floating, transparent and flowing, like a crystal cube, as shown in Fig. 3. With it, Kazuyo Sejima and Ryue Nishizawa have annotated the delicate structure of Japanese architecture.
Modification of Surface States of Hematite-Based Photoanodes by Submonolayer of TiO 2 for Enhanced Solar Water Splitting
Surface states are inherently involved with photoelectrochemical (PEC) solar fuel production; some of them are beneficial and participate in the surface reactions, but some act as recombination centers and therefore limit the PEC efficiency. Surface treatments have been applied to modify the surface states, but the interrelated effects of the treatments on both types of surface states have not been properly considered. This research examines the modification of the surface states on hematite-based photoanodes by atomic layer deposition of a submonolayer amount of TiO 2 and by postannealing treatments. Our results show that the postannealing causes diffusion of Ti deeper into the hematite surface layers, which leads to an increased saturation photocurrent and an anodic shift in the photocurrent onset potential. Without postannealing, the separate TiO 2 phase on the hematite surface results in a second intermediate surface state and delayed charge carrier dynamics, i.e., passivation of the recombination surface states. It is evident from these results that the intermediate surface states observed with impedance spectroscopy in a PEC cell are directly involved in the surface reaction and are distinct from the recombination surface states observed with ultrafast (picoseconds to nanoseconds) transient absorption spectroscopy in air. These results open new optimization strategies to control the beneficial and detrimental surface states independently.
■ INTRODUCTION
The increased energy demand has created a need for environmentally friendly energy production methods. Solar water splitting is a method to convert solar energy into hydrogen fuel directly at the semiconductor−electrolyte interface. Hematite (α-Fe 2 O 3 ) is a promising material for solar water splitting because it is abundant, nontoxic, and economically viable to produce. Hematite is also chemically stable in alkaline environment and has a band gap of 2.2 eV. 1 It is crucial that the band gap is large enough to produce the potential difference for the water splitting reaction (1.23 V) but small enough to be able to absorb light in the visible region. 2 Even though hematite is a promising material, it has some limitations. The charge carrier mobility is poor and the charge carrier recombination rate is high, especially at the surface states, thus limiting the overall water splitting efficiency. 3 The conduction band edge is also below the reduction potential of hydrogen, and for this reason, a bias-free hydrogen production is not possible. 4 Surface states are electronic states that occur only at the interface between two different materials or phases. These states are due to the asymmetry of the electronic potential at the interfaces. The surface states and band structure of the hematite−electrolyte interface are illustrated in Figure 1. Two different surface states are proposed to exist: one below the conduction band and one just above the valence band. The recombination surface state (r-SS) below the conduction band is responsible for the charge carrier recombination and the intermediate surface state above the valence band (i-SS) for the hole transfer across the interface in water splitting reaction. 5,6 Electrons from the conduction band and holes from the valence band can both transfer to the r-SS. 
This causes the recombination of the photogenerated electron−hole pairs, thus preventing the holes from participating in the water splitting reaction at the photoanode surface. The photogenerated holes must be transferred from the valence band of the photoanode to the water molecules through the photoanode−electrolyte interface to cause the water splitting reaction. This charge transfer can proceed via i-SS or directly from the valence band if the i-SS is completely unoccupied. 7 Surface states are reported to be a necessary intermediate step for the charge carrier transfer from the semiconductor to the electrolyte, 8,9 but the surface states are also reported to cause parasitic losses, charge carrier recombination, and charge carrier trapping. 10 The total effect of the surface states is thus unclear. The surface states and the charge transfer properties of the hematite−electrolyte interface can be modified by substituting a small amount of different material at the hematite surface. 3,11 To optimize the water splitting efficiency, treatments that can modify the r-SS and i-SS independently are needed. However, many of the latest publications 3,10,12 only consider passivation of the r-SS.
In this research, the effect of a submonolayer of ALD TiO 2 deposited on hematite and the effect of the postannealing at 300−700°C on the surface states were studied with photoelectron spectroscopy, transient absorption spectroscopy, and electrochemical impedance spectroscopy. Two different surface states were identified, one of which can be directly linked with the produced photocurrent and the water splitting reaction and the other one with the charge carrier recombination.
■ EXPERIMENTAL SECTION
Hematite thin films were fabricated on 25 × 10 × 1.1 mm 3 indium tin oxide (ITO)-coated glass substrates (Prazisions Glas & Optik, CEC020E, ITO coating (20 ± 5 ohm/sq) coated on EAGLE2000 boro-aluminosilicate glass) by an anodic electrodeposition. The electrolyte was a 1 M solution prepared from FeCl 2 ·4H 2 O (Sigma-Aldrich, reagent grade, ≥98%), and the temperature of the electrolyte was kept at 60°C during the electrodeposition. The electrodeposition was done at a constant potential of +1.2 V vs Ag/AgCl electrode (Harvard Apparatus, Leak-Free reference electrode 69-0023) by using an Autolab PGSTAT101 potentiostat. 8 A charge of 60 mC/cm 2 was used in the electrodeposition, which corresponds to the film thickness of 60 nm. This was verified by XPS depth profiling shown in Figure S1. In the electrodeposition the Fe 2+ ions oxidize and form FeOOH that was deposited on a 10 × 15 mm 2 area. FeOOH was then converted into hematite by annealing samples in a tube furnace (Carbolite Gero 30−3000°C ) at 750°C for 8 h. The heating and cooling ramps were 5 and 1°C/min, respectively. The formation of hematite was confirmed from the XRD patterns shown in Figure S2. In addition, the annealing induced a diffusion of In and Sn from the ITO substrate to the surface and a concurrent doping of the hematite layer with In and Sn as shown in Figure S3. This has been found to be beneficial to the photocurrent because of the increased charge carrier concentration. 13−15 After the electrodeposition and annealing, two ALD cycles of TiO 2 were deposited on the top of the fabricated hematite films, which correspond to the average film thickness of 0.07 nm or 0.2 monolayers based on the growth per cycle determined from thicker films by ellipsometry (Rudolph Auto EL III ellipsometer, Rudolph Research Analytical). The deposition was performed at 200°C in a Picosun Sunale ALD R200 Advanced reactor. 
Before starting the deposition the substrates were held in the reaction chamber for 30 min to stabilize the temperature. Tetrakis(dimethylamido)titanium-(IV) (TDMAT, electronic grade, 99.999+%, Sigma-Aldrich) and ultrapure Milli-Q water were used as precursors. During the deposition the TDMAT bubbler and the precursor gas delivery line was kept at 76 and 85°C, respectively, to reach the proper vapor pressure and to prevent condensation of the precursor. The water was cooled to 18°C with a Peltier element. Ar gas was used as a carrier/purge/venting gas (99.9999%, Oy AGA Ab). 16 The continuous Ar flow in the TDMAT and water lines was 100 sccm. The deposition was started with the 1.6 s TDMAT pulse and was followed by the 0.1 s water pulse. Between each pulse the excess precursor was pumped during the 6.0 s purge period.
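The cycle-to-thickness arithmetic quoted above (two ALD cycles corresponding to an average thickness of 0.07 nm, or 0.2 monolayers) can be sketched as follows. The growth per cycle (0.035 nm) and the effective monolayer thickness (0.35 nm) are inferred here from those two figures; they are not stated explicitly in the text.

```python
def ald_coverage(cycles: int, gpc_nm: float = 0.035, monolayer_nm: float = 0.35):
    """Average ALD film thickness (nm) and monolayer coverage.

    gpc_nm: growth per cycle, inferred from 0.07 nm / 2 cycles;
    monolayer_nm: effective monolayer thickness, inferred from
    0.07 nm corresponding to 0.2 monolayers. Both are assumptions
    derived from the figures quoted in the text.
    """
    thickness = cycles * gpc_nm
    return thickness, thickness / monolayer_nm

# Two TiO2 cycles, as deposited on the hematite films
thickness, coverage = ald_coverage(2)  # ≈ 0.07 nm, ≈ 0.2 monolayers
```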
A total of 18 samples were fabricated, from which three without TiO 2 were used as reference. The remaining 15 samples were postannealed at 300, 400, 500, 600, and 700°C for 1 h, three samples at each temperature, to verify the reproducibility of the measurements. We note that all the samples had been subject to a heat treatment at 750°C before the TiO 2 deposition, and therefore the diffusion of In and Sn within the bulk of the hematite and the decrease in conductivity of the ITO substrate during the postannealing are assumed negligible. Annealing at 300°C or higher temperatures provides reasonable stability of ALD TiO 2 during chopped light measurements under the water splitting condition ( Figure S4). 16 Equipment used for XPS measurements included an analysis chamber, a load lock chamber, an X-ray source (V. G. Microtech, 8025 twin anode X-ray source), and an energy analyzer with electron multiplier and detector (V. G. Microtech, CLAM4MCD LNo5). All measurements were conducted by using X-ray source operated at 300 W power and Al anode (Al Kα, hν = 1486.7 eV). The pressure of the analysis chamber was below 2 × 10 −8 mbar. XP spectra were calibrated so that the binding energy of the Fe 3+ 2p 3/2 peak is at 710.6 eV.
Electrochemical impedance spectroscopy (EIS) and linear sweep voltammetry (LSV) were performed by using a three-electrode setup with an Ag/AgCl reference electrode (Harvard Apparatus, Leak-Free reference electrode 69-0023), a platinum wire counter electrode, and a sample as a working electrode (the diameter of the sample−electrolyte contact was 6 mm). The photoelectrochemical cell was filled with 3.5 mL of 1 M solution of NaOH (Sigma-Aldrich, sodium hydroxide, reagent grade) (pH 13.6). Measurements were done by an Autolab PGSTAT12 potentiostat (Metrohm AG) equipped with a frequency response analyzer (FRA2). The measured potential was converted to potential versus reversible hydrogen electrode (RHE) using the equation V RHE = V Ag/AgCl + 0.197 V + 0.0592 V × pH. The front side of the sample was illuminated through the electrolyte with an Asahi Spectra HAL-C100 solar simulator, and the intensity was calibrated by a 1 sun checker (Asahi Spectra CS-30). Transient absorption spectra (TAS) of the Fe 2 O 3 /TiO 2 samples were measured by using a pump−probe setup with an excitation wavelength of 380 nm under ambient air conditions. The excitation density was roughly 1 μJ/cm 2 . The primary laser pulses were obtained by using a Ti:sapphire laser (Libra F, Coherent Inc., 100 fs pulse at 1 kHz repetition rate). Most of the laser radiation was directed to a parametric amplifier (Topas C, Light Conversion Ltd.) to generate the pump pulses. Time-resolved transient absorption spectra were recorded by using an ExciPro TA spectrometer (CDP, Inc.) in the wavelength range 430−730 nm.
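The Ag/AgCl-to-RHE conversion used above is simple enough to express directly; a minimal sketch:

```python
def ag_agcl_to_rhe(v_ag_agcl: float, ph: float) -> float:
    """Convert a potential measured against Ag/AgCl to the RHE scale,
    using the relation quoted in the text:
    V_RHE = V_Ag/AgCl + 0.197 V + 0.0592 V * pH.
    """
    return v_ag_agcl + 0.197 + 0.0592 * ph

# Example: 0 V vs Ag/AgCl in the 1 M NaOH electrolyte (pH 13.6)
v_rhe = ag_agcl_to_rhe(0.0, 13.6)  # about 1.00 V vs RHE
```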
■ RESULTS AND DISCUSSION
Mixing of TiO 2 and Fe 2 O 3 Layer. The effect of the postannealing on the mixing of TiO 2 and the Fe 2 O 3 layer was studied by XPS. The XP spectra were measured before and after the ALD TiO 2 deposition and after the postannealing. The Fe 2p 3/2 spectra were fitted with parameters described in ref 17 for hematite. The Fe 2p 3/2 and Ti 2p 3/2 spectra of Fe 2 O 3 /TiO 2 samples postannealed at 300 and 700°C are presented in Figure 2. The binding energy of the Fe 2p 3/2 peak can be attributed to the oxidation state of 3+, and no changes in the Fe 2p peak shape were observed between samples. In contrast, the Ti 4+ 2p 3/2 peak shows a shift of −0.31 eV and a decrease in full width at half-maximum value from 1.78 to 1.47 eV as the postannealing temperature is increased from 300 to 700°C. Spectra for other samples are presented in Figure S3. The relative atomic concentrations and the binding energy values for the photoemission peaks of Fe, Ti, In, and Sn were obtained from XPS results and are presented in Figure 3.
For the postannealing temperatures higher than 500°C the surface concentration of Ti decreases while the surface concentrations of In and Sn are increased ( Figure 3). This indicates the diffusion of Ti at high temperatures from the surface into the top layer of hematite and the formation of mixed Fe 3+ Ti 4+ surface oxide. The decreasing binding energy of the Ti 2p 3/2 peak with the increasing postannealing temperature follows the same trend with the Ti concentration. According to Hiltunen et al., 18 the binding energy of the Ti 2p 3/2 peak shifts from 458.84 to 458.43 eV when Ti is mixed with Fe 2 O 3 . Because the peak shift is smaller than the chemical shift between Ti 3+ and Ti 4+ , 19 there is no change in the oxidation state of Ti. The binding energies of the Sn 3d 5/2 and Raman spectra showed no difference between the samples with submonolayer amount of ALD TiO 2 (two cycles) and the Fe 2 O 3 reference due to the insufficient amount of TiO 2 . However, an increase in LO peak (forbidden longitudinal optical mode) at 660 cm −1 was detected for samples with 15 ALD TiO 2 cycles when film structure was postannealed at 750°C ( Figure S5). The increase in LO peak is linked with increase in the disorder in the crystal lattice. 20 This supports the hypothesis of diffusion of Ti 4+ into the hematite film. The structures of fabricated films were confirmed to be mesoporous by scanning electron microscopy ( Figure S6). The morphology of the samples is not affected by the TiO 2 deposition or postannealing treatments.
The band gap of the Fe 2 O 3 /TiO 2 samples was defined by Tauc analysis (Figures S7 and S8). The indirect band gap was 2.1 eV for all samples, and neither the ALD TiO 2 deposition nor the postannealing had any effect on the band gap.
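Tauc analysis for an indirect gap, as used above, extrapolates the linear region of (αhν)^(1/2) versus hν to zero. The sketch below is a minimal illustration with synthetic data, not the authors' analysis code; the fitting window is an assumption and must be chosen in the linear region above the gap.

```python
import numpy as np

def tauc_indirect_bandgap(hv, alpha, fit_window):
    """Estimate an indirect band gap (eV) from a Tauc plot.

    For an indirect transition, (alpha * hv)^(1/2) is linear in hv
    above the gap; extrapolating the linear region to zero gives E_g.
    hv: photon energies (eV); alpha: absorption coefficient;
    fit_window: (lo, hi) energy range of the linear region.
    """
    y = np.sqrt(np.asarray(alpha) * np.asarray(hv))
    lo, hi = fit_window
    mask = (hv >= lo) & (hv <= hi)
    slope, intercept = np.polyfit(hv[mask], y[mask], 1)
    return -intercept / slope  # x-intercept of the linear fit

# Synthetic example: an ideal indirect absorber with E_g = 2.1 eV
hv = np.linspace(1.8, 3.0, 200)
alpha = np.where(hv > 2.1, 1e4 * (hv - 2.1) ** 2 / hv, 0.0)
e_g = tauc_indirect_bandgap(hv, alpha, (2.3, 2.9))  # ≈ 2.1 eV
```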
Surface States and Photocatalytic Properties. Photoelectrochemical (PEC) measurements were performed to determine the photoresponse, chemical stability, and charge transfer properties of the Fe 2 O 3 /TiO 2 samples. The PEC measurements included electrochemical impedance spectroscopy (EIS), linear sweep voltammetry (LSV), and chronoamperometric measurements. LSV curves are presented in Figures 4 and 7. TiO 2 addition to the hematite surface and postannealing were found to increase the saturation photocurrent and shift the onset potential anodically. The same trend in the photocurrent density curve is reported in the literature to result from Ti doping of hematite. 21,22 In sharp contrast, effectively the same change in the saturation photocurrent resulted here from Ti addition to the surface without doping of the hematite films, since the mixing of surface Ti with hematite is shown in Figure 3 to require temperatures higher than 500 °C. Postannealing is also reported to cause oxygen vacancies at the surface, which contribute to the doping concentration. 23 The equivalent circuit presented in Figure 5 was used to model the frequency response of the hematite−electrolyte interface. A similar equivalent circuit is commonly used for fitting impedance data. 9,24−26 The circuit is the simplest equivalent circuit that can model the impedance response of the hematite−electrolyte interface with reasonable accuracy. R s and C ex were found to be constant across the potential range, as the external capacitance and the resistance of the solution do not depend on the applied potential. The C sc −2 vs applied potential plot is linear, which supports that C sc represents the capacitance of the semiconductor−electrolyte interface.
The flat-band potential was calculated from the EIS data by fitting the equivalent circuit (Figure 5) and performing Mott−Schottky analysis on the C sc component (Figure S9). Hematite is an n-type semiconductor, and a relative permittivity of 32 was used in the calculations. 9 The obtained results are shown in Figure 6. The flat-band potential of hematite before the ALD deposition of TiO 2 was 0.48 V vs RHE. The deposition increased and the postannealing at 400−600 °C decreased the flat-band potential. Similar flat-band potentials are reported in the literature for the hematite/1 M NaOH interface. 27 The charge carrier density followed a similar trend to the flat-band potential, but it drastically increased for the samples postannealed at 500 °C or higher temperatures.
Figure 5: Equivalent circuit used to model the Fe 2 O 3 /TiO 2 −electrolyte interface. C ex is the capacitance due to the measuring arrangements, R s is the resistance of the electrolyte, C sc is the capacitance of the space charge layer of the Fe 2 O 3 /TiO 2 −electrolyte interface, C ss is the capacitance of the surface states, R ct,ss is the charge transfer resistance, and R trap is the trapping resistance of the surface states. 7
The photocurrent onset potentials reported for hematite in the literature 5,9,28−32 show strong variation, ranging typically from +0.70 to +1.15 V vs RHE, and therefore it is challenging to compare absolute values with the literature. However, our values (+0.77 to +0.89 V vs RHE) fall within the reported range, and most importantly we were able to assign the anodic shift to the change in the surface composition. Furthermore, the largest variation in all results was observed for samples postannealed at 500 °C, which corresponds to the temperature range where the surface composition changes strongly (Figure 3). The photocurrent onset potential is directly related to the flat-band potential plus the overpotential needed to drive the water splitting reaction.
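The Mott−Schottky step described above (flat-band potential from the intercept and donor density from the slope of a C sc −2 vs V plot) can be sketched as below. This is a minimal illustration, not the authors' code: the electrode area and the synthetic data are assumptions, while the relative permittivity of 32 follows the text.

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)
Q = 1.602176634e-19       # elementary charge (C)
KT_Q = 0.0257             # thermal voltage at room temperature (V)

def mott_schottky(v, c_sc, eps_r=32.0, area=2.83e-5):
    """Flat-band potential (V) and donor density (m^-3) from
    1/C^2 = (2 / (q * eps_r * eps0 * N_d * A^2)) * (V - V_fb - kT/q).

    eps_r = 32 as used in the text; area is the electrode contact
    area in m^2 (a 6 mm diameter contact is assumed here).
    """
    slope, intercept = np.polyfit(v, 1.0 / np.asarray(c_sc) ** 2, 1)
    v_fb = -intercept / slope - KT_Q
    n_d = 2.0 / (Q * eps_r * EPS0 * slope * area ** 2)
    return v_fb, n_d

# Synthetic check: data generated with V_fb = 0.48 V, N_d = 1e25 m^-3
area = 2.83e-5
slope_true = 2.0 / (Q * 32.0 * EPS0 * 1e25 * area ** 2)
v = np.linspace(0.7, 1.2, 50)
c_sc = 1.0 / np.sqrt(slope_true * (v - 0.48 - KT_Q))
```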
A more cathodic flat-band potential causes larger band bending, resulting in better charge separation and thus a lower charge recombination rate. 33 The flat-band potential does not correlate with the photocurrent onset potential, which implies that the needed overpotential changes when TiO 2 is deposited on the hematite films and when the films are postannealed. This anodic shift can be linked to the change in the i-SS. 34 The LDOS of the i-SS can be determined from the surface state capacitance. The filling of the surface states at a certain potential is directly proportional to the capacitance, g(E) = C/q. 9 The surface state capacitance C ss was obtained from the frequency response of the hematite−electrolyte interface (Figures S10−S21). The obtained surface state capacitance is shown in Figure 7; the measurements were done in the dark and under 100 mW/cm 2 (1 sun) illumination with an applied external bias voltage. Surface state capacitance was observed only when the measurements were done under illumination, which indicates that the filling or depleting of these states does not occur unless photons excite electron−hole pairs. Two wide and low C ss peaks are observed for the Fe 2 O 3 sample, while the C ss peaks are much more distinct for the Fe 2 O 3 /TiO 2 samples postannealed at 300 °C. Similar results are reported in the literature, 5 where it was discovered that the deposition of a different material, such as Al 2 O 3 , on top of the hematite films produces two distinct peaks. 5,35 Postannealing at 500 °C or higher temperatures causes the right peak to disappear and the left peak to move to a more anodic potential. The photocurrent onset lies at the potential corresponding to the maximum of the left C ss peak, and the onset potential and peak position shift equally. The height of the peak is larger for samples postannealed at 500 °C or higher temperatures.
For these samples the photocurrent onset is also sharper, and from these results it can be concluded that the left C ss peak is related to the photocurrent onset potential and thus to the water splitting charge transfer reaction. 5 The i-SS can be described by Tamm states, which are induced by unsaturated oxygen at the surface; these states exist just above the valence band. 21,33 The right peak does not shift but is superimposed when the postannealing temperature is higher than 500 °C. Similar double peaks were reported in the literature for Al 2 O 3 -coated electrodes, 5 and presumably this capacitance peak can be attributed to two different surface phases. The mixing of the TiO 2 and Fe 2 O 3 layers causes the peaks to combine.
The r-SS is difficult to detect by impedance spectroscopy because no charge transfer takes place through these states. 5 The r-SS was therefore studied by TAS. The measurements were done in air without electrolyte, so the data do not provide information about the i-SS. Instead, the differences in the TAS spectra can be compared and linked to the material properties of hematite. The trapping and recombination of charge carriers take place on the picosecond−nanosecond time scale, which is significantly faster than the charge transfer (∼1 s time scale) across the hematite−electrolyte interface. 1 The recombination dynamics in hematite probed by TAS on the microsecond−millisecond time scale at 580 nm has been reported to be insensitive to the electrode environment. 36 For these reasons, the TAS measurements conducted in the absence of electrolyte provide information about the charge carrier dynamics of the r-SS in the hematite films.
To compare the charge carrier dynamics between the samples, the TAS spectra were normalized at a delay time of 0.2 ps (Figure 8 and Figure S22). The spectra featured a strong peak at 570 nm, which has also been reported in the literature for measurements done under electrolyte conditions. 37−39 The wavelength corresponds to an electron transition of 2.1 eV from the top of the valence band to the localized states just below the conduction band. 1 The shapes of the spectra are similar at 0.2 ps; however, at longer delay times the spectral differences become more pronounced. At 570 nm the signal is at a level of 0.2−0.3 relative to that at 0.2 ps, and an even stronger decay can be seen on the red side of the spectrum at wavelengths longer than 640 nm. As a rough approximation, the almost complete decay in the red part can be attributed to the disappearance of the free carriers and the remaining absorption at 570 nm to the r-SS in Fe 2 O 3 . 37 The sample postannealed at 300°C shows the least recombination of free carriers and the highest r-SS population at 1 ns delay time. Interestingly, the charge carrier lifetime is prolonged only when a separate TiO 2 phase exists at the hematite surface, and the charge carrier concentration in the trap states is thus higher. From this it can be concluded that TiO 2 clearly modifies the r-SS. In general, an increased charge carrier lifetime increases the probability of the charge carriers taking part in the water splitting reaction. 32 The TAS results confirmed the successful passivation of the r-SS by the TiO 2 phase. Strikingly, no difference in the TAS signals was observed between the sample with Ti diffused into the hematite surface and the hematite reference. This suggests that the differences in the PEC performance presented in Figure 7 are not due to the charge carrier dynamics associated with the r-SS.
In contrast, these results support the hypothesis that the intermediate surface states (i-SS) probed via the surface state capacitance are involved with the charge transfer during the water splitting reaction.
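The attribution of the residual 570 nm absorption to the r-SS can be illustrated with a toy two-component decay, where a fast term represents free carriers and a slow term trapped holes. All amplitudes and lifetimes below are illustrative assumptions, not fitted values from the measurements:

```python
import math

def tas_signal(t_ps, a_free=0.75, tau_free_ps=50.0,
               a_trap=0.25, tau_trap_ps=5000.0):
    """Toy biexponential TAS decay at 570 nm.

    Fast component: free-carrier recombination.
    Slow component: holes trapped in r-SS, surviving to the ns scale.
    """
    return (a_free * math.exp(-t_ps / tau_free_ps)
            + a_trap * math.exp(-t_ps / tau_trap_ps))

# Normalize to the 0.2 ps value, as done for the measured spectra
s0 = tas_signal(0.2)
rel_1ns = tas_signal(1000.0) / s0   # residual signal at 1 ns delay
```

With these toy parameters the free-carrier term has essentially vanished by 1 ns, leaving a residual on the order of 0.2 of the initial amplitude, consistent with the 0.2−0.3 level described above for the 570 nm band.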
■ CONCLUSIONS
The surface states of hematite photoanodes take part in the charge carrier transfer, trapping, and recombination processes during the solar water splitting reaction, and have a significant impact on the photocatalytic efficiency. The modification of the hematite surface states by a submonolayer of ALD TiO 2 was studied by impedance spectroscopy and transient absorption spectroscopy. The results show that the surface states are a necessary intermediate step in the water splitting reaction, and that charge transfer can take place only when holes occupy the intermediate surface states (i-SS). The potential of the i-SS affects the photocurrent onset potential, and the LDOS of the i-SS affects the amount of generated photocurrent, which corresponds to the sharpness of the photocurrent onset. Two different surface phases (Fe 2 O 3 and TiO 2 ) give rise to two distinct surface state capacitance peaks. Postannealing causes mixing of the layers and thus merging of the i-SS peaks. Unfortunately, this also shifts the LDOS of the i-SS and the photocurrent onset in the anodic direction, which decreases the overall water splitting efficiency. The charge carrier lifetime in the recombination surface states (r-SS) is increased when a separate TiO 2 phase exists at the hematite surface, which results in efficient passivation of these detrimental surface states.
This work provides a deeper understanding on the type and role of surface states in photoelectrochemical water oxidation on hematite-based photoanodes.
A simulation of a medical ventilator with a realistic lungs model [version 1; peer review: 1 not approved]
Background: The outbreak of the COVID-19 pandemic highlighted the necessity for accessible and affordable medical ventilators for healthcare providers. To meet this challenge, researchers and engineers worldwide have embarked on an effort to design simple medical ventilators that can be easily distributed. This study provides a simulation model of a simple, one-sensor-controlled medical ventilator system, including a realistic lung model and the synchronization between a patient's breathing and the ventilator. This model can assist in the design and optimization of these newly developed systems. Methods: The model simulates the ventilator system suggested and built by the "Manshema" team, which employs a positive-pressure-controlled system with air and oxygen inputs from a hospital's external gas supply. The model was constructed using Simscape™ (MathWorks®), and guidelines for building an equivalent model in OpenModelica software are suggested. The model implements an autonomously breathing, realistic lung model and was calibrated against the ventilator prototype, accurately simulating the ventilator operation. Results: The model allows studying the expected gas flow and pressure in the patient's lungs and testing various control schemes and their synchronization with the patient's breathing. The model components, inputs, and outputs are described, an example of a simple positive end expiratory pressure control mode is given, and the synchronization with healthy and ARDS patients is analyzed. Conclusions: We provide a simulator of a medical ventilator including a realistic, autonomously breathing lung model. The simulator allows testing different control schemes for the ventilator and its synchronization with a breathing patient. Implementation of this model may assist in efforts to develop simple and accessible medical ventilators to meet the global demand.
Introduction
One of the positive consequences of COVID-19 is the great coming together of creators, scientists, and engineers to assist in the worldwide pandemic effort, including the design of custom-made open-source ventilators 1 . The review of Pearce (2020) covers about 160 publications and links to websites that provide computer-assisted design (CAD) models, construction and installation instructions, and bills of materials. It likely does not cover hundreds of other projects that are not yet published or that could not pass the author's strict definition of an open-source ventilator.
Manshema is an emergency ventilation machine created during the Assuta COVID-19 Hackathon Sprint by a group comprising engineers, medical doctors, and scientists. The Manshema Ventilator (MV) was designed to assist in the ventilation of patients who are capable of autonomous breathing yet require assistance to maintain sufficient positive end expiratory pressure (PEEP) and blood oxygen saturation levels.
One of the major drawbacks of the custom-made open-source ventilator designs is that they are created in a very short time, which does not allow detailed analysis of their performance, quality assurance, and thus regulatory approval. A key issue is the lack of a proper set of mathematical models describing the performance of a specific ventilator, owing to the large variety of parts, sensors, and components used in its creation. This study addresses this gap by creating a detailed mathematical model and a simulator of realistic lung ventilation, carefully calibrated and tuned specifically to the MV design, parts, and sensors. The simulation will give the design team the opportunity to design the next version, to extend the ventilator capabilities, and to verify its performance for specific patient conditions. Furthermore, the simulation provides a template for a large variety of open-source designs, such as Ambu bag ventilators or linear actuator ventilators, and may eventually lead to closed-loop feedback control at the level of commercial, regulatory-approved mechanical ventilators.
Methods and materials
General description
The MV consists of an input branch, which mixes air and oxygen from the hospital reservoirs and feeds the mixture into the patient mask, and an expiratory output branch, which is opened or closed by the control system. Figure 1a shows a schematic illustration of the MV. The compressed air and oxygen are supplied by the hospital's central reservoirs. The flow from each reservoir is controlled with a flow control valve. After the flow control valves, the gases flow through two similar pipes, mix, and flow through the main pipe and mask pipe towards the breathing mask. A pressure relief valve, marked Popoff, is located between the main pipe and the mask pipe. This pressure relief valve is set to mechanically limit the maximal pressure in the system and avoid over-pressuring the lungs. The expiratory air flows through a directional check valve, which prevents rebreathing of the exhaled gases. After the check valve, the outlet pipe leads to the expiratory flow control system, which opens and closes the expiratory flow path using an ON/OFF solenoid valve. The outlet of the control system is connected to a pressure relief valve that is set according to the required PEEP value. Figure 1b shows a CAD drawing of the complete MV system, and Figure 1c shows a CAD drawing of the main components of the MV. Figure 1d is a photograph of the main components of the MV prototype.
Control strategy
In order to reduce the system cost to a minimum, the MV operates with a single pressure sensor and a solenoid valve that controls the gas flow out of the system. The minimum and maximum pressures in the system are controlled with a hysteresis control scheme. This control strategy is realized with a pressure sensor located near the inlet to the solenoid valve and a relay that sets the state of the solenoid valve according to the measured pressure. In its initial state, the solenoid valve is closed, directing the input air to the patient and allowing the pressure to build up. When the patient exhales, the pressure at the expiratory pipe increases rapidly, and after it reaches the expiratory positive airway pressure (EPAP), the relay opens the solenoid valve, the exhaled air is removed, and the pressure starts to drop. Once the patient starts to inhale, air is removed from the pipes and the pressure drops further. When the pressure reaches the inspiratory positive airway pressure (IPAP), the solenoid valve closes, and the breathing cycle continues.
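The hysteresis scheme described above can be sketched as a simple relay: the valve opens when the measured pressure rises to EPAP and closes again when it falls to IPAP. The threshold values below are the ones used in the patient examples later in the paper; the class itself is an illustrative sketch, not the Simulink implementation:

```python
class HysteresisRelay:
    """Open/close a solenoid valve between two pressure thresholds (cmH2O)."""

    def __init__(self, ipap=5.0, epap=13.0):
        self.ipap, self.epap = ipap, epap
        self.valve_open = False  # initial state: closed, so pressure builds up

    def update(self, pressure):
        if not self.valve_open and pressure >= self.epap:
            self.valve_open = True    # exhalation detected: vent the circuit
        elif self.valve_open and pressure <= self.ipap:
            self.valve_open = False   # inhalation detected: direct gas to patient
        return self.valve_open

relay = HysteresisRelay()
trace = [relay.update(p) for p in (6, 10, 13.5, 9, 6, 4.5, 8)]
# valve opens at 13.5 cmH2O, stays open through 9 and 6, closes at 4.5
```

Note the memory in the relay: between IPAP and EPAP the valve keeps its previous state, which is what prevents rapid chattering around a single threshold.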
The Manshema ventilator model
The MV model was built with the MathWorks® Simulink® Simscape™ Gas system toolbox. The outline of the model followed the MathWorks® "Medical Ventilator with Lung Model" example and was modified to describe the MV design, the control system, and an autonomously breathing patient. The model source files, along with an elaborated description of model parameters, variables, Simulink® block parameters, and their values, can be found under the data availability section as well as on the model OSF webpage 2 . Figure 2 shows the block diagram of the model, and Figure 3 shows a block diagram of the control system. The lungs are modeled as a translational mechanical converter coupled to a spring, a damper, and a force source. The spring and damper model the mechanical compliance and resistance of the lungs 3 , and the force source models the muscle-induced pressure 4 resulting from the patient's autonomous breathing (patients that cannot breathe autonomously can be modeled by replacing the variable muscle pressure term with a constant pressure). The pressure induced by muscle contraction and relaxation, P mus , is realized with exponential functions as described by Fresnel et al. 4 (Equation (1)), where T 1 is the time period of muscle contraction in every breathing cycle and T tot is the breathing cycle length. τ c and τ r are the contraction and relaxation time constants, respectively, and P max is the maximum pressure that can be induced by the muscles. All the parameters in Equation (1) can be easily derived from the mouth occlusion pressure, P 0.1 , and the breathing frequency, f v , as described in ref 4. A block diagram of the lungs branch is shown in Figure 4.
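The exact functional form of Equation (1) is not reproduced in the text above; a plausible piecewise-exponential sketch of a Fresnel-style muscle pressure, rising toward P max during contraction and relaxing afterwards, can be written as follows. This is an assumed illustrative form with assumed parameter values, not the paper's exact equation:

```python
import math

def p_mus(t, p_max=4.0, t1=1.0, t_tot=4.0, tau_c=0.3, tau_r=0.3):
    """Piecewise-exponential muscle pressure over one breathing cycle (cmH2O).

    Rises toward p_max during contraction (0 <= t <= t1) and relaxes
    exponentially back toward zero for t1 < t <= t_tot. Illustrative
    sketch only, not the exact Equation (1) of Fresnel et al.
    """
    t = t % t_tot  # periodic breathing cycle
    if t <= t1:
        # normalized so that p_mus(t1) == p_max exactly
        return p_max * (1.0 - math.exp(-t / tau_c)) / (1.0 - math.exp(-t1 / tau_c))
    # exponential relaxation from the peak value
    return p_max * math.exp(-(t - t1) / tau_r)
```

The normalization in the contraction branch guarantees continuity at t = T 1 , so the pressure waveform has no jumps within a cycle.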
Model calibration. The model was calibrated against the MV prototype in two steps. First, the parameters of the PEEP and Popoff pressure relief valves were calibrated by comparing the model output to the measured output when the lungs port was blocked. In this set of experiments, the pressure at the inlet to the solenoid control valve was measured as a function of the total input gas flow rate when the solenoid valve was open and when it was closed. The experiment was repeated for PEEP values of 2, 5 and 10 cmH 2 O. For each of the pressure relief valves in the model the set pressure differential and the maximum valve open area were tuned to provide the best fit to the measured data. Figure 5 shows the modeled and measured pressure as a function of the input gas flow rate and PEEP values of 2, 5 and 10 cmH 2 O. Next, the model calibration was tested by comparing model results to the output of the MV prototype when it was connected to an IMTMedical Easylung test lung. In order to minimize the effects of the popoff and PEEP valves, the total input gas flow rate was set to 10 L/min and the PEEP was set to 2 cmH 2 O. To probe the transient response of the MV, the solenoid valve was opened and closed in intervals of 3.5 seconds. Under these conditions, pressure drops across the different parts of the system are low, the Popoff valve remains closed throughout the experiment and the pressure in the system is determined mostly by the mechanical lung parameters. This provides optimal conditions for calibrating the mechanical lungs' compliance and resistance in the model. Figure 6 shows the measured and modeled pressure (a) and gas flow (b). The gray regions in the figure are time periods in which the solenoid valve is closed. The fitted spring and damper constants for the lungs model are 148.5 N/m and 40 N/(m/s), respectively. When the solenoid valve is closed, air is directed into the test lung, increasing the pressure in it. 
Then, when the solenoid valve is reopened, the air in the test lung, along with air from the reservoirs, flows out of the system through the PEEP pressure relief valve. The MV model was able to capture the measured transient response of the system well. In particular, the modeled gas flow follows the measured values closely.
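The spring–damper lung used in this calibration is equivalent to the standard single-compartment description P = V/C + R·Q. A forward-Euler sketch of that equation (the compliance and resistance values below are illustrative round numbers, not the fitted constants above):

```python
def simulate_lung(p_drive, c_lung=0.05, r_aw=10.0, dt=0.001):
    """Single-compartment lung: P = V/C + R*Q  =>  dV/dt = (P - V/C) / R.

    p_drive : airway pressures (cmH2O) sampled every dt seconds
    c_lung  : compliance, L/cmH2O (illustrative value)
    r_aw    : airway resistance, cmH2O/(L/s) (illustrative value)
    Returns the lung volume trace in litres.
    """
    v, trace = 0.0, []
    for p in p_drive:
        q = (p - v / c_lung) / r_aw   # flow into the lung, L/s
        v += q * dt                   # forward-Euler volume update
        trace.append(v)
    return trace

# Constant 10 cmH2O drive: volume approaches C * P = 0.5 L
vols = simulate_lung([10.0] * 5000)   # 5 s of simulated time
```

The product R·C sets the filling time constant (0.5 s here), which is why the modeled pressure trace responds with an exponential lag to each valve switching event.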
Once the lung parameters were found, the model predictions were compared to experimental measurements at input flow rates of 20 L/min and 30 L/min and a PEEP of 5 cmH 2 O, which better represent the operating conditions of the MV. Figure 7 shows the measured and modeled responses; the gray regions are the time periods in which the solenoid valve was closed. Unlike the lung calibration experiments, in which the Popoff valve remains constantly closed, at a flow rate of 20 L/min the Popoff valve opens when the pressure rises above 13 cmH 2 O and closes back when it drops below 8 cmH 2 O. This transition, which is not instantaneous, is not captured well in the simulation, resulting in a deviation between the measured and modeled responses. Nevertheless, the simulation was able to predict fairly well the minimum and maximum pressures in the system, which are the most important for its safe operation. In a similar manner, there is a deviation between the measured and modeled flow in the system when the input gas flow rate is increased. The overestimation of the calculated air flow under higher input flow rates may be a result of leakages in the system that are not considered in the model.
Simulated ventilation of autonomously breathing patients
After the model was calibrated and tested, we turn to simulating the ventilation of autonomously breathing patients. We start by simulating the ventilation of a healthy person (Figure 8); the color coding is described in Table 2. As the patient begins to inhale (purple region), the lungs expand, air flows into the lungs, and the pressure in the expiratory pipe drops. Once the pressure reaches the IPAP, the solenoid valve closes (pink region) and the input gas is inhaled by the patient, resulting in a nearly constant gas flow at a rate determined by the flow regulating valves at the inputs to the system. The pressure in the lungs decreases at the beginning of this stage, as the lungs are still expanding, and then increases slowly as the lungs fill with air. When the patient attempts to exhale, the pressure in the system increases rapidly. At first the solenoid valve is still closed and gas flows into the patient's lungs (white region), but once the EPAP is reached, the solenoid valve opens and the air flows out of the MV through the expiratory pipe (light blue region).

Figure 8. Healthy patient example. The calculated pressure (a), flow (b), and tidal volume (c) time evolution in the lungs and at the control system solenoid valve. The patient breathing rate was taken to be 15 breaths per minute, the occlusion pressure is 4 cmH 2 O, IPAP was set to 5 cmH 2 O, and EPAP was set to 13 cmH 2 O. The color coding is as described in Table 2.

Figure 9 shows the time evolution of the pressure, air flow, and tidal volume simulating the ventilation of an acute respiratory distress syndrome (ARDS) patient. The patient breathing rate was taken to be 20 breaths per minute 5 , the occlusion pressure is 6.65 cmH 2 O 6 , IPAP was set to 5 cmH 2 O, and EPAP was set to 13 cmH 2 O 5 . The color coding is as described in Table 2. It can be easily seen that the MV output ventilation of the ARDS patient is very similar to that shown in Figure 8. The slight decrease in tidal volume is a result of the higher breathing rate, which is recommended for patients with ARDS 5 . The low variance in the MV output with respect to the patient condition is a result of the fairly constant gas flow upon inhalation. This may be an advantage, since it allows simple tuning of the ventilation parameters according to the guidelines for specific respiratory syndromes.

Figure 9. Acute respiratory distress syndrome (ARDS) patient example. The calculated pressure (a), flow (b), and tidal volume (c) time evolution in the lungs and at the control system solenoid valve. The patient breathing rate was taken to be 20 breaths per minute, the occlusion pressure is 6.65 cmH 2 O, IPAP was set to 5 cmH 2 O, and EPAP was set to 13 cmH 2 O. The color coding is as described in Table 2.
Perfect synchronization between the MV and the patient is obtained if the solenoid valve opens exactly when the patient attempts to exhale and closes exactly when the patient attempts to inhale. Thus, the synchronization level of the MV can be optimized by attempting to minimize the white and purple regions in the output plot.
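Assuming the simulation logs the valve state and the patient's breathing phase at each time step (a hypothetical post-processing helper, not part of the published model), the synchronization can be quantified as the fraction of samples in which the two disagree:

```python
def asynchrony_fraction(patient_inhaling, valve_open):
    """Fraction of samples where the valve state mismatches the patient phase.

    Ideal synchronization: the valve is closed while the patient inhales and
    open while the patient exhales, i.e. valve_open == (not inhaling) at
    every sample. Mismatched samples correspond to the white and purple
    regions of the output plots.
    """
    assert len(patient_inhaling) == len(valve_open)
    mismatches = sum(1 for inh, opn in zip(patient_inhaling, valve_open)
                     if opn == inh)
    return mismatches / len(valve_open)

# Perfectly synchronized toy log: score 0.0
inhale = [True, True, False, False]
valve = [False, False, True, True]
```

Minimizing this score over the control parameters (IPAP, EPAP, input flow) is one way to formalize the optimization suggested above.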
Conclusions
In this study we provide a simulation solution applicable to the majority of the custom-designed ventilators created around the world in response to the COVID-19 crisis. The simulator makes it possible to assure the quality of the designed machines against a digital twin model, to analyze the human response using the realistic lung model, and to enable future regulatory-approved designs.
Data availability
Peep and Popoff calibration (data underlying Figure 5). Extended data: Manshema_Sim_Parameters.csv (table of input parameters).
Trust Repair Efforts and Repurchase Intention after Negative Publicity of Milk Powder in Sri Lanka
This paper aimed to examine the impact of trust repair efforts on consumers' repurchase intention for milk powder products affected by negative publicity in Sri Lanka. Consumer responses to business entities' efforts to mitigate negative information are understudied in developing economies, including Sri Lanka. A quantitative survey design was adopted for this study. Data were collected from 140 consumers residing in the Western Province of Sri Lanka using a questionnaire during the pandemic. The proposed hypotheses were tested using an OLS regression. Our findings revealed that functional repair efforts, affective repair efforts, and informational repair efforts have a significant and positive impact on repurchase intention. Among the three, informational repair efforts were found to be the most important trust repair efforts influencing repurchase intention. Based on the findings, milk powder producers in Sri Lanka should pay attention to strengthening ties with consumers and consider trust repairing efforts to negate negative campaigns. The main limitation of the study is that data collection had to be limited to urban consumers, and collecting data randomly was challenging due to the pandemic. Nevertheless, our study contributes to the growing body of literature on trust repairing and trust theory from an under-explored economic and socio-cultural context.
INTRODUCTION
Bad news can cause considerable damage to an organization. Takata Corporation and Boeing are recent examples of companies going bankrupt or having their operations disrupted due to equipment failures and software glitches. Boeing incurred losses for the first time in two decades after the two deadly 737 Max crashes ("Boeing puts" 2020). Likewise, Volkswagen's car sales fell 5.3% in October 2015 after the company admitted to rigging emission tests (Kottasova, 2015). In the cases of Boeing and VW, the CEOs had to step down as the incidents had a severe negative impact on the companies. In Japan, the CEO is expected to stand down in the aftermath of a crisis. In the past, adverse effects of business activities were treated as minor externalities; today, however, such crises are not ignored and receive significant attention (Mitroff, Shrivastava, & Udwadia, 1987).

ISSN 2738-2028 (Online) | Vol. 5 | No. 2 | 2021 July

The risk of businesses being exposed to negative campaigns has heightened as unsubstantiated information circulates over the internet and tech-based media platforms. It has been claimed that consumer decisions are more likely to be affected by negative information than by positive information (Eagly & Chaiken 1993), and generally information processing is steered by the cognitive economy principle (Ahluwalia & Gurhan-Canli 2000).
Prior studies have shown that negative information lowered the consumption of foods associated with negative publicity (see Brown, 1969;Swartz & Strand, 1981;and Youn, Lim & Jin 2012). Investigating the sales loss in the aftermath of milk contamination in Hawaii, Smith, van Ravenswaay and Thompson (1988) found that the impact of negative media coverage superseded the positive coverage to deter milk purchases. Authors also claimed that government assurance of food safety and statements from the producers were futile in restoring consumer trust (Smith et al., 1988).
How companies strategize against adverse publicity and how consumers evaluate corporations' reactions to negative publicity are understudied, claim Vyravene and Rabbane (2016). Yu, Liu, Lee, & Soutar (2018) report that most negative publicity research has concentrated on Western consumers, and there has been less attempt to understand how consumers in other cultures respond to such information. Some past studies have revealed that cultural differences may influence consumers' tendency to blame brands for causing negative publicity and their purchase decisions (see Laufer, Gillespie, McBride, & Gonzalez, 2005; and Turnbull, Leek & Ying 2000).
Research context
This paper focuses on powdered milk products in Sri Lanka. Annual per capita milk consumption in Sri Lanka was around 13 kg in 1983 and around 36 kg in 1997 (Ibrahim, Staal, Daniel, & Thorpe, 1999). According to Vidanarachchi et al. (2019), Sri Lanka's annual per capita milk consumption in 2018 was approximately 35 kg. While Sri Lanka's annual per capita milk consumption is below that of India and Pakistan, it is higher than that of some developing economies. Domestic milk production in Sri Lanka fulfils only 38% of the national requirement (Central Bank, 2019). Hence, milk products, mainly in the form of powdered milk, are imported to Sri Lanka. As a result, per capita powdered milk consumption in Sri Lanka is higher than liquid milk consumption. It has been claimed that milk powder imports amount to US$ 300 million per annum (Lugoda, 2020). Milk production in Sri Lanka has increased over the years with the intervention of government policies and initiatives. In the past decade, a debate has emerged within the Sri Lankan community over whether consuming milk powder is unhealthy.
The first blow to milk powder importers in Sri Lanka was the Ministry of Technology and Research's revelation that a harmful substance identified as dicyandiamide (DCD) had been found in four imported milk powder brands in 2013 ("Harmful chemical" 2013). In 2014, a leading importer of milk powder was suspended from distributing and selling milk products after some consumers claimed to have fallen ill after consuming the company's milk powder (Aneez & Sirilal, 2014); the suspension was lifted later. The second blow occurred when a Deputy Minister claimed in the Parliament of Sri Lanka in 2019 that a harmful substance was included in imported milk powder (Siriwardena & Perera, 2019). The discourse about the negatives of milk powder consumption, involving medical professionals, parliamentarians, educators, and consumers in Sri Lankan society, has continued to persist and evolve after the above revelations. On 20 January 2020, a program was launched by the Government Medical Officers' Association (GMOA) and the Ministry of Health at the National Hospital of Sri Lanka (NHSL) to raise awareness among the general public about the health risks of imported milk powder consumption (Gunatilleke, 2020).
The GMOA has been an outright critic of milk powder consumption and claims it may cause many non-communicable diseases (see Vishwa Karma, 2019; and Government Medical Officers' Association, 2019). However, milk powder importers in Sri Lanka stressed that the claim that milk powder consumption is unhealthy is unsubstantiated, and that the assertion that milk powder consumption causes diseases is not based on scientific justification ("Companies say" 2020). Nevertheless, the Consumer Affairs Authority (CAA) classified imported milk powder as a high-risk food item, citing several reasons (Dissanayake, 2019). This occurred even though an interim report of the CAA revealed that there was no adulteration in milk powder imported to Sri Lanka ("Companies say" 2020).
In response to the above developments, milk powder importers and producers of Sri Lanka have taken various steps to mitigate the negative criticism towards the industry. These include investing in technologies to detect foreign matter in its milk powder plants, milk preservation and traceability, increasing sourcing from local suppliers, highlighting their SLS acceptance, conducting advisory workshops partnering with officials, and engaging with communities to uplift their livelihoods.
Studies that have attempted to explore the effect of negative publicity on consumer purchase intentions, consumer trust, or trust repair are few in the context of Sri Lanka. Hence, this paper examines whether consumers are willing to repurchase milk powder products affected by negative publicity after considering the trust repairing efforts of the producers. This paper is organized as follows. The following section focuses on the trust repairing literature, relevant theories, and the hypotheses. Subsequent sections detail the methods applied in this study, followed by the findings. The final section concludes the study by linking the findings to previous studies and highlighting the implications.
Consumer trust repair
A company's response to a negative campaign lies between two extremes. At one extreme is stonewalling, which is to deny responsibility or offer no measures or communications at all, and at the other extreme is an apology to the consumers or victims (Dawar & Pillutla, 2000). Many ambiguous efforts by the violator may lie between these extremes. Trust repair is one such mechanism pursued by the violating party in the aftermath of an adverse event. Bozic (2017, p. 535) defines trust repair or trust restoration as the "improvement in a trustor's trust after a trust violation damaged it." Trust repair has generally been looked upon as a dyadic relationship between the victim (trustor) and the violator (trustee) (Schweitzer, Hershey, & Bradlow, 2006). The interest of this paper lies in the dyadic relationship between consumers and the companies producing and distributing milk powder affected by the negative publicity.
Chen, Wu, and Chang (2013) claim that trust repairing plays a vital role in determining customer feelings in the event of a trust violation. Kim, Ferrin, Cooper, & Dirks (2004) claim that trust repairing is complex and may require different strategies. Prior studies reveal that trust repair may encompass functional repair efforts, affective repair efforts and informational repair efforts (Cao, Shi, & Yin, 2014;Chen et al., 2013). Economic or monetary compensation for the consumers affected by the trust violation falls within the scope of functional repair efforts (Xie & Peng, 2009). An earnest apology from the trust violator to the consumers and public is identified as an affective repair effort (Kim et al., 2004;Xie & Peng 2009). Using channels of communication to disseminate up-to-date information and to demonstrate steps taken to mitigate future incidents is regarded as informational repair efforts (Chen et al., 2013).
Repurchase intention
Repurchase intention is the decision of an individual to buy a designated product or service again (Hellier, Geursen, Carr, & Rickard, 2003). Buyers usually have first-hand experience with sellers, and repurchase is a situation in which the buyer uses previous experiences as a source for making purchase decisions (Sullivan & Kim, 2018). Some scholars have pointed out that negative publicity adversely affects repurchase intention (Yu et al., 2018). When a product or service is perceived to be low in value because of low quality or a high price, the intention to purchase is expected to be low (Sullivan & Kim, 2018), and negative publicity can further damage the overall perceived value of the product. Managers may attempt to use public relations to convince consumers of the product value in the aftermath of negative information (Yu et al., 2018).
Theoretical background
A theory of consumer trust
Trust theory (see Isaeva, Gruenewald, & Saunders, 2020) provides a foundation for enabling customer trust in organizations. Isaeva et al. (2020) propose three recommendations on how service organizations can work towards enabling customer trust, and their conceptualization of trust theory can be extended to organizations in other industries. First, the authors propose that organizations and their representatives should take a proactive approach to customer trust. This includes selecting and training employees to demonstrate trustworthy behaviour and institutionalizing trustworthy behaviour using formal and informal controls (Isaeva et al., 2020). Second, they recommend averting trust violations in the early stages of relationships and establishing plans to respond to any violations that occur. Third, they recommend initiating trust repair. Prior studies have established that trust has three components: cognitive, affective, and behavioural. The cognitive element of trust deals with beliefs and expectations, the affective element deals with emotional connectivity, and the behavioural element is concerned with the actual behaviour of the trustor (Lewicki & Brinsfield, 2017). Thus, the three main domains of trust theory based on the work of Isaeva et al. (2020) can be further strengthened by absorbing a comprehensive understanding of the concept of trust.
Hypotheses development
Functional repair efforts are centered around economic compensation (Cao et al., 2014). It is common for businesses to provide financial compensation to resolve a negative event or fault experienced by a customer in obtaining a product or service (Xie & Peng, 2009). Initiating functional repair efforts after negative publicity may have a positive or negative impact on customers' repurchase intention. Cao et al. (2014) revealed that functional repair efforts had no significant influence on repurchase intention after negative publicity in the context of Chinese dairy products. In contrast, Chen et al. (2013) revealed that functional repair remedies produce positive moods among consumers in an e-commerce context. Moreover, Xie and Peng (2009) have shown that functional repair signals a company's intention to look after the well-being and interests of its consumers. Accordingly, the following hypothesis is suggested.
H1: Functional repair efforts significantly influence the repurchase intention of milk powder post negative publicity.
Typically, affective repair efforts involve apologizing to the victims and the public and expressing regret (Kim et al., 2004). Responding to negative publicity and resolving customers' criticisms helps to improve customer satisfaction through the perception of justice and to rebuild trust in the organization (Brown, Chandrashekaran, & Tax, 1998). Extant studies have shown that when a customer receives an honest apology after an unhappy shopping experience, the customer perceives interactional justice (Smith, Bolton, & Wagner, 1999). Cao et al. (2014) and Chen et al. (2013) explored affective repair efforts and found that they positively affect repurchase intention. Repairing the customer's emotional distress therefore appears particularly important. Hence, the following hypothesis is proposed.
H2: Affective repair efforts significantly influence the repurchase intention of milk powder post negative publicity.
Informational repair efforts typically entail appropriate communication, such as presenting evidence, clarifying facts, and divulging up-to-date news during the crisis-handling stage (Xie & Peng, 2009). Yu et al. (2018) revealed that negative information may impact customer attitudes and repurchase intention. Conversely, studies have shown that informational repair efforts may effectively foster customers' positive mood and repurchase intention (Cao et al., 2014; Chen et al., 2013). Correspondingly, the following hypothesis is proposed.
H3: Informational repair efforts significantly influence the repurchase intention of milk powder post negative publicity.
Sample and data collection
Milk powder consumers in the Western Province of Sri Lanka constitute the population of this study. According to the Central Bank Annual Report (2019), the Western Province has the highest share of household expenditure on the milk and milk products category. The intention was to collect at least 200 responses from milk powder consumers. Data collection became a daunting task amidst the COVID-19 pandemic. Leading supermarket chain outlets in the Western Province were visited, and approval was obtained from the respective managers to administer the questionnaire. Due to the pandemic and organizational policies, data collection was not permitted inside the outlets; however, the managers did permit data collection outside the outlets. Accordingly, several outlets in the three districts of the Western Province were visited on Mondays, Wednesdays, and Fridays from 5.00 p.m. to 6.30 p.m. Every third customer exiting an outlet after making purchases was approached and asked whether they were willing to participate in the survey and whether they had purchased milk powder. A total of 150 questionnaires were collected after two months of effort.
Questionnaire development
The questionnaire was designed with two sections. The first section included questions about the responding consumer's profile. The second section included questions to measure the proposed variables. The predictor variables of this study are functional repair efforts (3 items), affective repair efforts (4 items), and informational repair efforts (5 items). These variables were measured on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). Repurchase intention, the outcome variable, was measured using 3 items on the same 7-point scale ranging from strongly disagree (1) to strongly agree (7). See the annexure for the questionnaire items and their sources.
Analysis method
This paper applies descriptive statistics, correlation analysis, and multiple linear regression (MLR) to analyze the collected data. MLR was executed using the ordinary least squares (OLS) estimation technique. The Statistical Package for the Social Sciences (SPSS), version 25, was used for data analysis.
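Although the analysis was carried out in SPSS, the estimation itself is standard. The minimal sketch below reproduces an OLS multiple regression in Python with NumPy; the data, coefficients, and seed are synthetic stand-ins for illustration, not the study's data.

```python
import numpy as np

def ols_fit(X, y):
    """Fit y = b0 + X @ b by ordinary least squares; return the
    coefficient vector (intercept first) and R-squared."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

# Synthetic stand-ins for the three repair-effort predictors and
# repurchase intention (n = 140, matching the usable sample size).
rng = np.random.default_rng(1)
X = rng.normal(size=(140, 3))
y = 0.1 * X[:, 0] + 0.3 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=140)
beta, r2 = ols_fit(X, y)
```

With these invented coefficients, the third predictor dominates, mirroring the pattern the paper reports for informational repair efforts; the point of the sketch is only the estimation mechanics.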
Sample adequacy
As mentioned before, data collection was undertaken amidst the pandemic. Although the intention was to collect more than 200 responses, 150 questionnaires were collected after two months of effort, and 10 had to be removed due to incompleteness and missing data. Thus, 140 complete and usable questionnaires were available for the analysis. Although it may be argued that the sample is small for a consumer behaviour study, approaching customers during a pandemic to obtain their consent and collect data was an arduous task. For the MLR, there are more than 9 observations per question item and 35 observations per variable. Furthermore, this study only intends to examine the direct impact of the predictor variables. Hence, the sample can be deemed adequate to perform the analysis and test the proposed hypotheses.

Table 1 provides background details of the consumers who responded to the survey. As depicted in the table, the majority of the respondents were female, and most were married. Most of the respondents were from the Colombo and Kaluthara districts; Gampaha district had the lowest participation in the survey. Many of the respondents had passed the G.C.E. (A/L) examination in Sri Lanka, and 47.1% had obtained a bachelor's degree or professional qualification, suggesting that most respondents were knowledgeable. Most respondents were employed in either the public or private sector, and some were self-employed.

Reliability assesses the consistency of survey items, or the extent to which the survey items reflect a construct.
The Cronbach's alpha coefficients measuring the reliability of the constructs are given in Table 2. All the constructs report a coefficient of 0.7 or above; according to Nunnally and Bernstein (1994), a Cronbach's alpha of 0.7 is an acceptable level of reliability. Fornell and Larcker (1981) suggest that composite reliability (CR) of 0.6 or more and average variance extracted (AVE) greater than 0.5 meet convergent validity expectations, and according to Mirzaei, Dehdari, Taghdisi, and Zare (2019), convergent validity can be confirmed even if AVE is less than 0.5, provided CR is above 0.6. As shown in Table 2, CR is above 0.8 for all the variables. The AVE of the informational repair efforts variable is 0.46, while for all other variables it is above 0.5; although informational repair efforts has an AVE below 0.5, its CR is greater than 0.8. Fornell and Larcker (1981) also suggest that the square root of each construct's AVE should be greater than its correlations with other constructs to establish discriminant validity. Table 2 shows that this condition holds for every construct, providing evidence of discriminant validity.
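For readers who wish to verify such statistics, the sketch below computes Cronbach's alpha from an item-score matrix and CR/AVE from standardized loadings, following the Fornell and Larcker (1981) formulas. The loadings used are hypothetical, not the study's.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def cr_and_ave(loadings):
    """Composite reliability and average variance extracted from
    standardized factor loadings (Fornell & Larcker, 1981)."""
    lam = np.asarray(loadings, dtype=float)
    err = (1 - lam ** 2).sum()
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + err)
    ave = (lam ** 2).mean()
    return cr, ave

# Perfectly consistent items give alpha = 1.
alpha_perfect = cronbach_alpha(np.column_stack([np.arange(10.0)] * 3))
# Hypothetical loadings for a three-item construct.
cr, ave = cr_and_ave([0.80, 0.75, 0.70])
```

With these loadings, CR is about 0.79 even though AVE is about 0.56; the same reasoning (CR above 0.6 despite a modest AVE) is what supports retaining the informational repair efforts construct with its AVE of 0.46.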
Univariate and bivariate analysis
Descriptive statistics and correlation coefficients between the variables are presented in Table 3. The mean values range between 3.08 and 6.15, and the standard deviations range between 0.552 and 1.095. The highest absolute skewness statistic is reported for informational repair efforts (Sk = -.35), and the highest absolute kurtosis statistic is reported for functional repair efforts (K = -1.106). In sum, the descriptive statistics suggest the variables have not significantly departed from a normal distribution.
According to the Pearson correlation matrix in Table 3, there is a positive and significant association between functional repair efforts and repurchase intention (r = .194, p = .022). Similarly, there are positive and significant associations between affective repair efforts and repurchase intention (r = .394, p < .01) and between informational repair efforts and repurchase intention (r = .531, p < .01). The highest correlation coefficient is between informational repair efforts (IR) and repurchase intention (RI), and the next highest is between affective repair efforts and informational repair efforts (r = .404, p < .01). The strength of these correlations can be interpreted using Cohen's (1988) classification into small (0.1 < |r| < 0.3), medium (0.3 < |r| < 0.5), and large (|r| > 0.5). The highest VIF value reported is 1.232. Since there is no high correlation between the variables and the VIF values are well below the stipulated threshold, multicollinearity is unlikely.

According to Table 4, all three predictor variables have a significant and positive relationship with repurchase intention. However, the three predictors explain only 34 per cent of the total variance of the outcome variable. Therefore, it seems there are omitted variables that may play an essential role in determining consumers' milk powder repurchase intention. Additionally, the relative importance of each variable was examined (see Table 5): informational repair efforts is the most important predictor among the three. It should be noted, however, that relative importance was evaluated using the traditional approach, which can, in certain circumstances, assign the greatest importance to different variables depending on whether unstandardized or standardized coefficients are used.
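As an illustration of these checks, the sketch below computes a Pearson correlation with its t statistic and derives VIFs from a predictor correlation matrix. The input values are hypothetical, chosen only to resemble the magnitudes reported in Table 3.

```python
import numpy as np

def pearson_r_and_t(x, y):
    """Pearson correlation and the t statistic used to test its
    significance (df = n - 2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((len(x) - 2) / (1 - r ** 2))
    return r, t

def vifs_from_corr(R):
    """Variance inflation factors: the diagonal of the inverse of the
    predictor correlation matrix."""
    return np.diag(np.linalg.inv(np.asarray(R, dtype=float)))

r, t = pearson_r_and_t([0, 1, 2, 3], [0, 1, 2, 4])  # toy data

# Hypothetical correlations among the three repair-effort predictors.
R = np.array([[1.0, 0.2, 0.2],
              [0.2, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
vifs = vifs_from_corr(R)
```

With these modest correlations the largest VIF is about 1.21, well under the usual cut-offs of 5 or 10, which is the same reasoning applied to the reported maximum of 1.232.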
Robustness check
Several robustness tests were performed to check the assumptions of multivariate analysis. First, normality was assessed by examining measures of shape and computing their critical ratios (Z-scores). The skewness and kurtosis values are within the acceptable thresholds (Sk < ±2, K < ±7) (Hair, Black, Babin, & Anderson, 2010). The critical ratios were computed by dividing the test statistics by their standard errors. The lowest and highest critical ratios for skewness were -1.707 and .263, respectively, and the lowest and highest for kurtosis were -2.717 and -.263. According to Kim (2013), for samples between 50 and 300, critical ratios within ±3.29 reflect a normal distribution. Additionally, Q-Q plots (not shown) were examined, and the data points were mainly positioned along the diagonal line. The above evidence suggests that the data distribution is within an acceptable level of normality despite slight deviations. Next, linearity between the variables was examined using scatterplots; the trend lines suggested that linearity existed between the variables. Next, the standardized residuals were plotted against the standardized predicted values to check for heteroscedasticity; an eyeball test of these scatterplots indicated that this assumption is not violated.
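The critical ratios described above can be reproduced as follows. This sketch uses the simple moment-based estimators of skewness and excess kurtosis with the approximate standard errors sqrt(6/n) and sqrt(24/n); SPSS applies slightly different bias-corrected formulas, so values will differ marginally.

```python
import numpy as np

def shape_z_scores(x):
    """Moment-based skewness and excess kurtosis divided by their
    approximate standard errors sqrt(6/n) and sqrt(24/n)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=0)
    skew = np.mean(z ** 3)
    ex_kurt = np.mean(z ** 4) - 3.0
    return skew / np.sqrt(6.0 / n), ex_kurt / np.sqrt(24.0 / n)

# A symmetric, flat sample of n = 140: zero skewness, platykurtic
# (negative excess kurtosis), much like the variables in Table 3.
z_sk, z_ku = shape_z_scores(np.arange(140.0))
```

Both critical ratios fall inside Kim's (2013) ±3.29 band, which is the criterion applied in the text.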
Finally, the Durbin-Watson coefficient (DW = 1.678) was calculated to test the independence of observations. According to Gujarati (2009) and Saunders, Lewis, and Thornhill (2009), the DW statistic ranges between zero and four, and a value close to two reflects independence of observations, that is, no autocorrelation. Since the DW test statistic is within this stipulated range, it was determined that independence of observations had been established.
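The Durbin-Watson statistic itself is straightforward to compute from regression residuals, as the following sketch shows.

```python
import numpy as np

def durbin_watson(residuals):
    """DW statistic: sum of squared successive differences of the
    residuals divided by their sum of squares. Values near 2 indicate
    no first-order autocorrelation; 0 and 4 are the extremes."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Extreme cases for intuition: constant residuals (perfect positive
# autocorrelation) give DW = 0; alternating residuals approach DW = 4.
dw_flat = durbin_watson(np.ones(100))
dw_alt = durbin_watson(np.tile([1.0, -1.0], 50))
```

The reported DW of 1.678 sits between these extremes and close to 2, consistent with the conclusion of independent observations.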
CONCLUSION
The purpose of this paper was to examine the impact of trust repair efforts on the repurchase intention of milk powder products affected by negative publicity. Our findings revealed that trust repair efforts have a significant and positive influence on consumers' repurchase intention of milk powder products affected by negative publicity. Among the three types of trust repair efforts, informational repair efforts had the most significant influence on consumer repurchase intention, followed by affective repair efforts and functional repair efforts.
Our findings support the notion echoed in previous studies that trust violators, in this case the milk powder producers, should pay attention to strengthening ties with their consumers through trust repair efforts. Since informational repair efforts appear to have the largest effect on repurchase intention, milk powder producers may consider improving their informational repair efforts further.
We make several contributions to the existing literature. Bozic (2017) states that trust repair is a nascent area of research, and our paper contributes to this field from an under-explored economic and socio-cultural context. First, this paper provides evidence on trust repair efforts in a South Asian developing economy. Second, our findings support previous empirical work examining trust repair. Third, the paper contributes to the growing body of literature on trust theory.
Several limitations should be taken into consideration when interpreting the results presented in this paper. The main limitation of this study is the sample. As mentioned previously, data for this study were collected during the COVID-19 pandemic, so random sampling was affected, although the researchers made every attempt to collect data systematically. Further, the data were collected from respondents residing in the urban areas of the Western Province of Sri Lanka; how far the results hold among the rural consumers of Sri Lanka is difficult to predict. Next, the scope of this study was limited to trust repair efforts and repurchase intention towards milk powder products affected by negative publicity; hence, conclusions cannot be drawn on whether trust repair efforts affect milk powder brands or corporate reputation. Finally, there could be differences between behavioural intention and the actual behaviour of buying milk powder following the negative publicity.
Future studies may consider undertaking similar studies with larger samples that represent a cross-section of consumers residing in urban and rural areas of Sri Lanka. Researchers may also consider investigating the effect of trust repair efforts on milk powder brands and company image. Further, studies may consider including actual behaviour that reflects consumer purchases post negative publicity. This study can also be extended to cover edible oil in Sri Lanka, as there have been claims that edible oil contains chemical substances beyond the levels permitted for human consumption (see "CAA removes", 2021).
Disclosure Statement
The authors declare that there is no conflict of interest and are not aware of any affiliations or funding that may affect the objectivity of this study.
Single-photon generation from self-assembled GaAs/InAlAs(111)A quantum dots with ultrasmall fine-structure splitting
We present a novel semiconductor single-photon source based on tensile-strained (111)-oriented GaAs/InAlAs quantum dots (QDs) exhibiting ultrasmall exciton fine-structure splitting (FSS) of ≤ 8 µeV. Using low-temperature micro-photoluminescence spectroscopy, we identify the biexciton-exciton radiative cascade from individual QDs, which, combined with small FSS, indicates these self-assembled GaAs(111) QDs are excellent candidates for polarization-entangled photon-pair generation.
One can generate on-demand polarization entangled photons from QDs via the biexciton-exciton decay cascade [15][16][17]. Compared with other approaches to entangled photon generation, embedded QDs are bright [18], compact and offer the benefit of compatibility with existing planar semiconductor device architectures. The key to producing robust entanglement is for the fine-structure splitting (FSS) between the energies of the two exciton bright states to be as close to zero as possible [19]. However, these intermediate exciton states are typically non-degenerate for traditional III-V QDs such as InAs/GaAs that self-assemble on (001) surfaces. Asymmetries in (001) QD structure and piezoelectric fields create finite FSS on the order of tens or hundreds of µeV [20,21]. Researchers have developed various techniques for tuning the FSS in an individual QD to zero, from post-growth annealing [22], to the use of external magnetic and electric fields [16,23].
A more scalable approach would be to work with QDs for which the FSS is intrinsically zero. Due to the high symmetry of the (111) surfaces, QDs grown with this orientation should have vanishingly small FSS [21]. The challenge is that until recently, the self-assembly of (111) QDs via the Stranski-Krastanov (SK) growth mode was believed to be impossible due to the rapid relaxation of compressive strain on this surface orientation [24][25][26][27]. As a result, various alternatives have been devised to enable the growth of QDs on (111) surfaces, including surface patterning [28,29], and approaches based on droplet epitaxy (DE) [26,27,[30][31][32].
However, reports now show that the spontaneous formation of coherently strained III-V QDs on (111) surfaces is in fact possible via a modified SK growth mode, provided one uses tensile rather than compressive strain to drive the self-assembly process [33][34][35][36].
An SK-based approach to the synthesis of (111)-oriented QDs has several advantages over other methods. The SK growth mode represents a simple, single-step route to QD self-assembly without the need for complex pre- or post-growth sample processing [28, 29]. Growth of the SK QDs takes place at similar substrate temperatures to the barriers, while DE typically involves a low-temperature step; growth interrupts are hence needed immediately before and after DE to cool and heat the substrate. These pauses lengthen the growth time, allow impurities to be incorporated at the episurface, and may preclude the rapid encapsulation of the QDs needed to prevent ripening effects.
The SK growth mode permits the use of residual strain for QD band structure engineering. DE does not rely on strain as it begins with the formation of liquid metal droplets. Indeed, DE is particularly useful for forming QDs in systems such as GaAs/AlGaAs that have negligible lattice mismatch [37]. DE has been used to produce QDs in mismatched systems such as InAs/InP [27,30,32], or InAs/InAlAs [26], but here the InAs QDs have the larger lattice constant, resulting in compression. In contrast, it is tensile strain that drives the SK self-assembly of defect-free QDs on (111) substrates [38,39]. We can take advantage of the tensile strain as well as the quantum confinement to modify the ground state transition energy. Unlike compressive strain which acts to widen the band gap, tensile strain narrows the band gap energy [33][34][35]. The tensile strain hence acts in opposition to quantum confinement, giving us highly tunable push-pull control over QD emission. Using tensile strain to red-shift the photon emission is compatible with efforts to generate entangled photons at fiber-optic infrared wavelengths for quantum communication [40]. What is more, tensile strain produces QDs with a light-hole valence band ground state, offering a way to convert between photon and electron qubits [41,42].
In addition to these benefits, we have shown previously that tensile-strained GaAs/InAlAs(111)A QDs grown by SK self-assembly have naturally low FSS [33]. After surveying multiple QDs, we measured a median FSS of 7.6 µeV and demonstrated that >60% of these QDs have FSS ⩽10 µeV, confirming the scalability of this approach. These FSS values compare favorably against the DE-based InAs QDs referred to above for which average FSS values of 17-42 µeV were reported [26,27,30], and are significantly lower than the 176 µeV reported by the same authors for InAs/InP(001) QDs that self-assemble via the SK growth mode [27].
Building on these promising results, in this paper we identify exciton and biexciton emission from individual GaAs/InAlAs(111)A QDs using power-dependent photoluminescence (PL), and report the first measurements of biexciton binding energy in tensile-strained GaAs(111)A QDs. We also provide the first evidence of photon antibunching in second order autocorrelation measurements, allowing us to confirm that individual GaAs(111)A QDs behave as single-photon sources.
Methods
Samples are grown by solid-source molecular beam epitaxy (MBE) on Fe-doped, nominally on-axis (±0.5°) InP(111)A substrates. The sample structure consists of a 50 nm In0.53Ga0.47As smoothing layer, a 200 nm In0.52Al0.48As bottom barrier, a layer of 3.5 monolayers (MLs) of GaAs QDs, a 100 nm In0.52Al0.48As top barrier, and a 10 nm In0.53Ga0.47As cap. The 3.5 ML GaAs QDs are grown at a substrate temperature of 540 °C, with a growth rate of 0.1 ML s⁻¹, under a V/III beam equivalent pressure ratio of 75. The lattice-matched InGaAs and InAlAs layers are grown at a substrate temperature of 510 °C, with a growth rate of 170 nm h⁻¹, under a V/III beam equivalent pressure ratio of 160. All layers are grown with As₂. We have optimized these MBE growth conditions previously [34, 36]. We calibrate substrate temperature by comparing known changes in the reflection high-energy electron diffraction (RHEED) surface reconstruction to pyrometer and thermocouple readings. We use RHEED intensity oscillations to measure growth rate, and find the beam equivalent pressure ratios with a beam flux monitor.
The ensemble PL emission spectrum from the 3.5 ML GaAs(111)A QD sample is shown in figure 1(a). We performed PL using a 633 nm continuous-wave laser with the sample temperature held at 5 K in a closed-cycle cryostat. Spectra were acquired with 0.175 nm resolution by optically chopping the laser at 275 Hz, sweeping the grating of a 0.3 m monochromator, and recording the output on a single-pixel InGaAs detector connected to a lock-in amplifier. The sharp peaks at 843 and 874 nm correspond to emission from the InAlAs barriers and the tensile-strained GaAs wetting layer, respectively [35]. Emission from the array of tensile-strained GaAs(111)A QDs gives rise to the broad PL peak centered at 1064 nm.
Atomic force microscopy (AFM) images of uncapped 3.5 ML GaAs(111)A QDs reveal an average height of 0.731 ± 0.176 nm and an average diameter of 72.4 ± 14.2 nm (figure 1(b)). These tensile-strained GaAs QDs are characteristically triangular or hexagonal in shape as a result of the threefold symmetry of the (111) surface orientation [36]. We selected the MBE conditions above partly with the aim of producing QD arrays with very low areal density. AFM confirms a density of ∼2 µm⁻²: around two orders of magnitude lower than is typical for traditional InAs/GaAs(001) QDs. Reducing the areal density greatly simplifies the task of collecting light from individual QDs.
To characterize the optical properties of individual QDs, we perform micro-PL (µ-PL) spectroscopy at a sample temperature of 5 K using a 633 nm continuous-wave laser for excitation (figure 1(c)). A solid-immersion lens is placed directly on the sample surface to enhance the collection efficiency, and an 18 mm aspheric lens is used to excite and collect emission from the sample. The emission is spectrally resolved with 0.07 nm resolution using the aforementioned spectrometer with a silicon charge-coupled device (CCD).
For FSS measurements, the collected µ-PL is sent through an ultra-sharp and ultra-narrow band-pass filter to isolate a single exciton resonance. The emission is next sent through a piezo-tunable Fabry-Perot etalon to map out the spectral linewidth with ∼1 GHz (∼4 µeV) resolution. A half-wave plate mounted on a rotational stage and a linear polarizer before the etalon enable measurement of the FSS as discussed in more detail below.
We acquire the second-order auto-correlation function g (2) (τ ) using a modified Hanbury Brown and Twiss (HBT) interferometer. The spectrometer is used as a filter to isolate emission from a single resonance with 6.9 nm bandwidth. The filtered light is coupled into a single-mode optical fiber, sent through a fiber-based 3 dB beam splitter, and single photons are detected using WSi superconducting nanowire single-photon detectors [43]. A time-correlated single-photon counting module enables measurement of g (2) (τ ) using a bin size of 128 ps.
Results and discussion
For the µ-PL measurements shown in figure 2 we intentionally selected GaAs(111)A QDs emitting in the 950 nm range, i.e. in the short-wavelength tail of the main QD band centered at 1064 nm (figure 1(a)). The reasons for this are twofold. First, our Si CCD is more sensitive in this shorter wavelength range. Second, the higher density of QDs emitting close to the peak at 1064 nm makes it harder to isolate emission from individual dots for the µ-PL measurements. Figure 2(a) shows µ-PL spectra from a representative GaAs(111)A QD at 5 K. These spectra feature two peaks at ∼957 and ∼959 nm that we assign to the exciton (X) and biexciton (XX), respectively. The linewidths of the peaks in figure 2(a) are limited by the ∼0.07 nm (∼90 µeV) resolution of the spectrometer; from etalon data, however, we measure linewidths of 40.0 and 81.4 µeV for the X and XX peaks, respectively. Although these are broader than exciton linewidths from traditional In(Ga)As(001) QDs [44, 45], those QD systems have undergone two decades of optimization, while SK-grown (111) QDs remain a comparatively recent discovery. As familiarity with the GaAs(111)A QD system grows, we will borrow from the methods used over the years to reduce InAs QD linewidths. For example, resonant pumping is routinely used to reduce spectral diffusion, which is undoubtedly a large source of broadening in our QD emission [44, 45].
As we increase excitation density up to 20 µW, the intensity of both peaks increases, with the longer wavelength peak growing more quickly. We plot the peak intensities against excitation density on a log-log scale (figure 2(b)). The linear (quadratic) increase for the peak labeled X (XX) is a signature that these spectra arise from an exciton-biexciton pair confined in the same GaAs(111)A QD [46]. The exciton peak saturates at excitation densities above ∼10 µW, while the biexciton intensity continues to increase quadratically until it saturates near 15 µW. We observed similar exciton-biexciton spectra across many QDs in this sample.
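The linear-versus-quadratic identification described above amounts to fitting the slope of intensity versus power on a log-log plot. A minimal sketch, using noise-free synthetic data rather than the measured intensities:

```python
import numpy as np

def power_law_exponent(power, intensity):
    """Slope of log(intensity) vs log(power). Below saturation, an
    exciton peak grows linearly (slope ~1) and a biexciton peak
    quadratically (slope ~2) with excitation power."""
    m, _ = np.polyfit(np.log(power), np.log(intensity), 1)
    return m

# Noise-free synthetic powers and intensities (arbitrary units).
P = np.array([1.0, 2.0, 4.0, 8.0])
m_x = power_law_exponent(P, 5.0 * P)        # exciton-like peak
m_xx = power_law_exponent(P, 0.2 * P ** 2)  # biexciton-like peak
```

In practice only the points below saturation should enter the fit, since both peaks roll over at higher powers as described above.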
Notably, the biexciton-to-exciton radiative transition is red-shifted from the exciton-to-ground state transition by the biexciton binding energy ∆E XX = 2.46 meV. To the best of our knowledge, this is the first measurement of the biexciton binding energy in these tensile-strained, (111)-oriented QDs. The energy level structure of the exciton-biexciton decay cascade, including ∆E XX , is shown schematically in figure 2(c). For emphasis, the diagram shows a large FSS between the intermediate exciton states. In QDs with an asymmetric confinement potential, such as traditional InAs/GaAs(001) QDs, the anisotropic electron-hole exchange interaction can lead to a large FSS on the order of tens or hundreds of µeV [20,27]. As a result, photon pairs emitted through the radiative biexciton-exciton cascade are both linearly polarized either along H or V corresponding to the high symmetry directions of the crystal.
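As a back-of-envelope check on the binding energy, one can convert the approximate peak wavelengths quoted above to photon energies via E = hc/λ. Because the quoted wavelengths are rounded to the nanometer, this gives roughly 2.7 meV, the same order as the 2.46 meV obtained from the calibrated peak fits.

```python
HC_EV_NM = 1239.842  # h*c in eV*nm

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength in nm."""
    return HC_EV_NM / wavelength_nm

# Approximate X and XX peak positions quoted in the text (nm).
delta_meV = (photon_energy_ev(957.0) - photon_energy_ev(959.0)) * 1e3
```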
A potential advantage of the QDs studied here is the intrinsic rotational symmetry of the (111) crystal orientation [36]. This symmetry eliminates the anisotropic exchange interaction so that, in principle, the FSS vanishes [21]. To measure ∆fss in the GaAs(111)A QDs, we examine the spectral position of the exciton line as a function of the collection linear polarization angle θ. As shown in figure 3(a), the polarization-resolved µ-PL is sent through an ultrasharp bandpass filter to isolate the exciton emission and then through a Fabry-Perot etalon to map out the lineshape. The emission passes through a half-wave plate and then a linear polarizer. As we rotate the half-wave plate from 0 (H) < θ < π/4 (V), Lorentzian fits to the peaks reveal a small spectral shift of the peak center equal to ∆fss = 8.3 ± 0.5 µeV. Given that the resolution of our Fabry-Perot etalon is ∼4 µeV, this value represents an upper bound on the measured FSS, but it is consistent with more detailed measurements of FSS in tensile-strained GaAs(111)A QDs published previously [33].
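The extraction of ∆fss from polarization-dependent peak centers can be sketched as a linear least-squares fit of a cos 2θ oscillation, where θ is the polarization analysis angle (in terms of half-wave-plate angle the modulation appears at twice this frequency). The fit routine and synthetic data below are illustrative, not the actual analysis code used for figure 3(a).

```python
import numpy as np

def fss_from_peak_shifts(theta, centers):
    """Fit E(theta) = E0 + A*cos(2*theta) + B*sin(2*theta) to exciton
    peak centers vs polarization angle theta (rad) and return the
    peak-to-peak oscillation amplitude, i.e. the FSS."""
    M = np.column_stack([np.ones_like(theta),
                         np.cos(2 * theta),
                         np.sin(2 * theta)])
    c, *_ = np.linalg.lstsq(M, centers, rcond=None)
    return 2.0 * np.hypot(c[1], c[2])

# Synthetic check: a QD with an 8.3 ueV splitting and arbitrary phase.
theta = np.linspace(0.0, np.pi, 12, endpoint=False)
centers = 1000.0 + (8.3 / 2) * np.cos(2 * theta + 0.3)  # ueV
fss = fss_from_peak_shifts(theta, centers)
```

Fitting both quadrature components makes the estimate insensitive to the (unknown) orientation of the QD eigenaxes relative to the laboratory H/V basis.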
To verify the single-photon emitter behavior of the QDs, we perform an HBT experiment to measure photon-antibunching. The second-order autocorrelation function g (2) (τ ) is shown in figure 3(b) acquired with continuous-wave above-gap excitation. The characteristic anti-bunching dip with g (2) (τ = 0) < 0.5 is observed. From the fit to the data (solid black line) we extract g (2) (0) = 0.33 ± 0.09 and measure a recombination lifetime of τ r = 0.84 ns. From this lifetime, we estimate a maximum emission rate of 1.2 GHz, assuming perfect preparation efficiency and zero non-radiative recombination. This represents the first confirmation of single-photon emission behavior from tensile-strained QDs on a (111) surface.
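The quoted numbers follow from the standard continuous-wave single-emitter antibunching model; the exact fit function used for figure 3(b) (including any background correction) is not specified here, so the sketch below is the simplest version.

```python
import numpy as np

def g2_cw(tau_ns, g0, tau_r_ns):
    """Simple continuous-wave antibunching model:
    g2(tau) = 1 - (1 - g0) * exp(-|tau| / tau_r)."""
    return 1.0 - (1.0 - g0) * np.exp(-np.abs(tau_ns) / tau_r_ns)

g0, tau_r = 0.33, 0.84          # fitted values quoted in the text
dip = g2_cw(0.0, g0, tau_r)     # depth of the antibunching dip
rate_ghz = 1.0 / tau_r          # upper bound on the emission rate
```

With the fitted τr = 0.84 ns, 1/τr is about 1.19 GHz, matching the ∼1.2 GHz maximum emission rate quoted above (assuming perfect preparation efficiency and no non-radiative recombination).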
A recent review of chip-scale quantum light generation allows us to put this g (2) (0) value into broader context with the current state-of-the-art for various single-photon technologies [9]. Although g (2) (0) values for defect centers of 0.1 − 0.3 are often lower than we report here [9,47,48], their optical efficiencies are usually only a few percent, owing to emission in pronounced phonon sidebands [47,48]. Compared with III-V semiconductors, the fabrication of optical cavities to enhance emission into a single optical mode is significantly more challenging with defect-center host materials like diamond. Values of g (2) (0) < 0.01 are typically observed from nonlinear sources [9], but they are probabilistic and thus cannot produce photons on demand, even with heralding [8]. In addition, to prevent multi-photon-pair generation events, non-linear sources are limited to lower single-photon generation rates than III-V (001) QDs. Values of g (2) (0) < 0.01 are also routinely reported for traditional III-V (001) QDs [9,49,50]. However, as we have noted, SK-grown QDs with a (001) orientation typically suffer from FSS that is large enough to limit their use as entangled photon sources [20,27].
Although (111) SK QDs are less mature than these other platforms and, so far, have higher g (2) (0) values, the fact that we have demonstrated single-photon anti-bunching for the first time is an exciting development. On-demand single-photon generation is not an issue for III-V (111) nanostructures [51], and we have already demonstrated that GaAs(111)A QDs exhibit low FSS [33].
We suspect that the higher g (2) (0) = 0.33 in the (111) SK QDs results from background light from the nearby wetting layer causing uncorrelated counts. To reduce g (2) (0) to the <0.01 regime and improve the single-photon purity, we intend to embed these QDs in a resonant microcavity [52,53]. We can borrow from the more than two decades of research into (001)-oriented III-V QD nanomaterials, and apply established photonic device processing techniques to (111) QDs. A resonant microcavity could help suppress unwanted background light, whilst enhancing the collection efficiency into a single optical mode through a significant Purcell enhancement. This arrangement will also allow us to make reliable measurements of the emission rate and efficiency of these (111) SK QD single-photon sources.
Thus, we expect that through microphotonic cavity integration, these (111) SK QDs will serve as high quality, scalable sources of on-demand entangled-photon pairs, an area of future study that will build on this work.
Conclusions
We have explored light emission from individual GaAs(111)A QDs that self-assemble via the SK growth mode under tensile strain. Tuning the excitation density during µ-PL spectroscopy allows us to distinguish between exciton and biexciton emission, and hence to measure the biexciton binding energy. Due to their high symmetry, we demonstrate that the GaAs(111)A QDs behave as single-photon emitters, with low FSS ⩽8 µeV. Tensile strain reduces the band gap of these self-assembled GaAs QDs meaning that their emission is significantly red-shifted compared with bulk GaAs. These QDs are amenable to being embedded within photonic microcavities to enhance photon collection for entanglement measurements. By using higher tensile strain or a semiconductor whose band gap is narrower than GaAs, we expect to push QD emission even further into the infrared. In this way, we anticipate that tensile-strained, (111)-oriented QDs could offer a route to entangled photon emitters compatible with telecommunications fiber optics.
|
v3-fos-license
|
2019-03-17T13:08:58.987Z
|
2017-06-01T00:00:00.000
|
79670125
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://juniperpublishers.com/jocct/pdf/JOCCT.MS.ID.555669.pdf",
"pdf_hash": "593df20e6b8a7a0e78f7ec442940cc70c7f1f694",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1108",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"sha1": "3fcedca7bf4ed87458f250ebe8c34f4defacd385",
"year": 2017
}
|
pes2o/s2orc
|
How to Use Risk Score Systems on Severe Aortic Stenosis?
The most widely used score is the STS score. It was generated from a U.S. database separated into three large cohorts of more than 100,000 patients each. Groups 2 and 3 included, respectively, isolated valve surgeries (aortic valve replacement, mitral valve replacement and mitral valve repair) and valve surgery combined with coronary artery bypass grafting (CABG). The performance of the STS model is poor at predicting 30-day mortality after TAVR [4]
Introduction
Aortic stenosis is the most common acquired valvular disease; when severe and symptomatic, the surgical approach was long the gold-standard therapy. It is worth noting that the treatment of severe aortic stenosis has changed in recent years, and transcatheter aortic-valve replacement (TAVR) has become an important therapy for a specific group of patients with severe aortic stenosis.
In this setting, risk scoring plays an important role in identifying high-risk patients who could benefit from a percutaneous approach. Risk scoring systems have been developed to predict mortality after cardiac surgery in adults, and preoperative risk stratification is essential to making sound surgical decisions.
Risk Scores Models
Curiously, specific scores for mortality prediction in TAVR were published only recently. TAVR-specific clinical prediction models include the French TAVR registry (FRANCE-2 model) [1], the Italian TAVI registry (OBSERVANT model) [2] and the Society of Thoracic Surgeons/American College of Cardiology Transcatheter Valve Therapy registry (ACC model) [3].
The most widely used score is the STS score. It was generated from a U.S. database separated into three large cohorts of more than 100,000 patients each. Groups 2 and 3 included, respectively, isolated valve surgeries (aortic valve replacement, mitral valve replacement and mitral valve repair) and valve surgery combined with coronary artery bypass grafting (CABG). The performance of the STS model is poor at predicting 30-day mortality after TAVR [4]. These scores were tested prospectively on every TAVR procedure in the United Kingdom from January 2007 to December 2014. A total of 7431 procedures were assessed, and all scores were analyzed in terms of calibration and discrimination. Calibration is the comparison between expected and observed event rates; discrimination is the ability to distinguish between those who will experience an event and those who will not. Discrimination of the risk models was analyzed using the area under the receiver operating characteristic (ROC) curve.
The ACC and STS models were the closest to the observed mortality in terms of absolute and relative differences [5]. The area under the ROC curve was below 0.7 for all models, with the majority close to 0.6; the ACC and FRANCE-2 models had the highest discrimination [5].
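Calibration and discrimination as defined above are straightforward to compute directly. The sketch below uses made-up predicted risks and outcomes rather than the registry data; the AUC is computed via its Mann-Whitney interpretation, and calibration is summarized as the observed-to-expected event ratio.

```python
def auc(scores_event, scores_no_event):
    """Area under the ROC curve via the Mann-Whitney statistic: the probability
    that a randomly chosen patient who died was assigned a higher predicted
    risk than one who survived (ties count one half)."""
    wins = sum((e > n) + 0.5 * (e == n)
               for e in scores_event for n in scores_no_event)
    return wins / (len(scores_event) * len(scores_no_event))

def calibration_oe(predicted_risks, outcomes):
    """Observed/expected event ratio; 1.0 means perfect calibration-in-the-large."""
    return sum(outcomes) / sum(predicted_risks)

# Hypothetical predicted 30-day mortality risks (fractions) and outcomes (1 = died)
risks    = [0.05, 0.10, 0.20, 0.08, 0.30, 0.15, 0.25, 0.12]
outcomes = [0,    1,    1,    0,    1,    0,    0,    0   ]

died     = [r for r, o in zip(risks, outcomes) if o == 1]
survived = [r for r, o in zip(risks, outcomes) if o == 0]
print(f"AUC = {auc(died, survived):.2f}, O/E = {calibration_oe(risks, outcomes):.2f}")
```

An AUC near 0.6, as reported for most of the TAVR models, means the score ranks a patient who died above a survivor only slightly more often than chance; an O/E far from 1 means the absolute predicted risks are systematically off even if the ranking is useful.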
High Risk Aortic Stenosis Patients
The first TAVR approval was for patients who were not candidates for surgery or who were at high risk of surgical complications. These recommendations derived from two cohorts of the PARTNER trial: the high-risk cohort included 699 patients with severe aortic stenosis and cardiac symptoms at 22 centers; the median STS score was 11.8%, and TAVR was non-inferior compared with cardiac surgery [6]. At 1 year, the rate of death from any cause in the intention-to-treat population (the primary study end point) was 24.2% in the transcatheter group as compared with 26.8% in the surgical group [6].
In the cohort of patients who could not undergo surgery, 358 patients were included at 21 centers, with a median STS score of 11.6±6.0%. There were many patients with low STS scores but with coexisting conditions that contributed to the surgeon's determination that the patient was not a suitable candidate for surgery, including: an extensively calcified (porcelain) aorta (15.1%), chest-wall deformity or deleterious effects of chest-wall irradiation (13.1%), oxygen-dependent respiratory insufficiency (23.5%), and frailty. At the 1-year follow-up, the rate of death from any cause (the primary end point), as calculated with the use of a Kaplan-Meier analysis, was 30.7% in the TAVI group, as compared with 50.7% in the standard-therapy group without surgery [7].
Intermediate-Risk Patients with Severe Aortic Stenosis
Recently, the PARTNER 2 trial reported results for 2032 intermediate-risk patients with severe aortic stenosis randomized at 57 centers to undergo either TAVR or surgical replacement. In intermediate-risk patients, TAVR was similar to surgical aortic-valve replacement with respect to the primary end point of death or disabling stroke. The median STS score was 5.8%; 6.7% of the patients had an STS score of less than 4.0%, 81.3% had a score between 4.0% and 8.0%, and 12.0% had a score greater than 8.0% [8].
Another recently published study from the SURTAVI investigators included a total of 1746 patients who underwent randomization at 87 centers. The mean age of the patients was 79.8 years, and all were at intermediate risk for surgery, with a mean STS score of 4.5±1.6%. In this trial, surgery was associated with higher rates of acute kidney injury, atrial fibrillation, and transfusion requirements, whereas TAVR had higher rates of residual aortic regurgitation and need for pacemaker implantation. The investigators concluded that TAVR was non-inferior compared with cardiac surgery [9].
Conclusion
In the coming years, patients with aortic stenosis will probably be scheduled for TAVR ever more often. Risk score models will be used to give patients more information about morbidity and mortality risks. The best score to use in a given institution should be validated against the local reality.
|
v3-fos-license
|
2021-07-13T21:51:54.077Z
|
2021-07-12T00:00:00.000
|
235808016
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0254442&type=printable",
"pdf_hash": "90493b436213a1fd163a49fd3ea4e306880ae7bf",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1112",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "7b1ec24862fd9c8f3c0dd9fa682fe53bd50c6be2",
"year": 2021
}
|
pes2o/s2orc
|
Alteration of corneal biomechanical properties in patients with dry eye disease
Purpose To evaluate the association between symptoms and signs of dry eye diseases (DED) with corneal biomechanical parameters. Methods This cross-sectional study enrolled 81 participants without history of ocular hypertension, glaucoma, keratoconus, corneal edema, contact lens use, diabetes, and ocular surgery. All participants were evaluated for symptoms and signs of DED using OSDI questionnaire, tear film break-up time (TBUT), conjunctival and corneal staining (NEI grading) and Schirmer test. Corneal biomechanical parameters were obtained using Corvis ST. Mixed-effects linear regression analysis was used to determine the association between symptoms and signs of DED with corneal biomechanical parameters. Difference in corneal biomechanical parameter between participants with low (Schirmer value ≤10 mm; LT group) and normal (Schirmer value >10mm; NT group) tear production was analyzed using ANCOVA test. Results The median OSDI scores, TBUT, conjunctival and corneal staining scores as well as Schirmer test were 13±16.5 (range; 0–77), 5.3±4.2 seconds (range; 1.3–11), 0±1 (range; 0–4), 0±2 (ranges; 0–9) and 16±14 mm (range; 0–45) respectively. Regression analysis adjusted with participants’ refraction, intraocular pressure, and central corneal thickness showed that OSDI had a negative association with highest concavity radius (P = 0.02). The association between DED signs and corneal biomechanical parameters were found between conjunctival staining scores with second applanation velocity (A2V, P = 0.04), corneal staining scores with second applanation length (A2L, P = 0.01), Schirmer test with first applanation time (A1T, P = 0.04) and first applanation velocity (P = 0.01). In subgroup analysis, there was no difference in corneal biomechanical parameters between participants with low and normal tear production (P>0.05). 
The associations were found between OSDI with time to highest concavity (P<0.01) and highest displacement of corneal apex (HC-DA, P = 0.04), conjunctival staining scores with A2L (P = 0.01) and A2V (P<0.01) in LT group, and Schirmer test with A1T (P = 0.02) and HC-DA (P = 0.03), corneal staining scores with A2L (P<0.01) in NT group. Conclusions According to in vivo observation with Corvis ST, patients with DED showed more compliant corneas. The increase in dry eye severity was associated with the worsening of corneal biomechanics in both patients with low and normal tear production.
Introduction Dry eye disease (DED), which is among the most frequently encountered ocular diseases, is a multifactorial disease that affects both the ocular surface and the tear film layer. Patients with DED usually present with eye irritation, photosensitivity, and blurred vision, which have an impact on both quality of life and quality of vision [1]. Previous studies using in vivo confocal microscopy have shown that DED has a significant effect on the cornea at the cellular level. Changes in the corneal epithelium, corneal nerves, corneal stroma and corneal endothelium have been observed. In addition, the corneal structural change was correlated with dry eye severity [2].
Corneal biomechanics includes elastic and viscoelastic properties, which are the capacity of cornea to reversibly deform under stress [3]. The biomechanical properties of cornea depend on the patterns of fiber organization and constitution in each layer of the cornea [4]. Various alterations of cornea in patients with DED have been demonstrated, including decreased corneal superficial epithelial cell density [5], increased corneal keratocyte density [6], increased inflammatory dendritic cells [7], decreased subbasal nerve plexus number, increased beadings and tortuosity of corneal nerve [8,9], and decreased endothelial cell density [10,11]. Moreover, ocular inflammation, which is the main pathogenesis of DED, could lead to stromal change and weakening of the corneal tissue [12,13]. Therefore, we hypothesized that DED has an impact on corneal biomechanics and it is also interesting to illustrate whether corneal biomechanical alteration associates with the disease severity or not.
The purpose of this study is to evaluate the association between symptoms and signs of dry eye diseases with corneal biomechanical parameters.
Materials and methods
In this cross-sectional study, 81 participants were recruited from Comprehensive Geriatric Clinic, King Chulalongkorn Memorial Hospital, Bangkok, Thailand. An institutional review board in the Faculty of Medicine, Chulalongkorn University approved the protocol of this study. The study adhered to the tenets of the Declaration of Helsinki. Written informed consent was obtained from each participant.
Consecutive participants were enrolled. Participants with history of contact lens use, ocular hypertension, glaucoma, keratoconus, corneal edema, corneal dystrophy, any ocular surgeries, corneal cross-linking treatment, and diabetes mellitus were excluded.
All participants were evaluated for symptoms of dry eye disease using the Ocular Surface Disease Index (OSDI) questionnaire. Best-corrected visual acuity, manifest refraction, intraocular pressure (IOP) and signs of DED including tear film break-up time (TBUT), conjunctival and corneal staining scores using NEI grading system and Schirmer test with anesthesia were consecutively done by trained corneal specialists.
Corneal biomechanical properties were then evaluated using Corneal Visualization Scheimpflug Technology (Corvis ST, Oculus, Wetzlar, Germany) by a single masked investigator. After an air impulse, the Scheimpflug camera recorded images of the first applanation, the highest concavity of the cornea, and the second applanation, respectively. Ten biomechanical parameters were recorded (Fig 1). The data from only one eye from each participant was used for analysis. The data from right eyes was analyzed first. If the right eye was excluded, the data from the left eye was used.
The statistical analysis was performed using Stata/IC for Windows (version 14.1, Stata Corp). The distribution of the data was tested by means of the Shapiro-Wilk test. Data with non-normal distribution were presented as median and interquartile range (IQR), and data with normal distribution were presented as mean and standard deviation (SD). Mixed-effects linear regression analyses adjusted for participants' spherical equivalent refraction (SE), IOP, and central corneal thickness (CCT) were used to determine the association between symptoms and signs of DED with corneal biomechanical parameters in all participants. To further evaluate the effect of tear production on corneal biomechanics, the data were classified into low tear production (LT) and normal tear production (NT) groups. The LT group included participants with Schirmer value of 10 mm or less and the NT group included participants with Schirmer value more than 10 mm. ANCOVA test was used to compare corneal biomechanical parameters between groups after adjusting for SE, IOP and CCT. Mixed-effect linear regression analyses was also used to determine the association between symptoms and signs of DED with the adjusted corneal biomechanical parameters in each group. P-values of less than 0.05 were considered as a statistical significance.
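The original analysis was performed in Stata. As an illustration of the normality-based choice of summary statistics described above, a rough Python equivalent might look like the following; the function name and sample values are hypothetical, and this is a sketch of the decision rule only, not the full mixed-effects analysis.

```python
import numpy as np
from scipy import stats

def summarize(values, alpha=0.05):
    """Shapiro-Wilk normality check, then both summary styles; 'report' names
    the one the text's decision rule would use (mean±SD if p > alpha,
    median and IQR otherwise)."""
    v = np.asarray(values, dtype=float)
    _, p = stats.shapiro(v)
    q1, med, q3 = np.percentile(v, [25, 50, 75])
    return {
        "shapiro_p": p,
        "mean": v.mean(), "sd": v.std(ddof=1),
        "median": med, "iqr": q3 - q1,
        "report": "mean±SD" if p > alpha else "median (IQR)",
    }

# Hypothetical, strongly right-skewed sample (OSDI-like scores)
s = summarize([0, 1, 2, 2, 3, 4, 5, 6, 8, 10, 13, 20, 35, 50, 77])
print(f"Shapiro-Wilk p = {s['shapiro_p']:.3g}: report as {s['report']}")
```

Mixed-effects regression and ANCOVA with covariates (SE, IOP, CCT) could be layered on top of this with a package such as statsmodels, but the normality screen above is the step that determines how each variable is reported.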
Results and discussion
This study included 81 participants who had a mean age of 66 ±3.4 years (range, 60-77 years), comprising 53 females and 28 males. Twenty-eight (34.6%), 22 (27.2%), 1 (1.2%) and 1 (1.2%) participants respectively had dyslipidemia, hypertension, rheumatoid arthritis, and a history of Stevens-Johnson syndrome. None of the participants had allergic conjunctivitis. No participants used topical eye medications, except for 39 participants (48.75%) who were using artificial tears.
Adjusting with SE, IOP, and CCT, OSDI scores showed significant negative association with HC-radius (P = 0.02). There were significant positive associations between Schirmer test and A1T (P = 0.04), and corneal staining scores with A2L (P = 0.01). Significant negative associations were found between conjunctival staining scores with A2V (P = 0.04) and between Schirmer test with A1V (P = 0.01). There were no associations between OSDI score, TBUT, corneal and conjunctival staining scores and Schirmer test with other corneal biomechanical parameters (Table 1).
Twenty-five participants (14 females, 11 males) with a mean age of 64.76 ±3.45 years (range, 60-74 years) were in the LT group and 56 participants (39 females, 17 males) with a mean age of 66.38 ±3.33 years (range, 61-77 years) were in the NT group. Demographic data in each group were shown in Table 2. There was no difference in corneal biomechanical parameters between the LT and NT groups (Table 3).
In the LT group, OSDI scores were positively associated with HC-time (P<0.01) and HC-DA (P = 0.04). Moreover, conjunctival staining scores showed significant negative associations with A2L (P = 0.01) and A2V (P<0.01), and significant positive association with HC-PD (P = 0.04). There were no associations between TBUT, corneal staining scores and Schirmer test with corneal biomechanical parameters (Table 4).
In the NT group, Schirmer test was found to be positively associated with A1T (P = 0.02) and negatively associated with HC-DA (P = 0.03). Also, there was positive association between corneal staining scores and A2L (P<0.01). There were no associations between OSDI, TBUT and conjunctival staining scores with corneal biomechanical parameters (Table 5).
Associations between symptoms and signs of DED with the alteration of corneal biomechanical properties were found in this study. Among ten corneal biomechanical parameters detected by Corvis ST, eight parameters were associated with either symptoms or signs of DED. The increase in severity of DED was associated with less stiffness of cornea in both low and normal tear production. Moreover, there was no difference in corneal biomechanics between participants with low and normal tear production.
Proper assessment of corneal biomechanical parameters is necessary since they are associated with diagnosis and management of various ophthalmic diseases including preoperative screening of corneal refractive surgery candidates and the precise intraocular pressure measurement and interpretation. The conditions associated with the alteration of corneal biomechanics have previously been mentioned including keratoconus, post-corneal refractive surgery, corneal edema, corneal scar, autoimmune diseases, myopia, and glaucoma [14][15][16][17][18]. In the current study, we found that DED, a common ocular surface disease, was significantly associated with the alteration of corneal biomechanics.
Multiple studies have shown that there was no association between symptoms and signs of DED [19,20]. However, we found that both symptoms and signs of DED except TBUT were significantly correlated with at least one among ten corneal biomechanical parameters. After an air impulse generated by Corvis ST, the patients with more severe dry eye symptoms showed smaller concavity radius (HC-radius). The patients with higher conjunctival staining scores demonstrated lower second applanation velocity (A2V). The patients with higher corneal staining scores exhibited longer length of flattened cornea at second applanation (A2L) and the patients with lower tear production evaluated by Schirmer test displayed a higher speed of corneal apex at first applanation (A1V) and shorter first applanation time (A1T). Interestingly, the alteration of corneal biomechanical parameters in patients with symptoms and signs of DED except A2L were consistent with the alteration that was found in patients with keratoconus [21][22][23][24][25][26].
PLOS ONE
Corneal biomechanics in dry eye disease Theoretically, an earlier applanation, which is represented as a shorter time to the first applanation (A1T) and a higher velocity at the first applanation (A1V), suggests a softer cornea. In addition, an increase in HC-DA and a decrease in HC-radius means a greater change in corneal shape after an air impulse, which can be interpreted as reduced corneal stiffness. Likewise, a higher velocity at the second applanation (A2V) suggested more compliant cornea. In clinical setting, various studies have compared corneal biomechanical parameters between normal and keratoconic eyes using Corvis ST. Five corneal biomechanical parameters including A1T, A2T, A1V, A2V, HC-radius and HC-DA consistently showed the alteration between normal and keratoconic eyes, while the other parameters demonstrated no significant difference [21][22][23][24][25][26]. Similar to keratoconic eyes, we found that the increase in severity of DED resulted in the alteration of corneal biomechanical parameters including HC-radius, A2V and A1T. Thus, our results indicated that the corneas of patients with DED became weaker and more deformable. The increase in severity of DED resulted in the more compliant cornea.
No difference in corneal biomechanics between low and normal tear production was detected. Both participants with low and normal tear production showed the associations between either symptoms or signs of DED and the corneal biomechanical parameters. In participants with low tear production, the increase in OSDI scores and conjunctival staining scores were associated with more compliant cornea. Furthermore, the lower Schirmer test was associated with the more compliant cornea in participants with normal tear production.
There were few studies investigating corneal biomechanics in patients with DED. Most have shown altered corneal biomechanics in patients with autoimmune-mediated DED, including Sjogren's syndrome [30]. In contrast, Firat PG and Doganay S. demonstrated no difference in corneal biomechanics between patients with and without DED using the Ocular Response Analyzer [31]. Compared to our study, the previous studies included smaller sample sizes and did not adjust the biomechanical parameters for intraocular pressure, spherical equivalent refraction, and corneal thickness, which are considered important factors affecting biomechanical properties. Most of our patients had no known underlying diseases that could affect corneal biomechanics, except one patient with rheumatoid arthritis. The alteration in corneal biomechanics in DED could be due to various changes in corneal structure, including destruction and apoptosis of the corneal epithelium, decreased corneal subbasal nerve density, inflammation of the corneal stroma, and decreased corneal endothelial cell density [2]. The corneal stroma, which comprises 90% of total corneal thickness, is the major contributor to corneal biomechanical properties [16]. Various conditions associated with corneal stromal inflammation, such as keratoconus, Sjogren syndrome, and herpes stromal keratitis, have been shown to be associated with more compliant corneas [32][33][34][35][36]. Although the immunopathogenesis of DED has not yet been fully understood, numerous studies have concluded that ocular surface inflammation plays a critical role [37][38][39]. Elevation of inflammatory cytokines, matrix metalloproteinase-9 (MMP-9), chemokines, and infiltration of immune cells have been found in DED [40][41][42]. MMP-9, which was also found to be elevated in corneal ectatic diseases, including keratoconus and post-LASIK ectasia [43,44], is a primary enzyme of the ocular surface that causes degradation of collagen and proteins in the extracellular matrix [13].
In addition, Giannaccare G et al. reported that the tear level of MMP-9 was negatively correlated with the value of corneal hysteresis in ocular graft versus host disease patients with DED [27]. Therefore, we believe that the corneal inflammation leads to a more compliant cornea in DED.
Ocular surface inflammation in DED can both cause and be the consequence of corneal and conjunctival epithelial cell damage [39]. Elsheikh A et al. demonstrated that corneal epithelial integrity was responsible for the stability of corneal biomechanics [45]. Thus, the ocular surface damage would further lead to more compliant cornea in DED. Until now, the role of the corneal nerve and corneal endothelium in biomechanics is still unclear. In addition to DED, the findings of decreased corneal nerve density, decreased corneal endothelium cell density, and poor corneal biomechanics, were also found in keratoconus and herpetic stromal keratitis [33,46]. Moreover, Parissi M. et al. have showed after strengthening cornea by collagen crosslinking treatment, corneal nerve density increased [47]. However, the findings of the decrease in corneal nerve and corneal endothelial cell density as well as the alteration of corneal biomechanics could be a consequence of corneal inflammation, which is normally found in those conditions. The direct contribution of corneal nerve and corneal endothelial cells on biomechanics need to be investigated in the future.
The limitations of this study are as follows. Firstly, we have adjusted biomechanical parameters with only spherical equivalent refraction, intraocular pressure, and corneal thickness. However, we did not assess all ocular intrinsic factors that may alter corneal biomechanics, such as axial length. Secondly, participants using artificial tears had not been excluded. This may affect the results since biomechanical properties can be altered by corneal hydration. Finally, the cause-effect relationship between symptoms and signs of DED with corneal biomechanics cannot be ascertained.
|
v3-fos-license
|
2021-07-21T13:15:46.064Z
|
2021-07-01T00:00:00.000
|
236319576
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.mdpi.com/2077-0375/11/7/529/pdf",
"pdf_hash": "66209012aa972823456261b1064771d6380fd799",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1114",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "8e09ae0a9c4520d615cc46654ed79ee13d09ecc1",
"year": 2021
}
|
pes2o/s2orc
|
Electron Microscopic Confirmation of Anisotropic Pore Characteristics for ECMO Membranes Theoretically Validating the Risk of SARS-CoV-2 Permeation
The objective of this study is to clarify the pore structure of ECMO membranes by using our approach and theoretically validate the risk of SARS-CoV-2 permeation. There has not been any direct evidence for SARS-CoV-2 leakage through the membrane in ECMO support for critically ill COVID-19 patients. The precise pore structure of recent membranes was elucidated by direct microscopic observation for the first time. The three types of membranes, polypropylene, polypropylene coated with thin silicone layer, and polymethylpentene (PMP), have unique pore structures, and the pore structures on the inner and outer surfaces of the membranes are completely different anisotropic structures. From these data, the partition coefficients and intramembrane diffusion coefficients of SARS-CoV-2 were quantified using the membrane transport model. Therefore, SARS-CoV-2 may permeate the membrane wall with the plasma filtration flow or wet lung. The risk of SARS-CoV-2 permeation is completely different due to each anisotropic pore structure. We theoretically demonstrate that SARS-CoV-2 is highly likely to permeate the membrane transporting from the patient’s blood to the gas side, and may diffuse from the gas side outlet port of ECMO leading to the extra-circulatory spread of the SARS-CoV-2 (ECMO infection). Development of a new generation of nanoscale membrane confirmation is proposed for next-generation extracorporeal membrane oxygenator and system with long-term durability is envisaged.
The most serious dysfunctions in ECMO are an excessive increase in the pressure drop along the blood flow path, plasma leakage, and a decrease in the gas exchange rate [4][5][6][7][8]. In addition, as blood coagulation and thrombus formation occur in severely ill patients with COVID-19 [9], incidents have been reported in which the blood flow path of the membrane oxygenator becomes clogged during ECMO treatment. There is a concern that treating critically ill COVID-19 patients causes a more serious excessive pressure drop.
On the other hand, plasma leakage frequently occurs when extracorporeal membrane oxygenation (ECMO) is used for respiratory support therapy [4,5], which requires more time than use in cardiovascular surgery. When the ECMO membrane pores are in contact with blood for a long time, the hydrophobicity of the membrane is gradually impaired and the membrane becomes hydrophilic; the air in the pores is replaced with plasma, and plasma leaks to the gas side. If plasma leakage occurs, not only is the gas exchange efficiency lowered, but the patient's water balance is also disturbed and, in the worst case, the patient may end up in a critical condition [3]. Even in the case of such an incident, at present, an operator such as a clinical engineer takes measures such as replacing the device with an unused membrane oxygenator before an accident occurs.
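The onset of plasma intrusion described above can be framed with the Young-Laplace relation for liquid entry into a cylindrical pore. The sketch below is illustrative and not taken from the paper: the plasma surface tension, contact angles, and pore radius are all assumed values, chosen only to show how loss of hydrophobicity (contact angle falling below 90°) flips the entry pressure from positive to negative.

```python
import math

def entry_pressure_pa(surface_tension, contact_angle_deg, pore_radius):
    """Young-Laplace breakthrough pressure for liquid entering a cylindrical
    pore: dP = -2*gamma*cos(theta)/r. Positive for a hydrophobic pore
    (theta > 90 deg), so pressure must be applied before liquid intrudes;
    negative for a hydrophilic pore, so intrusion is spontaneous."""
    theta = math.radians(contact_angle_deg)
    return -2.0 * surface_tension * math.cos(theta) / pore_radius

GAMMA_PLASMA = 0.058   # N/m, approximate plasma surface tension (assumed)
R_PORE = 0.1e-6        # m, illustrative 0.1 um pore radius

# theta > 90 deg: plasma is held out of the pore.
# theta < 90 deg: entry pressure is negative and plasma intrudes spontaneously,
# the situation corresponding to plasma leakage after hydrophobicity is lost.
for theta in (120, 95, 85):
    p = entry_pressure_pa(GAMMA_PLASMA, theta, R_PORE)
    print(f"theta = {theta:3d} deg -> entry pressure = {p / 1000:8.1f} kPa")
```

On these assumed numbers, a fresh hydrophobic pore resists hundreds of kPa of plasma pressure, which is why leakage appears only after prolonged blood contact degrades the surface chemistry.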
It has been reported that plasma components in the blood leak into the lumen of the hollow fiber membrane (plasma leakage) and that yellow foam leaks from the gas outlet port of a membrane oxygenator during ECMO treatment of critically ill patients with COVID-19 [2]. There is a concern that SARS-CoV-2 in plasma may permeate through the pores in the membrane and diffuse as an aerosol from the gas outlet port, which remains one of the issues in operating ECMO.
The pore structure of the gas exchange membrane for ECMO and the mechanism of plasma leakage have been studied so far [3][4][5]. However, the membrane and pore structures of the recent ECMO membranes have not been clarified, and the permeability of solutes such as viruses through the membrane has not been investigated.
The objective of this study is to clarify the pore structure of ECMO membranes using our approach for analyzing membrane pore structures by scanning probe microscope (SPM) and field emission scanning electron microscope (FE-SEM) [10][11][12][13][14][15][16]. Then, the permeability of SARS-CoV-2 through the membrane is evaluated using the steric exclusion model and the hindered diffusion model, which are simple permeation theories in membrane science. We suggest the development of a new generation of nanoscale membrane configurations to prevent extra-circulatory spread of SARS-CoV-2.
Hollow Fiber Membrane for Extracorporeal Membrane Oxygenator
The samples studied were commercially available extracorporeal membrane oxygenators that are typical in Japan. All are outside-blood-flow oxygenators. They are used in ECMO systems in cardiovascular surgery and in the treatment of severe acute respiratory distress syndrome, both in Japan and around the world. Table 1 shows the technical data of the samples. oxia® ACF (JMS Co., Ltd., Hiroshima, Japan; Sample A) is equipped with a hollow fiber membrane made of polypropylene and is approved as an extracorporeal membrane oxygenator (ECMO) in Japan. The MERA NHP® (SENKO MEDICAL INSTRUMENT Mfg. Co., Ltd., Tokyo, Japan; Sample B) uses a hollow fiber membrane made of polypropylene coated with a silicone layer on the outer surface. Sample B is approved as ECMO and as ECMO for assisting respiration. BIOCUBE® (Nipro Co., Ltd., Tokyo, Japan; Sample C) uses a polymethylpentene (PMP) membrane, the first PMP membrane for an artificial lung worldwide. Sample C is approved as ECMO and as ECMO for assisting respiration. The manufacturing approval standard (requirement) [17] of ECMO for assisting respiration is that "the membrane characteristics of a silicone membrane or a special polyolefin membrane can prevent plasma leakage."
Table 1 footnotes: (1) The gas permeability of the membranes was measured by the Gurley method using an air resistance tester defined in ISO 5636-5 [18]; the smaller the value, the larger the gas permeability. (2) Cardiac ECMO; usage time 6 h. (3) Respiratory ECMO; usage time 6 h; the membrane characteristics of a silicone membrane or a special polyolefin membrane can prevent plasma leakage.
Observation of Three-Dimensional Tortuous Pore Using Scanning Probe Microscope (SPM) System
The method reported in our previous study [15,16] was employed to observe the inner and outer surfaces of hollow fiber membranes.
A sample, as shown in Figure 1, was prepared in order to observe the inner surface of the hollow fiber membrane (dry). A flat surface without curvature was created to improve the accuracy of the image. When observing the outer surface of the hollow fiber, a sample of low height was prepared and a flat surface was observed. Three or more samples were observed, and representative images are shown in the Results.
Observation of Three-Dimensional Tortuous Pores Using Field Emission Scanning Electron Microscope (FE-SEM)
For comparative verification against the SPM, followed by design validation, we used an FE-SEM (JSM-7610F, JEOL Ltd., Tokyo, Japan) to observe the inner and outer surfaces of the hollow fiber membranes at an accelerating voltage of 1.5 kV, a working distance of 4.5 mm, and an emission current of 47.2 µA [16]. No conductive treatment with Au or C was applied. Pore diameters were measured in observation fields at a magnification of ×100,000 by digital image analysis using ImageJ software (National Institute of Mental Health, Bethesda, MD, USA).
Validation of SARS-CoV-2 Permeability Using the Steric Exclusion Model and Hindered Diffusion Model
An attempt was made to quantify the membrane permeability for the SARS-CoV-2 using the steric exclusion model and hindered diffusion model [19].
The diffusive coefficient of SARS-CoV-2 in water was calculated by Equation (1), expressing the Stokes–Einstein relationship:

D_AB = RT / (6πµaN_A) (1)

where R is the ideal gas constant, T is the absolute temperature, a is the solute radius, N_A is Avogadro's number (6.02 × 10^23 mol^−1), and µ is the solution viscosity. The diffusive coefficient of SARS-CoV-2 in plasma at 37 °C was calculated in consideration of the increase in plasma viscosity relative to the viscosity of water.
where the plasma temperature was set to 37 °C, assuming extracorporeal circulation. The fraction of the pore cross-sectional area through which the solute penetrates is then given by Equation (3):

K = C_A,pore / C_A = (1 − a/r)² (3)

where K is known as the solute partition coefficient, a is the molecular radius, r is the pore radius, and C_A is the solute concentration.
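Equations (1) and (3) can be evaluated directly. The following Python sketch implements the Stokes–Einstein relationship and the steric exclusion partition coefficient; the numerical inputs in the usage lines (water viscosity at 37 °C, a 40 nm virus radius, a 100 nm pore radius) are illustrative assumptions, not values taken from Table 2:

```python
import math

def stokes_einstein_D(T, mu, a):
    """Bulk diffusivity of a spherical solute, Eq. (1): D_AB = RT/(6*pi*mu*a*N_A)."""
    R = 8.314        # ideal gas constant, J/(mol K)
    N_A = 6.02e23    # Avogadro's number, 1/mol
    return R * T / (6 * math.pi * mu * a * N_A)

def partition_coefficient(a, r):
    """Steric exclusion partition coefficient, Eq. (3): K = (1 - a/r)^2."""
    if a >= r:
        return 0.0   # solute larger than the pore: fully excluded
    return (1.0 - a / r) ** 2

# illustrative values (assumptions, not from the paper):
# water at 37 degC (mu ~ 6.9e-4 Pa*s), virus radius 40 nm, pore radius 100 nm
D_virus_water = stokes_einstein_D(310.15, 6.9e-4, 40e-9)
K_virus = partition_coefficient(40e-9, 100e-9)
```

With these illustrative values, the bulk diffusivity is roughly 10^−11 m²/s and K = 0.36, i.e., only about a third of the pore cross-section would be accessible to the virus.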
Mass transfer flux in a pore is given by Equation (4), from Fick's first law applicable to a dilute solution.

where D_pore is the diffusive coefficient in a pore, L is the membrane thickness, and C_A,pore is the solute concentration in a pore. The solute concentrations in the pore at the inlet and outlet are related to the bulk solute concentrations just outside the pore by the partition coefficient. Therefore, Equation (4) is expressed as Equation (5).

The tortuous nature of the pore is accounted for by using the tortuosity (τ), giving the actual pore length (τ × L).

In addition, as hindered diffusion affects the permeation of the solute in the pores, the value of D_pore is smaller than the value of D_AB by the hindered diffusion parameter (ω_r). The term ω_r depends on the ratio of the solute radius to the pore radius (a/r). Therefore, Equation (5) can be expressed as Equation (6).

Equation (6) represents the mass transfer flux in a pore of a membrane in terms of the measurable bulk solute concentrations at the surfaces on both sides of the membrane. From Equation (6), the solute diffusive coefficient in the pore (D_pore) and the solute diffusive coefficient across a membrane pore relative to the bulk solute concentration (D_m) are defined as follows.
From previous studies [20,21], Kω_r is only a function of λ = a/r, as given by Equation (9).

The term (1 − a/r)² on the right-hand side of Equation (9) is the partition coefficient (K) of Equation (3); the remaining term on the right-hand side of Equation (9) is the hindered diffusion parameter (ω_r) of Equation (10).
ω_r represents the increased hydrodynamic drag in a pore comparable in size to the solute. (Figure captions, condensed: SPM images were acquired at scan sizes down to 500 nm × 500 nm and 200 nm × 200 nm, with the color gradient bar indicating the Z-direction scale; micrograph panels (2) and (3) correspond to ×50,000 and ×100,000, and each numbered panel magnifies the region marked by the blue square in the preceding one.)
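Equations (9) and (10) themselves are not reproduced in the extracted text. A widely used closed form for Kω_r as a function of λ = a/r in the hindered-diffusion literature is the Renkin-type correlation, which is assumed here for illustration, together with D_m = Kω_r·D_AB/τ following Equations (6)–(8):

```python
import math

def renkin_K_omega(lam):
    """Product K*omega_r as a function of lambda = a/r.
    Renkin-type correlation, assumed here as the form of Eq. (9)."""
    if lam >= 1.0:
        return 0.0
    return (1 - lam) ** 2 * (1 - 2.104 * lam + 2.09 * lam ** 3 - 0.95 * lam ** 5)

def membrane_diffusivity(D_AB, a, r, tau):
    """Effective solute diffusivity across the membrane, D_m = K*omega_r*D_AB/tau."""
    return renkin_K_omega(a / r) * D_AB / tau

# the hindrance grows sharply as the solute approaches the pore size
hindrance_small = renkin_K_omega(0.2)   # small solute, mild hindrance
hindrance_large = renkin_K_omega(0.8)   # solute close to pore size, strong hindrance
```

The correlation reduces to 1 at λ = 0 (no hindrance) and falls steeply as the solute radius approaches the pore radius.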
SPM Observations of Pore Structures of ECMO Membranes

Images of the pore structures of the ECMO membranes were obtained using SPM, but the images were unclear. Compared with the images of hemoconcentrator membranes and hemodialysis membranes (polyethersulfone) [15,16] in our previous studies, clear three-dimensional tortuous pore structures could not be observed, so an analysis by another approach was necessary. On the surface of the hollow fiber membrane of sample A, a unique pore structure stretched in an elliptical shape in the longitudinal direction of the hollow fiber membrane was observed.

FE-SEM Observations of Tortuous Pore Structures of ECMO Membranes

On the inner and outer surfaces of sample A, long elliptical pores were observed in the longitudinal direction of the hollow fiber membrane. The higher the magnification of the image, the deeper the pores that could be confirmed, confirming a three-dimensional tortuous pore structure. When polypropylene was stretched by the stretching method to form the hollow fiber membrane, the pores were also stretched in the longitudinal direction of the membrane. The pores on the outer surface were smaller than those on the inner surface. Compared with the SPM images described above, the FE-SEM gave very clear images, and the unique pore structure of sample A was observed. For this reason, unlike in our previous studies [15,16], FE-SEM is more useful for pore structure analysis of polypropylene ECMO membranes.

Sample B is also made of polypropylene, and the pores on the inner surface of the membrane are stretched in the longitudinal direction of the membrane. On the other hand, as the outer surface of the membrane is coated with a thin silicone layer, no pore structure is observed there. Thus, sample B is a polypropylene membrane coated with a silicone layer, and in the outside-blood-flow membrane oxygenator the silicone layer comes into direct contact with blood. However, as shown in Figure 5, a structure in which the silicone layer had partly peeled off was confirmed on the outer surface of the membrane.

Sample C is a PMP membrane, and its unique pore structure differs from those in Figures 3 and 4. The inner surface of the membrane has a unique mountain-range structure that includes pores. The outer surface of the membrane is highly porous compared to the inner surface. The three-dimensional unevenness of the outer surface is considerably larger than that of the inner surface, but in principle it is difficult to obtain three-dimensional information using the FE-SEM.

In addition, Figure 7 is an image of a different sample taken from the same device. The pore structure on the outer surface in Figure 7 is completely different from that in Figure 6. Many samples had structures similar to the image in Figure 7, but some had structures similar to Figure 6. This is because the skin layer shown in Figure 7 is formed during the formation (melt-spinning method) of the PMP membrane.

As described above, in this study, the unique pore structures of ECMO membranes that are commonly used in Japan and worldwide are clarified in detail for the first time using FE-SEM. In particular, each membrane has a completely different anisotropic pore structure on its inner and outer surfaces. The outside-blood-flow type is the mainstream of membrane oxygenators [22][23][24][25][26][27]. Therefore, it is necessary to appropriately design the pore structure on the outer surface of the membrane, which comes into direct contact with blood, and the pore structure on the inner surface, which comes into contact with gas. For this purpose, it is important to control the anisotropic structure of the cross-section of the membrane.

(FE-SEM figure captions, condensed: panels (2) and (3) correspond to magnifications of ×50,000 and ×100,000; panel (3) magnifies the region marked by the blue square in panel (2), and panel (2) the region marked in panel (1).)
Measurement of Pore Diameter and Pore Diameter Distribution and Evaluation of SARS-CoV-2 Permeability

For the pore diameter measurement, pores in observation fields at a magnification of ×100,000 were analyzed. As none of the pores were true circles, the major and minor axes of the pores were measured through a line length analysis. Figure 8 shows the distributions of pore diameters. Table 2 shows the values for the pore diameters of the ECMO membranes. The pore diameter on the outer surface of sample B is given for reference only, as there were just five sets of data that could be observed after the silicone layer had peeled off. Furthermore, the partition coefficient and intramembrane diffusion coefficient of SARS-CoV-2, calculated using the steric exclusion model and the hindered diffusion model described above, are also shown in Table 2.
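The axes here were measured with ImageJ line-length analysis. As an illustrative alternative (not the method used in this study), the major and minor axes of a non-circular pore in a binarized micrograph can also be estimated from the second moments of its pixel coordinates:

```python
import numpy as np

def ellipse_axes(mask):
    """Estimate full major/minor axis lengths (in pixels) of a single blob in a
    binary image from the second moments of its pixel coordinates.
    For a filled ellipse, the variance along a principal axis is (semi-axis)^2/4,
    so the full axis length is 4*sqrt(eigenvalue of the coordinate covariance)."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.stack([xs, ys]).astype(float))
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending
    return tuple(4.0 * np.sqrt(evals))               # (major, minor)

# synthetic elongated pore: ellipse with semi-axes 30 px (major) and 10 px (minor)
yy, xx = np.mgrid[0:200, 0:200]
mask = ((xx - 100) / 30.0) ** 2 + ((yy - 100) / 10.0) ** 2 <= 1.0
major, minor = ellipse_axes(mask)
```

Because the estimator is moment-based, it handles rotated and irregular pores without manual line placement, at the cost of assuming one connected blob per analyzed region.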
In Figure 8 and Table 2, the pore diameters on the outer surfaces of the membranes are smaller than those on the inner surfaces. It is necessary to verify what kind of spinning process creates the unique pore structure of each membrane. Table 2 footnotes: (1) D_m was calculated with the tortuosity on the outer surface side of the membrane taken as 1; this needs to be refined in the future. (2) Calculated glucose diffusion coefficient: 4.7 × 10^−10 m²/s; literature value: 9.3 × 10^−10 m²/s [19].
The diameter of SARS-CoV-2 is reported to be 50–200 nm [28]. Table 2 shows the partition coefficient and the intramembrane diffusive coefficient calculated with SARS-CoV-2 diameters of 50 nm and 80 nm, which are smaller than the pore diameters. The partition coefficients are greater than 0, and the intramembrane diffusion coefficients of SARS-CoV-2 are 8.9 × 10^−14–7.2 × 10^−13 m²/s. Therefore, when plasma leakage occurs in an extracorporeal membrane oxygenator, SARS-CoV-2 also permeates through the pores of the membrane with the filtration flow of plasma from the outside to the inner lumen of the membrane. The risk of SARS-CoV-2 permeation in Samples B and C was lower than that in Sample A. The risk of SARS-CoV-2 permeation differs completely owing to each membrane's anisotropic pore structure and, certainly, its chemical properties. The glucose diffusion coefficient in water calculated by this model is 4.7 × 10^−10 m²/s, while the literature value is 9.3 × 10^−10 m²/s. From these data, the credibility of the values calculated by the model is not perfect, but it is reasonably sufficient. The transfer rate of SARS-CoV-2 in the membrane is about 1/1000 of the transfer rate of glucose in water.
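Chaining the two models gives an order-of-magnitude check on the reported ratio of roughly 1/1000 between SARS-CoV-2 transport in the membrane and glucose transport in water. Every numerical input below (plasma viscosity, virus radius, pore radius, tortuosity) is an assumed illustrative value, not a parameter from Table 2:

```python
import math

K_B = 1.380649e-23            # Boltzmann constant, J/K
T = 310.15                    # 37 degC in K
MU_PLASMA = 1.2e-3            # assumed plasma viscosity, Pa*s
A_VIRUS = 40e-9               # assumed SARS-CoV-2 radius (80 nm diameter), m
R_PORE = 100e-9               # assumed pore radius, m
TAU = 1.0                     # tortuosity taken as 1, as in Table 2 note (1)

# Eqs. (1)-(2): bulk diffusivity in plasma (Stokes-Einstein)
d_bulk = K_B * T / (6 * math.pi * MU_PLASMA * A_VIRUS)

# Renkin-type hindered diffusion (assumed form of Eq. (9))
lam = A_VIRUS / R_PORE
k_omega = (1 - lam) ** 2 * (1 - 2.104 * lam + 2.09 * lam ** 3 - 0.95 * lam ** 5)
d_m = k_omega * d_bulk / TAU

D_GLUCOSE_WATER = 9.3e-10     # literature value quoted in the text, m^2/s
ratio = d_m / D_GLUCOSE_WATER
print(f"D_m = {d_m:.2e} m^2/s, D_m/D_glucose = {ratio:.1e}")
```

With these assumptions, D_m falls within the 10^−14–10^−13 m²/s decade reported in Table 2, and the ratio to glucose in water comes out near 10^−3, consistent with the "about 1/1000" statement above.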
ECMO Infection and Usefulness of Theoretically Validating SARS-CoV-2 Permeation through Membrane
Serious dysfunctions in extracorporeal membrane oxygenators are excessive pressure drop in the blood flow path due to blood coagulation and thrombosis, and plasma leakage [4][5][6][7][8]. Plasma leakage occurs in extracorporeal membrane oxygenation over the longer term. When plasma leakage occurs, not only is the gas exchange efficiency lowered, but the water balance of the patient is also disturbed; in the worst case, there is a concern that the patient may fall into a critical condition.
During the current COVID-19 pandemic, it has been reported that plasma components emerged as yellow foam from the gas outlet port of the membrane oxygenator and that positive PCR test results were obtained from the gas outlet port [2,29]. This is because SARS-CoV-2 in plasma may permeate the hollow fiber membrane, with aerosol diffusion occurring from the gas outlet port [29]. Therefore, in ECMO support for COVID-19 patients, it must be recognized that there is a risk of ECMO infection due to extra-circulatory spread of SARS-CoV-2. Moreover, there is a concern that, if blood coagulation and thrombi are generated in severely ill patients with COVID-19 [9] and cause a more serious excessive pressure drop, the transmembrane pressure (TMP) increases, and plasma and virus permeate the membrane more easily. As the inner lumen of the hollow fiber is filled with gas, the TMP is larger than that in a dialyzer, in which the inner lumen of the hollow fiber is filled with blood. With these concerns, medical staff always use N95 masks, gowns, caps, and face shields to prevent infection with SARS-CoV-2 during ECMO support, and they are heavily burdened by this serious pandemic. However, there has been no direct evidence of how such a SARS-CoV-2 infection phenomenon occurs. In particular, there is a need to verify the risk of SARS-CoV-2 leakage through PMP membranes and silicone-coated membranes, and to examine damaged membranes [29].
Therefore, in this study, from the viewpoint of membrane science, we analyzed the recent common ECMO membranes used in Japan and worldwide. The precise pore structure of the membranes was elucidated by direct microscopic observation using FE-SEM. The pore structure of the hollow fiber membranes is not homogeneous but asymmetric. As the pore structures on the inside and the outside of the membrane are different, if the pores on the outside of the membrane are highly microporous, SARS-CoV-2 penetrates from the outside to the inside of the hollow fiber lumen. Here, we find that SARS-CoV-2 may permeate the membrane, transfer from the patient's blood to the gas side, and diffuse from the gas-side outlet port of the ECMO circuit. Even in a membrane that suppresses plasma leakage with a silicone coating, such as sample B, plasma leakage may occur where the silicone layer has peeled off. In addition, even when plasma does not permeate the silicone layer, gas (water vapor) does. Furthermore, condensation ("wet lung") can occur on the gas side (the inner lumen of the hollow fiber) due to a change in the gas-side temperature, so that the pores on the inner lumen become filled with water. In these cases, SARS-CoV-2 is also likely to permeate the membrane wall even if plasma leakage does not occur. These factors increase the risk of ECMO infection caused by extra-circulatory spread of SARS-CoV-2. Additionally, the risk of ECMO infection may be quantitatively evaluated from the viewpoint of membrane science.
In terms of regulatory approval, a PMP membrane with a dense outer layer has been approved for 30 days of use for ECMO (CE marking), but whether plasma leakage is completely prevented requires further study [30]. The FDA has urgently approved the use of ECMO for up to 15 days. In Japan, ECMO is approved for use for up to 6 h only. The package inserts for the three samples in this study also state that plasma leakage may occur.
Optimal Design of Asymmetrical Pore Structure of ECMO Membrane
From these perspectives, it is necessary to appropriately design the pore structure on the outer side of the membrane that comes into direct contact with blood as well as the pore structure on the inner side of the membrane that comes into contact with gas. It is important to control the anisotropic structure of the membrane cross-section.
We also focus on fouling during ECMO treatment [16]. A next-generation membrane with long-term durability should be one in which fouling does not proceed easily and which SARS-CoV-2 does not easily penetrate, to prevent extra-circulatory spread of the virus. The three membranes in this study contribute to cardiovascular surgery and to support for severe acute respiratory distress syndrome, both in Japan and worldwide. Plasma leakage has always been an issue, and its importance is once again recognized due to the current COVID-19 pandemic. In the future, it is necessary to develop next-generation membranes and systems with long-term durability suitable for the treatment [31] of critically ill COVID-19 patients. Microstructured hollow fiber membranes that increase the gas exchange surface area have also been proposed to improve oxygenator performance [32].
Limitations of Theoretically Validating SARS-CoV-2 Permeation Based on the Membrane Transport Model
In this study, we attempted to evaluate the SARS-CoV-2 permeability using the steric exclusion model and the hindered diffusion model [19] for transport phenomena in membrane. These are fundamental and valuable models for simply calculating the SARS-CoV-2 permeability based on the data listed in Table 2.
However, in this study, data such as the molecular weight distribution, the diffusion coefficient in a flowing or quiescent fluid, the physical properties of SARS-CoV-2, its affinity with the membrane, and its concentration in plasma have not been available. As soon as such data are accumulated, more detailed validations will be feasible. Additionally, there are no actual data on SARS-CoV-2 permeation, as the suitability of using SARS-CoV-2 in vitro is questionable. To study actual virus permeation, an approach of directly observing virus permeation through the membrane wall [33] is also useful.
On the other hand, medical staff are doing rigorous work day by day, so direct evidence is required, as described above. Although the output of our study may not provide direct evidence, it provides novel insights into ECMO support of critically ill COVID-19 patients. From our research and other studies [2,29], SARS-CoV-2 is highly likely to permeate the membrane, transporting from the patient's blood to the gas side, and may diffuse from the gas-side outlet port of the ECMO circuit.
Conclusions
The precise pore structures of the ECMO membranes were clarified by direct microscopic observation by FE-SEM. The three types of membranes, a polypropylene membrane, a polypropylene membrane coated with a thin silicone layer, and a polymethylpentene (PMP) membrane, each have a unique pore structure, and the pore structures on the inner and outer surfaces of the membranes are completely different, anisotropic structures. When plasma leakage occurs during prolonged ECMO treatment, SARS-CoV-2 is also likely to permeate through the uniquely shaped anisotropic pores with the filtration flow of plasma or under wet-lung conditions. In this case, SARS-CoV-2 is discharged from the outlet port of the oxygenator gas side, so care must be taken to prevent airborne transmission and aerosol infection with SARS-CoV-2. The risk of SARS-CoV-2 permeation in the polypropylene membrane coated with a thin silicone layer (Sample B) and the PMP membrane (Sample C) was lower. During the current COVID-19 pandemic, the risk of infection for operators of medical devices is drawing attention. In the future, the development of next-generation extracorporeal membrane oxygenators and systems with long-term durability is envisaged.
Second malignancies after breast cancer: the impact of different treatment modalities
Treatment for non-metastatic breast cancer (BC) may be the cause of second malignancies in long-term survivors. Our aim was to investigate whether survivors present a higher risk of malignancy than the general population according to treatment received. We analysed data for 16 705 BC survivors treated at the Curie Institute (1981–1997) by either chemotherapy (various regimens), radiotherapy (high-energy photons from a 60Co unit or linear accelerator) and/or hormone therapy (2–5 years of tamoxifen). We calculated age-standardized incidence ratios (SIRs) for each malignancy, using data for the general French population from five regional registries. At a median follow-up 10.5 years, 709 patients had developed a second malignancy. The greatest increases in risk were for leukaemia (SIR: 2.07 (1.52–2.75)), ovarian cancer (SIR: 1.6 (1.27–2.04)) and gynaecological (cervical/endometrial) cancer (SIR: 1.6 (1.34–1.89); P<0.0001). The SIR for gastrointestinal cancer, the most common malignancy, was 0.82 (0.70–0.95; P<0.007). The increase in leukaemia was most strongly related to chemotherapy and that in gynaecological cancers to hormone therapy. Radiotherapy alone also had a significant, although lesser, effect on leukaemia and gynaecological cancer incidence. The increased risk of sarcomas and lung cancer was attributed to radiotherapy. No increased risk was observed for malignant melanoma, lymphoma, genitourinary, thyroid or head and neck cancer. There is a significantly increased risk of several kinds of second malignancy in women treated for BC, compared with the general population. This increase may be related to adjuvant treatment in some cases. However, the absolute risk is small.
The overall survival rate of patients with early breast cancer (BC) has increased over the years largely because adjuvant therapy, whether chemotherapy, radiotherapy or hormone therapy, has helped prevent local and distant failures (Fox, 1979; Jones and Raghavan, 1993; EBCTCG, 2005). Second malignancies that occur in long-term survivors may be due to sporadic cancers that would have occurred anyway, environmental or genetic factors (Klijn et al, 1997; Schrag et al, 1997; Turner et al, 1999; Meijers-Heijboer et al, 2000; Pierce et al, 2000, 2003; Stoppa-Lyonnet et al, 2000; Galper et al, 2002; Kauff et al, 2002; Pierce, 2002; Robson, 2002; Seynaeve et al, 2004; Kirova et al, 2005a, b, 2006a; Laki et al, 2007), or BC treatment (Neugut et al, 1993; Inskip et al, 1994; Ahsan and Neugut, 1998; Karlsson et al, 1998; Kirova et al, 1998, 2005a, b, 2007; Obedian et al, 2000; Rubino et al, 2000; Scholl et al, 2001; Shousha et al, 2001; Yap et al, 2002, 2005; Deutsch et al, 2003; Zablotska and Neugut, 2003; Zablotska et al, 2005; Mellemkjaer et al, 2006). The aim of this study was to estimate the risk of a second malignancy after adjuvant treatment for BC in a homogeneous cohort of patients from a single institution. The observed incidence of second malignancies in these BC patients was compared with the expected age-adjusted number of new cases in the general population of French women as given by data from five regional registries (Remontet et al, 2003).
PATIENTS AND METHODS
We analysed data for 16 705 consecutive patients with non-metastatic BC who were treated at the Institut Curie between 1981 and 1997. The data, including treatments, were entered prospectively into the Institute's BC database set up in 1981 (Salmon et al, 1997). Chemotherapy regimens in the adjuvant and neoadjuvant settings varied over time, based on CMF (cyclophosphamide, methotrexate and 5-fluorouracil), FAC (5-fluorouracil, adriamycin and cyclophosphamide) or FEC (epirubicin). All patients received alkylating agents and the majority received anthracyclines. Hormonal therapy consisted mostly of 2–5 years of administration of tamoxifen. Patients who underwent radiotherapy received high-energy photons produced by a 60Co unit or linear accelerator, as previously described, either as sole treatment or pre- or post-surgery (Fourquet et al, 1991; Campana et al, 2005; Kirova et al, 2006b). Follow-up included a six-monthly clinical examination and a once-yearly mammogram for 5 years, and then a once-yearly clinical examination and a unilateral or bilateral mammogram for the lifetime of every patient. All follow-up data were entered into the database. At 5 years, 5% of patients were lost to follow-up and at 10 years, 8% were lost to follow-up.
We recorded clinical and primary tumour variables, radiation history and irradiation fields, for all patients with histologically confirmed second malignancies. Second malignancies included all first cancers occurring after treatment of the primary BC, but excluded contralateral BC.
Statistical analysis
We first calculated Kaplan–Meier cumulative incidence and the 10-year risk of developing each type of second malignancy (Kaplan and Meier, 1958). The observed crude incidence rates in the entire patient population (cases per 100 000 person-years) were then compared with the expected incidence in the general population of French women as given by age-standardized data from five regional registries (Remontet et al, 2003), and a standardized incidence ratio (SIR) was calculated for each malignancy. We then calculated the SIRs for the highest-risk malignancies according to the adjuvant treatment the patients had received, to study the impact of treatment on risk. The Poisson regression model was used to adjust the analysis. The data were analysed using 'S Plus 6.2, Insightful Corp.' software.
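As an illustration of the SIR calculation, the ratio of observed to expected counts can be paired with an approximate Poisson confidence interval. The counts below are hypothetical, and Byar's approximation to the exact Poisson limits is our assumption; the paper does not state which CI method was used:

```python
import math

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio (observed/expected) with an approximate
    95% CI via Byar's approximation to the exact Poisson limits."""
    sir = observed / expected
    o = observed
    lower = o * (1 - 1 / (9 * o) - z / (3 * math.sqrt(o))) ** 3 / expected
    op1 = o + 1
    upper = op1 * (1 - 1 / (9 * op1) + z / (3 * math.sqrt(op1))) ** 3 / expected
    return sir, lower, upper

# hypothetical counts: 40 observed second cancers vs. 20.0 expected
sir, ci_low, ci_high = sir_with_ci(40, 20.0)
```

An SIR whose confidence interval excludes 1 (as here, with both limits above 1) indicates an incidence significantly different from the reference population.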
RESULTS
Median follow-up was 10.5 years (range 0.2–24 years). Median patient age at the time of BC diagnosis was 56.2 years. Of the total population of 16 705 patients, 13 472 (80.6%) received radiation therapy; of these, 2347 (17.4%) underwent mastectomy followed by radiotherapy, 8596 (63.8%) lumpectomy then radiotherapy, and 2529 (18.8%) were treated by radiotherapy alone. A total of 4528 patients (27.1%) received chemotherapy (14.3% chemotherapy alone; 12.8% chemotherapy plus hormone therapy) and 16.5% received hormone therapy alone. Overall, 9414 patients (56.4%) did not receive any systemic adjuvant therapy. The number of patients receiving different treatment combinations is given in Table 1.
By 10.5 years of median follow-up, 709 patients had developed a second malignancy. Table 2 gives the cumulative incidence of second malignancies 10.5 years after BC in the study population, in decreasing order of incidence. Gastrointestinal (GI) cancer was the most common, followed by gynaecological cancer (cervical and endometrial) and ovarian cancer. Table 3 compares the observed crude incidence in patients with the incidence in the general population of French women. Of all the malignancies, only leukaemia, ovarian and other gynaecological cancers (cervical and endometrial), and GI tumours showed a significantly higher incidence in patients than in the general population. Among the 74 patients with histologically confirmed primary ovarian cancer, 13 underwent genetic testing because they presented a familial history of BC or ovarian cancer and, of these 13 patients, 10 were carriers of BRCA mutations (9 of BRCA1, 1 of BRCA2).
The extent to which the different treatments constituted risk factors for a second malignancy is shown in Table 4. Chemotherapy was the most important risk factor for leukaemia and highly significantly increased the risk of this disease. Radiotherapy was a much less significant risk factor. Both hormone treatment and radiotherapy were significant risk factors for gynaecological cancers. The SIR of ovarian cancer was threefold higher in patients who had received radiotherapy plus chemotherapy than in patients receiving no adjuvant therapy, and the combination treatment was a highly significant risk factor. Chemotherapy alone had no significant effect, possibly because of the small number of events and lack of statistical power. We found no relationship between GI tumours and BC treatment (not shown).
DISCUSSION
To our knowledge, this is the largest retrospective study from a single institution on second malignancies and one of the first to attempt to relate the incidence and risk of a second malignancy in patients with non-metastatic BC to the expected number of cases in the general population of women of the same age, after stratifying patients by treatment received (Rubino et al, 2000). Patients treated for BC showed an increased risk of leukaemia, ovarian cancer, and gynaecological cancers, and a slightly enhanced risk of GI cancers, in addition to the well-known risk of developing sarcomas (Kirova et al, 2005b) and lung cancer after radiation therapy. The increase in leukaemia was most strongly related to chemotherapy (alkylating agents) and that in gynaecological cancers to hormone therapy (mainly tamoxifen). Radiation therapy alone also had a significant, but lesser, effect, found only in comparison with the general population (Rubino et al, 2000).
There was no difference between irradiated and non-irradiated patients with regard to leukaemia risk, but there was a significant difference between our patients and the general population. Such a difference has already been noted and has been related to the use of adjuvant chemotherapy (7, 13, 19, 34; Rubino et al, 2000). At the Institut Gustave Roussy, the overall SIR for leukaemia was 3.1 (95% confidence interval (CI): 1.7–5.0) (Rubino et al, 2000). Our observation of an increased risk of ovarian cancer confirms previous findings (Easton et al, 1993; Breast Cancer Linkage Consortium, 1997; Fisher et al, 1998; Chappuis et al, 2000; Haber, 2002; Haffty et al, 2002; Kauff et al, 2002; Pierce et al, 2003; Blamey et al, 2004) and suggests that these patients may have a familial predisposition to BC and ovarian cancer. Although we tested 13 of the 74 patients with ovarian cancer for BRCA1 or BRCA2 mutations and found a mutation in 10 of the 13 patients with familial cancer, this result is not representative of the whole population of patients: we included patients from 1981 onwards but only began genetic testing in the early nineties. The increased risk of endometrial cancer might be due to tamoxifen use, as shown by others (Ewertz and Mouridsen, 1985; Brenner et al, 1993; Volk and Pompe-Kirn, 1997). Confirming this would require distinguishing the different types of hormone therapy (anti-estrogens, anti-aromatase) from surgical hysterectomy and radiation-induced castration.
No relationship between GI cancers and the different treatment modalities was observed. Neither this nor our previous study found an increased incidence of oesophageal cancers related to radiation treatment.
A major strength of our study is the large volume of individual patient data from a single institution. This differentiates it from epidemiological studies that lack individual data on patient treatment and from most single-institution series that are much smaller. However, despite the large number of patients and long follow-up (10.5 years), the incidence of second malignancies may nevertheless remain underestimated because of the long latency period of some tumours.
In conclusion, this study has confirmed an increased risk of second malignancies in women treated for BC compared with the general population. This increase may be related to adjuvant treatment in some cases. However, the absolute risk is small, and the influence of other predisposing factors, such as family history of cancer and history of smoking, will need to be investigated in a prospective study, preferably with a long enough follow-up to exclude other late complications.
Prevalence and factors associated with alcohol consumption among persons with diabetes in Kampala, Uganda: a cross sectional study
Background The prevalence of diabetes has been increasing rapidly in middle- and low-income countries. In Africa, World Health Organization projections anticipate diabetes mellitus to be the seventh leading cause of death by 2030. Alcohol consumption influences diabetes evolution, in such a way that it can interfere with self-care behaviours, which are important determinants of diabetes prognosis. In this study, we evaluated factors associated with alcohol consumption among persons with diabetes in Kampala to inform management policies and improve comprehensive diabetes care. Methodology A cross-sectional study was conducted systematically among 290 adults with diabetes attending diabetic clinics at Mulago National Referral Hospital and St Francis Hospital Nsambya. Data were entered and analysed in Epi-Info version 7 and STATA 13 software. Modified Poisson regression was used to identify factors associated with alcohol consumption among persons with diabetes. All tests were two-sided and the significance level for all analyses was set to p < 0.05. Results The prevalence of alcohol consumption among persons with diabetes was 23.45% [95% CI: 18.9–28.7%]. Divorced, separated and widowed patients (Adj PR: 0.42, 95% CI: 0.21–0.83), and Protestant (Adj PR: 0.44, 95% CI: 0.24–0.82), Muslim (Adj PR: 0.30, 95% CI: 0.14–0.62) and Pentecostal (Adj PR: 0.32, 95% CI: 0.15–0.65) patients were less likely to consume alcohol. Diabetic patients who had a diabetes duration greater than 5 years were more likely to consume alcohol (Adj PR: 1.90, 95% CI: 1.25–2.88). Conclusion Approximately one-quarter of participants consumed alcohol. Being Catholic, never having been married and having had diabetes for more than 5 years predisposed persons with diabetes to alcohol consumption.
Sensitization messages regarding alcohol consumption among persons with diabetes should target patients who have never been married and those who have spent more than 5 years with diabetes; religion should also be considered as an important venue for health education in the community.
Background
Diabetes mellitus is a global public health concern, with a steadily increasing incidence [1,2]. In 2012, an estimated 1.5 million deaths were directly caused by diabetes mellitus. The prevalence of diabetes has been increasing more rapidly in middle- and low-income countries [3]. In Africa, 12.1 million people were estimated to be living with diabetes in 2010, and this figure is projected to increase to 23.9 million people by 2030 [4]. According to the International Diabetes Federation, there were 400,600 cases of diabetes in Uganda in 2015, compared to approximately 98,000 in 2000 [5].
Alcohol consumption is detrimental among persons with diabetes and influences diabetes evolution [6]. It can also interfere with self-care behaviours, which are important determinants of diabetes prognosis because they are necessary for maintaining a good glycaemic control [6]. Addiction to alcohol among diabetic patients has been found to increase the risk of hyperglycemia, hypoglycemia, dehydration, high blood pressure, eye disease, damage to nerves, injuries and death [7][8][9][10].
While diabetes is expected to increase by more than fifteenfold in Uganda over the next decade (Businge, 2010), the prevalence of alcohol consumption remains high [11], despite health education and the existence of alcohol consumption guidelines and legislation. Up to 26.8% of individuals in Uganda are current alcohol users, and the highest prevalence was found among people living in urban areas and in the Central and Western regions, including Kampala [12,13].
Despite statistics from the Uganda Demographic and Health Survey (UDHS) that indicate an increase in diabetes incidence as well as an increase in alcohol consumption, information related to alcohol consumption among diabetic patients in Kampala is scant, yet it is a public health problem that can be overcome through durable, persistent strategies. Previous literature on alcohol consumption in Uganda has focused mainly on people living with HIV [14], psychiatric patients [15] and the general population [16].
This study fills an information gap on both alcohol consumption and diabetes by determining the prevalence and identifying factors related to alcohol consumption among persons with diabetes. We will therefore increase the knowledge about alcohol consumption, especially that among diabetic patients. We will inform management policies, and we will guide the formation of evidence-based health promotion guidelines and strategies for secondary prevention, which are necessary to slow the development of diabetes complications and improve comprehensive diabetes care.
Study setting and design
A facility-based cross-sectional study was conducted between May and June 2017 among outpatients with diabetes, attending the two selected main hospitals (Mulago National Referral Hospital and St. Francis Hospital Nsambya) in Kampala.
Kampala is a Ugandan city with an estimated population of 1,659,600 inhabitants in 2011, according to the Uganda Bureau of Statistics (UBS).
The two main hospitals were selected purposively due to the larger number of people with diabetes enrolled in their diabetes clinics; they are teaching hospitals that treat patients from different districts and with different socioeconomic levels. Mulago National Referral Hospital (MNRH)/Kirudu directorate (treating more than 100 patients/week) is a public hospital, while St Francis Hospital Nsambya (treating almost 30 patients/week) is a private not-for-profit hospital run by the Uganda Catholic Medical Bureau, Kampala. Both health facilities have good record management and diabetic outpatient clinics that open once a week, and almost all stable diabetic patients are given a medical appointment every 2 months.
Study population
The study population was composed of people with diabetes who visited the St. Francis Hospital Nsambya and MNRH diabetic clinics during the study period. The study included all persons with diabetes aged eighteen and above who had been followed up for at least 1 year at these diabetic clinics at the time of the study. Very sick patients who were unable to respond to the questionnaire and patients who did not sign the consent form were excluded.
Sample size determination
The sample size was estimated using the formula for cross-sectional studies developed by Kish Leslie [17], n = Z²PQ/d² (where Q = 1 − P), with a 95% confidence interval (Z = 1.96), assuming that 45% of persons with diabetes consume alcohol (from St Francis Hospital Nsambya records, 2016), an absolute difference (d) of 0.06 and a 10% nonresponse rate. A total sample size of 290 persons with diabetes was required.
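As a quick check of the arithmetic, the Kish Leslie calculation can be reproduced with the stated inputs. The rounding convention for the nonresponse adjustment is an assumption here, which is why the result lands near, rather than exactly at, the reported 290.

```python
import math

def kish_sample_size(p, d, z=1.96, nonresponse=0.10):
    """Kish Leslie formula n = Z^2 * P * Q / d^2 (Q = 1 - P), then
    inflated for an anticipated nonresponse fraction."""
    n0 = (z ** 2) * p * (1.0 - p) / (d ** 2)
    return n0, math.ceil(n0 / (1.0 - nonresponse))

# Inputs stated in the text: P = 0.45, d = 0.06, Z = 1.96, 10% nonresponse.
n0, n_total = kish_sample_size(p=0.45, d=0.06)  # n0 is about 264 before adjustment
```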
The diabetic patients who were followed at each hospital each week on the day of the diabetes clinic composed the study sample. Therefore, 68 patients (15 patients/week) were selected at St Francis Hospital and 222 (50 patients/week) were selected at Mulago. Data were collected at St Francis Hospital Nsambya and at MNRH on each clinic day, and a systematic sampling approach was used to enrol patients. During the seven-week study period, the sampling interval was two in each hospital; that is, every second patient was chosen until the desired sample size was achieved.
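The systematic sampling rule described above (every second patient from the ordered clinic-day queue) amounts to a simple stride over the list of attendees; a minimal sketch:

```python
def systematic_sample(patients, interval=2, start=0):
    """Select every `interval`-th patient from the ordered clinic-day
    queue, starting at index `start` (interval 2 in this study)."""
    return patients[start::interval]

# e.g. from ten attendees, every second one is enrolled
selected = systematic_sample(list(range(10)))  # [0, 2, 4, 6, 8]
```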
Data collection methods
Data were collected using a pretested interviewer-administered questionnaire in English and in Luganda (the main Ugandan local language).
During patients' medical care visits at the diabetes clinic, the study aim was explained to those who were systematically selected and met the eligibility criteria. All diabetic patients who agreed to participate in the study provided a signed informed consent form in accordance with the Makerere University Faculty of Medicine Research and Ethics Committee (FOMREC) guidelines.
The data collection tools were anonymous and included a structured questionnaire that was completed by research assistants, who had good knowledge of diabetes and who were trained before the commencement of the study on the study aim, data collection methods and research ethics.
The study involved only a quantitative data collection method, which was used to calculate percentages and test the relationships between variables.
The questionnaire included sociodemographic characteristics of the participants (age, gender, marital status, religion, tribe, occupation, residence, and others), clinical data about diabetes (type of diabetes, time spent with diabetes, health education about diabetes, and others), level of knowledge on alcohol consumption and diabetes (signs of alcoholism, effect of alcohol on the body, effect of alcohol on diabetes and others), and personal information about alcohol consumption within the last year.
The level of knowledge on diabetes control and alcohol consumption was assessed on the basis of the adapted Substance Use Knowledge, Attitude and Practice Survey questionnaire [18]. Regarding the main signs of alcoholism [19], three questions were asked, and the participants who responded properly to 2 questions were considered to have a good level of knowledge of signs of alcoholism. There were nine questions about the effects of alcohol on the body, and the participants who responded properly to five questions were considered to have good knowledge of the effects of alcohol consumption on the body. Eight questions on the effects of alcohol consumption on diabetes were asked, and the participants who responded properly to four of them were considered to have good knowledge of the effects of alcohol consumption on diabetes.
The health education variable assessed whether patients received health education at the facility during the routine visit (yes or no), who delivered the message, and at what frequency (never, always, and sometimes).
Information on alcohol consumption within the last year was self-reported by the patients and then categorized into a binary outcome variable (yes or no). For those who reported consuming alcohol, we further classified their alcohol consumption into 5 categories using the Alcohol Use Disorders Identification Test (AUDIT) questionnaire [20,21]: non-drinkers; any alcohol drinking; alcohol misuse (scoring 3 to 7 points for men or 3 to 8 points for women on the AUDIT tool); hazardous alcohol drinking (scoring 8 or more points for men and 7 or more points for women on the AUDIT tool); and binge alcohol drinking (corresponding to 6 or more drinks on a single occasion for men, or 5 or more for women, at least once in the last year). In this study, nondrinkers and any alcohol drinkers (both having an AUDIT score of less than three) were placed in the same category of "nondrinkers", as they carry a low risk of developing complications [20][21][22].
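The AUDIT-based classification can be sketched as below. Note that the cut-offs quoted in the text overlap slightly for women (misuse up to 8 points versus hazardous from 7 points); this sketch assumes the hazardous cut-off takes precedence, and the `binge` flag stands in for the separate drinks-per-occasion criterion.

```python
def audit_category(score, sex, binge=False):
    """Classify a drinker per the categories in the text.  `binge` flags
    the separate drinks-per-occasion criterion.  The paper's cut-offs
    overlap slightly for women; hazardous takes precedence here."""
    if binge:
        return "binge"
    hazardous_cutoff = 8 if sex == "M" else 7
    if score >= hazardous_cutoff:
        return "hazardous"
    if score >= 3:
        return "misuse"
    return "nondrinker"  # score < 3: nondrinkers and low-risk drinkers combined
```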
The reasons for alcohol consumption reported by the patients were categorized into family influence, pleasure or peer influence, and means of coping with stress (health worries, work stressors, etc.). Additionally, each patient reported the type of alcohol usually taken (beer, wine, or spirits/local brews).
Data analysis
Data were field-edited, coded, cleaned and checked for consistency. Coding was performed to clearly identify the required variables for analysis. The data were entered into Epi-Info version 7, transferred to Microsoft Excel 13 for cleaning, and then exported to STATA 13 software for statistical analyses. Summary statistics, including frequencies and proportions for categorical variables, were computed, and means with their standard deviations (SDs) were obtained for continuous variables. We identified factors associated with alcohol consumption (including only alcohol misuse, hazardous alcohol drinking and binge drinking) among persons with diabetes by using both bivariate analysis (to check for associations between alcohol consumption and the predictors) and modified Poisson regression (Poisson regression with robust error variance) to obtain estimates that are relatively robust to omitted covariates, as the prevalence of alcohol consumption among persons with diabetes was greater than 10% [23].
Variables that were significant in the bivariate analysis were included in the modified Poisson regression model, with an inclusion criterion of p ≤ 0.05. The forward elimination method was then used to build the statistical model and hence to determine factors associated with alcohol consumption among persons with diabetes. All statistical tests were two-sided. To measure the strength of association, we used the prevalence ratio (PR). We reported crude and adjusted prevalence ratios with their 95% confidence intervals and p values. The significance level for all analyses was set to p ≤ 0.05.
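The authors fitted the modified Poisson model in STATA. A self-contained numpy sketch of the same idea, Poisson regression with a robust (sandwich) variance on a binary outcome, is shown below on hypothetical grouped data where the true prevalence ratio is 2.0.

```python
import numpy as np

def modified_poisson(X, y, iters=100, tol=1e-10):
    """Poisson regression fitted by Newton's method, with a robust
    (sandwich) covariance -- the 'modified Poisson' approach used for
    binary outcomes when prevalence is high."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        info = X.T @ (X * mu[:, None])            # expected information
        step = np.linalg.solve(info, X.T @ (y - mu))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    mu = np.exp(X @ beta)
    info_inv = np.linalg.inv(X.T @ (X * mu[:, None]))
    meat = X.T @ (X * ((y - mu) ** 2)[:, None])   # score outer product
    return beta, info_inv @ meat @ info_inv       # coefficients, robust covariance

# Hypothetical grouped data: prevalence 0.2 when x = 0 and 0.4 when x = 1,
# so the true prevalence ratio for x is 2.0.
x = np.repeat([0.0, 1.0], 100)
y = np.concatenate([np.repeat([1.0, 0.0], [20, 80]),
                    np.repeat([1.0, 0.0], [40, 60])])
X = np.column_stack([np.ones_like(x), x])
beta, cov = modified_poisson(X, y)
pr = np.exp(beta[1])          # estimated prevalence ratio for x
```

Exponentiating a coefficient gives the prevalence ratio directly, which is why this model is preferred over logistic regression when the outcome is common.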
Sociodemographic characteristics of participants
As shown in Table 1, most of the participants did not receive health education regarding diabetes when they came for routine visits at the hospital (68.9%, 200/290), and the majority of study participants (50%, 145/290) had good knowledge of both the signs of alcoholism and the effects of alcohol on the body, followed by those with good knowledge of the signs of alcoholism only (30.3%, 88/290).
Alcohol consumption among patients with diabetes
Based on the AUDIT questionnaire and self-reported alcohol consumption, 23.45% [95% CI: 18.9–28.7%] of participants consumed alcohol, and binge drinking was reported by 3.1% (9/290) [95% CI: 1.6–5.8%] of the study participants (see Fig. 1).
The majority of the persons with diabetes consumed beer (77.9%, 53/68), as shown in Fig. 2, and 58.8% (40/68) of patients reported coping with stress as the major reason for alcohol consumption (see Fig. 3).
Factors associated with alcohol consumption among persons with diabetes
Multivariable Poisson regression models were used to analyse factors related to alcohol consumption among persons with diabetes followed at MNRH and at St Francis Hospital Nsambya.
The variables revealed as independent predictors for alcohol consumption among persons with diabetes in Kampala were religion, marital status and time spent with the disease.
Patients who had spent 5 to 10 years with diabetes were more likely to consume alcohol than those who had spent less than 5 years with diabetes (Adj PR: 1.90, 95% CI: 1.25–2.88) (Table 2).
The prevalence of alcohol consumption among divorced, separated and widowed patients was 58% lower than that among patients who had never been married (Adj PR: 0.42, 95% CI: 0.21–0.83) (Table 2).
Discussion
Alcohol consumption remains a long-standing public health issue in Uganda. Alcohol consumption can be harmful to vulnerable people with diabetes, by interfering with self-care behaviours and affecting important organs in the body. Therefore, this study fills a knowledge gap by detecting factors associated with alcohol consumption among persons with diabetes in Kampala, to improve comprehensive diabetes care by providing possible strategies and interventions and informing management policies.
One-quarter of diabetic patients treated in the two selected health facilities (MNRH and St Francis Hospital) in Kampala consumed alcohol. Alcohol use in Uganda is widely accepted in local culture and tradition. Additionally, Uganda is abundantly supplied with alcoholic beverages (beer, wine and liquor produced in factories in the country or imported, and informally produced beer and distilled liquor in local makeshift bars and homes), such as Heineken, Tusker, Guinness, Bell, Nile Special and Club. The findings were similar to a countrywide estimate of the prevalence of alcohol use in Uganda that showed an overall prevalence of current alcohol use of 26.8% [12]. However, the prevalence of alcohol consumption in this study was much lower than that in a study conducted among individuals with type 2 diabetes from 20 different countries, in which up to 30% of patients were found to drink alcohol [22], and in another study conducted in northern California among adults with diabetes, which revealed a prevalence of 50% [6]. In contrast, the prevalence in our study was much higher than that found among Croatians (5.8%) [24]. These differences are likely due to differences in sociodemographic and cultural characteristics among the study populations.
In this study, the majority of people with diabetes consumed alcohol hazardously. Additionally, among those who consumed alcohol hazardously and who reported binge drinking, the main reason for their drinking was stress. In addition to life events that are inherently stressful, diabetic patients also have to overcome the stress of their disease. In the present era of modernization, balancing work, family, leisure time and a chronic disease along with all its requirements is a big challenge for patients, and may increase their stress level. Studies have revealed that alcohol consumption is strongly associated with stress, and alcohol is often used as a means of coping with stress [25,26]. Chronic stress can therefore interfere with a diabetic patient's capacity to adhere to self-care behaviours, which are essential for maintaining good health [27]. Our findings emphasize the importance of regular screening for stress as a component of routine diabetes care, to identify and manage stress early. This will help to improve glycaemic control as well as quality of life and prognosis. The majority of persons with diabetes consumed beer, followed by spirits and wine. This differs from studies done in Uganda among HIV patients and studies done in the USA and Croatia, where the majority of patients consumed wine, followed by beer and spirits [6,14,24]. Uganda is abundantly supplied with alcoholic beverages, mostly beers such as Tusker, Guinness, Bell, Nile, Eagle and Club. These beverages are cheap, they are always available in retail and local shops, and they can be consumed in public places and even at home. Guidelines regulating alcohol production and commercial sales, time and place restrictions for selling alcohol, the density of outlets and advertising practices should be studied further.
Religion was significantly associated with alcohol consumption. Catholics were more likely to consume alcohol because, in the Catholic religion, alcohol consumption is not prohibited, contrary to other religions, which consider alcoholic beverages to be incompatible with a holy life, so abstaining from alcohol is an obligation in those religions. This result is similar to other studies performed in Uganda [14,16] and other countries [6,28,29], where Catholic followers were more likely to consume alcohol than followers of other religions. According to the WHO, religion might play a role in the prevention of alcohol consumption [30]. Therefore, religion can be used strategically to reduce alcohol-related problems among persons with diabetes: by providing health education to followers, the information can be disseminated throughout the population.
The duration of the disease was significantly associated with alcohol consumption. This was consistent with other studies performed in Asia and Africa where patients with a diabetes duration of ≤5 years were more adherent to diet, especially regarding alcohol intake, than those who had a duration of > 5 years [31][32][33][34].
According to Glasgow et al., the duration of disease appears to have a negative relationship with diet adherence [35,36]. In 2010 Egede and Ellis showed that despondency can also be a factor influencing poor dietary practice regarding alcohol consumption among diabetic patients [37].
In most health facilities in Uganda, patients presenting with diabetes are initially encouraged to maintain a diet that includes avoiding alcohol consumption, to prevent complications. Over time, health education can be neglected due to lack of motivation, lack of time, absence of family and health care support, and patients becoming fed up with following a dietary regimen. In that sense, health professionals need to redouble their attention to newly and previously diagnosed diabetic patients, to provide them with solid support in terms of health education. They need to discuss in detail the importance of self-care behaviours that include avoiding alcohol, because the reason for discontinuing such behaviours after 5 years of disease duration could be inadequate diabetes education or consultation and a decrease in motivation over time.
This study shows that never married diabetic patients consume more alcohol than widowed patients. This is similar to a study conducted in the USA in 2016 in the general population where never married people were more likely to consume alcohol than married and widowed people [38]. This finding is also similar to a study performed among women in Accra (Ghana) [39]. Widows have more responsibilities than never married people, especially in regard to taking care of children. Therefore, instead of purchasing alcohol, they tend to use the majority of their resources for their children's needs. Additionally, they have to spend less time with friends and coworkers and more time with their children, which may reduce alcohol consumption.
Study limitations and strengths
Recall bias could have occurred as some data, especially from the questionnaire, were self-reported by the person with diabetes. The other limitation in this study is a social desirability bias that could have occurred since most of the information was reported by participants. Persons with diabetes who also drink alcohol may not disclose fully to the interviewers the extent of their drinking.
The strengths of this study include the use of the AUDIT questionnaire, a standardized, internationally validated tool for alcohol assessment in primary care settings, allowing for cross-study comparability. Furthermore, previous studies focused on the general population or on specific groups, such as HIV and psychiatric patients. This study examined alcohol consumption among persons with diabetes in Kampala, where there is a continuous increase in diabetes incidence.
Conclusion and recommendations
Approximately one-quarter of persons with diabetes treated at the MNRH and St Francis Hospital outpatient diabetes clinics in Kampala consume alcohol. Being widowed, Protestant, Muslim or Pentecostal, and having spent less than 5 years with diabetes were associated with lower alcohol consumption. Religion is an important venue for health education against alcohol-related problems among persons with diabetes. Sensitization messages regarding alcohol consumption among persons with diabetes should mainly target never-married people and those who have spent more than 5 years with the disease. Further study should be done to identify the temporal relationship between time spent with diabetes and alcohol consumption among diabetic patients.
Absolute Binding Energies of Core Levels in Solids from First Principles
A general method is presented to calculate absolute binding energies of core levels in metals and insulators, based on a penalty functional and an exact Coulomb cutoff method in the framework of density functional theory. The spurious interaction of core holes between supercells is avoided by the exact Coulomb cutoff method, while the variational penalty functional enables us to treat multiple splittings due to chemical shift, spin-orbit coupling, and exchange interaction on an equal footing, neither of which is accessible by previous methods. It is demonstrated that the absolute binding energies of core levels for both metals and insulators are calculated by the proposed method with a mean absolute (relative) error of 0.4 eV (0.16%) for eight cases, compared to experimental values measured with x-ray photoemission spectroscopy, within a generalized gradient approximation to the exchange-correlation functional.
Since the pioneering work of Siegbahn and co-workers [1,2], x-ray photoelectron spectroscopy (XPS) has become one of the most important and widely used techniques in studying chemical composition and electronic states in the vicinity of the surface of materials [3]. Modern advances combined with synchrotron radiation further extend its usefulness to enable a wide variety of analyses such as core level vibrational fine structure [4], magnetic circular dichroism [5], spin-resolved XPS [6], and photoelectron holography [7]. The basic physics behind the still advancing XPS measurements dates back to the first interpretation of the photoelectric effect by Einstein [8]. An incident x-ray photon excites a core electron in a bulk, and the excited electron is emitted from the surface to vacuum with a kinetic energy. The binding energy of the core level in the bulk can be obtained by measuring this kinetic energy [1,2]. Theoretically, the calculation of the binding energy, involving evaluation of the total energies for the initial and final states, is still a challenging issue, especially for insulators, since after the emission of the photoelectron the system is no longer periodic and is ionized due to the creation of the core hole. The violation of periodicity hampers the use of conventional electronic structure methods under a periodic boundary condition, and the Coulomb potential of the ionized bulk cannot be treated under an assumption of periodicity due to the Coulombic divergence. One way to avoid the Coulombic divergence is to neutralize the final state with a core hole by adding an excess electron into the conduction bands [9][10][11][12][13] or to approximate the bulk by a cluster model [14]. However, the charge compensation may not occur in insulators because of the short escape time of the photoelectron (∼10⁻¹⁶ s) [15], while the treatment might be justified for metals. Even if we employ the charge compensation scheme, the screened core hole pseudopotential, which has
been widely used in pseudopotential methods allows us to calculate only the chemical shift of binding energies, but not the absolute values [9].In spite of the long history of XPS and its importance in materials science, a general method has not been developed so far to calculate the absolute binding energies for both insulators and metals, including multiple splittings due to chemical shift, spin-orbit coupling, and exchange interaction, on equal footing [16].In this Letter we propose a general method to calculate absolute binding energies of core levels in metals and insulators, allowing treatment of all the issues mentioned above and direct comparison to experimental results, in a single framework within the density functional theory (DFT) [19,20].
Let us start by defining the absolute binding energy E_b of core electrons in bulks measured in an XPS experiment, based on energy conservation. The energy of the initial state is the sum of the total energy E_i(N) of the ground state of N electrons and the energy hν of a monochromatic x-ray photon. The energy of the final state comprises the total energy E_f(N−1) of the excited state of N−1 electrons with a core hole and the kinetic energy K_spec of the photoelectron referenced to the vacuum level V_spec of the spectrometer, as shown in Fig. 1. Energy conservation in the XPS measurement is therefore expressed by

hν + E_i(N) = E_f(N−1) + K_spec + V_spec.  (1)

Noting that the chemical potential of the sample is aligned with that of the spectrometer, μ, by Ohmic contact, and that the vacuum level of the spectrometer is V_spec = μ + φ_spec, with φ_spec the work function of the spectrometer, Eq. (1) reads

hν − K_spec − φ_spec = E_f(N−1) − E_i(N) + μ.  (2)

The left-hand side of Eq. (2) is the binding energy E_b^(bulk) measured by the XPS experiment [21]. The right-hand side of Eq. (2) provides a useful expression to calculate the absolute binding energy E_b^(bulk) for bulks regardless of the band gap of the material. Instead of using the experimental chemical potential μ, it is possible to rewrite the total energies in terms of the intrinsic chemical potential μ_0, where intrinsic means a state free from the control of the chemical potential; shifting the chemical potential, μ = μ_0 + Δμ, merely shifts the potential by the constant Δμ. We therefore write E_i(N) = E_i^(0)(N) + NΔμ and E_f(N−1) = E_f^(0)(N−1) + (N−1)Δμ, with the intrinsic total energies E_i^(0)(N) and E_f^(0)(N−1), assuming a common chemical potential μ for both the initial and final states owing to the very large N. Inserting these relations into Eq. (2) yields

E_b^(bulk) = E_f^(0)(N−1) − E_i^(0)(N) + μ_0.  (3)

Equation (3) is an important consequence, since it involves only quantities that can be calculated from first principles. We therefore use Eq. (3) to calculate the absolute binding energy E_b^(bulk). Note that the chemical potential μ_0 is calculated by assuming the Fermi distribution at a finite electronic temperature. It should be emphasized that Eq.
(3) is valid even for semiconductors and insulators. In an arbitrary gapped system, the common chemical potential μ is pinned at either the top of the valence band or the bottom of the conduction band, or lies in between them. For all possible cases, exactly the same discussion as above holds. Thus, we conclude that Eq. (3) is a general formula to calculate the absolute binding energy E_b^(bulk) of solids. Especially for metals, Eq. (3) can be further reorganized by noting a rigorous relation derived with the Janak theorem [23], where n is an occupation number of a one-particle eigenstate on the Fermi surface, dn = −ds/S is defined with the area S of the Fermi surface and an infinitesimal area ds, and the surface integration is performed over the Fermi surface. Inserting this relation into Eq. (3) yields Eq. (4). Equations (3) and (4) should in principle give an equivalent binding energy; however, their convergence as a function of system size differs, as shown later on.
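Equation (3) reduces the XPS observable to three quantities accessible from first principles. A minimal numerical sketch of the bookkeeping (all energy values below are hypothetical, for illustration only):

```python
# Minimal sketch of Eq. (3): E_b = E_f(N-1) - E_i(N) + mu_0, evaluated from
# intrinsic total energies. All numbers are hypothetical, for illustration.
def binding_energy(E_initial_N, E_final_N_minus_1, mu_0):
    """Absolute core-level binding energy per Eq. (3), in eV."""
    return E_final_N_minus_1 - E_initial_N + mu_0

E_i = -1000.0   # hypothetical ground-state total energy, N electrons (eV)
E_f = -901.2    # hypothetical total energy with a core hole, N-1 electrons
mu0 = 0.7      # hypothetical intrinsic chemical potential (eV)
print(round(binding_energy(E_i, E_f, mu0), 6))   # 99.5
```

The point of the formula is that the sign conventions work out so that a deeper core level (more negative E_i relative to E_f) gives a larger positive binding energy.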
Since E_i^(0)(N) and μ_0 in Eq. (3) can be calculated by a conventional approach with the periodic boundary condition, we now turn to a method of calculating E_f^(0)(N−1) in Eq. (3) based on the total energy calculation including many-body effects. Core electrons for which a core hole is created are explicitly included in the calculations, in order to treat multiple splittings due to chemical shift, spin-orbit coupling, and exchange interaction between core and spin-polarized valence electrons, and to take account of many-body screening effects. The creation of the core hole is realized by expressing the total energy of the final state as the sum of a conventional total energy E_DFT within DFT and a penalty functional E_pen, Eq. (5), with the definition of E_pen given in Eq. (6). In Eq. (6) the projector P is defined with an angular eigenfunction Φ of the Dirac equation under a spherical potential and a radial eigenfunction R obtained by an atomic DFT calculation for the Dirac equation, where Y is a spherical harmonic function and α and β are spin basis functions. The variational treatment of Eq. (5) with respect to ψ leads to the Kohn-Sham equation, Eq. (10), where T is the kinetic operator and v_eff the conventional Kohn-Sham effective potential originating from E_DFT. If a large value (100 Ry was used in this study) is assigned to Δ in Eq. (7), the targeted core state Φ_J^M, specified by the quantum numbers J and M, is penalized through the projector P in Eq. (10) and becomes unoccupied, resulting in the creation of a core hole for the targeted state. Since the creation of the core hole is performed self-consistently, the screening effects by both core and valence electrons, spin-orbit coupling, and exchange interaction are naturally taken into account in a single framework. It is also straightforward to reduce the projector P to a nonrelativistic treatment.
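The effect of the penalty term can be illustrated on a toy matrix Hamiltonian: adding Δ·P, with P a projector onto one eigenstate, shifts only that state up by Δ while leaving the rest of the spectrum untouched, so the targeted state empties once levels are filled up to the chemical potential. This is a sketch of the idea only, not the actual OPENMX implementation:

```python
import numpy as np

# Toy illustration of the penalty idea (not the actual OPENMX implementation):
# adding Delta * P with P = |phi><phi|, the projector onto a targeted "core"
# state, shifts only that state up by Delta, so it empties once states are
# filled up to the chemical potential, creating the core hole.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = (A + A.T) / 2.0                     # symmetric model Hamiltonian

w, v = np.linalg.eigh(H)                # eigenvalues in ascending order
phi = v[:, 0]                           # lowest state plays the "core" role
Delta = 100.0                           # large penalty (cf. 100 Ry in the text)
P = np.outer(phi, phi)

w_pen = np.linalg.eigvalsh(H + Delta * P)
print(w[0], w_pen[-1])                  # penalized level sits at w[0] + Delta
```

Because phi is an exact eigenvector of H, the penalized Hamiltonian is diagonal in the same basis: the remaining eigenvalues are unchanged and the targeted one moves up by exactly Δ.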
After the creation of the core hole, the final state has one less electron, leading to charging of the system. Under the periodic boundary condition, a charged system cannot be treated in general because of the Coulombic divergence. The neutralization of the final state may occur in a metal, and theoretically such a neutralization can be justified as shown by Eq. (4). However, it is unlikely that such a charge compensation takes place in an insulator during the escape time of the photoelectron (~10^-16 s) [15]. To overcome the difficulty, we propose a general method of treating the charged state based on an exact Coulomb cutoff method [24]. The created core hole is isolated in the sample, which violates the periodicity of the system. The isolation of the core hole can be treated by dividing the charge density ρ_f(r) of the final state into a periodic part ρ_i(r) and a nonperiodic part Δρ(r) ≡ ρ_f(r) − ρ_i(r), which, when integrated over the unit cell, is exactly −1; here ρ_i(r) is the charge density of the initial state without the core hole. Then, as shown in Fig. 2(a), the Hartree potential V_H(r) in the final state is given by the sum V_H^(P)(r) + V_H^(NP)(r), where V_H^(P)(r) is the periodic Hartree potential calculated from the periodic part ρ_i(r) via a conventional fast-Fourier-transform solution of the Poisson equation. The nonperiodic Hartree potential V_H^(NP)(r), on the other hand, is calculated from Δρ(r) with an exact Coulomb cutoff method, using Δρ(G), the discrete Fourier transform of Δρ(r), and ṽ(G) = (4π/G²)[1 − cos(G R_c)], the Fourier transform of a cutoff Coulomb potential with cutoff radius R_c [24]. If Δρ(r) is localized within a sphere of radius R, as shown in Fig.
2(b), the extent of the Coulomb interaction within the sphere is at most 2R, which leads to R_c = 2R. In addition, the condition 4R < L should be satisfied to avoid the spurious interaction between core holes. In practice, we set R_c = L/2 and investigate the convergence of the binding energy as a function of L. With this treatment the core hole is electrostatically isolated from its periodic images even under the periodic boundary condition.
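The cutoff Poisson solve can be sketched in a few lines, assuming a Gaussian model for Δρ(r) (the cell size, grid, and density model below are illustrative choices, not the paper's settings): the kernel ṽ(G) = (4π/G²)[1 − cos(G R_c)] replaces the bare 4π/G², with the well-defined G → 0 limit 2πR_c².

```python
import numpy as np

# Sketch of the exact Coulomb cutoff (Ref. [24]) for the nonperiodic part of
# the Hartree potential: V_H^NP(G) = v~(G) * dRho(G), where
# v~(G) = (4*pi/G**2) * (1 - cos(G*Rc)) and the G -> 0 limit is 2*pi*Rc**2.
# Cell size, grid, and the Gaussian model hole density are illustrative only.
L, n = 20.0, 64                         # cubic cell edge (bohr), FFT grid
Rc = L / 2.0                            # cutoff radius, Rc = L/2 as in the text

g = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
GX, GY, GZ = np.meshgrid(g, g, g, indexing="ij")
G2 = GX**2 + GY**2 + GZ**2
G = np.sqrt(G2)
safe = np.where(G2 > 0.0, G2, 1.0)      # avoid division by zero at G = 0
v_cut = np.where(G2 > 0.0,
                 4.0 * np.pi * (1.0 - np.cos(G * Rc)) / safe,
                 2.0 * np.pi * Rc**2)

# Model difference density: a Gaussian hole integrating to exactly -1.
x = (np.arange(n) - n // 2) * (L / n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
sigma = 1.0
drho = -np.exp(-(X**2 + Y**2 + Z**2) / (2.0 * sigma**2)) \
       / (sigma**3 * (2.0 * np.pi) ** 1.5)

# Cutoff Poisson solve; near the hole this reproduces the isolated potential.
V_NP = np.fft.ifftn(v_cut * np.fft.fftn(drho)).real
print(V_NP[n // 2, n // 2, n // 2])     # close to -sqrt(2/pi) for sigma = 1
```

With R_c = L/2 the real-space kernel is 1/r truncated at L/2, so for a density localized well inside the sphere the potential near the hole matches the isolated Coulomb result, while periodic images contribute nothing.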
We implemented the method in the DFT software package OPENMX [25], which is based on norm-conserving relativistic pseudopotentials [26,27] and pseudoatomic basis functions [28]. A generalized gradient approximation [29] to the exchange-correlation functional and a finite electronic temperature were used. The details of the implementation are given in the Supplemental Material [30]. All the molecular and crystal structures used in the study were taken from experiment. Figures 3(a) and 3(b) show relative binding energies of core levels in gapped systems and in metals including a semimetal (graphene), respectively, as a function of the intercore hole distance. For the gapped systems, convergent results are obtained at intercore hole distances of ~15, 20, and 27 Å for cubic boron nitride (diamond), bulk NH3, and silicon, respectively. This implies that the difference charge Δρ(r) induced by the creation of the core hole is localized within a sphere of radius R = L/4, e.g., ~7 Å for silicon. In fact, the localization of Δρ(r) in silicon can be confirmed from its distribution in real space and the radial distribution of the spherically averaged Δρ, as shown in Figs. 4(a) and 4(b). The deficiency of electrons around 0.3 Å, corresponding to the core hole in the 2p states, is compensated by an increase of electron density around 1 Å, which is the screening of the core hole on the same silicon atom. As a result of this short-range screening, the nonperiodic Hartree potential V_H^(NP)(r) deviates largely from −1/r, as shown in Fig. 4(c). Figure 3(a) also shows that the energy of bulk NH3 calculated with Eq. (4) converges to a value larger than that obtained with Eq. (3) by 1.2 eV, implying that Eq. (4) cannot be applied to gapped systems. On the other hand, for the metallic cases we see that Eq. (4) provides a much faster convergence than Eq. (3), and both Eqs.
(3) and (4) seem to give a practically equivalent binding energy, while the results calculated with Eq. (3) for TiN and TiC do not reach sufficient convergence due to computational limitations [39]. Therefore, Eq. (4) is the method of choice for practical calculations on metallic systems because of its faster convergence. Converting the unit-cell size required for convergence into a number of atoms, supercells including ~500 and 64 atoms for gapped and metallic systems in three dimensions, respectively, might be a practical guideline for achieving sufficient convergence with Eqs. (3) and (4). The calculated binding energies compare well with the experimental absolute values, as shown in Table I for the gapped and metallic systems; the mean absolute (relative) error is 0.4 eV (0.16%) for the eight cases. The splitting due to spin-orbit coupling in the silicon 2p states is well reproduced. In addition, binding energies of a core level for gaseous molecules are shown in the Supplemental Material [30], where the mean absolute (relative) error is 0.5 eV (0.22%) for the 23 cases.
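The silicon numbers quoted above and in Fig. 4 are mutually consistent: a 5×5×5 repeat of the 8-atom conventional cell (lattice constant a = 5.431 Å) gives the 1000-atom supercell and the ~27.15 Å intercore hole distance. A quick arithmetic check:

```python
# Consistency check of the quoted silicon numbers: a 5x5x5 repeat of the
# 8-atom conventional cell (a = 5.431 angstrom) gives the 1000-atom supercell
# and the 27.15 angstrom intercore hole distance quoted for Fig. 4.
a = 5.431        # Si conventional lattice constant, angstrom
n_rep = 5        # supercell repeats along each direction
atoms = 8 * n_rep ** 3
distance = n_rep * a             # nearest intercore hole distance, angstrom
print(atoms, round(distance, 3))  # 1000 27.155
```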
In summary, we proposed a general method to calculate absolute binding energies of core levels in metals and insulators in the framework of DFT. The method is based on a penalty functional and an exact Coulomb cutoff method. The former allows us to calculate multiple splittings due to chemical shift, spin-orbit coupling, and exchange interaction, while the latter enables us to treat a charged system with a core hole under the periodic boundary condition. It was also shown that, especially for metals, Eq. (4) involving the neutralized final state is equivalent to Eq. (3) involving the ionized final state, and that Eq. (4) is computationally more efficient than Eq. (3). The remarkable agreement with the absolute binding energies measured in XPS demonstrates the validity of the proposed method for a variety of materials. For a better description of cases where the exchange interaction plays a dominant role in the splitting, a good approximation to the exchange-correlation functional should be adopted; our method provides a natural way to examine the resulting total energies. Possible errors due to the pseudopotentials and the dependence of the chemical potential on surface structures should also be addressed in future work. Considering the importance of the XPS measurement in materials research, the proposed method is anticipated to play an indispensable role in quantitatively analyzing absolute binding energies of core levels in solids.
Equation (4) allows us to employ the total energy of the neutralized final state E_f^(0)(N) instead of that of the ionized state. In the definition of the penalty functional, the k integration runs over the first Brillouin zone of volume V_B, f_μ^(k) denotes the Fermi function, and ψ_μ^(k) the Kohn-Sham wave function, a two-component spinor.
FIG. 1. Schematic energy diagram for a sample and a spectrometer in the XPS measurement.

FIG. 2. (a) Treatment of the Hartree potential in a system with a core hole under the periodic boundary condition. (b) Configuration to calculate the nonperiodic part of the Hartree potential V_H^(NP) by the exact Coulomb cutoff method for Δρ.

FIG. 3. Calculated binding energies, relative to the most converged value, of (a) gapped systems and (b) a semimetal (graphene) and metals as a function of intercore hole distance. The reference binding energies in (a) and (b) were calculated by Eqs. (3) and (4), respectively, for the largest unit cell of each system.

FIG. 4. (a) Difference charge density Δρ in silicon, induced by the creation of a core hole in the 2p states, where the unit cell contains 1000 atoms and the intercore hole distance is 27.15 Å. (b) Radial distribution of 4πr²Δρ̄, where Δρ̄ is the spherically averaged Δρ. (c) Radial distribution of V̄_H^(NP), the spherically averaged V_H^(NP).
PRL 118, 026401 (2017), Physical Review Letters.
A New VCII Application: Sinusoidal Oscillators
The aim of this paper is to prove that, through a canonic approach, sinusoidal oscillators based on the second-generation voltage conveyor (VCII) can be implemented. The investigation demonstrates the feasibility of the design, resulting in a pair of new canonic oscillators based on the negative-type VCII (VCII−). Interestingly, the same analysis shows that no canonic oscillator configuration can be achieved using the positive-type VCII (VCII+), since a single VCII+ does not present the correct port conditions to implement such a device. From this analysis it follows that, for 5-node networks, the two presented oscillator configurations are the only possible ones; they make use of two resistors, two capacitors and a single VCII−. Notably, the produced sinusoidal output signal is readily available through the low-output-impedance Z port of the VCII, removing the need for an additional voltage buffer in practical use, which is one of the main limitations of the current-mode (CM) approach. The presented theory is substantiated by both LTSpice simulations and measurement results using the commercially available AD844 from Analog Devices, the latter being in close agreement with the theory. Moreover, low THD values are obtained over a wide frequency range.
Introduction
There has always been interest in designing sinusoidal oscillators because of their many applications in areas such as communication, instrumentation, and biomedical systems [1-3]. Compared to LC and RLC sinusoidal oscillators, RC-active oscillators are advantageous from the integration point of view. In the early implementations of RC-active sinusoidal oscillators, operational amplifiers (Op-Amps) were used as the active elements [4-6]. A systematic approach was introduced in [5] to design Op-Amp-based oscillators with a single active element and the minimum number of passive elements. The design method of [5] resulted in Op-Amp-based oscillator configurations composed of one active device, two capacitors and four resistors.
However, the limited frequency performance and slew rate of Op-Amps, as well as their high power consumption, restricted the application of Op-Amp-based sinusoidal oscillators. A literature survey shows that, after the potential of current-mode (CM) signal processing was revealed, efforts were made to design RC-active sinusoidal oscillators using various CM active building blocks (ABBs). Undoubtedly, the second-generation current conveyor (CCII), as the main ABB of CM signal processing, is the most widely used one for this purpose. Different approaches were employed to realize CCII-based oscillators. For example, in [8] the Op-Amps were replaced with composite current conveyors, resulting in CM oscillators. Unfortunately, this approach did not reach a simple realization, because each amplifier could only be implemented with at least two CCIIs and two resistors. The extension of the approach presented in [5] was employed in [9] to synthesize CCII-based oscillators. Although the resulting sinusoidal oscillators enjoyed a canonic structure with the minimum possible number of elements, they were still not readily cascadable, i.e., they required additional voltage buffers to be usable in a real-world application. Most of the other CM oscillator realizations reported in the literature, using different ABBs instead of CCIIs, also suffered from a large number of active and/or passive elements.
Recently, the dual circuit of the CCII, called the second-generation voltage conveyor (VCII), has attracted the attention of researchers [35-44]. In particular, the recent studies reported in [35,36] showed that this device helps to exploit the features of CM signal processing while overcoming the limitations of CCII-based circuits. Unlike the CCII, the VCII has a low-impedance voltage output port, which allows it to be easily cascaded with other high-impedance processing blocks without the need for extra voltage buffers in voltage-output applications. Compared to the CCII, the VCII has proven superior performance in many applications [37]; however, up to now this device had not been employed in the realization of sinusoidal oscillators.
Indeed, the VCII, combining the advantages of CM processing with voltage-mode interfacing, could provide sinusoidal oscillators operating at higher frequencies than Op-Amp-based ones. Moreover, by breaking the gain-bandwidth tradeoff, it could ease the decoupling of oscillation frequency and oscillation condition even at the higher end of the spectrum. Among the possible implementations of VCII-based sinusoidal oscillators, those requiring the minimum number of active and passive components, so-called canonic, are of particular interest to minimize silicon area and power consumption. The aim of this work is to present the possible VCII-based canonic sinusoidal oscillator realizations, replicating the general approach presented in [5,9] which, as previously mentioned, has been used to synthesize Op-Amp-based and CCII-based sinusoidal oscillators. We show that it is possible to implement sinusoidal oscillators with a minimum number of elements using a single negative-type VCII (VCII−), two resistors and two capacitors, thus demonstrating a new practical application of the VCII. A notable advantage of the proposed VCII−-based oscillator is that it is easily cascadable from the Z port of the VCII−, alleviating the need for any extra voltage buffer. Moreover, THD values remain low even for higher-frequency oscillators. However, the results of this study show that the applied approach does not reach any canonic configuration using the positive-type VCII (VCII+). The effect of non-idealities in the VCII has been considered, and the proposed approach has been validated by both simulation and measurement results.
The organization of this paper is the following: in Section 2, the VCII as an active building block and the basics of the general configuration of the VCII-based oscillator are introduced. Section 3 presents, in detail, the study of the possible realizations of VCII-based oscillators, and the effects of non-idealities in the VCII are considered in Section 4. Simulation and measurement results are given in Section 5. Finally, Section 6 concludes the paper.
General Configuration of the VCII-Based Oscillator
The symbolic representation and internal structure of the VCII are shown in Figure 1. In this block, Y is a low-impedance (ideally zero) current input terminal. The current entering the Y node is transferred to the X terminal, which is a high-impedance (ideally infinite) current output port. The voltage produced at the X terminal is transferred to the Z terminal, which is a low-impedance (ideally zero) voltage output terminal. The relationship between port currents and voltages is given by: v_Z = αv_X, i_X = βi_Y and v_Y = 0. In the ideal case we have α = 1 and β = ±1. If β = 1 we are considering a VCII+, whereas if β = −1 we have a VCII−.
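The ideal port relations above can be captured in a few lines. A minimal behavioral sketch (the function name and interface are ours, not from the paper):

```python
# Minimal behavioral sketch of an ideal VCII (function name and style are
# ours, not from the paper): the current entering Y is conveyed to X
# (i_X = beta * i_Y), the voltage at X is conveyed to Z (v_Z = alpha * v_X),
# and Y is a virtual ground (v_Y = 0). beta = -1 models a VCII-.
def vcii(i_Y, v_X, beta=-1.0, alpha=1.0):
    v_Y = 0.0
    i_X = beta * i_Y
    v_Z = alpha * v_X
    return v_Y, i_X, v_Z

print(vcii(1e-3, 0.5))   # (0.0, -0.001, 0.5)
```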
Using the approach presented in [5,9], the general configuration of an RC-active oscillator based on a single VCII is shown in Figure 2, where N_GC represents a 4-terminal network consisting of only capacitors and conductances.
The characteristic equation (CE) of the whole system can be calculated by replacing, in the circuit of Figure 2, the equivalent model of the VCII of Figure 1b and considering a fictitious input at the Y node (of course, no input signal will be present in an actual oscillator circuit), as shown in Figure 3a at the building-block level and in Figure 3b. The configurations in Figures 2 and 3 can hence be seen as a positive feedback system for which the current transfer function (TF) is given by Eq. (1). Since A(s) = ±1 and β(s) = i_f(s)/i_out(s), (1) becomes (2). However, since from Figure 3b i_out = −i_X, and in an oscillator circuit there is no input (i_in = 0), we have i_f = i_Y and the TF is given by (3). From (3) we can derive the characteristic equation, (4). By assuming v_Z = v_X and v_Y = 0, the transconductance functions of the passive network in Figure 2 can be expressed as rational functions, (5) and (6), where N_X(s) and N_Z(s) are the numerators at the X and Z nodes, respectively, while D(s) is a common denominator. Using (5) and (6) in (4), the CE becomes (7). In (7), the plus and minus signs apply to the VCII− and the VCII+, respectively. To ensure a purely sinusoidal oscillation, the CE in (7) should be a second-order polynomial with purely imaginary roots. This requires the network N_GC to include at least two capacitors. It has to be noted that, in Figure 2, by using a VCII+ rather than a VCII−, at least three capacitors are required to provide the phase shift needed to generate a positive feedback loop. Therefore, no canonic oscillator is possible using a VCII+, and in the following we will consider the VCII in Figure 2 to be a VCII−. By then assuming a network with only two capacitors, Equation (7) takes the second-order form as² + bs + c = 0 of Eq. (8). In order to start the oscillation, the commonly known criterion must be satisfied: b = 0, with a ≠ 0 and c ≠ 0, so that, according to the Barkhausen criterion, purely imaginary poles of the closed-loop transfer function are obtained. The oscillation frequency is then ω₀ = √(c/a).
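The oscillation criterion can be checked numerically: for a CE of the form as² + bs + c = 0, setting b = 0 (with a and c positive) yields purely imaginary roots ±jω₀ with ω₀ = √(c/a). The coefficient values below are illustrative only:

```python
import numpy as np

# Numerical check of the oscillation criterion: for a second-order CE
# a*s**2 + b*s + c = 0 with a, c > 0, pure sinusoidal oscillation requires
# b = 0, giving purely imaginary roots s = +/- j*w0 with w0 = sqrt(c/a).
# The coefficient values are illustrative only.
def ce_roots(a, b, c):
    """Roots of the characteristic equation a*s^2 + b*s + c."""
    return np.roots([a, b, c])

a, b, c = 1e-12, 0.0, 4e-3              # b = 0: oscillation condition met
s1, s2 = ce_roots(a, b, c)
w0 = np.sqrt(c / a)                     # oscillation frequency in rad/s
print(w0, s1, s2)                       # roots land at +/- j*w0
```

With b slightly negative the poles move into the right half-plane and the oscillation builds up, which is why practical designs start just past the condition b = 0 and rely on amplitude limiting.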
Oscillator Circuits
In this section we analyze the possible VCII−-based oscillators based on the scheme of Figure 2. The passive network N_GC is assumed to be a general n-node network consisting of b possible branches between two nodes. Each node is a junction where two or more branches are connected, and each branch is an admittance connected between two nodes, represented as in (11). In the following, we analyze the CE to see whether oscillation is possible for the particular case of a five-node network. From this analysis we see that for a four-node network it is not possible to obtain a second-order polynomial for (7), whereas for a six-node network (or more) only non-canonic oscillators, using more than the minimum number of passive components, are possible.
N_GC as a Five-Node Network
In Figure 4 we assume N_GC to be a five-node network. We start analyzing this network by performing KCL at node Y, as reported in (12). Since no current flows into Y_8 and Y_9, these admittances can be assumed to be open circuits (Y_8 = Y_9 = 0). Routine analysis of Figure 4 gives i_3 as in (13). Using (13)-(14), we obtain (15). A similar analysis for i_X results in (16).
Figure 4. The N_GC as a five-node network.
Using (15) and (16) in (7), the CE of the five-node network is found as (17). It can be noticed that the CE does not depend on Y_7, . . . , Y_10, which means that these branches can be assumed to be open circuits. For the other branches different choices can be made. If two branches have non-zero admittances, the CEs in (18) are possible. In the general case, (18) can be expressed as (19). By assuming Y_a = sC_a + G_a and Y_b = sC_b + G_b, (19) can be written as (20). From (20) it is not possible to have imaginary roots; therefore, in the case of two non-zero branches, no oscillation is possible.
Finally, we investigate the possibility of achieving oscillation from (17) in the case that three branches of N_GC present non-zero admittance. For some of the choices of the three non-zero branches, the CE has the general form of (19); for others, the CE is obtained as (21) or (22). The CEs of (21) and (22) do not result in purely imaginary roots; therefore, these cases cannot give oscillator topologies. Considering instead the remaining cases, the CE has the general form of (23). It is easy to verify that the CE in (23) cannot be associated with an oscillator topology if only two capacitors are used (at least three would be needed).
Finally, for (Y_1 = Y_3 = Y_5 = 0) and (Y_2 = Y_4 = Y_6 = 0), the CEs are given by (24a) and (24b), respectively, which are equations of the general form (25). In (25), the oscillation condition is related to the choice of Y_c and of Y_a or Y_b as a capacitance.
In order to design an oscillator with the minimum number of components, we now have to verify the choice of the components in (25). It can be demonstrated that a minimum of two capacitors and at least two resistors are needed in order to have a constant term in the constituting equation; with this choice we obtain a complete polynomial. In this case, having only three branches of the type sC + G with C ≥ 0, G ≥ 0, it is a matter of choosing one admittance among Y_a, Y_b, Y_c of the type sC + G; the two remaining admittances will be a capacitance sC and a conductance G. Inserting all possible combinations of options into (25), two sets of CEs which show imaginary roots are obtained, (26) and (27). From (26) and (27), the oscillation condition (C_o) and oscillation frequency (ω_0) for the two cases are obtained, as in (28). Thus, the minimum number of elements necessary to obtain an oscillator based on the scheme of Figure 2 is four, two of these being capacitors and two resistors. Considering the two cases (Y_1 = Y_3 = Y_5 = 0) and (Y_2 = Y_4 = Y_6 = 0) and the possible choices for Y_a and Y_b, we obtain a total of four canonic oscillators, corresponding to the CEs in (29). However, this number reduces again to two if we consider that, from each of the cases (Y_1 = Y_3 = Y_5 = 0) and (Y_2 = Y_4 = Y_6 = 0), we obtain two equal oscillators upon exchanging the order of the series-connected elements. These two configurations are shown in Figure 5, and the corresponding transfer functions, oscillation frequencies ω_0 and oscillation conditions are reported in Table 1. The oscillation frequencies and oscillation conditions in (28) show a strong interdependence, since they are functions of the same parameters.
Since the oscillation condition requires that the sum of the ratios of the capacitances and of the conductances be constant and equal to 1, a possible strategy for frequency tuning is to vary both resistors or both capacitors while keeping their ratio constant. For example, a ratio of 2 between C_a and C_c can be obtained by using two parallel capacitors equal to C_a to obtain C_c; all three capacitors can then be varied together, so that their ratio remains constant apart from mismatches and the effect of parasitics.
Analysis of Parasitic Effects: A Case Study
The only two possible canonic topologies for the VCII-based oscillator are synthesized in Figure 6, where Z A and Z B are a series-connected RC network and a parallel-connected RC network; we define t A = R A C A and t B = R B C B as the time constants associated with these networks. The two oscillator topologies shown in Figure 6 correspond to two cases, Type I and Type II, where R i = 1/G i . From Figure 6, the oscillation condition can be obtained as in (34), where β and α are the VCII current and voltage gains (ideally both equal to 1), together with the corresponding oscillation frequency. The oscillation condition and the oscillation frequency are affected by the non-idealities of the VCII, i.e., finite port impedances, gain errors (a < 1, |b| < 1) and poles of the voltage and current buffers. In order to analyze the effects of these non-idealities on the oscillator behavior, a model of a real VCII has been developed and implemented (see Figure 7), able to take these non-idealities into account.
In the general case, we can model the VCII with first-order transfer functions and complex port impedances. In order to better understand the effects of the non-idealities and to compare the performance of the two topologies in Figure 5, different cases have been considered under the hypothesis that the ideal design has been carried out starting from the oscillation condition (34), which is modified accordingly when the non-idealities of the VCII are taken into account. By a simple inspection of the impedances Z A and Z B given by (33), and of the port impedances (38)-(40), it is evident that the Type I canonic oscillator should be less affected by non-idealities. In fact, in this case Y x can be absorbed in Z A (G x and C x are summed to 1/R 5 and C 5 , respectively), and Z y and Z z in Z B : a parallel RC network is placed in parallel to a port impedance modeled as an RC parallel network, and a series impedance is connected in series to port impedances modeled as RL series networks. In contrast, for the Type II canonic oscillator, a series network is connected in parallel to the parallel RC port impedance, and a parallel RC network is connected in series to LR series port impedances; thus, non-ideal port impedances alter Z A and Z B more significantly. The Type I canonic VCII-based oscillator therefore seems more suited to a practical realization, and it has been selected for further analysis.
Resistive Port Impedances
If only the resistive parasitics G x , R y and R z in (39)-(41) are considered, the oscillation condition for the Type I canonic oscillator becomes the one in (42). It is evident from (42) that the effect of the port impedances is limited, since they are simply summed to the elements of the NGC network (which have to be chosen much larger than the corresponding parasitics to make the latter negligible). The oscillation frequency in Table 1 is modified as in (43), where R′ 1 = R 1 + R y + R z and G′ 5 = 1/R′ 5 = G 5 + G x , and the oscillation condition becomes (44). If the parasitic capacitance C x at the X terminal is also considered, Equations (43) and (44) have to be slightly modified by considering C′ 5 = C 5 + C x instead of C 5 . Inductances L y and L z can be neglected in several applications and have not been considered in the following. However, for the sake of completeness, the expression for the oscillation frequency when inductive parasitics are also considered is reported in (45).
Single-Pole Transfer Functions
If the non-ideal transfer functions in (36) and (37) are also considered, in addition to the terminal resistive parasitics in (38)-(40), the denominator of the oscillation condition in (34) becomes of fourth degree, a s 4 + b s 3 + c s 2 + d s + e, as expressed in (46). Prime variables are considered for R 5 , G 5 and R 1 to account for the parasitic resistances R y and R z and the admittance Y x , as in the previous subsection, and the coefficients are given in (47). A real value is obtained for the left-hand side of (46) under the hypothesis of a purely imaginary denominator. By equating to zero the real part of the denominator at ω = ω 0 , we get an expression for the oscillation frequency in which c is given by (47c). The approximation 4ae/c 2 << 1 is justified under the hypothesis that the parasitic time constants τ x and τ z are significantly lower than the time constants τ A = R′ 5 C′ 5 and τ B = R′ 1 C 3 . Finally, the oscillation frequency ω′ 0 can be expressed in terms of the ideal value ω 0 by using the expression of coefficient c, leading to (49). Under the simplifying assumptions τ x = τ z = τ par and τ A = τ B = τ, the relative error on the oscillation frequency (1 − ω′ 0 /ω 0 ) can be readily expressed as a function of the ratio τ par /τ, thus providing a design guideline for the bandwidth of the VCII transfer functions.
The graph in Figure 8 shows that errors lower than 10% can be obtained if the time constant ratio is lower than 0.06.
Experimental Results
The performance of the Type I canonic oscillator of Figure 5a has been verified by both LTSpice simulations and experimental results. In particular, the approximated expression for ω 0 in (49) has been checked for different values of τ and τ x = τ z , and errors lower than 1% have been found.
Then, we have used the commercially available AD844 to configure a VCII − as shown in Figure 9. A single VCII is realizable using two AD844 ICs, whose Spice model can be found in [45]. The situation is quite different in the case of an integrated design, where a single VCII block can be exploited to design the oscillator, as shown in the previous sections.
The circuit was supplied with a dual ±5 V voltage, with a total current consumption of 14 mA.
Firstly, simulation of the topology in Figure 5a has been carried out to evaluate performance in terms of robustness to parasitics, and to estimate the achievable THD. In particular, the circuit has been designed with C 3 = 2C 5 = 2 nF and R 5 = 2R 1 = 15 kΩ, and an oscillation frequency f 0 = 10.6 kHz was expected.
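As a quick numerical check (a sketch: the ideal oscillation frequency is assumed here to take the form f 0 = 1/(2π√(R 1 C 3 R 5 C 5 )), which reproduces the expected value for the quoted component values):

```python
import math

# Design values quoted in the text
C3, C5 = 2e-9, 1e-9     # C3 = 2*C5 = 2 nF
R5, R1 = 15e3, 7.5e3    # R5 = 2*R1 = 15 kOhm

# Assumed ideal oscillation frequency
f0 = 1.0 / (2 * math.pi * math.sqrt(R1 * C3 * R5 * C5))
print(f"f0 = {f0 / 1e3:.2f} kHz")  # -> f0 = 10.61 kHz (expected 10.6 kHz)

# Oscillation condition (assuming ideal unity gains): C5/C3 + R1/R5 = 1
print(C5 / C3 + R1 / R5)           # -> 1.0
```

With these values both time constants equal 15 µs, so the assumed formula directly yields the 10.6 kHz quoted above.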
However, the AD844 parasitics can slightly change the oscillation frequency and/or prevent the oscillation condition from being met: in this case, starting from the nominal design, the resistance R 1 can be adjusted (to 7.3 kΩ in the present case, see the schematic in Figure 10) to fulfill the oscillation condition in (41); the obtained oscillation frequency is f 0 = 10.8 kHz, as shown in Figure 11.
A model for the VCII composed of AD844 components, shown in Figure 9, has been extracted from Spice simulations according to the equations presented in Section 4. At terminal X, we have found C x = 5.5 pF in parallel with a resistor R x = 3 MΩ. Purely resistive input impedances have been extracted at node Y (R y = 50 Ω) and Z (R z = 15 Ω). Finally, a dominant pole has been found both for the transfer function α(s) at f = 49 MHz (corresponding to τ x = 3.25 ns) and for β(s) at f = 764 MHz (τ z = 208 ps).
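Using these extracted parasitics, the expected shift of the oscillation frequency can be estimated. This is a sketch under stated assumptions: per the resistive-parasitic analysis above, the primed quantities R′ 1 = R 1 + R y + R z , G′ 5 = G 5 + G x and C′ 5 = C 5 + C x simply replace the ideal ones in the frequency expression, which is assumed here to be ω 0 = 1/√(R 1 C 3 R 5 C 5 ):

```python
import math

R1, R5 = 7.5e3, 15e3        # nominal design resistors (Ohm)
C3, C5 = 2e-9, 1e-9         # nominal design capacitors (F)

Ry, Rz = 50.0, 15.0         # extracted resistive parasitics at Y and Z (Ohm)
Rx = 3e6                    # resistive part of the X-port impedance (Ohm)
Cx = 5.5e-12                # parasitic capacitance at X (F)

def f0(r1, c3, r5, c5):
    # Assumed ideal form of the oscillation frequency
    return 1.0 / (2 * math.pi * math.sqrt(r1 * c3 * r5 * c5))

# Primed (parasitic-aware) element values
R1p = R1 + Ry + Rz                  # R1' = R1 + Ry + Rz
R5p = 1.0 / (1.0 / R5 + 1.0 / Rx)   # from G5' = G5 + Gx
C5p = C5 + Cx                       # C5' = C5 + Cx

ideal, shifted = f0(R1, C3, R5, C5), f0(R1p, C3, R5p, C5p)
print(f"ideal   f0 = {ideal / 1e3:.2f} kHz")
print(f"shifted f0 = {shifted / 1e3:.2f} kHz "
      f"({100 * (shifted / ideal - 1):+.2f}%)")
```

For these values the estimated shift is well below 1%, consistent with the small frequency correction observed in simulation.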
The element values used for the different design case studies, the simulated THD, and the oscillation frequency evaluated with both the LTSpice AD844 non-linear model and with the VCII linear model including parasitics are summarized in Table 2. The linear model is accurate enough to be used for circuit design, and excellent simulated performance has been achieved in terms of THD with the proposed VCII topology.
Figure 11. Simulated output spectrum of the oscillator shown in Figure 10.
Finally, experimental verification of the performance has been carried out, exploiting the test bench shown in Figure 12: for data acquisition, the Digilent Analog Discovery 2™ board was used [46]. The design of Figure 5a was implemented as the reference topology for the oscillator. Measurements were carried out in the range (10-10 6 ) Hz and are reported in Table 3. In agreement with the simulation results, the oscillator shows a very low THD value even at 1 MHz (considering 10 harmonics). The average relative frequency error between measured and ideal values is −5.2% and is comparable with the tolerances of the passive components.
An example of the output signal, both in the time and frequency domains, is reported in Figure 13a,b for a frequency of 1 MHz. Figure 14 shows the THD and frequency error trends vs. frequency.
Conclusions
By means of a systematic analysis, the possibility of realizing VCII-based oscillators has been studied and demonstrated. The investigation results in a pair of new canonic oscillators based on the VCII − ; it is also shown that, using the systematic approach, no oscillator configuration is possible using the VCII + . The two oscillator configurations found are the only possible ones which use only two resistors, two capacitors and a single VCII − . Compared to Op-Amp-based oscillators designed using the same systematic approach, which employ two capacitors and four resistors, the proposed VCII-based oscillator is preferable in terms of the lower number of capacitors and resistors. Another interesting feature of the VCII-based oscillator found here is that the produced sinusoidal output signal is directly available through the low output impedance Z port, while CCII-based oscillators designed using the same systematic approach require an additional voltage buffer for practical use. Simulations and experimental results using the AD844 as VCII are reported to validate the theory.
A comparison with oscillator topologies based on different ABBs, with particular attention to canonic topologies, is reported in Table 4. The table reports the type of active building block (ABB) each oscillator is based on, the number of active and passive components (specifying how many of them are grounded), the availability of a quadrature output, and the independence of the oscillation condition from the oscillation frequency, which allows tuning the oscillator by acting on a single component. It has to be noted that the independence of the oscillation condition from the oscillation frequency often requires additional passive (and sometimes also active) components, thus resulting in non-canonic topologies. Notable exceptions are the oscillators of [21,26], which use complex ABBs with gain, whose value contributes to satisfying the oscillation condition.
It is Who You Know: The Influence of Faith-Based Donor Networks on the Antitrafficking Work of Faith-Based Organizations
Faith-based organizations (FBOs) are prevalent actors in antitrafficking work, in part due to the substantial resources existing within faith-based donor networks. FBOs are often funded by churches, other FBOs, and individual donors, which make up donor networks partly secluded from mainstream development funding. Drawing on research practices and concepts from institutional ethnography, I explore the specific composition of the donor networks of three antitrafficking FBOs in Thailand and Cambodia. I demonstrate how the character of donor networks shapes the antitrafficking work of the FBOs. The analysis shows that the three different donor networks contribute to distinctly different approaches to faith in antitrafficking. This article thus contributes to understanding the varied ways in which faith shapes the work of FBOs and sheds light on how the intertwinement of religious ideas and material resources influences the particular antitrafficking work of FBOs.
Introduction
Faith-based organizations (FBOs) are prevalent actors in development aid in general (Davis 2019; Dotsey and Kumi 2019; Haugen 2019) and antitrafficking work in particular (Frame et al. 2019; Lonergan et al. 2020). One particular characteristic of Christian antitrafficking FBOs is their distinct funding patterns: in contrast to secular NGOs, FBOs are primarily funded by individual donors, churches, and other FBOs (Schnable 2015, 2016; Henriksson 2023). These faith-based donor networks can be expected to influence the work of FBOs, but previous research has not extensively explored the effects of these dynamics on antitrafficking practice. To further the understanding of antitrafficking FBOs, studies have explored their motivations and experiences (Frame 2019; Pinkston 2019; Henriksson 2021) and the distinctive characteristics of their antitrafficking work (Frame 2017; Graw Leary 2018; Lonergan et al. 2020; Henriksson 2023). While such perspectives are both important and illuminating, they are not sufficient to understand how and why antitrafficking FBOs design their work in particular ways. The relational context in which they are situated and the specific influence that their donors have on them also have to be considered. This article sheds light on the messy reality of the intersection between (religious) ideas and material resources, and on how their intertwinement matters, through an exploration of how donor relationships shape the antitrafficking work of FBOs in Thailand and Cambodia.
The character and composition of donor networks vary distinctly between FBOs, and the category of FBOs is also heterogeneous. Thus, when studying FBOs, it is important to note that FBOs can be affiliated to various faith traditions (Clarke 2006; Jeffery et al. 2017). In addition, FBOs with the same religious affiliation can still vary in terms of their faith-infusion, that is, in terms of how central faith is to their identity and their activities (Sider and Unruh 2004). This article focuses on Christian antitrafficking FBOs with varying degrees of faith-infusion.
Foreign aid interventions rely on successfully maintaining relationships, and interventions ultimately fail because they lose validation (Mosse 2004; Mosse and Lewis 2005, 2006). These relationships are both local and international. Illustrating the influence of international relationships, research has found that local NGOs experience discursive containment as they are disciplined by external ideas, mainly through larger international NGOs (Yea et al. 2014). For example, ideals concerning aid effectiveness constrain the ideas and practices of development practitioners (Campbell and Teghtsoonian 2010). FBOs and other NGOs employ an array of tactics to safeguard autonomy from donors (Mitchell 2014). This is true for development actors in general, but also within the subfield of antitrafficking. More research, however, is needed to explore how antitrafficking FBOs navigate the (potentially conflicting) demands and expectations from their specific donor networks.
In this article, I examine the influence of the specific, faith-based donor networks of antitrafficking FBOs. I explore how these relationships shape the design of programs and intervention strategies, as well as the ways in which faith is activated in the organizations and in their programming. Exploring these dynamics, I address the following research questions: (1) What do the donor networks of FBOs look like? (2) How do these donor networks shape the role of faith in the antitrafficking work of the FBOs? (3) What are the effects on the design of their antitrafficking work? To answer these questions, I have conducted case studies of three Christian antitrafficking FBOs, one in Thailand and two in Cambodia. Using analytical tools drawn from institutional ethnography (IE), I find that the three FBOs have distinct donor profiles, which carries implications for how the FBOs approach human trafficking. Their three distinct faith-based donor networks have expectations about how the issue of human trafficking should be approached and about the role of religion in the programs that they fund. Such expectations and conditionalities, which may also diverge between different donors funding the same FBO, need to be navigated, and sometimes negotiated or resisted, by the three FBOs. The donor network composition matters: more churches and (Christian) individuals lead to a higher emphasis on faith in antitrafficking work as well as a more individualist focus in programming. A more diverse donor network, with institutional funding and occasional secular donors as well as faith-based donors, tends to generate ambivalence concerning the role of faith in antitrafficking, but also tensions arising from conflicting demands and expectations on program content as well as administration. This article thus sheds new light on the varied ways in which faith, in tandem with money, shapes the antitrafficking work of FBOs.
This article continues by reviewing the existing research on donor networks of FBOs, and the influence of donors on NGOs and FBOs, before presenting the analytical approach of the study, namely IE. I also provide a description of the studied FBOs and the empirical material that this study builds on. I then present the analysis of the three FBOs: first, I explore the donor networks of each FBO, and then continue by discussing how expectations concerning faith in antitrafficking are managed by the FBOs, and how this shapes how antitrafficking work is done by each of them. The article ends with a concluding discussion about key findings and takeaways.
Previous Research: FBOs and Faith-Based Donor Networks
FBOs make up a considerable share of antitrafficking NGOs in Southeast Asia (Frame et al. 2019; Lonergan et al. 2020; Henriksson 2023). Despite this, institutional funding (funding from government or inter-governmental donors) is less accessible to FBOs than to their secular counterparts (Davis 2019). This is partly due to a trend where a larger share of development funding is awarded to fewer and larger organizations (Banks and Brockington 2020). Christian FBOs, in particular in the field of antitrafficking, differ from most NGOs in terms of their donor profiles, as Christian FBOs are primarily funded by individuals, churches, and other FBOs, while secular NGOs to a higher degree are funded by institutional donors (e.g., UN agencies or Western government donors) or other NGOs (Henriksson 2023). This is not necessarily a problem for Christian FBOs, as many have extensive national and international faith-based networks, which makes them less dependent on institutional donor funding (Clarke 2006). Five out of the ten largest international development alliances are Christian FBOs (Haugen 2019), pointing toward the substantial financial resources that FBOs can access through their networks. Research points toward the willingness for charitable donations and activism across the FBOs' networks, as church attendees are more likely to volunteer and to make individual donations as a result of prevailing social norms in these contexts. These faith-based networks also facilitate the founding of new NGOs, made possible through volunteerism and available global ties (Schnable 2015). Thus, it is known that FBOs have distinct donor networks, but less is known about the variations in donor profiles between FBOs, and how these shape the work of FBOs. It has been found that donors in concert with other stakeholders shape the ideas, beliefs, and practices of development practitioners on the ground (Campbell and Teghtsoonian 2010). Within faith-based networks, religion provides ways of thinking that legitimize development work, and it also provides networks for recruiting donors and volunteers. In addition, it provides modes of action that link FBOs, supporters, and local aid recipients (Schnable 2016). A shared religious faith helps to connect people across cultural and geographical distances to work toward a shared objective (Reynolds 2013; Reynolds and Offutt 2013). In contrast, within mainstream international development, even though FBOs are prevalent actors, secular norms where religion is treated as a personal matter that is either irrelevant or harmful for development are dominant (Hallward 2008; Dragovic 2017; Butcher and Hallward 2018). As a result, FBOs often find it challenging to access funding from mainstream development donors, which increases their reliance on faith-based donors.
Another important context for this article is that the modern development paradigm, with its emphasis on doing development through projects, can lead actors to reduce the complexity of development issues in order to fit frameworks for results-based management within short project timeframes (Scott 2021). In line with this, donor demands can stifle creativity among implementors. Here, the terms on which funding is provided determine the room for maneuvering, flexibility, and creativity of implementing development actors (Lewis et al. 2021). It is important to consider how this plays out for FBOs, considering their ways of legitimizing their work in relation to their specific donor networks.
These previous findings about the different ways that donors impact the practical work of development actors on the ground are important starting points for this article. So too are findings about the existence of active and resource-rich faith-based donor networks. This article builds on these findings to investigate how variations in the composition of faith-based donor networks influence how antitrafficking is done by FBOs.
Methods and Material: An IE of Antitrafficking FBOs
The Assumptions, Concepts, and Approach of IE
IE is a feminist-inspired research approach that draws on a variety of ethnographic methods to link, describe, and explicate tensions embedded in everyday experiences, and examines how people's ordinary practices are linked to a larger fabric which is not visible from the everyday (Campbell and Teghtsoonian 2010; Rankin 2017a). One of the scholars who developed IE was Dorothy Smith (2005), as she and others set out to develop a feminist understanding of and appreciation for people's embodied experiences: what happens to them, what they do, and what it feels like. By doing so, Smith aimed to make visible the people who "disappear" in objectified knowledge (Walby 2007). IE starts from people's everyday experiences to show how these experiences come to be, and how the practices in an organization or an institution are organized in a particular way (Smith 2006).
IE scholars view an institution as a complex of cultural rules within a thematic field, which is supported and rationalized through actors, norms, and policies (Smith 2001; Teghtsoonian 2016; Rankin 2017a; Tummons 2017). The institution, or the thematic field, may be experienced differently depending on an individual's positionality. Antitrafficking can be understood as a thematic field of development aid that is "translocal" in the sense that different local sites are connected by organizing from a distance by ruling social relations (Teghtsoonian 2016; Rankin 2017a; Tummons 2017). One example of translocal organizing from a distance within antitrafficking is the Palermo protocols on human trafficking (UNODC 2006, 2018), and the Trafficking in Persons (TIP) reports issued by the US State Department (USA-StateDepartment 2019), which many governments and antitrafficking organizations view as normatively influential (Riback 2018).
I view IE as a guiding principle for exploring the social world, and thus it is important to note the particularities of how I gathered material and conducted the analysis. Below, I describe how I apply IE's analytical concepts: standpoint, ruling relations, and problematic. I then continue to describe the studied cases and the material.
STANDPOINT
In IE, a standpoint is a methodological starting point from which one can begin to explore the embodied experiences of an institution, or a field of practice (Smith 2005). The experiences of people from whose standpoint the exploration starts are a grounded gateway into the inner fabrics of the social organization that may otherwise be concealed to the scholar. The concept of standpoint thus refers to a particular position within an institution or in a social organization (Rankin 2017b). This assumes that people are experts on the conditions of their own life, and research should therefore start from their experiences. I have chosen the standpoint of the staff of the three Christian antitrafficking FBOs in Thailand and Cambodia as the starting point of my analysis. Most of them are Christian, but not all, even though their organizations have a Christian identity. Some are part of management, while others are field staff. Their experiences provided crucial and valuable information about what and who influences the design of their antitrafficking work. The concept of standpoint is interlinked with the analytical concepts of ruling relations and problematics, which are explained below. The studied Christian antitrafficking FBOs will be presented in a later section.
RULING RELATIONS
Within IE, ruling relations are one of the key analytical concepts. Smith describes ruling relations as "objectified forms of consciousness and organization, constituted externally to particular people and places, creating and relying on textually based realities" (Smith 2005). Ruling relations are the manifestation of power of the actors within the thematic field, as in the case of this article concerning the role of faith in antitrafficking, which is "at once present and absent in the everyday" (Smith 2005, cited in Rankin 2017b). Ruling relations are formed through the collective actions, inactions, discourses, texts, and norms of actors within the thematic field. These ruling relations are often textual, but they can also be carried out via implicit norms revealed in interviews and practices, and they shape how concepts should be understood and translated into action (Campbell and Teghtsoonian 2010). Ruling relations permit, legitimize, or forbid particular forms of social action (Tummons 2017), and they shape the everyday experiences of the staff of the FBOs.
Ruling relations are analytically noticed as they are activated in the local setting and experienced by the people from whose standpoint one chooses to explore. Practically, this means looking for ruling texts or influential instructions and thereby identifying tensions between everyday experiences and dominant rules and norms ( Rankin 2017b ). In my study, ruling relations have primarily been uncovered by interviewing people within the FBOs, and people surrounding the FBOs, exploring the extent to which they have experienced rules, norms, and expectations that order their everyday actions. I have also observed the work of the FBOs in the communities where their programs are implemented ( Tummons 2017 ). Actual texts have thus not been the primary focus; instead, following Williams and Rankin's concept of "phantom texts" to describe the absence of such evidence, I have relied on people's talk and activities ( Williams and Rankin 2015 ).
PROBLEMATICS
When engaging with analysis, the starting point in IE is identifying key problematics, that is, discrepancies or tensions between textual (phantom or otherwise) realities and real-life experiences ( Rankin 2017b ). Many times, ruling relations are accepted without any tension, are not perceived as problematic by the ruled subjects themselves, and unnoticeably shape the everyday work of the FBOs. There are, however, also problematics, or tensions, between contradicting ruling relations that need to be managed by the FBOs. Sometimes the problematic lies in the ruling relations being in opposition to the core values or beliefs of the FBOs; at other times they clash with the wider norms of society. My analysis centers on the problematics that emerge from the standpoint of the staff of the three studied FBOs. Consequently, it is the response of the FBO staff to these problematics, stemming from their relationships with their donors, that shapes the antitrafficking work of the FBOs.
To capture the complex relationships in which the FBOs are situated, and the problematics that arise from these, I asked FBO staff to describe their organization's relationships with other organizations and actors. These could be in the local field or at the national or international level. These relationships were captured in relationship maps drawn by FBO staff during group interviews, and in tandem with interviews, observations, and documents, these maps have been used to explore the relational experiences of the staff of FBOs in antitrafficking. To identify problematics, I have looked for instances when the participants describe the existence of tension, friction, or contradiction in their everyday work. For the latter stages of the analysis, I used methods aligning with thematic analysis ( Braun and Clarke 2006 ), reading through the indexed material looking for patterns in problematics. I then categorized these problematics into themes. This procedure allowed easier navigation in the vast and dense material, and helped me to identify the main problematics. The main problematic that I found through the analysis centered around the role of faith in antitrafficking. From this problematic, certain consequences in terms of shaping the work of the FBOs followed. These problematics have then been explored further, focusing on how the FBOs manage the problematics that occur, and how this shapes their antitrafficking work.
THE THREE STUDIED FBOS
The particular FBOs in this study can be regarded as small or medium-sized FBOs ( Davis 2019 ). 1 They have all been founded by Westerners but are today increasingly or exclusively staffed by national staff. I have given the studied FBOs three pseudonyms, which means that their identity is not revealed, while at the same time I am able to provide more information about what kind of organizations they are. Some key information about the three studied FBOs is summarized in table 1 .
Christian Community Trafficking Prevention (CCTP) is a small to medium-sized national faith-centered Christian-Evangelical development organization in Cambodia. Categorizing the organization as faith-centered means that CCTP has a clear Christian identity, which is reflected in their mission and vision statements ( Sider and Unruh 2004 ). CCTP does not engage in proselytizing but attempts to find ways to engage with religion and religious leaders in constructive ways for the benefit of community development. 2 Antitrafficking is not their only focus, but it is the main objective in a specific geographic area where they work. 3

Faith Unite Against Trafficking (FUAT) is a medium-sized faith-affiliated Christian antitrafficking organization with a primary objective to work with antitrafficking in Cambodia, but it also works with international and regional advocacy. The organization is categorized as faith-affiliated ( Sider and Unruh 2004 ). FUAT has over the years de-emphasized its Christian identity and is now looking for more religiously inclusive ways of describing itself. 4

The Christian Way Out of Trafficking (CWOT) is a faith-permeated Christian-Evangelical antitrafficking organization focusing on sex trafficking, and in particular on helping survivors of sex trafficking in Thailand. The categorization as faith-permeated ( Sider and Unruh 2004 ) signals that for CWOT faith is seen as essential in all aspects of the organization and its work. CWOT works to help women who have been victims of human trafficking with therapy, legal work, and repatriation to their home countries.
EMPIRICAL MATERIAL
The FBOs were purposively selected with the aim of finding variation in geographic focus (urban and rural) and strategic focus (categories of human trafficking victims; source and destination contexts). While all are affiliated with Christian Protestant and Evangelical faith traditions, I looked for variation in faith-infusion (i.e., how central faith is to the identity and activities of the organization) based on Sider and Unruh's typology ( 2004 ).
I conducted semi-structured interviews with seventeen FBO staff over Zoom during the period of January to June 2021. The interviews focused on the FBOs' work against trafficking and their main influences, inspirations, and collaborations. Following these interviews with staff of the three FBOs, I then spent about four weeks doing observations of their work and on-site interviews in January (Thailand) and March (Cambodia) 2022. During the field visits, I followed the staff of FBOs as they carried out their activities. To learn more about, and from, the relationships of the FBOs, I also interviewed secular NGOs, government officials, and donors of the FBOs. In connection to the abovementioned observations, I interviewed beneficiaries of the activities or people who lived in the areas of operation of the FBOs. In total, I have conducted fifty-eight individual semi-structured interviews, three group interviews with FBOs, twenty-one observations of FBO activities, thirty-eight document analyses, and three relationship mapping exercises with the studied FBOs.
Participants were provided with information about the purpose of the study both orally and in writing. 5 The participants, and the studied FBOs, were granted anonymity and confidentiality to protect them from potential negative reactions from employers, donors, governments, or their communities. That is why I use pseudonyms for the FBOs (CCTP, FUAT, and CWOT) throughout the presentation and discussion of the findings. About half of the respondents were women, and 20 percent came from countries other than Thailand or Cambodia, for example, the United Kingdom or the United States.
My experiences as a development practitioner, and with faith-based actors, contributed to filtering the knowledge that I have generated ( Guillemin and Gillam 2004 ), but they also proved crucial when detecting veiled religious ideas and practices that were downplayed to evade criticism from a development context infused by secularism ( Hallward 2008 ). These experiences also helped me navigate the development aid jargon and provided some common ground with many of my participants. My male, academic, and European vantage point nevertheless placed me as an outsider in many situations, but working with gatekeepers (i.e., other antitrafficking organizations) willing to introduce me, and with interpreters and cultural guides, allowed me to partially bridge this distance. My research was supported by two development organizations working with antitrafficking in Southeast Asia. These supporting partners assisted me with gaining access to the field, but other than that they did not have any significant influence on the research.
Findings: Ruling Relations of Faith-Based Donor Networks in Antitrafficking
In this section, I describe the specific dynamics between each of the three FBOs and their donors, from the standpoint of the staff of each studied FBO. All three FBOs are embedded in ruling relations emphasizing the role of faith in antitrafficking, ultimately encouraging them to acknowledge the importance of faith as a force for change, as well as the importance of religious leaders for countering human trafficking. However, the FBOs experience different ruling relations on faith in antitrafficking. These differing ruling relations lead to different problematics, which the FBOs resolve in distinct ways, with different implications for their practices.

THE DONOR NETWORKS OF CCTP

CCTP believes that they share strategic goals with their FBO donors, as well as the intermediate FBO donors: "We are not that much different." 7 This FBO is not, at the time of study, supported in any substantial way by individual donors or churches directly. However, they are currently in the process of diversifying their funding, including finding revenue from local consultancy work. 8 From the standpoint of CCTP, faith is the foundation for their cooperation with their FBO donors. This view is supported by a representative of the main donor, who remarked that "if they [CCTP] would become a secular NGO we would most likely phase out the cooperation over a period of years." 9 Thus, the shared faith identity is perceived as a key condition for support.
Historical relationships also contribute to the links between faith-based donors and FBOs. CCTP cooperates with donors that align with them in terms of both faith and strategy, leading to relationships that are relatively harmonious. 10 While the donors have input into the strategy of CCTP, its staff seem to think that they are largely in agreement: "Yeah, the donor also give input to our strategic plan. But [there is] not too much compromise with the donor's strategic plan." 11 The donor describes the cooperation in terms of a partnership, while at the same time recognizing that they are a donor: "We have really tried to build up the partnership aspect [and] that has enabled [CCTP] to discuss challenges a bit more freely than maybe if you're talking with the donor that you do not have the same partnership with as we have." 12 Representatives from CCTP describe the relationship with their donor as having high degrees of transparency and accountability, which they think is not the case for non-faith-based implementer-donor partnerships. 13 In addition, staff of CCTP feel they have the power to suggest changes to the program as long as they can refer to needs in the community. 14 However, given the requirements attached to the sub-granting of institutional funding, the ability to manage funds and write effective reports are important criteria for funding. Reports are due biannually, and in addition, there are quarterly follow-up meetings and regular project visits by the donor. The main donor explains that funding is conditioned on following development best practices. 15 When institutional funds are not involved, the donor says that "the strings are not as tight on them, and they have a little bit more flexibility." 16 Likewise, when the donor FBO reports to their individual sponsors and churches, the focus is more on stories of change than on demonstrating how the funds have been used. 17

In summary, CCTP is primarily funded by other FBOs that have access to institutional funding and a long history of partnership with CCTP. Staff of CCTP align with the larger priorities and worldview of their donors.
NAVIGATING THE ROLE OF FAITH IN ANTITRAFFICKING IN A FAITH-CENTERED WAY
The faith-centered CCTP has a clear faith identity and draws explicitly on religious values for positive change, as it emphasizes religious literacy, i.e., the ability to understand how different faith traditions interpret core concerns of development ( Deneulin and Rakodi 2011 ), and the potential contributions of faith and religion to the development of society.
The faith identity is the foundation of the relationship with their main donor. The donors project ruling relations concerning the role of faith in identity, which CCTP needs to manage and resolve. When (indirectly) receiving institutional funds, this means that religious proselytizing is off the table. However, this is fundamentally not a key problematic for the FBO, as it aligns with the values of the FBO. 18 To reconcile the problematic of their explicit faith identity within the boundaries for faith in antitrafficking set by mainstream international development, the FBO instead chooses a kind of lifestyle evangelism. 19 In this approach, faith and religion are important aspects, and religious leaders are strategically sought out in the communities where the FBO works. 20 CCTP highly values its Christian identity and actively seeks out other partners who share a similar religious identity, while at the same time working across religious divides. 21 Concerning their active pursuit of other Christian partners and donors, I observed the leader of the FBO taking part in a prayer group consisting of other Christian leaders working in Cambodia. The FBO leader saw the prayer group as an opportunity to expand his networking among Christian organizations, but also as a space to share struggles among peers and grow spiritually. 22 This also contributes to affirming the faith identity of the organization.
Parallel to expectations about faith identity and the role of faith in antitrafficking, the faith-based donor channeling institutional funds has clear instructions about adhering to dominant development paradigms. One such paradigm, as expressed by a donor representative, is the imperative to address the structural and underlying political roots of human trafficking: 23 "[T]he main thing would be if they [the FBOs] have some activities that are not in line with the rights-based approach, then we would ask them to reconsider that." 24 In order to maintain institutional funding, the FBO must excel within the development paradigm of the rights-based approach (RBA). However, working to realize human rights in authoritarian Cambodia can be a risky venture, as it can be perceived as challenging the government's authority. The consequences of challenging the government can be hard to calculate. This is a problematic that needs to be resolved. CCTP is therefore forced to take a nonconfrontational approach to the authorities. 25 An FBO staff member explains the supporting role they play in relation to the local authorities: "[Our] role is to make sure that government is working on their role. It's like playing the role of inspector who encourage the government officer to keep their responsibility that they have to implement, and also playing role as a coach and mentor." 26 In essence, there is considerable freedom for CCTP to design their antitrafficking response within the boundaries of no proselytizing and adherence to a rights-based approach. This means that their faith identity, and religious literacy, can be activated.
CCTP believes that they, as faith-based actors, have a certain religious literacy. 27 The staff therefore assume that they are well-suited to work in religiously influenced communities, often in contrast to secular organizations' neglect of the issue of religion and religious actors. The importance placed on religion for change is illustrated by this staff member's statement: "I think that the role of religion is very important. Our project is working with Pagoda because I think that the religion has a very powerful effect on all people." 28 This ability, and priority by FBOs, was noticeable during my observations of CCTP working with local Buddhist religious leaders to spread awareness about human trafficking. The antitrafficking message was delivered with Buddhist terminology, theology, and authority, and as such, it legitimized the role and work of the Christian antitrafficking FBO in the community and mobilized the community for action. 29 From the standpoint of the faith-centered FBO, it is natural to draw on the values and worldviews of religion as a force for societal change. As one staff member of the FBO explains: "We believe that we are the same image of God, and all people have the same value and no one [can] abuse the value because it means we are not respectful of our God. And in Buddhism there is some law that ban human trafficking also, does not allow people to abuse [other people]." 30 The quote illustrates how the FBO assumes that religion is inherently positive, and how each religion's worldview can be mobilized against human trafficking. This is in stark contrast to many secular actors who view Christian FBOs as narrow-minded: "The Christian worldview says we're right, and all the other religions are false [and then] there's no investment in understanding those values." 31

Being a faith-based actor in an antitrafficking field dominated by secular views on the role of faith creates a problematic that needs to be resolved. From the standpoint of CCTP, the resolution lies in activating and drawing on religiously motivated values in society. Even if secular NGOs can be trained to take religion seriously in antitrafficking, what sets the faith-centered FBO apart is the importance they themselves place in their faith identity, and how they believe they can draw on it:
No matter how highly you are educated, if you don't know dignity, love, what is good, what is wrong, this problem is still happening. I am not saying that believers are perfect, or good people [but] at least they have God's word to reflect on. 32

The statement above exemplifies the high regard for religiosity and religiously inspired values, and the choice of CCTP to activate them for change whenever they can.
THE DONOR NETWORKS OF FUAT
FUAT is a Christian faith-affiliated organization. FUAT, just like CCTP, has a diverse portfolio of cooperating partners in their work against trafficking, such as the police, local authorities, various government ministries, and other NGOs and FBOs. However, in contrast to CCTP, its donors include a mix of secular donors and faith-based donors. In addition, relationships between donors and FUAT are described as more strained than those between CCTP and its donors. 33 The majority of the donors of FUAT are faith-based and Christian, and many of them channel institutional funding. The donors are from, for example, the United States, Canada, and several European countries. 34 The nature of relationships varies between donors, but when the staff of FUAT describe their relationships with their donors, they describe them as a little strained: "Some donors are a bit aggressive and they want to change our program." 35 Another staff member reveals the various administrative demands from donors to the same program: "We also have some other challenges if donors use different ways, different formats, in one project. In one project we have three or four donors." 36 One example concerns a Canadian donor requesting photocopies of all receipts, as this is required by their institutional back-donor. 37 The mix of different donors leads to many different requirements to manage. One of the donors reflects on the differences in requirements depending on the source of the funding: "So, it's just the different ways we monitor and the ways we report are different. But the impact, yeah, it's hard for me to see what would be the difference." 38 The quote signals the absence of a difference in quality or impact, but rather a difference in administrative demands. On the other hand, the staff of FUAT also describe their organization as quite capable of resisting requests from donors: "The Executive Director is very forward thinking in the world of development and would not allow a donor to drive a project." 39

In summary, FUAT is mostly funded by other FBOs, many of whom channel institutional funding, but it also has secular donors. At times, the relationship with some donors is contentious, and the need to manage a diversity of expectations and administrative routines is demanding.
NAVIGATING THE ROLE OF FAITH IN ANTITRAFFICKING IN A FAITH-AFFILIATED WAY
The faith-affiliated FUAT has an ambivalent relationship to faith in antitrafficking, and to their faith identity, yet emphasizes religious literacy as a resource in their programming. They are embedded in opposing ruling relations on faith in antitrafficking. From secular and institutional donors, they are pressured to de-emphasize their faith identity and minimize the role of faith in antitrafficking work. De-emphasizing their Christian identity has been debated both internally and among donors to the FBO, as one staff member recalls: "So, we lost some donors because they want an organization that really have a strong tie with Christianity." 40 At the same time, the faith-based donors and faith-based partners also put pressure on FUAT in the opposite direction, asking for a more explicit faith identity: "Some will say you're too faith-based. And other donors will say, you're not faith-based enough." 41 The choice FUAT has made is to de-emphasize their specific Christian identity while retaining a religious literacy and openness to faith worldviews, and to talk about faith in inclusive terms. 42 This ambivalence is manifested at their office, where there are few religious symbols, signaling that the best way to remain inclusive is to remove religious symbols, mimicking the secular view that the absence of religion is the neutral stance. 43

The gradual de-emphasizing of a Christian identity has resulted in strategy changes, manifested through a shift from working with religious leaders specifically to regarding them as part of the community in general. In this process, church leaders have received less attention as strategic partners, as one staff member relates: "Before our prevention program, we had a church program [. ..] But currently we do not work with churches anymore. Community prevention in relation to churches have been finished since I joined." 44 FUAT still engages with religious leaders as important actors, but not Christian leaders in particular. Within the same FBO, the general principle of recognizing the role of faith leaders is interpreted differently, plausibly due to the process of downplaying their religious identity. In a group discussion, the following was said: "We do not focus on Pagodas or those who work in other religions. We just work focusing on them as community members and sometimes we miss that as well." 45 This can be contrasted with a statement from the leader of the organization: "Yeah, these are communities of faith and the [faith] leaders need to know that it is part of their mandate as religious leaders to keep people in their community safe. […] there are many reasons to embracing the faith piece." 46 The differences in emphasis within the FBO itself reveal that negotiating different ideas about the role of faith in antitrafficking is difficult. The leader of the FBO is trying to balance a faith identity without being associated with conservative American faith alliances, as staff shared with me during interviews. 47 FUAT needs to make sure that their faith-based donors do not perceive them to be secular. To balance between the expectations, the FBO seeks to highlight religious values that align with the secular ideals of international development. Such values can be justice or dignity, which are easily connected to scripture and can be communicated externally.

34 Interview with staff of FUAT, 1; Document of FUAT, 5.
35 Interview with staff of FUAT, 3.
36 Interview with staff of FUAT, 1.
37 Interview with staff of FUAT, 6.
38 Interview with donor of FUAT, 1.
39 Interview with staff of FUAT, 2.
40 Interview with staff of FUAT, 3.
41 Interview with staff of FUAT, 6.
42 Observation of FUAT, 6; Interview with group of staff from FUAT, 1.
43 Observation of FUAT, 5.
48 Faith and religious values are important to FUAT in many ways, and the founder states in an interview that the organization was founded from a faith inspiration and that "a lot of my own inspiration comes from faith." 49 At the same time, FUAT and like-minded FBOs join the secular criticism concerning FBOs that are "heavy handed with Christianity" in their antitrafficking work. 50 Following this criticism, FUAT views religious conversions within their antitrafficking work as inappropriate. 51 The opposing and contradicting ruling relations from their diverse donor portfolio pull FUAT in different directions concerning what kind of antitrafficking work they should focus on. One FBO staff member with long experience in the organization reflects: "Some donors only provide support for trafficking cases but not for sexual exploitation. Some donors want us to change our vision or we will not get the support." 52 The FBO also has difficulties with donors insisting that they can only help children and not adults, something that excludes adolescents, and parents or guardians of the children as well. Thus, their donors collectively, yet not in a coordinated way, push for reducing the complexities of human trafficking and narrowing the focus of the work, while FUAT attempts to maintain complexity: "I think that is where few organizations focus because it's more complicated [. ..] from a donor level. And I think that's where [we are] great because [we] focus on all areas." 53 The faith-affiliated FUAT is pressured by secular peer NGOs, donors, and policymakers. Former colleagues question the choice of its staff being associated with an FBO: "Oh, I can't believe you're working for a Christian organization. I never thought you would do that." 54

Furthermore, FUAT wants to play a role in arenas dominated by secularism, and this requires adaptation of how they present themselves and their work. As the leader of the FBO explains: "I have realized that if [FBOs] are going to sit at the table as professionals, [FBOs] have to change their language. I think that has been a lesson for a lot of faith-based antitrafficking organizations who want to come to the table like government circles, UN circles, academia circles they actually need to professionalize their faith." 55 Thus, as exemplified here, the staff of FUAT adopt the secular logic that faith-based antitrafficking work needs to be professionalized ( Tomalin 2018 ; Lonergan et al. 2020 ). Or as one staff member reflects: "Some would say that we should have maybe moved completely out of [our faith identity] if we wanted to be fully professional." 56 This further illustrates the difficulties of managing the conflicting ruling relations on faith in antitrafficking by downplaying their religious identity. At the same time, the FBO tries to maintain its competency in dealing with religious leaders and religious topics, and maintaining good relationships with faith-based donors. As the leader of the FBO sums it up: "It's been a constant balance, frankly." 57 And this balancing act requires them to choose wisely, when possible, among their donors: "Well, there are faith-based [donors] but we are a little bit picky as well, because we want only donors who are really open." 58 This signals that FUAT is contemplating the need to adjust their donor portfolio further in order to maintain their current balancing act on faith in antitrafficking.
THE DONOR PATTERNS OF CWOT
CWOT is a faith-permeated antitrafficking organization. Just like the two previous FBOs, CWOT demonstrates a diversity of cooperating partners in their antitrafficking work, such as businesses, NGOs, police, and the immigration department. In contrast to the other two FBOs, CWOT has more Christian partners, such as churches and other FBOs. The donor relationships of CWOT consist of a mix of churches, 59 FBOs, and Christian individuals: "The majority would be churches, individuals, probably people that find us on the web, but I would say probably the majority would come through a network of churches." 60 However, CWOT also receives funding for specific expenses for survivors of trafficking from INGOs or UN agencies: "International Organization for Migration will pay for medical care for women." 61 Faith-based donor networks have substantial resources thanks to the religiously mandated donations within churches and congregations ( Schnable 2015 , 2016 ; Davis 2019 ). From the standpoint of the staff of the FBO, there is confidence that their activities will be funded one way or the other: "[The director] raises money from America, but actually we never lack money." 62 On their website, CWOT thanks their individual donors for contributing to purchasing a building for their operations, where a donor offered to match donations up to the sum of $100,000. 63 Due to the fairly reliable inflow of donations from faith-based donors, staff of CWOT do not feel that they are influenced by their donors. This sentiment is expressed by the leader of the FBO: "I value donors, but I will not compromise for donors." 64 This conviction that they are not adjusting their program to donors' demands is also echoed in another interview: "I have not heard that we have ever adjusted our program to fit a big donors' opinion. In fact, a big donor donates to us because they like what they see."
65 The general picture conveyed is a harmonious relationship with donors: "I think that for the most part, the relations with donors are very beneficial, and very supportive." 66 CWOT owns a so-called "freedom business," where survivors of trafficking are employed. The FBO sells most of their products in Western countries with the help of volunteers. 67 The volunteers and donors hear about the work of CWOT, and they then offer to contribute with their time or money: "For the most part, people have come to us, they like what we're doing and they want to support it and they offer assistance." 68 Generally, CWOT can rely on a number of long-term donors with whom they have good relationships: "Where I've asked for funding has been with churches or organizations that we already have a relationship with and they've given in the past." 69 In terms of maintaining the relationship with their donors, the requirements for reporting back vary but are generally not very demanding, as explained by the FBO representative: "Some are happy with an end of the year report [about] what we did. Some are happy with just a thank you." 70 Research has pointed out that even though FBOs have access to a lot of generous donors, maintaining those relationships is also costly, and FBOs spend significant amounts of money on fundraising ( Davis 2019 ). Often, the donors pick one project that they wish to support. So, the donors generally get to choose which project will be funded, but usually this support comes with few strings attached. 71 Instead, the basis of communication relies on the website, newsletters, prayer requests, and thank-you letters. 72

This type of reporting and communication to faith-based donors is in stark contrast to how reporting and communication are done to institutional donors. From the standpoint of CWOT, the communication is based on a foundation of trust where the donors have patience when it comes to reporting. However, there are exceptions where impatient individual donors have demanded replies within a short time period. 73 In summary, CWOT is primarily funded by churches, individuals, and other FBOs, and only marginally by institutional donors. In general, their relationship to their donors is long-term, and experienced as supportive and uncomplicated.

NAVIGATING THE ROLE OF FAITH IN ANTITRAFFICKING IN A FAITH-PERMEATED WAY

The faith-permeated FBO has a clear faith identity, emphasizing the role of faith in antitrafficking, but in contrast to the other two FBOs, they are actively proselytizing through their antitrafficking work. 74 This position on faith in antitrafficking stems from expectations from their donor base, consisting of individuals, churches, and other FBOs who hold like-minded views on the role of faith in antitrafficking.
One of the most important donors of CWOT has a mission statement about inviting people to become Christians, and this goal shapes everything the donor gets involved in, even in antitrafficking. 75 Staff of CWOT talk about donations as an act of obedience to God, which is an important discourse within faith-based donor networks. 76 The communication to their donors is filled with prayer requests, intertwined with calls for donations. The prayer requests are sometimes about helping people escape exploitation, but also about the spiritual journeys of their clients, as exemplified by a prayer request letter: "Pray for [survivors] to grow in faith [...] for [those] who struggle with addictions, demonization, and prostitution [...] for the trafficked victims to find freedom and for justice in the courts." 77 These prayer requests also reach potential volunteers within the faith-based donor networks, who then sign up to help out. As one of the long-term volunteers explained: "My church sent me, I'm not with an official organization. I just raised money [to fund her time with CWOT]." 78 Thus, there is an expectation shared by CWOT and their donor network that Christian spirituality and antitrafficking are intertwined.
The spiritual economy within faith-based donor networks means that CWOT frequently takes on volunteers who finance their own stay and work for several years within the organization. CWOT's reliance on volunteers from their faith-based donor networks exposes CWOT to ruling relations shaping, and limiting, their response to human trafficking. These volunteers are members of the churches that are part of the donor network of CWOT and share the same worldview. 79 The donor profile, with its spiritually mandated donations and the ruling relations on faith in antitrafficking that it projects, makes it difficult from the standpoint of CWOT, even if they wanted to, to leave out the spiritual component when communicating their results.
CWOT's recognition of the importance of religious dimensions is also demonstrated in the way they take note of how traffickers use religion to manipulate their victims into remaining in servitude: "Something that we really brought to the table was recognizing the role of witchcraft as a threat, as a controlling force with trafficked women." 80 Following this view, the faith-permeated CWOT uses faith in their counseling program. The strategic importance of spirituality is explained by one of the FBO staff working with survivors of trafficking: "The spiritual warfare aspect [...] in order for women to be free, to be able to sleep through the night, to not be harassed by evil spirits, is something that we've had to actually, intentionally, strategically deal with." 81 Thus, Christian activities such as Bible studies and devotions are mandatory for survivors in their programs, and this has resulted in several conversions to Christianity. 82 One FBO staff member explains: "I think having an aspect of faith in your programs is helpful [like] having times of worship, prayer and Bible teaching is really important. It's really crucial." 83 When observing CWOT, I participated in devotions, but also saw books and other materials on the topic of spiritual counseling. These observations confirmed the centrality of faith in their antitrafficking programs. 84 For CWOT, most topics are seen as related to faith. For instance, teaching on topics such as finances draws on religious principles and is explained within a faith discourse. 85 Another illustration of the centrality of faith in their strategic thinking is a story told by the leader of the FBO about a prophetic dream she had. This prophetic dream was shared among the leadership team and ended up being the pivotal factor in shaping which activities should be continued. In CWOT, the consensus interpretation of the dream was that God had intervened and shown the future path for the organization. 86

74 Interviews with staff of CWOT, 1 and 4; Observation of CWOT, 6. 75 Document from CWOT, 14. 76 Interview with staff of CWOT, 1. 77 Document from CWOT, 8. 78 Interview with group of staff of CWOT. 79 Interview with a group of staff from CWOT, 1; Documents from CWOT, 18 and 19.

The faith-based donor network of CWOT, with an emphasis on churches and individuals, means that they need to maintain a certain kind of non-structural antitrafficking work. FBOs are often seen as non-political and have in some regards fewer constraints from their donor base as long as they avoid what their donors view as contentious issues (Butcher and Hallward 2018). FBOs may want to avoid being associated with what their donors perceive as controversial political issues. 87 One NGO representative with insight into Christian FBOs, such as CWOT, critiques this avoidance of politics among the donor base of FBOs: "[...] the minute that you start to talk about community development [...] that's like communism."
88 She goes on to describe how individual donors of FBOs generally view human trafficking: "It's a clear moral evil, it's emotionally compelling and people just want to do something about it." 89 The combined effect is a pressure on CWOT to reduce the complexity of human trafficking to a black-and-white issue of good versus evil. In this donor environment, to get funding you need a compelling story, but the story needs to be adapted to the expectations and worldviews of the donors. 90 If CWOT presents an alternative agenda, one which is not in line with the expectations of their donor network, they risk disappointing them, which impedes the success of fundraising (Reynolds and Offutt 2013). This dynamic makes it difficult to get individual donors to support less dramatic types of interventions, such as activities addressing the structural issues that allow human trafficking to continue.
CWOT is also pushed toward reducing complexities due to their reliance on volunteers from their faith-based donor network, who influence the organization to a large degree. The influence of volunteers has been a problem, and therefore the FBO is attempting to limit the scope of the volunteers' influence: "We still depend on foreign volunteers, we have this dilemma of seeing the organization shifting with every new group that come through." 91 The volunteers also contribute to reducing the complexity of the solutions to human trafficking, as the volunteers are not as experienced, and when they go home, new volunteers need to be trained. CWOT manages the pressure from their donors to reduce complexity, with the complex reality of human trafficking, by advocating for individual survivors' rights to compensation, or other types of assistance from the authorities. 92 Thereby CWOT steers clear of addressing the structural causes of human trafficking. Advocating for the rights of individuals is not a sensitive political issue for the donor base of churches and individuals. Within the boundaries of the donors' simplified view on human trafficking, CWOT acknowledges the spiritual, emotional, social, financial, and physical challenges that the survivors need to overcome. As the leader of CWOT explains: It turns out to be a lot more complex and there's no one solution. So, you have to address it from many angles. If you only address one part of it […] it's not going to stand and they'll end up falling again and getting re-trafficked. 93 CWOT calls this a victim-centered approach, and this is compatible with maintaining a good relationship with Thai government officials: "The relationship that we have with [authorities] is very important. I saw them changing to [have] a victim-centered approach." 94 Successes in helping survivors are proudly communicated to donors: "[We] were able to assist [survivor] with her documents, with emergency needs, and with the time to heal her soul and body. Without intervention and assistance [she] would have died in Thailand." 95 Thus, CWOT focuses on individual-level aspects of human trafficking. While religion in antitrafficking is not viewed as inherently problematic, the FBO also directs criticism toward individuals and organizations who are too aggressive in their approach and calls them "Bible thumpers."
96 Attitudes to human trafficking within the church also need to be addressed, since partnering with churches is very important for CWOT. Sometimes, churches are hesitant to engage in antitrafficking due to judgmental attitudes toward women in the sex industry, and it is important for CWOT to contribute to changing these attitudes. 97 When marketing their handicraft products, or at their coffee shop, CWOT tones down their Christian identity, since they know that for some partners being too upfront with Christianity can be a deal breaker. 98 The FBO staff explains how stereotypical views of Christian antitrafficking work can sometimes hinder cooperation: "One of the reasons I don't want [Christian] labels is because people have stereotypes, they don't want to know the truth because it's easier to exclude us from the table." 99 To summarize, CWOT is challenged by explicit expectations to use faith in antitrafficking from their faith-based donor network, while at the same time encountering stereotypical views of Christians by secular actors. This happens parallel to the efforts by CWOT to challenge simplistic and judgmental views of human trafficking by their donors.
Conclusions
In this article, I have demonstrated how the specific composition of faith-based donor networks and the rules and expectations they create shape how the three FBOs do antitrafficking work. In particular, I have considered the implications for how the FBOs choose to emphasize faith in antitrafficking. The three FBOs have, within the broader pattern of faith-based donor networks, three distinct compositions of donors, creating different ruling relations concerning faith in antitrafficking.

92 Interview with staff from CWOT, 1. 93 Interview with staff from CWOT, 1. 94 Interview with group of staff from CWOT, 1. 95 Documents from CWOT, 1 and 2. 96 Interview with staff from CWOT, 5. 97 Interview with staff from CWOT, 2; Document from CWOT, 12. 98 Observation of CWOT, 4; Documents from CWOT, 7. 99 Interview with staff from CWOT, 1.
The donors of CCTP are other FBOs, primarily channeling institutional funds, and CCTP has the most homogenous donor portfolio of the three FBOs. The main basis for the relationship with donors is a shared faith identity; however, the sub-granting of institutional funds creates firm boundaries for faith in antitrafficking. CCTP resolves the problematics of faith in antitrafficking by steering clear of explicit proselytizing while drawing on religiously inspired values for positive change, and by highlighting the significant role of religion and religious leaders in countering human trafficking.
FUAT has the most diverse donor profile of the studied FBOs. The donors of FUAT are FBOs with and without access to institutional funds, churches, and a few secular donors. There are competing ruling relations on faith in antitrafficking arising from these diverse donor relationships. FUAT resolves these competing ruling relations by adopting an ambivalent and pragmatic faith identity, and an ambivalence toward the role of faith in antitrafficking. However, their faith identity preserves their religious literacy, which is activated in their antitrafficking work. This change in stance reveals how FBOs are influenced by ideas on the role of religion in development coming from Western secularism. The heterogeneous donor network also subjects FUAT to ruling relations on siloing antitrafficking efforts to certain groups, such as children or women, which they resolve by adapting communications about their work to their donors. This is the effect of donors reducing the complexity of human trafficking.
The donor network of CWOT largely consists of individuals and churches sharing their Christian worldview. While drawing from many different sources, the donor network is also clearly Christian, with a few exceptions. CWOT is to a high degree reliant on volunteers for fundraising and communications, as well as for implementation. Of the three FBOs, CWOT most strongly emphasizes faith in their antitrafficking work. Donors expect and encourage that beneficiaries of CWOT have spiritual experiences bringing them from darkness to light. The effect of these ruling relations is that CWOT intertwines Christianity with their antitrafficking work. One consequence of this dynamic is the difficulty of addressing structural matters relating to human trafficking, because doing so is not compatible with the donors' expectations, moral judgments, and understanding of human trafficking. These expectations, however, do not always match the realities on the ground. This problematic is resolved through focusing on a victim-centered approach.
The findings of this article concerning the subtle and manifest influences of donor networks, which vary depending on their composition, are important for policymakers, donors, and FBOs to reflect upon.A key question for policymakers and donors (and donor networks collectively) is whether the effects of their influences on FBOs and other antitrafficking actors are desirable.FBOs can draw on these findings to reflect on whether they realize the effect donors have on their identity and the role of faith in the antitrafficking work, and whether this change is desirable or not.
In conclusion, the article highlights the varied ways that specific donor networks create ruling relations that pull the FBOs in different directions concerning the role of faith in antitrafficking, and with regards to the design and focus of their work.The article has demonstrated how faith ideals and money interplay to shape the antitrafficking work of the FBOs.In particular, it sheds light on the importance of considering the varied and specific donor relations of FBOs to understand their antitrafficking practices.It is thus the FBOs' particular networks that provide them with internal as well as external incentives for action or inaction.In sum, it is about who you know, and in particular, which donors you know.
Table 1. Summary of cases.
Analysis of Rock Raw Materials Transport and its Implications for Regional Development and Planning. Case Study of Lower Silesia (Poland)
The movement of rock raw materials from source to demand areas is carried out predominately by road and railway transport. The latter is less damaging to infrastructure, the environment and society, and is cheaper over longer distances, but it is also less flexible and not widely used. The Lower Silesia region in southwestern Poland is an important producer of rock raw materials and the principal provider of igneous and metamorphic dimension stones and crushed rocks in the country. A multicriteria scoring scheme has been developed and applied to identify mines presently using road transport that are predisposed to switch to, or include, a railway form of transport. Four criteria have been proposed: C1—distance to railway loading point, C2—annual production of rock raw material, C3—economic reserves, and C4—type of rock raw material. The scoring scheme (classification) was developed based on the results of descriptive statistics for mines presently using railway or combined road and railway forms of transport. Two scenarios were analyzed, one with equal weights (0.25) and the other with higher significance of C1 = 0.40 and C2 = 0.30, and lower significance of C3 = 0.20 and C4 = 0.10. As a result, 24 mines were identified and ranked in terms of their potential to introduce railway transport. The proposed methodology can be used universally for other regions and countries, and the results will be included in drawing up regional spatial development policies.
Introduction
Rock minerals such as dimension rocks and crushed stones (DSCR), as well as sands and gravels, are raw materials that are considered to be key resources that enable the proper functioning of the economy and satisfy the living standards of society. Demand for these rock raw materials, which, due to their properties, are used in the construction industry, including buildings, roads and railroads, is related to economic growth. According to a study by the British Geological Survey, the production of DSCR between 2013 and 2017 increased by 8.6%, from 1,110,895.7 thousand Mg to 1,206,066.4 thousand Mg, in European countries [1]. In 2017, Poland was the seventh largest producer of dimension stones and crushed rocks, and the third largest of all aggregates.
Source areas of these rock raw materials (this term is used in our paper to cover all types of rock mineral raw materials) are conditioned by geology, and are usually unevenly distributed across a given area (e.g., a region or a country). This is especially true for magmatic, metamorphic and sedimentary dimension stones and crushed rocks. Deposits of sands and gravels are more common and evenly distributed. In contrast, demand areas for these rock raw materials, urban areas and transport infrastructure construction sites, may be located at considerable distances, even hundreds of kilometers away from available sources. The need to move these materials from source to demand areas exerts pressure on existing transport networks, roads and railways, which are the typical means of transport for rock raw materials. Road transport, using tipper and semitrailer tipper trucks, is usually used for shorter distances (tens of kilometers), whereas rail transport is used for longer distances (hundreds of kilometers). The factors determining the type of transport used include: the structure of supply and demand for rock raw materials, the availability of a given transport infrastructure, and the cost of transport (usually given per km). Comparative analysis of costs for road and railway transport of rock raw materials was carried out by Gawlik et al. [2] and Kryzia and Kopacz [3], whereas Nowakowski et al. [4] and Chęciński [5] focused on the problem of logistics of road transport and the optimization of road transport routes. Beuthe et al. studied demand for different transport modes (road, rail and inland water) for ten groups of commodities, including mineral and building materials [6]. Łochańska and Stryszewski studied the demand and supply structure and concluded that in the case of rock raw materials, transport costs are higher than mining costs, with geological settings determining the locations of rock raw material quarry operations [7,8].
Blachowski analyzed the magnitude of rock raw material road transport sources in Lower Silesia (Poland) [9], and Kendal et al. assessed the energy and environmental costs of cement production in the USA, comparing large mining operations (megaquarries) with smaller scattered mines, and found that a transition to such megaquarries increases these costs by up to 50% [10]. Elsewhere, Andrés and Padilla analyzed the energy intensity of transport for various types of commodities in Spain, including minerals and building materials [11]. A different aspect was analysed by Robinson and Kapo, who investigated potential locations for recycled aggregates in relation to natural aggregate sites, the transport network and population density [12]. Hill applied distance to road and railway networks as criteria to analyze and map rock aggregate opportunity areas over New Zealand [13]. Generally, railway transport of low-value, high-volume commodities such as rock raw materials is less conflictual than road transport. The latter is known for excessive levels of pollution and noise, damage to public roads, safety risks (accidents) and increased traffic. These factors generate conflicts with local communities.
The Lower Silesia region in Poland is the principal supplier of magmatic and metamorphic dimension stones and crushed rocks and is a major producer of sands and gravels that are transported to demand areas within the region as well as to other parts of the country. Thus, this study has two aims: first, to statistically and spatially assess the scale of road and rail transport of rock raw materials in the Lower Silesia region of Poland, and second, to propose and apply a method for the identification and selection of rock raw material quarrying operations predisposed to change to rail or combined road and rail forms of transport. The proposed method could be applied universally in other regions and countries.
Description of Study Area
Lower Silesia is one of the 16 voivodeships (highest-level administrative units) of Poland, located in the southwestern part of the country and bordering the Czech Republic (in the south) and Germany (in the west). The region has an area of 19,946.7 sq. km, 29.6% of which is covered by various forms of forest. Nature protection areas constitute 18.2% of the region's total surface, and when Natura 2000 sites are included this percentage rises to 35% [14]. The mining of nonferrous metals (copper and silver ore), energy minerals (brown coal) and numerous rock raw materials (many unique to the country) is an important part of the region's developing economy, which is one of the strongest in Poland.
Geology and Mining of Rock Raw Materials
The geological structure of Lower Silesia is mosaic and varied. This is the result of a polyphase geological evolution that lasted from the upper Proterozoic up to the Quaternary. The main geologic-tectonic structures run from the northwest to the southeast, and are the Fore-Sudetic Monocline (in the north of the region), and the Fore-Sudetic Block and the Sudetes Mountains in the south, separated from the Block by the Sudetic Marginal Fault [15]. Some studies also suggest that the latter two are one structure, the Sudetic Block [16]. The extent of the three main geological units has been shown in Figure 1.
The Sudetes Mountains are composed of various igneous, metamorphic and sedimentary rocks dating from the Precambrian to the Cenozoic. These rocks build numerous smaller tectonic units separated by faults, and together form a mosaic geological surface. The Fore-Sudetic Block consists of two structural levels. The older one is built of metamorphic and igneous rocks. It is partly covered by deposits of sedimentary rocks making up the younger structural level. The unit is also characterized by a mosaic composition, and numerous secondary elements have been identified there, including gabbro, granite and serpentinite massifs, metamorphic and other structures [15]. The Fore-Sudetic Monocline is composed of thick layers of Permian-Mesozoic origin lying unconformably on a folded Paleozoic subsurface. The Permian-Mesozoic deposits generally dip at an angle of several degrees towards the north and northeast. This geological context is the reason for the rich and diversified mineral resources documented in Lower Silesia. The dimension stone and crushed rock deposits are predominately associated with rock massifs of the Sudetes and the Fore-Sudetic Block parts of the region. The sand and gravel deposits are more evenly distributed and associated predominately with Quaternary deposits, the most valuable of which are documented in fluvial formations of major river valleys and in glacial formations [17]. There are 258 documented deposits of all types of dimension stones and crushed rocks (132 with active mining permits), 477 documented deposits of sands and gravels, and numerous deposits of other industrial minerals, such as bentonite, feldspar, kaolin, white burning clays and glass sands.
The region is the primary zone of dimension stones and crushed rocks of both igneous and metamorphic origin, with 77% of national economic reserves and 90.5% of annual production [18]. Lower Silesia holds most of the national resources of igneous rocks such as basalt, granite, melaphyre and porphyry, as well as the only resources of gabbro and syenite in the country. The same can be said about metamorphic rocks, with the largest amounts of amphibolite, serpentinite, hornfels, migmatite and marble [19]. Out of 231 deposits of igneous and metamorphic dimension stones and crushed rocks in Poland, 204 have been documented in Lower Silesia [18].
The Lower Silesia region provides approximately 45% of all DSCR in the country and between 85% and 100% of the different types of igneous and metamorphic DSCR. Mining of sands and gravels provides between 7% and 8% of national consumption annually, whereas other rock raw materials, such as bentonite, white burning ceramic clays and refractory clays, schists, magnesites, kaolin and feldspar, are mined only in Lower Silesia.
Railway Infrastructure
The length of the Polish railway network is 19,132 km, having decreased by 26% between 1991 and 2016. This trend is similar to other European countries, such as France and Germany. However, in the same period, 4500 km of railways have been modernized through the construction of second tracks, and another 500 km have been electrified [20]. The average density of the railway network in Poland is 6.2 km per 100 sq. km. The length of railway lines in Lower Silesia is 1763 km and their density is 8.8 km per 100 sq. km (second position in the country) [20]. The original length of this network was over 2900 km [21]. In terms of infrastructure condition, 58% of the railway network in Lower Silesia is classified as good, 21% as satisfactory and 21% as unsatisfactory [22]. Presently, there are 24 railway sidings used by rock raw material mines and 17 railway loading points where rock raw materials are transported over a short distance from the mines using trucks. The self-government of Lower Silesia is in the process of acquiring and modernizing approximately 400 km of abandoned national railway lines, predominately with the intention of passenger transport [21]. The railway network, location of rock raw material loading infrastructure, and mines with active permits are presented in Figure 2. The average distance of commodity freight in Poland is 238 km [23]. The total volume of commodities transported in 2018 amounted to 250,000,000 Mg, of which 52,000,000 Mg (approx. 20.8%) constituted rock raw materials. The freight of rock raw materials increased by 8,000,000 Mg compared to 2017 due to greater demand and new investments in infrastructure [24].
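The densities and shares quoted above follow directly from the length, area and tonnage figures given in the text; a quick arithmetic check (all figures taken from this section):

```python
# Figures from the text above
rail_km_ls = 1763        # railway length in Lower Silesia, km
area_sqkm_ls = 19946.7   # area of Lower Silesia, sq. km

# Network density in km of track per 100 sq. km
density = rail_km_ls / area_sqkm_ls * 100
print(round(density, 1))  # 8.8

# Share of rock raw materials in total 2018 freight
total_freight_mg = 250_000_000   # all commodities, Mg
rock_freight_mg = 52_000_000     # rock raw materials, Mg
share = rock_freight_mg / total_freight_mg * 100
print(round(share, 1))  # 20.8
```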
Materials and Methods
The primary aim of this research is to analyze and assess the volume of road transport of rock raw materials within and from Lower Silesia and the potential of railways to transport these commodities. In general, the proposed methodology involved four steps: (1) data collection, (2) data validation and statistical analysis, (3) multicriteria analysis and (4) mapping and interpretation of results. The scheme of the research methodology and the techniques used in the study is shown in Figure 3. Input data on the production of rock raw materials was acquired from the Polish Geological Institute database published annually in the Polish Minerals Yearbooks [18]. Data on rock raw materials transport from mines was collected through questionnaires and interviews (personal, email and telephone). Information on roads that are heavily used for truck haulage of rock raw materials was acquired by querying local authorities responsible for road infrastructure and mining companies. All 30 administrative units (poviats) and all the operating rock raw material mines were examined. The information on the railway network and rock raw material sidings and loading points (in operation and potential) was obtained from regional authorities (the Marshal Office and the Institute for Territorial Development), as well as the national Office of Rail Transport.
The following analytical techniques were used to prepare, process and analyze the data: desk research, descriptive statistics, geospatial mapping and spatial analysis in a geographical information system (GIS), as well as multicriteria scoring and ranking techniques.
Statistics describing the production and transport of rock raw materials were presented in the form of graphs and tabular summaries. The following statistics: maximum, minimum, mean and median values, as well as upper and lower quartile values, were calculated and used to derive a scoring system for the criteria determining the potential of rock raw material mines to introduce railway transport.
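The quartile-based derivation of score classes described above could be sketched as follows; the production figures for rail-using mines below are invented for illustration only, not taken from the study's dataset:

```python
import statistics

# Hypothetical annual production (thousand Mg) of mines already using rail
production = [180, 420, 650, 900, 1300, 2100, 2800]

# Lower quartile, median and upper quartile as class breaks
q1, q2, q3 = statistics.quantiles(production, n=4)

def production_score(p):
    """Three-point score for a criterion (here, annual production),
    classed by the quartiles of the reference (rail-using) mines."""
    if p < q1:
        return 1        # below the lower quartile
    elif p <= q3:
        return 2        # between lower and upper quartile
    return 3            # above the upper quartile

print(production_score(300))   # 1
print(production_score(1000))  # 2
print(production_score(2500))  # 3
```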
The assessment of mines using road transport of rock raw materials only that could potentially include or switch to rail transport was based on a scoring procedure that included the following four Sustainability 2020, 12, 3165 6 of 14 criteria: (1) distance to railway loading point, (2) annual production of rock raw materials, (3) size of available economic reserves (prognosed lifetime of a mine), (4) type of rock raw material mined. Each criterion was assessed on a three points scale where one is the lowest score and three is the highest score. In addition, value of zero was used to indicate mines that do not meet a given criterion. To be considered in the final ranking a mine had to obtain at least one point in each criterion. The following formula was applied to calculate the score (1): where, S-total score for mine "i", C-criterion "k", w-weight of criterion "k" n-number of criteria. Classification for criteria 1 and 2 was based on the results of statistics calculated for mines using railway or a combination of road and railway transport. Classification for criterion 3 was based on the amount of available economic reserves.
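As a minimal illustration of the scoring procedure described above, the weighted sum of formula (1) and the qualification rule (at least one point in every criterion) can be sketched as follows; the criterion scores used here are hypothetical, not taken from the study's data.

```python
# Sketch of the weighted multicriteria score S_i = sum_k(w_k * C_ik).
# A mine is dropped from the ranking if it scores 0 in any criterion.
# Scores and weights below are illustrative only.

def weighted_score(scores, weights):
    """Return the total weighted score, or None if any criterion is 0."""
    if any(c == 0 for c in scores):
        return None  # mine does not qualify for the final ranking
    return sum(w * c for c, w in zip(scores, weights))

# Equal weights (as in scenario A): four criteria, each weighted 0.25.
weights_a = [0.25, 0.25, 0.25, 0.25]

print(weighted_score([3, 2, 3, 2], weights_a))  # 2.5
print(weighted_score([3, 0, 3, 2], weights_a))  # None (fails criterion 2)
```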
Two scenarios were analysed, in the first one each criterion had equal weight, whereas in the second scenario the weights were differentiated. Based on the final weighted score, a ranking of mines using road transport only was developed and presented as a list.
During the study, geospatial database and GIS analytical functions were used to determine the distance from mines to railway loading points in two ways, as a straight-line distance and along an existing road network. GIS was also used to map statistics such as length of roads presently laden with truck haulage of rock raw materials, total annual production of rock raw materials in all mines, and annual production of rock raw materials in mines using road transport only. These statistics were calculated and presented for middle (poviat) level administration units, where each one was assigned statistical values stated above and represented on thematic maps (proportional symbol maps).
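The straight-line distances were computed with GIS functions; as a rough stand-in, a great-circle (haversine) calculation gives the same kind of figure, and the reported ~30% average detour can be applied to approximate the along-network distance. The coordinates below are purely illustrative.

```python
# Minimal sketch of a straight-line (great-circle) distance computation.
# In the study this was done in GIS; the along-network distance came from
# a road dataset and was on average ~30% longer. Coordinates are made up.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

straight = haversine_km(50.90, 16.30, 50.95, 16.40)  # hypothetical mine -> loading point
network = straight * 1.3  # applying the ~30% average detour factor reported above
print(round(straight, 1), round(network, 1))
```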
Analysis of Rock Raw Material Transport
There were 213 active rock raw material mines in Lower Silesia in 2018. Among them, 85 exploit deposits of dimension stones or crushed rocks, 105 are sand and gravel pits, and 23 mine other rock raw materials (gypsum and anhydrite-2, ceramic and refractory clay minerals-7, limestones and marls for the cement industry-3, feldspar-2, kaolin-1, dolomite-1, magnesite-1, mica schist-1).
The maximum reported production from a single mineral deposit was 2,676,000 Mg and the minimum 1,000 Mg, with a mean value of 256,595.6 Mg and a median value of 69,880 Mg. However, the largest quarry produced 3,700,000 Mg of crushed rocks in total (2,676,000 Mg of migmatite and 1,024,000 Mg of amphibolite).
Based on the results of desk research and inquiries, there are 37 mines in the region using rail transport of rock raw materials and 26 mines using a combined form of transport, i.e., road transport to the railway loading point and rail from there (among these, one mine uses the combined system only occasionally and has not been included in further statistics for this type of transport). The remaining 150 quarries and sand and gravel pits use road transport only. General statistics describing the production and transport of rock raw materials are shown in Tables 1 and 2. The statistics in Table 1 are aggregated by the type of transport used, and the statistics in Table 2 are given by group of rock raw materials and transport type. The graph in Figure 4 presents production values for mines using railway transport and those using road transport to a nearby railway loading point in descending order, whereas the graph in Figure 5 shows the same statistic for mines using road transport only.
Figure 6 presents the calculated distances to the nearest railway loading points, both in a straight line and along the actual route, for the 25 mines that combine road and rail transport. The latter is on average 30% longer. The descriptive statistics for the analysed 25 cases, calculated from information collected from mines and local geological and mining authorities, are presented in Table 3. The distances calculated along the actual road network range from 2.1 km to 30.2 km, with a mean value of 8.68 km and a median value of 8.0 km. Five sites transport rock raw materials for more than 10 km to the railway loading point, as shown in Figure 6, and 11 sites for more than 8.0 km (the median value).
Figure 7 presents graphically, on four proportional symbol maps, the total amount of all rock raw materials produced in each poviat of the region (Map A), the amount of rock raw materials produced in mines using road transport only (Map B), the approximate length of roads heavily used for the transport of rock raw materials in Lower Silesia's poviats (Map C), and the actual location and size of rock raw material mines using road transport only (Map D).
Comparing Maps B and C, there is a visual relationship between the poviats with the greatest production of rock raw materials in mines using road transport only and the total length of roads indicated as overloaded with the transport of these commodities. In addition, Maps C and D show that the production and road transport of rock raw materials are concentrated in some parts of the region (in single large mining operations or clusters of medium-sized mines). These results provide graphical and statistical information on the intensity of rock raw material quarrying and transport in Lower Silesia, as well as background information for the multicriteria analysis of the potential of railways to take over a share of rock raw material transport.
Analysis of Conditions Suitable for Rail Transport of Rock Raw Materials
The statistics obtained in part 4.1 were used for the classification of criteria and the determination of scores assigned to each of the analysed mine operations. The classes and the associated scores for the criteria are shown in Table 4. For criterion 1-distance from an existing or potential railway loading point-the class intervals and the associated points were determined from the descriptive statistics obtained in the previous stage. The maximum points (3) were assigned to mining sites with a railway loading point at a distance of less than 6.0 km, while 0 points were given to mines located more than 9.5 km from a loading point.
For criterion 2-annual production-the class intervals and the associated points were also determined from descriptive statistics calculated in the previous stage. The maximum points (3) were assigned to mines with an annual output of more than 900,000 Mg and 0 points to mines with annual output of less than 100,000 Mg. These correspond roughly to the third (upper) quartile (Q3) and first (lower) quartile (Q1), respectively, for active mines using rail or combined road and rail means of transport, as shown in Table 1. This also reflects the capacity of trains used for transport of rock raw materials. This issue is further elaborated in the latter part.
For criterion 3-economic reserves in place-the class intervals and the associated points were determined from descriptive statistics. The following values were obtained for quarrying operations using road transport only: mean 5,451,574 Mg, max. 64,343,000 Mg, min. 82,000 Mg, median (Q2) 2,077,000 Mg, Q1 642,000 Mg and Q3 5,016,000 Mg. Thus, maximum points (3) were assigned to mines with economic reserves of more than 5,000,000 Mg and 0 points to mines with economic reserves of less than 100,000 Mg.
For criterion 4-type of rock raw material mined-the maximum points were assigned to rock raw mineral deposits of national or regional importance, 2 points to other crushed rocks deposits, 1 point to sand or gravel deposits and 0 to other mineral deposits.
Two scenarios were analysed. In the first one (scenario A), all four criteria had equal weights (0.25); in the second scenario (B), criterion 1 had the greatest weight of 0.4, criterion 2 had a weight of 0.3, criterion 3 had a weight of 0.2, and criterion 4 had a weight of 0.1. The weights of the criteria were adopted based on the review of the literature presented in the introductory part of this paper [2-5,7,8,12,13,25] and the discussion regarding quarry operations. The first two criteria were discussed the most frequently and indicated as the most significant. Therefore, the distance to available infrastructure and the annual output were weighted higher than the other two criteria, i.e., economic reserves and type of rock.
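A small hypothetical example shows how the two weighting schemes can reorder mines that have the same criterion scores; the mine labels and scores here are made up for illustration and do not correspond to the ranked mines in Table 5.

```python
# Illustrative comparison of the two weighting scenarios described above
# (A: equal weights of 0.25; B: 0.4/0.3/0.2/0.1 for criteria C1..C4).
# Mine labels and criterion scores are hypothetical.
weights = {"A": [0.25, 0.25, 0.25, 0.25], "B": [0.4, 0.3, 0.2, 0.1]}
mines = {"h": [2, 2, 3, 3], "x": [3, 3, 1, 1]}  # scores for criteria C1..C4

def total(scores, w):
    """Weighted sum of criterion scores."""
    return sum(wk * ck for wk, ck in zip(w, scores))

for scenario, w in weights.items():
    ranked = sorted(mines, key=lambda m: -total(mines[m], w))
    print(scenario, ranked, [round(total(mines[m], w), 2) for m in ranked])
# Scenario A favours 'h' (2.5 vs 2.0); scenario B, which weights C1 and C2
# higher, favours 'x' (2.4 vs 2.3) -- the kind of reordering noted in Table 5.
```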
Analysis of the Potential for Railway Transport of Rock Raw Materials
Out of the 150 considered mines using road transport only, 24 scored at least one point in each criterion and were included in the final ranking presenting the potential of introducing railway transport. The greatest number of deposits was rejected due to low annual production (104), and the smallest number because of the type of rock (18). The results, sorted from the highest to the lowest final weighted score for mines named alphabetically 'a' to 'y', are presented in Table 5. Among these, 18 represent sand and gravel operations, four represent dimension stone and crushed rock operations, and two represent other rock raw materials. The maximum theoretical score in both scenarios was three. The calculated maximum score in scenario A was 2.5 (mines 'o', 'y' and 'h') and the calculated minimum score was 1.5 (mines 'i' and 'm'). The median score was 2.0, whereas the values of the upper and lower quartiles were 2.25 and 1.75, respectively. The maximum calculated score in scenario B was higher than that in scenario A at 2.8 (mines 'o' and 'y'), and the calculated minimum value was 1.6 (mines 'i' and 'm'). The median score for scenario B was 2.2, whereas the values of the upper and lower quartiles were 2.3 and 2.0, respectively. Four mines were ranked differently in scenarios A and B. In scenario B, with a greater influence of the distance to the potential railway loading point and a greater influence of annual production, mine 'x' was ranked one position higher, whereas mines 'h', 'e' and 'j' were ranked lower. Mine 'x' moved up in the ranking because of the smaller weight of criterion 4 (type of rock raw material), whereas mines 'h' and 'e' dropped in the final ranking due to low scores and the higher weight of criterion 2 (annual production). Mine 'j' fell in the final ranking because of low scores in the first two criteria (C1 and C2). Otherwise, the results of both analyses were consistent.
Taking into account the greater importance of the distance to the potential loading point and the annual output, as well as the statistics (median and upper quartile values) of the multicriteria ranking (scenario B), the feasibility of introducing railway transport of rock raw materials could be analysed for the top 11 mines, five of which ('h', 'c', 'd', 'f' and 'g') on the condition of increased predicted production. The three mines with the highest weighted scores are large sand and gravel operations with an annual output between 750,000 Mg and 1,000,000 Mg. This group also includes sand and gravel pits with annual productions of up to 600,000 Mg and dimension stone and crushed rock mines with productions between 150,000 Mg and 350,000 Mg. The distance to potential loading points in all these cases is less than 6 km.
The standard capacity of a train carrying rock raw material is 2,400 Mg, calculated as the product of the standard number of wagon tipplers (40) and their capacity (60 Mg), or 1,600 Mg if smaller wagon tipplers (40 Mg) are used. The capacity of road trucks, i.e., gross vehicle mass (GVM), varies from 3.5 Mg (3,500 kg) for small trucks to 16-20 Mg (16,000-20,000 kg) for larger self-unloading hopper trucks with three axles, as shown in Photo 1, and even 25-30 Mg for the largest four-axle trucks. The small vehicles are used for short distances and local purposes (small volumes), while the larger trucks are used for distances that may even exceed hundreds of kilometers. The use of the larger and largest trucks may often be prohibited due to regulations on the permissible load capacity of public roads. The advantages of road transport include flexibility, i.e., the ability to deliver rock raw materials directly to a construction site, and lower costs over shorter distances. Railway transport greatly increases the amount of rock raw materials that can be transported, and its costs become competitive with road transport as the distance increases. Rail transport, however, involves additional costs such as train parking fees, and unless a mine has its own siding, the rock raw material has to be delivered to the railway loading point by road transport. Gawlik et al. estimated that, for road vehicles with a 16 Mg capacity, the cost of road transport is lower than that of railway transport for distances of up to 109 km, and that the cost advantage of railway increases beyond this distance [2]. Kryzia and Kopacz assessed this distance to vary between 113 km and 324 km depending on the assumed cost criteria [3]. An early study by French [25], for the case of Indiana (USA), determined road transport to be the cheapest up to a distance of 35 miles (approx. 56 km), railroad transport competitive for distances of up to 230 miles (approx. 370 km), and inland water transport (barge), wherever available, the cheapest above this value.
All of these studies point out that comparative cost calculations are sensitive to numerous factors, such as fuel price fluctuations, toll road charges, legal regulations regarding the permissible load capacity of road vehicles, or price discounts that can be offered by railway freight operators. Thus, railway transport may be competitive even at distances shorter than 100 km, especially when large volumes of rock raw materials need to be delivered. It is estimated that the construction of, for example, 1 km of new road requires up to 30,000 Mg of rock raw materials, which translates into 1,200 trips of road trucks with a capacity of 25 Mg [4].
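The truck-trip arithmetic quoted above, and the equivalent number of standard train sets, can be checked with a quick calculation:

```python
# Back-of-the-envelope check of the haulage figures quoted in the text:
# ~30,000 Mg of rock raw materials per 1 km of new road, moved either by
# 25 Mg trucks or by standard trains of 40 wagon tipplers x 60 Mg each.
demand_mg = 30_000        # Mg per km of new road [4]
truck_capacity = 25       # Mg, largest four-axle trucks
train_capacity = 40 * 60  # Mg per standard train set (2,400 Mg)

truck_trips = demand_mg / truck_capacity
train_trips = demand_mg / train_capacity
print(int(truck_trips), round(train_trips, 1))  # 1200 truck trips vs 12.5 trains
```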
In addition, the results of our investigations have indicated that even if a mine uses railway infrastructure, a portion of its production is still transported by road vehicles, so road haulage will not be replaced entirely by railway. For example, the results of the query of individual mining operations described in the Methodology section indicate that approximately 40% of mines with railway infrastructure carry between 67% and 75% of their production by rail, 50% use rail and road transport equally, and approximately 10% use rail to transport up to 35% of their production.
The presented study and its results are likely the first attempt to quantitatively assess the potential to change road transport to road and railway transport of rock raw materials based on a set of weighted criteria. The results form a foundation for further feasibility studies, where further factors could be investigated such as the condition of the connecting roads, topography, accessibility to loading points, the proportion of production to be transported beyond boundaries of the region, etc.
Conclusions
In this research, the current situation regarding the transport of rock raw materials in the Lower Silesia region of Poland, the principal provider of DSCR and a significant producer of sands and gravels, has been investigated and statistically described based on queries of public administration, mining and transport authorities, as well as of all the active mining operations.
It has been established that the majority of rock raw material mines use road transport only (150), 26 mines combine road and railway transport, and 37 use railway infrastructure on mine premises. Road transport accounts for 70% of active mine production and exerts excessive pressure on the condition of road infrastructure, transport safety and environmental quality.
Therefore, a method has been proposed to identify mining operations that could consider the introduction of a combined form of transport: road haulage to a railway loading point and rail from there. The results also constitute a knowledge base for local and regional authorities willing to introduce measures aimed at reducing the share of road transport of rock raw materials.
To determine the potential of railway transport, four criteria and two scenarios were proposed and analysed. The criteria included: the distance to the railway loading point, the annual production of rock raw materials, the available economic reserves (prognosed lifetime of a mine), and the type of rock raw material. Among the 24 active mines producing rock raw materials not only for local (domestic) purposes, 11 have been ranked as the best suited to introduce this new form of transport.
The results demonstrate a multicriteria suitability ranking method that could be used universally for other areas of interest.
[Open-access article (CC-BY), published 2018-08-29; doi:10.1371/journal.pone.0203090]
Evolutionary history and spatio-temporal dynamics of dengue virus serotypes in an endemic region of Colombia
Dengue is a prevalent disease in Colombia and all dengue virus serotypes (DENV-1 to -4) co-circulate in the country since 2001. However, the relative impact of gene flow and local diversification on epidemic dynamics is unknown due to heterogeneous sampling and lack of sufficient genetic data. The region of Santander is one of the areas with the highest incidence of dengue in Colombia. To provide a better understanding of the epidemiology of dengue, we inferred DENV population dynamics using samples collected between 1998 and 2015. We used Bayesian phylogenetic analysis and included 143 new envelope gene sequences from Colombia, mainly from the region of Santander, and 235 published sequences from representative countries in the Americas. We documented one single genotype for each serotype but multiple introductions. Whereas the majority of DENV-1, DENV-2, and DENV-4 strains fell into one single lineage, DENV-3 strains fell into two distinct lineages that co-circulated. The inferred times to the most recent common ancestors for the most recent clades of DENV-1, DENV-2, and DENV-4 fell between 1977 and 1987, and for DENV-3 was around 1995. Demographic reconstructions suggested a gradual increase in viral diversity over time. A phylogeographical analysis underscored that Colombia mainly receives viral lineages and a significant diffusion route between Colombia and Venezuela. Our findings contribute to a better understanding of the viral diversity and dengue epidemiology in Colombia.
Introduction
Dengue disease is highly prevalent in tropical countries due to climate, population growth, unplanned rapid urbanization and increased travel and trade [1,2]. Consequently, the global burden of dengue disease is high: a total of 58.4 million (23.6 million-121.9 million) apparent cases in 2013 and 1.14 million (0.73 million-1.98 million) disability-adjusted life-years [3]. Dengue is caused by four closely related viruses referred to as serotypes (DENV-1, DENV-2, DENV-3, and DENV-4), which are further subdivided into genotypes [4]. A higher risk of severe dengue is attributed to the co-circulation of multiple serotypes, due to antibody-dependent enhancement of infection [2], and to some particular strains [5]. Therefore, documenting serotype prevalence and the dynamics of genetic variants helps forecast epidemic impact and supports epidemic management and preparedness.
A total of six Colombian states (out of 32) accumulate more than half of the cases in the country: Valle del Cauca (19%), Santander (12%), Antioquia (8%), Norte de Santander (7%), Huila (6%), and Tolima (6%) [7,9,12]. The region of Santander is located in the northeast of Colombia and comprises the states of Santander and Norte de Santander. Previous studies reported changes in the predominance of serotypes over time in this region [11,[13][14][15]. The argument over whether there is a genetic basis or there are random fluctuations that explain the temporal distribution of serotypes has not been settled yet.
Previous studies [16][17][18][19] that focused on the molecular epidemiology of dengue in Colombia included sequence data collected up to 2008 and did not include DENV-4. Consequently, many aspects of the local epidemiology remain unclear. In the present study, we obtained 143 new envelope gene sequences from serum samples collected in the region of Santander and six sequences from serum samples collected in other regions between 1998 and 2015. We provide insights into the evolution and population dynamics of all four dengue serotypes in the region of Santander, which help in understanding dengue epidemiology in Colombia and might be relevant for future control programs, including vaccination.
Study sites
Serum samples were collected from the metropolitan area of Bucaramanga in the state of Santander and the province of Ocaña in the state of Norte de Santander; these constitute the region of Santander, located in the central northern part of Colombia [Fig 1]. Bucaramanga is the capital city of Santander and, together with three nearby municipalities, constitutes the seventh largest metropolitan area of the country (around 1.4 million inhabitants). It has an average altitude of ~1,200 meters above sea level, an annual mean temperature of 25˚C and an average annual precipitation of 1,041 mm. The city's urban mass transportation system moves over 100,000 passengers daily and extends to the metropolitan area [20]. Moreover, Bucaramanga is a transportation hub for the northeast of Colombia and has a bus terminal from which 1.4 million passengers commute to diverse regions of the country on a yearly basis [21]. Bucaramanga's airport has national and international traffic operations and mobilized more than 1.2 million people in 2010. At least 94% of dengue cases from Santander were reported in the metropolitan area of Bucaramanga [7,9], which was one of the participant cities in the CYD-TDV dengue vaccine clinical trial [22].
The province of Ocaña is a conurbation of 10 municipalities at an altitude of 2,065 meters above sea level in the state of Norte de Santander. Ocaña is its capital and third largest municipality (around 98,229 inhabitants in 2014). This city has an annual mean temperature of 22˚C and an average annual precipitation of 1,000 mm. Cucuta, the capital city of Norte de Santander, borders Venezuela, which makes it an important commercial city. Ocaña is located at a distance of 197 km (122 mi) from Cucuta and has a small airport with regular flights only to Cucuta [23].
Viral strains
DENV strains were obtained from the collection of the Laboratorio de Arbovirus, Centro de Investigaciones en Enfermedades Tropicales (CINTROP), Universidad Industrial de Santander, Bucaramanga. Viruses were isolated by culturing in C6/36 mosquito cells from patient sera collected in previous studies [11,14,15,24,25]. Sera were collected either for routine dengue laboratory diagnosis at medical institutions-for which an Institutional Ethics Committee approval was not required-or collected from patients enrolled in cross-sectional clinical trials, which were approved by the Research Ethics Committee of the Universidad Industrial de Santander. In the latter, an informed consent from each patient was obtained. All virus samples were analyzed anonymously. We selected samples to represent years in which each virus serotype was recorded between 1998 and 2015: 137 samples from the region of Santander (60 DENV-1, 33 DENV-2, 39 DENV-3, and 11 DENV-4) and six samples that were available from the Valle del Cauca, Bolivar, and Cesar states (southwestern and northeastern regions).
Full-length envelope gene (E-gene) sequencing
Viral RNA was extracted directly from supernatants of DENV-infected C6/36 cells using the QIAamp Viral RNA mini kit (Qiagen). The RNA was transcribed to cDNA using RevertAid reverse transcriptase (Thermo Scientific, USA) and random hexamer primers (Thermo Scientific). The full E-gene (~1,485 bp) was amplified by PCR using DENV serotype-specific primers [S1 Table] and Thermo Scientific Taq DNA Polymerase according to the manufacturer's instructions. Each DNA amplification reaction produced overlapping amplicons of 863-1520 nucleotides in length. Amplicons were sequenced using a commercial Sanger sequencing service (Macrogen DNA Sequencing Service, Seoul, Korea). Sequence assembly was performed with CLC Genomics Workbench 4.5 (CLC Bio, Denmark) and sequences were deposited in GenBank [S2 Table].
Data selection
Available full-length E-gene accessions with known location and sampling date were retrieved from GenBank (last accessed in March 2017). The total dataset (n = 9669) was aggregated by serotype (3676 DENV-1, 2925 DENV-2, 1746 DENV-3 and 1322 DENV-4 sequences) and analyzed separately. Identical sequences from the same country and year were removed using the UCLUST algorithm in the USEARCH v.10.0.240 software [26]. The filtered datasets (3452 DENV-1, 2789 DENV-2, 1645 DENV-3 and 1100 DENV-4 sequences) were combined with the novel Colombian isolates, aligned using MUSCLE v.3.8 [27] and visualized using BioEdit v.7.2.5 software [28]. Preliminary phylogenetic analyses were done using Maximum Likelihood (ML) methods with non-parametric bootstrap in PhyML v.3.1 software [29] in order to identify genotypes and to focus the analysis on the genotypes observed in Colombia. Sequences from the same year were downsampled when they fell within a monophyletic group by country and were overrepresented. The resulting datasets (DENV-1 = 118, DENV-2 = 118, DENV-3 = 66 and DENV-4 = 77) contained only sequences from the Americas because the phylogenetic trees showed a single introduction of each genotype into this region from non-American countries.
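The UCLUST-based filtering step removes sequences identical to one already sampled from the same country and year; a much-simplified sketch of that idea (exact string matching on hypothetical records, rather than the actual USEARCH clustering) is:

```python
# Simplified stand-in for the UCLUST filtering step: drop sequences that
# are identical to one already kept from the same country and year.
# Records are hypothetical (id, country, year, sequence) tuples.
def dedupe(records):
    seen, kept = set(), []
    for rec_id, country, year, seq in records:
        key = (country, year, seq)
        if key not in seen:
            seen.add(key)
            kept.append(rec_id)
    return kept

records = [
    ("s1", "Colombia", 2005, "ATGCC"),
    ("s2", "Colombia", 2005, "ATGCC"),   # identical, same country/year -> dropped
    ("s3", "Venezuela", 2005, "ATGCC"),  # same sequence, different country -> kept
]
print(dedupe(records))  # ['s1', 's3']
```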
A Likelihood Mapping Analysis in TREE-PUZZLE v.5.3 [30] software and a substitution saturation analysis in DAMBE v. 6 [31] showed that there was enough phylogenetic information in each dataset [S1 Fig]. All datasets were free of recombination signal following a pairwise homoplastic index test using SplitsTree v.4.12.6 [32] (pairwise homoplastic index 0.5 for each dataset).
Data analysis
Haplotype diversity and nucleotide diversity were calculated using the software DnaSP v.5 [33]. For each dataset, the best-fit model of nucleotide substitution was selected based on the Akaike Information Criterion (AIC) using the APE package v.3.4.2 [34] and the R v.3.3.0 software [35]; the Tamura-Nei model with invariant sites and discrete gamma rate variation was the best fit for DENV-1 and -2, and the generalized time-reversible model with invariant sites and discrete gamma rate variation was the best fit for DENV-3 and -4. Regression of root-to-tip genetic distance against sampling time [S3 Table] using TempEst v.1.5 [36] showed that there was sufficient temporal signal in each dataset to proceed with phylogenetic molecular clock analyses. The best-fit clock model (strict vs. relaxed) and the best-fit demographic model (among constant size, exponential growth, skyride and skygrid models) were selected via path sampling (PS) and stepping-stone (SS) methods (100 steps with 5 million iterations each) [37]. In all cases, the uncorrelated lognormal (UCLN) relaxed-clock model was preferred. For the model-based demographic inference, the non-parametric skygrid model was preferred for DENV-1 and DENV-3, and the parametric exponential growth model for DENV-2 and DENV-4 [S4 Table].
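The temporal-signal check performed in TempEst amounts to a linear regression of root-to-tip genetic distance against sampling date, with a clearly positive slope indicating a usable molecular clock. A self-contained sketch with invented distances:

```python
# Sketch of a root-to-tip regression (the idea behind the TempEst check):
# genetic distance from the tree root should increase with sampling date.
# Dates and distances below are made up for illustration.
def linear_fit(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

dates = [1998, 2002, 2006, 2010, 2014]
root_to_tip = [0.010, 0.018, 0.027, 0.035, 0.042]  # substitutions/site
rate, intercept = linear_fit(dates, root_to_tip)
print(f"estimated rate: {rate:.5f} subs/site/year")  # positive slope -> temporal signal
```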
Viral demographic curves were reconstructed for each serotype using either the isolates from Colombia (n = 235) sampled between 1982 and 2015 or the isolates from the region of Santander (n = 160) collected between 1998 and 2015 in order to have a measure of dengue genetic diversity and its fluctuations over time.
The pattern of DENV spread was identified using a standard continuous-time Markov chain (CTMC) coupled with the Bayesian stochastic search variable selection (BSSVS) procedure [38]. We assumed that the transition rates between locations were reversible (symmetrical model). Different schemes of discrete geographical locations were used [S3 Fig] to account for any sampling bias: in the first scheme, the country of isolation was used. In the second scheme, neighboring countries were grouped in regions except for Colombia and Venezuela: Andes (Bolivia, Ecuador and Peru), Central America, Greater Antilles, Lesser Antilles, North America and Southern Cone (Argentina, Brazil and Paraguay). In the final scheme, the Andean region and Southern cone regions were merged into one region named South American region. A Bayes factor test (BF > 9) was used to recognize well-supported diffusion rates using the SpreaD3 v0.9.6 software [39]; well-supported diffusion routes concerning Colombia were compared among schemes.
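For a binary rate indicator in BSSVS, the Bayes factor compares the posterior odds of a route being included against its prior odds; the BF > 9 threshold can be sketched as follows, with an illustrative prior inclusion probability (the actual prior depends on the number of locations in each scheme):

```python
# Sketch of the Bayes factor for a BSSVS diffusion-route indicator:
# BF = posterior odds / prior odds. The prior inclusion probability
# used here (0.12) is illustrative, not taken from the analysis.
def bayes_factor(posterior_p, prior_p):
    """Bayes factor for a binary rate indicator variable."""
    post_odds = posterior_p / (1 - posterior_p)
    prior_odds = prior_p / (1 - prior_p)
    return post_odds / prior_odds

bf = bayes_factor(posterior_p=0.75, prior_p=0.12)
print(round(bf, 1), bf > 9)  # this route would count as well-supported (BF > 9)
```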
The Bayesian approach was implemented in BEAST v1.8.
DENV genotypes circulating in Colombia
A total of 235 full-length E-gene sequences from viruses isolated in Colombia were used in this study: 193 from the region of Santander (160 from the metropolitan area of Bucaramanga and 33 from the Province of Ocaña); 31 sequences were from the states of Antioquia (n = 21), Valle del Cauca (n = 6), Bolivar (n = 2), Guaviare (n = 1), and Cesar (n = 1); and 11 sequences for which the exact state was not available.
DENV-1
The MCC tree for DENV-1 included 79 E-gene sequences from Colombia and evidenced several introductions into the country [Fig 2]. Two different virus strains isolated in 1985 (AF425616) and 1996 (AF425617) were related to viruses circulating in the Lesser Antilles and
DENV-4
The MCC tree for DENV-4 included 25 E-gene sequences from Colombia and evidenced at least two introductions [Fig 5]. The majority (n = 19) of strains were from the Santander region.
Well-supported migration rates concerning Colombia
We estimated the most significant migration routes among geographical locations using a discrete Bayesian phylogeographic analysis. To account for any sampling bias, the reconstructions were done using three different geographic schemes, detailed in Methods. Well-supported diffusion routes with respect to Colombia were compared among schemes. In all cases, the recovered phylogeographical pattern underscored a strong link between Colombia and Venezuela in terms of viral diffusion. Likewise, the Caribbean region played an important role in the diffusion of DENV-2 to DENV-4. Lastly, Central America was relevant for the diffusion of DENV-2, whereas South America was relevant for the diffusion of DENV-4. The most homogeneous scheme (scheme C) was also analyzed under an asymmetrical model as a proxy for directionality [Fig 6]. This analysis showed that most DENV lineages were introduced into Colombia from the other regions. Colombia appeared to be important for the diffusion of DENV-3 into Venezuela and the Greater Antilles, and had significant bidirectional diffusion of DENV-2 lineages with Venezuela and Central America.
DENV serotypes demographic history
The Bayesian demographic reconstructions for Colombia are shown in Fig 7. Overall, we observed a steady increase in the effective population size (Ne) over time for DENV-2 and DENV-4. There were oscillations in the case of DENV-1, and more pronounced ones in the case of DENV-3, but the general trend is of increased diversity over time. In the case of DENV-3, there is an increase in diversity peaking during the period 2001-2005, a sharp drop after that, and a steady increase from 2009 onwards. The demographic reconstruction for the region of Santander, which accounted for 82% of the Colombian sequence data, followed the same dynamics as that recovered with the dataset from the whole country. There was agreement between the increase in Ne of DENV-3 and the re-introduction of this serotype into the country in 2001, which led to high DENV-3 prevalence in the following years. The same type of pattern was not observed for DENV-1, which was highly prevalent during the periods 1998-1999 and 2007-2008. DENV-2 and DENV-4 are, respectively, the most and least prevalent serotypes isolated in the region, and although there have been changes in the prevalence of both over time, we did not observe any visible difference in the growth of Ne.
Discussion
This study is the first large-scale analysis of the spatial and temporal dynamics of DENV in Colombia during the period 1998-2015 using newly sequenced and curated genetic data retrieved from GenBank. The novel sequences were sampled in distinct locations of Colombia, but most of them came from the region of Santander. We documented the frequent introduction of dengue lineages into Colombia, the ongoing co-circulation of multiple serotypes and lineages within the country and the local viral population dynamics.
Our phylogenetic analysis confirmed the circulation of one genotype per serotype (DENV-1: Genotype V, DENV-2: Asian/American Genotype, DENV-3: Genotype III and DENV-4: Genotype II), consistent with the results of other studies [16][17][18][19]. The presence of DENV-4 genotype I was documented in Brazil in 2013 [47], but we did not find evidence of its circulation in Colombia. Despite multiple strain introductions of DENV-1, DENV-2 and DENV-4, our data indicated that only one lineage predominates to date, suggesting the turnover of viral lineages over time. With the data at hand we could not address whether such turnovers are due to stochastic die-off or other factors; however, the re-emergence of DENV-2 in a dengue-endemic setting in Indonesia was due to the loss or decrease of herd immunity during a 5-year period in which DENV-1 predominated [48]. Similarly, changing serotype prevalence has been associated with lineage extinction and lineage replacements in Thailand and with dengue epidemics in Singapore [49,50,51,52]. Here, we recorded two distinct lineages of DENV-3 that apparently were introduced at the same time, had the same level of circulation, and overlapped in time and space. It may be the case that the complete susceptibility of the population to this serotype at the time of the introduction allowed two successful independent transmission chains. Similarly, the co-circulation of two DENV-3 lineages has been reported in the metropolitan area of Medellin, Colombia [18], and the co-circulation of different lineages has been demonstrated for DENV-1 and DENV-2 in Brazil [53,54].
Our estimated tMRCAs for each serotype preceded (by 5-7 years) the respective official epidemiologic reports by the National Institute of Health of Colombia [9,12]. This is likely the outcome of preliminary silent circulation of the virus (i.e., individuals experiencing mild to asymptomatic infections) coupled with local passive surveillance for dengue [2]. DENV surveillance in Colombia relies on the immediate reporting of fatal cases and the weekly routine reporting of symptomatic ones by health care providers [55]. Consequently, the true burden of dengue might be underestimated, as local physicians may fail to report cases quickly and routinely. Such passive surveillance includes neither thorough serotyping nor the genetic characterization of strains, owing to financial costs and time constraints. All fatal cases are laboratory-confirmed, but laboratory confirmation remains mandatory for only around 10-20% of the non-fatal ones [55]. Thus, health care providers belonging to the public health sentinel surveillance system send only a small fraction of collected serum samples to the National Institute of Health for laboratory confirmation and serotype identification. As a consequence, an epidemic has often reached or passed its peak before it is recognized or can be controlled, leaving only a vague idea of the serotypes behind it. Increased regional commerce and travel in dengue hyper-endemic Latin America likely drive frequent exchange and importation of DENV through human movement. For instance, an effect of air travel on increased dengue transmission has been shown in the Americas and Asia [43,53,56]. Here, we observed frequent viral lineage exchange among Latin American countries and persistent co-circulation of dengue viral lineages. Human flow in Colombia is also particularly significant, as the country is the largest sending country of migrants in South America [57,58].
However, the country seems to act as a sink rather than a source for DENV lineages. The strong association with Venezuela probably reflects geographical proximity, connectivity through trade and migratory history (a significant migrant wave to Venezuela occurred in the mid-1980s, primarily motivated by its economic boom and by the economic difficulties of Colombia at that time). We also do not rule out the possibility that this association could be a result of intense sampling in the northeast of Colombia; for example, the DENV-4 sequence from Valle del Cauca (southwestern Colombia) was related to those from Peru and Ecuador. Nevertheless, the region of Santander represents a point of transit with neighboring Venezuela, and thus substantial morbidity due to cross-border exchange of dengue-infected people is expected to continue.
Given that DENV transmission is similar to that of other arboviruses, the aforementioned high levels of viral exchange are consistent with the rapid and recent establishment of chikungunya (CHIKV) and Zika (ZIKV) viruses in the region [59,60,61]. However, whereas the dynamics of DENV are complicated by the four serotypes and the complex susceptibility profiles of local populations, recurring CHIKV and ZIKV outbreaks will be functionally related to the turnover rate of susceptible humans in addition to reintroduction rates [62]. These observations underscore that arboviral diseases are a recurring public health problem in Colombia, and recognizing the importance of their molecular investigation will strengthen diagnosis and integrated epidemiological surveillance.
The population dynamics of DENV can be driven by viral introduction events that may eventually lead to lineage turnovers, as has been documented in other settings [43,53,63,64,65]. For instance, regular stochastic clade replacement led to recurring homotypic dengue outbreaks in Malaysia [66]. In the same vein, constant DENV introductions and in situ evolution contribute to viral diversity in Singapore, where replacement of a predominant viral clade, even in the absence of a switch in the predominant serotype, hinted at a possible increase in transmission [52]. In our study, and in particular in the Santander region, we observed the consistent co-circulation of the four serotypes over consecutive years. This underscores an intricate pattern of competition that does not result in complete serotype displacement and may explain the increase of diversity over time for the four serotypes. We observed a much more rapid rise in the Ne of DENV-3 following its introduction, which was also reflected in the predominance of this serotype in the period 2001-2004. Thus, in the Santander region, both in situ evolution and the recurrent introduction of lineages drive local dengue dynamics.
This study had limitations: most of the samples were collected from a single region, which limits our knowledge of the country-wide viral genetic diversity. Likewise, samples from other countries and time points are quite heterogeneous, which might also hide additional routes and obscure the relevance of specific regions in the establishment and transmission of dengue in Colombia and the Americas. Furthermore, given the scarcity of resources, limited data on serotype prevalence restrict the conclusions that can be drawn on the dynamics of DENV. Larger series with long-term follow-up are needed to confirm the effect that a particular serotype might have during epidemics. Nonetheless, even with these limitations, we showed that passive laboratory-based disease surveillance studies allowed us to obtain a general picture of the diversity and dynamics of DENV.
Conclusion
We characterized the spatio-temporal dynamics of all dengue serotypes in a highly endemic area of Colombia, and documented the co-circulation of a single genotype of each serotype and the unapparent circulation of every serotype prior to its first detection in the country. This study also showed that the genetic diversity of serotypes circulating in the country continues to grow due to in situ evolution and recurring introductions of viral strains from different countries in the region. Our study advances countrywide genomic surveillance to lay the groundwork for the introduction of dengue vaccines and future control initiatives, and underscores the continued need for a sensitive surveillance system aiming to collect reliable baseline data.

Supporting information. Trees were rooted using Yellow fever virus; every tree included representative taxa for each serotype and for the corresponding genotype. Colored tips represent the samples from Colombia: DENV-1 Genotype V in blue, DENV-2 Genotype Asian/American in red, DENV-3 Genotype III in green, and DENV-4 Genotype II in yellow. (SVG)
Detection of equine herpesvirus 1 and 4 and its association with latency-associated transcripts in naturally infected horses in Colombia
Objective. Determine the presence of antibodies and viral genomes of EHV-1 and EHV-4, as well as detect the presence of latency-associated transcripts (LATs), in a selected population of Colombian horses. Materials and methods. Serum samples, submandibular lymph nodes and trigeminal ganglia were obtained from 50 horses and analyzed. Sera were evaluated for the presence of antibodies against EHV-1 and EHV-4, while tissues were initially evaluated for the presence of viral genome by nPCR. Finally, samples were used for the detection of LATs through RT-PCR. Results. In general, 6/50 samples showed antibodies to EHV-1 and 44/50 were positive for EHV-4. As for viral genome detection, 10/50 samples were positive for EHV-1 and 30/50 were positive for EHV-4; in addition, 22/35 horses positive for EHV DNA were positive for LATs. The use of these tests led to eight possible combinations of results. Conclusions. The evidence shows that horses can have single-virus infections, co-infections with both viruses, latency evidenced by the presence of LATs, and the simultaneous presence of LATs and viral genome replication at a given time. This study contributes to the understanding of the behavior of the disease in Colombia and calls attention to the importance of implementing diagnoses complementary to serology for the control of these viruses.
INTRODUCTION
Equine herpesvirus 1 (EHV-1) and 4 (EHV-4) are important pathogens that have a significant economic impact on horse populations worldwide. They are responsible for a variety of diseases, including respiratory disease, abortion, neonatal disease and myeloencephalitis (1). The two viruses have seroprevalence rates ranging from 9 to 28% for EHV-1 and 90 to 100% for EHV-4 (1,2).
Both viruses are members of the family Herpesviridae, subfamily Alphaherpesvirinae and genus Varicellovirus (2). The viruses have double-stranded DNA genomes of 150,223 and 145,597 base pairs (bp) in length, respectively (1,3), and 79 and 80 open reading frames (ORFs), respectively (1). Infection by these alphaherpesviruses produces a respiratory disease characterized by fever, anorexia and nasal and ocular discharge, which, together with bacterial proliferation, may contribute to rhinopneumonitis (1,2). Specifically, EHV-1 replication occurs in epithelial cells of the upper respiratory tract and in local lymph nodes, resulting in leukocyte-associated viraemia (2). Such viraemia leads to viral replication in the endothelial cells of blood vessels, the central nervous system and the pregnant uterus, thus triggering abortions and paresis (2,4). It has been proposed that EHV-1 has neuropathogenic and non-neuropathogenic strains differentiated by a single mutation in ORF30, which encodes the viral DNA polymerase (2,5).
EHV-1 and EHV-4 can be diagnosed directly through the detection of virus in clinical samples (nasal swabs, serum and buffy-coat samples) by virus isolation in cell culture or by conventional or real-time PCR (11,12), or indirectly through the detection of EHV-1 or EHV-4 antibodies in serum or cerebrospinal fluid (CSF) by virus neutralization, complement fixation or EHV-1/EHV-4 type-specific ELISA based on recombinant antigens of glycoprotein G (gG), thus allowing differentiation between them (11,12). However, these tests only indicate whether the horse has been exposed to the virus (12,13). PCR assays are more sensitive and rapid and have replaced the time-consuming procedure of virus isolation (13,14); however, they are unable to differentiate between a replicating and a non-replicating virus (14,15). For this reason, the diagnosis of latently infected horses is important because such horses represent virus reservoirs (16); this diagnosis is based on the molecular detection of LATs and on quantification by real-time PCR at the DNA and mRNA levels, in order to differentiate viral states (10,14).
In Colombia, the first report of EHV-1 dates from 1992, when the virus was isolated from samples of an aborted fetus whose dam had come from Argentina (17). In 2007 and 2008, seroprevalence was reported in the regions of Antioquia and Meta, with co-infection by both viruses (18,19). The objective of this study was to evaluate the presence of EHV-1 and EHV-4 in a selected population of Colombian horses through serology, detection of viral genomes in submandibular lymph node and trigeminal ganglion samples, and detection of LATs in both tissues.
MATERIALS AND METHODS
Animals and sample collection. The serum and tissue samples were taken from horses (n = 50) slaughtered in a commercial abattoir located in the department of Cundinamarca, in the center of Colombia, which receives horses from all over the country. Of the fifty horses, 24 were males and 26 were females, with ages ranging from 5 to 25 years as estimated from photographs of the incisor dentition. Blood samples (10-15 mL) from each slaughtered horse were taken and transported to the laboratory, where they were centrifuged at 2500 rpm for 10 min and the serum was stored at -70°C. Submandibular lymph nodes (SLN) and trigeminal ganglia (TG) were recovered by dissection of the head of each horse, transported at 4°C to the laboratory and stored at -70°C. To avoid cross-contamination between tissues of different horses, disposable instruments were used for each sample. The vaccination status of the animals for EHV-1 and EHV-4 was unknown.
Antibody detection for EHV-1 and EHV-4. Serum samples were tested for the presence of antibodies to EHV-1 and EHV-4 using the SVANOVIR EHV-1/EHV-4 Ab kit (Svanovir, Sweden), according to the manufacturer's instructions. The results were recorded at 450 nm using an ELISA reader (BioTek® PowerWave XS). Samples were considered positive at OD values >0.2 and negative at values lower than 0.1.
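A reading is interpreted against the kit cutoffs quoted above (>0.2 positive, <0.1 negative); the sketch below applies them, treating the in-between range as equivocal (that label is an assumption here, not from the kit insert):

```python
def classify_od450(od):
    """Classify an ELISA OD450 value using the cutoffs from the text."""
    if od > 0.2:
        return "positive"
    if od < 0.1:
        return "negative"
    return "equivocal"  # gray zone between the two cutoffs (assumed handling)

for od in (0.35, 0.05, 0.15):
    print(od, classify_od450(od))
```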
Nested PCR (nPCR). Twenty-five milligrams of each tissue (TG and SLN) were excised and the DNA was extracted using the QIAamp® DNA mini kit (QIAGEN®), in accordance with the manufacturer's instructions. The nPCR amplified a conserved region of the gB gene for both EHV-1 and EHV-4, using primers reported by Borchers et al., 1993 (20). EHV-1 primers for the first PCR reaction were (P1 5'-TCTACCCCTACGACTCCTTC-3' and P2 5'-ACGCTGTCGATGTCGTAAAACCTGAGAG-3') and for the second PCR reaction were (P3 5'-CTTTAGCGCTGATGTGGAAT-3' and P4 5'-AAGTAGCGCTTCTGATTGAGG-3'), which amplified regions of 1474 and 771 bp, respectively (20). EHV-4 primers for the first PCR reaction were (P1 5'-TCTATTGAGTTTGCTATGCT-3' and P2 5'-TCCTGGTTGTTATTGGGTAT-3') and for the second PCR reaction were (P3 5'-TGTTTCCGCCACTCTTGACG-3' and P4 5'-ACTGCCTCTCCCACCTTACC-3'), which amplified regions of 952 and 600 bp, respectively (20). Reactions were performed in a total volume of 25 µl containing 0.25 µl of Taq polymerase (5 U/µl) (GoTaq® Flexi, Promega®), 2.5 µl of 5x Taq buffer, 2 mM MgCl2, 0.5 mM dNTP, 1 µl of each primer (20 µM) and 2 µl of extracted DNA. Positive and negative controls were included. The PCR reactions were performed on a Bio-Rad® DNA thermocycler using a protocol consisting, in a first round, of denaturation at 94°C for 5 min, followed by 40 cycles of denaturation at 94°C for 1 min, annealing at 57°C or 60°C for 1 min for EHV-1 and EHV-4, respectively, and extension at 72°C for 1 min, with a final extension at 72°C for 10 min. One microliter of the first amplification reaction was used for the nPCR. The second-round PCRs were performed as described above, with annealing steps at 56°C or 55°C for 1 min for EHV-1 and EHV-4, respectively.
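The expected amplicon sizes quoted above follow from the positions at which the primers bind the template; a toy in-silico PCR (exact matching only, invented template and primers, not the laboratory protocol) can predict the product size:

```python
def amplicon_length(template, fwd_primer, rev_primer):
    """Predict the PCR product size: from the forward primer's start to the
    3' end of the reverse primer's binding site (which appears on the plus
    strand as the reverse complement). Returns None if either site is missing."""
    comp = str.maketrans("ACGT", "TGCA")
    rev_site = rev_primer.translate(comp)[::-1]  # reverse complement
    f = template.find(fwd_primer)
    if f == -1:
        return None
    r = template.find(rev_site, f + len(fwd_primer))
    if r == -1:
        return None
    return r + len(rev_site) - f

template = "AAA" + "ACGTAC" + "T" * 10 + "CCCAAA" + "GGG"
print(amplicon_length(template, "ACGTAC", "TTTGGG"))  # → 22
```

Real primer design must also account for mismatches, melting temperatures and secondary structure, which this exact-match toy ignores.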
The positive controls corresponded to a Colombian EHV-1 cell-culture isolate from 1992 (17) and an EHV-4-positive clinical sample. DEPC-treated water was used as a blank control.
LATs detection by nPCR. TG and SLN samples testing positive or negative for EHV-1 and/or EHV-4 DNA were subjected to gene 63 and gene 64 LAT amplification by another nested PCR, taking into account that the primers for genes 63 and 64 amplify sequences of EHV-4 and EHV-1, respectively (21,22). For this purpose, RNA was extracted from tissue samples with the RNeasy® kit (QIAGEN), in accordance with the manufacturer's instructions. All RNA samples were treated with DNase to digest any contaminating viral DNA. The complementary DNA (cDNA) synthesis reaction was performed using random hexamer primers (Promega®) and M-MLV (Invitrogen®), according to the manufacturers' instructions. For the gene 63 nested PCR, the first reaction was performed with primers (63eF 5'-GGGGCAAGGGCTCTAAACCT-3' and 63eR 5'-CAGGAGACACCAGCAACGAC-3') and the second PCR reaction with primers (63iF 5'-CAAACTCCCGCAGGTTGTATC-3' and 63iR 5'-ACTTTGGACAGCGAGGGTGAA-3'), in order to amplify sequences of 532 and 253 bp, respectively (10). For the gene 64 nPCR, the primers for the first PCR were (64eF 5'-GGAGACCGCGTCCAGCACTA-3' and 64eR 5'-CTCCGAGGGAAGCCAGACCT-3') and for the second PCR reaction were (64iF 5'-GGACCCCCTGGGCGTTGAGG-3' and 64iR 5'-CCGCGGAGACTGCCACACTC-3'), which amplified sequences of 1108 and 500 bp, respectively (10). The reactions were performed in a total volume of 25 µl containing 0.25 µl of Taq polymerase (5 U/µl) (GoTaq® Flexi, Promega®), 5x Taq buffer, 2 mM MgCl2, 0.5 mM dNTP, 1 µl of each primer (20 µM) and 2 µl of cDNA. PCR reactions were performed on a Bio-Rad® DNA thermocycler. The first-round PCR consisted of an initial denaturation step at 95°C for 5 min, followed by 40 cycles of denaturation at 94°C for 30 s, annealing at 55°C and 64°C for 45 s for gene 63 and gene 64, respectively, and extension at 72°C for 1 min, with a final extension at 72°C for 5 min. One microliter of the first amplification reaction was used for the nPCR. The second-round PCR was performed as described above, with
the annealing step at 58°C and 60°C for 45 s for gene 63 and gene 64, respectively. As positive controls, RK-13 cells were infected with EHV-1 (isolated in 1992 in Colombia) and, at 48 hours post infection, the cells were harvested and pelleted, and the RNA was extracted. For EHV-4, a clinical sample positive by PCR was employed. DEPC-treated water was used as a negative control for the DNA extraction and PCRs. Controls were included in each PCR run.
RESULTS
Serological reactivity. Serological assessment showed that 44/50 (88%) and 6/50 (12%) of horses were positive for EHV-4 and EHV-1 antibodies, respectively (Figure 1). It is noteworthy that all horses positive for EHV-1 antibodies were also positive for EHV-4. Among the seropositive horses, no difference was found between males and females for either virus. Likewise, no age differences were observed in the presence of antibodies against EHV-1 and EHV-4 (p>0.05).
Analysis of the combined ELISA and PCR results for EHV-1 shows that the most frequent combination was negative for all tests (38 horses). Likewise, EHV-1-positive results for all tests, or positive for viral DNA in both tissues and negative by ELISA, were never observed (Table 2). In contrast, for EHV-4 the most frequent combinations were positive for all tests, and negative for viral DNA detection together with positive by ELISA.
LATs detection. LATs of genes 63 and 64 were detected by RT-PCR in 22/50 (45%) of the evaluated horses, all of which were positive for virus detection by PCR. The gene 63 transcript was detected only in trigeminal ganglia, in 17/28 (60%) of EHV-4-positive horses, whereas the gene 64 transcript was amplified only from submandibular lymph node samples, in 5/7 (71%) of EHV-1-positive horses. Transcripts for genes 63 and 64 were detected simultaneously in only one horse.
DISCUSSION

Our serological results confirm that both viruses are present in the country, as previously reported (18,19), and in terms of prevalence they agree with studies from other countries in recent years (9), where the prevalence of EHV-1 was lower (12 to 21%) than that of EHV-4 (88 to 100%). The results of our study echoed previous reports (18,23) showing that virus distribution was unaffected by sex and age. In contrast to other studies (19), ours shows the presence of EHV-1 and EHV-4 viral genomes in seronegative samples; this finding suggests that serological results do not indicate the stage of the disease, because some horses with latent infection fail to produce antibodies (24). Another possibility is that horses tested by ELISA might have been recently infected with EHV-1 and EHV-4 and had not yet mounted a detectable antibody response.
For the molecular detection of EHV-1 and EHV-4, the recommended samples are respiratory tract cells, serum, lymph node tissues and peripheral blood mononuclear cells (PBMC) (11,25), where the infectious virus is replicating.
For the detection of latent virus, trigeminal ganglia and PBMC are the recommended tissues in which to detect markers of latency (LATs) (26,27,28). In this study, EHV-1 and EHV-4 viral DNA were detected in submandibular lymph nodes and trigeminal ganglia; the percentage of viral genome detection for EHV-1 was higher in submandibular lymph nodes than in TG, as reported by Dunowska et al., 2015 (9). In the case of EHV-4, we found more viral DNA in TG, as reported previously (18).
Consistent with previous studies (16,27), which reported latency rates of 40 and 45%, respectively, our study found LATs in 45% of infected horses. However, there are discrepancies between studies regarding the site of latency for EHV-1 and EHV-4 (9,27); specifically, EHV-1 has been reported to establish latency more often in lymphoid tissue than in nervous tissue. In the current study, EHV-1 LATs were found only in submandibular lymph nodes, whereas EHV-4 LATs were found in trigeminal ganglia. Several studies have shown that the technique used for detection may influence the ability to identify horses in latency, particularly when using conventional PCR (9). Allen et al., 2008 (16), estimated the prevalence of EHV-1 latency using ultra-low magnetic bead-based detection, sequence capture, and nested PCR, finding about a 30% greater prevalence compared to conventional PCR. Based on this report, it is probable that the number of horses in the latency phase was underestimated in our study.
It should be noted that herpesviruses produce latent infection, which reactivates under stress conditions (22,28). The population evaluated in this study was subjected to severe stress generated by long travel distances, extreme temperature changes, lack of food and overcrowding; these factors could activate viral replication (29). With the techniques used in the present study, eight possible combinations of results were found (e.g., horses testing positive for genome detection in both studied tissues but negative by ELISA, or vice versa). These combinations clearly show that a single diagnostic test is not sufficient to establish the status of a farm, or even to determine whether a horse is under acute or latent infection. In addition, we found that horses seropositive for both herpesviruses also harbored the two viral genomes; this co-infection finding is in agreement with previous reports (24,30). It is important to keep in mind that a seronegative horse is not indicative of viral absence, since it could have a latent infection. For this reason, it is important to complement serology with the molecular detection of the viral genome and with the demonstration of the presence or absence of LATs, which could help to establish the real status of the disease.
In conclusion, there is still limited information concerning the presence and prevalence of EHV-1 and EHV-4 in Colombia. However, the current study not only validated the endemic nature of EHV-4, but also highlighted the circulation of EHV-1 with no evidence of prior immunization. Furthermore, although this study did not differentiate between active and latent EHV-1 and EHV-4 infection, the evidence of LATs together with the presence of viral DNA in the same sample could indicate the status of the disease.
Figure 1. EHV-1 and EHV-4 serological reactivity by ELISA and viral DNA detection in submandibular and trigeminal tissues in the evaluated population.

Ethics Committee of the Faculty of Veterinary Medicine and Animal Science of the Universidad Nacional de Colombia, Bogotá.
Table 1. Detection of EHV-1 and EHV-4 DNA in trigeminal ganglia and submandibular lymph nodes.
These particular study factors could indicate that the detection of LATs in TG and SLN cells demonstrates a latent form of infection coinciding with the reactivation of active infection. For this reason, it is impossible to demonstrate the absence of an active EHV-1 and EHV-4 infection based solely on the detection of LATs, making it necessary to simultaneously include other diagnostic tests, such as the detection of mRNA encoding one of the viral glycoproteins, particularly if stress events coincide with sampling.
Cytotoxicity of a New Nano Zinc-Oxide Eugenol Sealer on Murine Fibroblasts
Introduction: The aim of this study was to evaluate the cytotoxicity of a new nano zinc-oxide eugenol (NZOE) sealer in comparison with AH-26 and Pulpdent root canal sealers. Methods and Materials: The L929 mouse fibroblast cells were cultivated and incubated for 24, 48 or 72 h with different dilutions (1/1, 1/2, 1/4, 1/8, 1/16 and 1/32) of culture media previously exposed to one of the test sealers, namely NZOE, AH-26 or Pulpdent. At the end of the incubation period, the effect of the sealers on cell viability was evaluated using Mosmann’s Tetrazolium Toxicity (MTT) colorimetric assay. The data were compared using the one-way analysis of variance (ANOVA) followed by the Tukey’s post hoc test for multiple comparisons. Results: After 24, 48 or 72 h, both NZOE and Pulpdent sealers inhibited cell viability at 1/1, 1/2 and 1/8 dilutions. Within the 24 and 48 h, the AH-26 sealer reduced the cell viability at all dilutions except the 1/32 solution; however, after 72 h even the 1/32 dilution was cytotoxic. Conclusion: The biocompatibility of the nano zinc-oxide eugenol sealer was comparable to Pulpdent sealer and lower than AH-26.
Introduction
Use of endodontic sealers with ideal properties is necessary for the success of root canal treatment [1]. An ideal sealer should be biologically compatible and well tolerated by periradicular tissues [2]. Unfortunately, it is difficult to produce sealers with both proper physicochemical properties and biological compatibility: materials that are well tolerated by tissues tend to have compromised sealer properties, and vice versa [3]. Zinc-oxide eugenol (ZOE)-based sealers are among the most common and conventional sealers used in endodontic treatment [4]. These sealers have undergone many modifications, and different commercial ZOE-based products are available.
At present, nano-technology is used to produce a large number of dental materials, including light-cured restorative composite resins and their bonding systems, impression materials, ceramics, dental implant coating layers and fluoride mouthwashes [5,6]. Other advantages of nanoparticles that have attracted attention in endodontics are their better penetration into the dentinal tubules, pronounced antibacterial properties and decreased microleakage [6-10]. Because of these favorable properties, the utilization of nanoparticles in the production of endodontic sealers has recently become a focus of interest [11]. Several researchers have incorporated quaternized polyethylenimine nanoparticles or chitosan nanoparticles into different sealers and evaluated their biocompatibility, antibacterial and physicochemical properties [12-17]. Sousa et al. [18] synthesized and characterized ZOE nanocrystals and evaluated their biological properties for application in dentistry, particularly in endodontics.
Recently, a new endodontic sealer with nano-sized ZOE powder particles (NZOE) has been developed in the Dental Material Research Center, Mashhad University of Medical Sciences, Mashhad, Iran. This sealer is similar to various ZOE-based sealers, but with nano-sized ZOE particles [19].
When a new dental material is introduced, its biocompatibility should be determined. Any nano endodontic sealer must remain compatible with periapical tissues during long-term contact [14]. Therefore, several biocompatibility tests, including cytotoxicity, intraosseous implantation and subcutaneous implantation, have been proposed [20]. The aim of this study was to evaluate the cytotoxicity of the NZOE sealer in comparison with AH-26 and Pulpdent root canal sealers.

Preparation of nano zinc-oxide eugenol (NZOE) sealer
NZOE was prepared via a sol-gel method as described in our previous work [7]. Briefly, a solution of gelatin was prepared by dissolving 10 g gelatin in 150 mL deionized water at 60°C. Then, an appropriate amount of zinc nitrate [Zn(NO3)2.6H2O] was dissolved in a minimum volume of deionized water at room temperature. The two prepared solutions were mixed and stirred for 8 h while the temperature was kept at 80°C. Finally, the prepared resin was dried at 500°C, yielding the pure nano zinc-oxide powder.
Preparation of sealer extract
The NZOE sealer was sterilized under UV light for 24 h. Then, all the test sealers were prepared according to the user's manuals and immediately inserted in a 24-well plate before setting (2 wells for each sealer). After that, 2.5 mL DMEM was added to each well and the plate was incubated in the dark for 24 h at 37°C. After incubation, these original extracts (1/1 dilution) were passed through 0.22 μm filters and then serially diluted in fresh DMEM supplemented with antibiotic and 10% FBS. Different dilutions (1/1, 1/2, 1/4, 1/8, 1/16 and 1/32) of each sealer were used for cytotoxicity assay.
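The two-fold dilution series described above maps each label to a fraction of the original extract; each step mixes equal parts of the previous dilution and fresh medium. A minimal sketch of that arithmetic (only the dilution labels come from the paper; the code itself is illustrative):

```python
# Two-fold serial dilution of a sealer extract: each step halves the
# fraction of original extract. Labels 1/1 ... 1/32 match the study.

def dilution_series(steps: int) -> dict[str, float]:
    """Map each dilution label to its fraction of the original extract."""
    return {f"1/{2 ** i}": 1.0 / 2 ** i for i in range(steps)}

series = dilution_series(6)
# {'1/1': 1.0, '1/2': 0.5, '1/4': 0.25, '1/8': 0.125,
#  '1/16': 0.0625, '1/32': 0.03125}
```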
Cell culture and treatment
L929 mouse fibroblast cells were cultivated in high-glucose DMEM supplemented with 10% FBS, penicillin (100 units/mL) and streptomycin (100 μg/mL) at 37°C in an atmosphere containing 5% CO2. Trypsin was used to passage cultures whenever they grew to confluence. Cells at the sub-confluent stage were harvested from the culture flask and, after checking cell viability using the trypan blue exclusion technique, were seeded overnight in a 96-well culture plate. Then, to test the cytotoxicity of the sealers, the culture medium was replaced with fresh medium containing varying dilutions (1/1 to 1/32) of each sealer. Three wells were allocated for each dilution of each sealer, and the experiment was repeated three times (n=9). The cells were then further incubated for 24, 48 or 72 h and observed under an inverted light microscope for shape, granulation and anchorage independence [21,22]. Untreated cells served as the negative control.
MTT cell viability assay
At the end of incubation, MTT solution (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide in phosphate-buffered saline, 5 mg/mL) was added to each well of the culture plate to a final concentration of 0.5 mg/mL, and the cells were incubated for 2 h. Then, the supernatant was removed and the resulting formazan was dissolved by adding 200 μL DMSO to each well. The optical density of the formazan dye was read at 545 nm against 620 nm as background with an ELISA reader (Awareness Technology Inc.). The percentage of viable cells in each well was calculated relative to control cells set to 100% [23,24]. The IC50 value (the concentration/dilution at which 50% inhibition of cell viability occurred) was also determined.
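The viability percentage and the IC50 read-out above reduce to simple arithmetic on the background-corrected optical densities. A minimal sketch, assuming linear interpolation between the two bracketing dilutions for the IC50 (the paper does not state how its IC50 was computed, so that choice and all numbers below are illustrative):

```python
def viability_percent(od_treated: float, od_control: float) -> float:
    """Viability relative to the untreated control set to 100%."""
    return 100.0 * od_treated / od_control

def interpolate_ic50(fractions: list[float], viability: list[float]) -> float:
    """Linearly interpolate the extract fraction giving 50% viability.
    Expects fractions sorted ascending, paired with viability values."""
    pairs = list(zip(fractions, viability))
    for (f0, v0), (f1, v1) in zip(pairs, pairs[1:]):
        if v0 >= 50.0 >= v1 or v1 >= 50.0 >= v0:
            return f0 + (50.0 - v0) * (f1 - f0) / (v1 - v0)
    raise ValueError("50% viability not bracketed by the data")

# Hypothetical dose-response: fraction of extract vs. viability (%).
ic50 = interpolate_ic50([0.03125, 0.0625, 0.125, 0.25],
                        [90.0, 70.0, 30.0, 10.0])  # 0.09375
```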
Statistical tests
Data normality was tested with the Kolmogorov-Smirnov test. The results were compared using the one-way analysis of variance (ANOVA) followed by the Tukey's post hoc test for multiple comparisons. The level of significance was set at 0.05.
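The one-way ANOVA used here compares between-group to within-group variance through the F statistic. A minimal pure-Python sketch of that statistic (the Kolmogorov-Smirnov and Tukey post hoc steps are omitted; the data below are illustrative, not from the paper):

```python
def one_way_anova_f(groups: list[list[float]]) -> float:
    """F statistic for a one-way ANOVA over k independent groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative viability readings for three hypothetical groups.
f_stat = one_way_anova_f([[1.0, 2.0, 3.0],
                          [2.0, 3.0, 4.0],
                          [3.0, 4.0, 5.0]])  # 3.0
```

In practice a library routine (e.g. `scipy.stats.f_oneway`, with `statsmodels` for the Tukey HSD post hoc) would be used; the sketch only makes the variance decomposition explicit.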
Discussion
The aim of the present study was to evaluate the cytotoxicity of a newly introduced NZOE sealer in comparison with AH-26 and Pulpdent endodontic sealers. The AH-26 is a popular and commonly used epoxy resin sealer with established toxic properties especially during the first 24 h [4]. The Pulpdent is also a commercially available ZOE-based sealer.
To evaluate the cytotoxicity of endodontic materials, some studies have placed the materials in direct contact with cells [25-27], while others have mixed the extract of sealers with the cell culture media [28-34]. Direct placement of the sealer in the culture plate may result in physical injury to cells and increases the risk of bacterial contamination. Therefore, in the present study the second technique, i.e. the sealer extract technique, was used. Since this was the first study to evaluate the cytotoxicity of the NZOE sealer, different dilutions of the sealer extract were prepared and used, similar to the study by Bin et al. [32].
In the clinical settings, the sealer is immediately placed within the root canal after being mixed. If the sealer comes into contact with periapical tissues, the maximum toxic effect of the sealer occurs before its setting. In the present in vitro study, an attempt was made to simulate the maximum cytotoxic effect of the sealer in the human body. Therefore, the sealers were added to culture media 5 min after mixing and the culture media was placed in contact with the sealer for 24 h to ensure the transfer of all the toxic materials of the sealer into the culture media.
Our results showed that all three sealers were highly cytotoxic at 1/1, 1/2 and 1/4 dilutions, since they had not been diluted (1/1) or were diluted only minimally (1/2 and 1/4). These dilutions resulted in about 90% cell death during the first 24 h; therefore, further cytotoxicity could not be expected after 48 and 72 h at these dilutions. However, all sealers at dilutions of 1/8, 1/16 and 1/32 exhibited a higher cytotoxic effect after 72 h compared to 24 h of incubation. This increase in the cytotoxicity of sealers with time is similar to the results observed in studies by Karapinar et al. [31] and Bouillarge et al. [35].
Among all three sealers at the three time intervals, AH-26 had the highest cytotoxic effect, followed in descending order by Pulpdent and NZOE sealers. In a study by Badol et al. [33], AH-26 showed severe toxicity which became mild after one month while Pulpdent sealer showed severe to moderate toxicity. Until now no study has evaluated the toxic effect of NZOE sealer. However, Sousa et al. [18] evaluated the biological properties of ZOE nanocrystals through intraosseous implantation and reported that the nanocrystals are biocompatible, well tolerated and allow bone formation and remodeling. Several researchers evaluated the biocompatibility of other nanoparticles. Gomes et al. [5] evaluated the tissue response after irrigation with silver nanoparticles and concluded that these particles are biocompatible, especially at low concentrations. Dianat et al. [36] showed that the cytotoxicity of CH nanoparticles was similar to that of conventional CH. Abramovitz et al. [11] revealed that incorporation of 1% quaternized polyethylenimine (QPEI) nanoparticles into the sealers, did not impair their biocompatibility.
Shantiaee et al. [37] compared the cytotoxicity of nanosilver-coated gutta-percha with Guttaflow and normal gutta-percha on L929 fibroblasts with the MTT assay after 1 h; nanosilver-coated gutta-percha and Guttaflow had the highest and the lowest cytotoxicity, respectively. After 24 h and 1 week, no significant differences were observed. Barros et al. [14] concluded that the incorporation of 2% QPEI nanoparticles into AH-Plus and Pulp Canal Sealer (PCS) modulates the proliferation and differentiation of bone cells, depending on the sealer and the cell type, without increasing the sealer cytotoxicity.
Our results are consistent with those of Bae et al. [38], who showed that in the MTT test the cytotoxic effect of a ZOE-based sealer at 1/2, 1/4 and 1/16 dilutions was less than that of the AH-26 sealer. In addition, in a study by Huang et al. [29], the cytotoxic effects of the AH-26 sealer at 1/2, 1/4 and 1/8 dilutions were greater than those of a ZOE-based sealer.
In the majority of in vitro cytotoxicity studies, the toxic effects of epoxy-resin sealers were high, especially shortly after mixing [20,28,29,32,35,39]. In addition, the cytotoxic effects of ZOE-based sealers were similar to each other, but lower than those of the AH-26 sealer [29,33,35]. Huang et al. [28] reported that the cytotoxicity of AH-26 and AH-Plus sealers on days 1, 2 and 3 is higher than that of ZOE-based sealers.
Conclusion
In conclusion, the cytotoxicity of the tested nano-sealer was comparable to that of Pulpdent and lower than that of the AH-26 sealer. Further studies on the possible use of the NZOE sealer as a new root canal filling material seem necessary.
The Protective Role of a Small GTPase RhoE against UVB-induced DNA Damage in Keratinocytes*
RhoE, a p53 target gene, was identified as a critical factor for the survival of human keratinocytes in response to UVB. The Rho family of GTPases regulates many aspects of cellular behavior through alterations to the actin cytoskeleton, acting as molecular switches cycling between the active, GTP-bound and the inactive, GDP-bound conformations. Unlike typical Rho family proteins, RhoE (also known as Rnd3) is GTPase-deficient and thus expected to be constitutively active. In this study, we investigated the response of cultured human keratinocyte cells to UVB irradiation. RhoE protein levels increase upon exposure to UVB, and ablation of RhoE induction through small interfering RNA resulted in a significant increase in apoptosis and a reduction in the levels of the pro-survival targets p21, Cox-2, and cyclin D1, as well as an increase of reactive oxygen species levels when compared with control cells. These data indicate that RhoE is a pro-survival factor acting upstream of p38, JNK, p21, and cyclin D1. HaCat cells expressing small interfering RNA to p53 indicate that RhoE functions independently of its known associates, p53 and Rho-associated kinase I (ROCK I). Targeted expression of RhoE in epidermis using skin-specific transgenic mouse model resulted in a significant reduction in the number of apoptotic cells following UVB irradiation. Thus, RhoE induction counteracts UVB-induced apoptosis and may serve as a novel target for the prevention of UVB-induced photodamage regardless of p53 status.
GTPases are regulatory proteins that function as molecular switches, cycling between the active, GTP-bound, and inactive, GDP-bound, states under the control of guanine nucleotide exchange factors, GTPase-activating proteins (GAPs), and guanine nucleotide dissociation inhibitors (1). Rho GTPases act on numerous effector proteins to activate various signaling cascades (2,3) and are specifically known to be involved in regulating cytoskeleton dynamics as well as proliferation and oncogenesis (4-7). Unlike typical Rho family proteins, Rnd proteins, including RhoE/Rnd3, remain in the constitutively active, GTP-bound state without GTP hydrolytic regulation or GAP activation (1,8,9). RhoE is known to inhibit RhoA/ROCK (Rho-associated kinase) signaling and block actin stress fiber formation, promoting the loss of actin stress fibers and focal adhesions (8-10). RhoE was also found to have a novel function in regulating cell cycle progression, independent of its ability to inhibit ROCK I (10,11). Recently, RhoE was identified as a p53 target gene in response to DNA damage that promotes cell survival through inhibition of ROCK I-mediated apoptosis (12).
Normal human skin is covered with stratified epithelium composed mostly of keratinocyte cells, which undergo a continuous process of proliferation, differentiation, and apoptosis (13). Apoptosis is critical for epidermal homeostasis and is required for epidermal turnover and removal of UV-damaged cells (14-16). It is known that p53 plays an important role in inducing apoptosis in keratinocytes that have sustained UV-induced DNA damage and that this is a key step in protecting cells from transformation (14,16). Moreover, as the first line of defense of the human body, keratinocyte cells have acquired unique coping mechanisms to protect themselves and underlying tissue from UV radiation (17,18). Although the effects of UV radiation have been well studied, there is still a wide range of data supporting different mechanisms of UV response, in part due to the wide variety of model cell lines used to study this complex system (19). Given the importance of p53 in the UV response of keratinocytes and the up-regulation of RhoE by p53 upon DNA damage (12), we investigated the role of RhoE in the UVB response of human keratinocytes. We found that RhoE is an important factor in mediating cell survival in response to UVB irradiation and functions independently of p53.
EXPERIMENTAL PROCEDURES
Cell Culture-HaCaT cells were grown in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum and antibiotics. HaCaT cells stably transfected with a pBabe-U6 plasmid were maintained in 1 μg/ml puromycin. Normal human keratinocyte (NHK) cells were isolated from adult abdominal epidermis, similarly to several published methods (20-22), in a protocol approved by the Massachusetts General Hospital Institutional Review Board (MGH IRB). Fresh specimens were washed thoroughly in phosphate-buffered saline (PBS) and disinfected with 70% ethanol. The subcutaneous fat and dermis were manually removed from the skin, which was then cut into squares (<1 cm²) and floated on fresh dispase solution (Hanks' balanced salt solution containing 10 mM HEPES, 0.075% sodium bicarbonate, and 50 μg/ml gentamicin mixed 1:1 with dispase II solution (Roche Applied Science)) at 4°C for 18 h. The epidermis was then peeled off and trypsinized to isolate the keratinocytes, which were plated in defined keratinocyte-SFM medium (DKSFM, Invitrogen) on collagen I-treated plates and grown at 37°C, 5% CO2. Cells were grown until 60-75% confluence and then split 1:3 (passage 1), grown until 60-75% confluence again, and frozen for future use. NHK cells were used between passages one and six and were collected from three different patients: females of ages 32, 37, and 66. Experiments were repeated using cells from each patient to ensure that consistent responses were obtained from different genetic backgrounds.
siRNA Experiments-Oligonucleotides corresponding to the following cDNA sequences were purchased as siRNA from Dharmacon: 5′-CAGATTGGAGCAGCTAC-3′ (nucleotides 501-518 (R1)) or 5′-GTAGAGCTCTCCAATCACA-3′ (nucleotides 439-457 (R2)) for RhoE, 5′-AGACAATCGGCTGCTCTGAT-3′ for GFP control, and 5′-ATTGTATGCGATCGCAGAC-3′ as a nonspecific control (Dharmacon). Cells were transfected by nucleofection (amaxa Inc.) using the human dermal fibroblast nucleofector kit and program T-24. Cells were harvested by trypsinization and resuspended in nucleofector solution at ~2-4 × 10⁶ cells/100 μl of solution, electroporated with 100 mol of R1 siRNA, and split to the appropriate number of collagen I-coated 60-mm plates. The cells were allowed to recover for 24 h prior to any further experiments. For stable knockdown of RhoE in HaCat cells, the R2 RhoE siRNA sequence was used to make a stable silencing shRNA construct in the pBabe-U6 plasmid at the BamHI and XhoI cloning sites. The empty pBabe-U6 vector was used as a control.
UV Irradiation-UV was delivered from a panel of four UVB bulbs (RPR-3000, Southern New England Ultraviolet), which have a peak emission at 312 nm, delivering 90% UVB and <10% UVA. A Kodacel filter (catalog number K6808, Eastman Kodak Co.) was used to eliminate any UVC light (<290 nm). For in vitro studies, cells were seeded 24 h prior to irradiation and washed once with PBS immediately prior to irradiation. Irradiation was performed with the cells covered with 1.5 ml of PBS, and the UV dose was monitored with a Photolight IL1400A radiometer equipped with a SEL240/UVB detector. Following irradiation, fresh DKSFM or Dulbecco's modified Eagle's medium was added to the keratinocyte or HaCat cells, respectively. Cells were assayed 16 h after UV irradiation unless otherwise noted.
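Since dose is irradiance integrated over time (1 mJ/cm² = 1 mW/cm² × 1 s), the radiometer reading fixes the exposure time for a target dose. A small sketch of that arithmetic (the irradiance value below is hypothetical, not from the paper):

```python
def exposure_seconds(target_dose_mj_cm2: float,
                     irradiance_mw_cm2: float) -> float:
    """Exposure time (s) to deliver a target UVB dose at a measured
    irradiance, using dose = irradiance * time."""
    return target_dose_mj_cm2 / irradiance_mw_cm2

# e.g. a 25 mJ/cm2 dose at a hypothetical lamp output of 0.5 mW/cm2:
t = exposure_seconds(25.0, 0.5)  # 50.0 s
```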
Northern Blot Analysis-Total RNA was extracted using a Qiagen RNeasy kit with QIAshredder according to the manufacturer's protocol. Samples were quantified and denatured, and equal amounts were electrophoresed through a 1% agarose gel by the formaldehyde denaturation method. RNA was transferred to a nylon membrane (Bio-Rad), UV-cross-linked (Stratagene), and then baked at 80°C for 1 h. Hybridization was performed with ³²P-labeled probes prepared by the random prime DNA labeling method (Invitrogen).
Cell Death Assays-Cell death was measured in vitro using a cell death detection ELISA (Roche Applied Science) and trypan blue exclusion. UV-irradiated cells, both the adherent and the floating, were harvested by trypsinization. Cells used for the ELISA were analyzed according to the manufacturer's protocol. Trypan blue exclusion was performed on cells harvested and resuspended in DKSFM and mixed 1:1 with 0.4% trypan blue. The percentage of dead cells was determined as the number of cells that stain blue versus the total cell count.
Flow Cytometry-Cells were harvested 16 h after UV irradiation and fixed with 70% ethanol while gently vortexing. Cells were stored at 4°C for up to 1 week prior to analysis. Just prior to analysis, the cells were washed once with PBS and then incubated in PBS containing 500 μg of RNase A for 30 min at 37°C, followed by the addition of 25 μg of propidium iodide and 15 min of incubation at room temperature. Cells were analyzed on a FACSCalibur (BD Biosciences), and the acquired data were analyzed using the FlowJo 6.3.4 software package (Tree Star Inc.).
Detection of Intracellular ROS-UV-irradiated cells were treated with 12 μg/ml 2′,7′-dichlorofluorescein diacetate (Fluka) for 30 min at 37°C. The adherent cells were harvested by trypsinization and resuspended in PBS. Intracellular reactive oxygen species oxidize 2′,7′-dichlorofluorescein diacetate, resulting in an increase in fluorescence as measured by flow cytometry.
Immunofluorescence-Cells were grown on coverslips and fixed with 3.7% (v/v) formaldehyde followed by permeabilization with 0.2% Triton-X. Actin filaments were visualized by incubating the fixed cells for 1 h at room temperature with rhodamine-conjugated phalloidin (Molecular Probes) (1:500). Images were collected by a Leica TCS4D confocal microscope and processed using Adobe PhotoShop software.
Animal Work-Human RhoE cDNA was cloned into a modified K14/bluescript vector. After approval by the MGH Animal Use Committee, the linearized cDNA construct was injected into fertilized C57BL/6 × SJL F2 embryos, and three founder mice expressing K14-RhoE were selected for breeding with C57BL/6 wild-type mice to yield F2 generation mice. K14-RhoE-positive F2 mice were crossed to yield a mixture of K14-RhoE-expressing mice and control transgene-null littermates. All mice used in experiments were between 6 and 12 weeks old, from the F3 generation. Transgenic animals were identified by PCR analysis of genomic DNA extracted from the tail and confirmed by Western blot. For mouse irradiation, the dorsal hair was removed by shaving with electric clippers, and the remaining hair was removed by chemical treatment (Nair). The mice were then allowed to rest for 72 h. Avertin was used to anesthetize the mice, which were then exposed to UVB irradiation, with the UVB dose measured using a radiometer as described above. 24 h after irradiation, the mice were sacrificed, and the dorsal skin was harvested for analysis. Sections of skin were taken for frozen sections, paraffin embedding, and protein extraction. Frozen sections were fixed in 1% methanol-free formaldehyde/PBS for 1 h at room temperature and then processed through a sucrose gradient (5%, 10%, 20%) before embedding in optimal cutting temperature compound (Tissue-Tek, Sakura). Paraffin sections were fixed in 4% buffered formalin for 72 h before automated embedding with a Tissue-Tek vacuum infiltration processor. Protein extracts were prepared by homogenizing a small piece of skin in lysis buffer with a tissue grinder.
Histological analysis by hematoxylin and eosin staining on 6-μm paraffin sections was performed with a linear stainer (Hacker Instruments) and imaged by bright field microscopy (Nikon E600). TUNEL analysis of 6-μm frozen sections was performed according to the manufacturer's protocol (Roche Applied Science), including a 20-min fix in formaldehyde. Sections were imaged by confocal microscopy, and the number of TUNEL-positive cells on each slide was counted from six random fields. HA staining was performed on 6-μm frozen sections. Briefly, slides were washed in PBS, fixed with 3.7% formaldehyde/PBS, permeabilized with 0.2% Triton/PBS, blocked, and then incubated in HA antibody (1:800 in blocking buffer) overnight at 4°C. Rhodamine-conjugated secondary antibody was applied at room temperature for 1.5 h, and the nuclei were stained for 15 min with 1:2000 TO-PRO3 (Invitrogen).
UVB Induces RhoE in Human Keratinocytes-Under various stress conditions, including DNA damage and oxidative stress, RhoE is induced in a p53-dependent manner (11,12). In response to UVB irradiation, both NHK and HaCat cells show increased expression of RhoE and p21 as well as increased p53 activation, as measured by serine 15 phosphorylation (Fig. 1A). Although the p53 in HaCat cells is mutated (23), it is known to still be induced (24,25) and phosphorylated in response to UV irradiation (26,27). Thus, the observed increases in phosphorylated p53 and p21 are consistent with the literature on NHK and HaCat cells (24, 25, 28-32). To investigate the functional role of RhoE induction in the UVB response of keratinocytes, we used siRNA targeting RhoE or GFP (negative control) to inhibit RhoE induction in response to UVB. siRNA was electroporated into NHK cells, which were treated with UVB 24 h later. The effectiveness of the RhoE siRNA is demonstrated by Northern and Western blot analysis (Fig. 1B). To exclude off-target effects of the RhoE siRNA, we used an alternate sequence of RhoE shRNA expressed in the pBabe U6-shRNA vector to stably knock down RhoE in HaCat cells and obtained similar results (data not shown). Although the RhoE siRNA is effective at blocking RhoE induction in response to UVB irradiation, it did not alter the basal level of RhoE expression (Fig. 1B). The same blockage of RhoE induction was observed in the stable HaCat knockdown cells (data not shown). The maintenance of basal RhoE levels in siRNA-treated cells is important for maintaining normal cellular function, particularly since it has been previously observed that complete inhibition of RhoE in cancer cells causes significant growth arrest and apoptosis. 4 Although keratinocytes treated with siRNA to GFP demonstrate a UV response very similar to that of control keratinocytes, treatment with siRNA targeting RhoE greatly alters the cellular response to UVB irradiation (Fig. 1B).
While the activation of p53 in response to UVB irradiation was not significantly affected, p21 protein levels declined rapidly, and PARP cleavage occurred at much lower doses of UVB. PARP cleavage is an indirect measurement for activated caspase-3, which is responsible for PARP cleavage upon the induction of apoptosis. Thus, the increased caspase-3-dependent PARP cleavage in UVB-irradiated cells that have reduced RhoE levels implies an increase in apoptosis.
RhoE Promotes Cell Survival in Response to UVB Irradiation-We analyzed the effect of knocking down RhoE on cell death in NHK cells 16 h after UVB irradiation by bright field microscopy, trypan blue exclusion, DNA fragmentation ELISA, and flow cytometry analysis of cells stained with propidium iodide (Fig. 2). Bright field imaging of NHK cells demonstrates that after electroporation, keratinocytes are healthy, and the introduction of siRNA against RhoE clearly results in a reduction of cell number after UVB irradiation (Fig. 2A). Cell death was then quantified by trypan blue exclusion (Fig. 2B). At UVB doses greater than 25 mJ/cm², the RhoE knockdown cells display a greater than 2-fold increase in cell death. DNA fragmentation (Fig. 2C) was quantified by cell death ELISA to determine whether this increase in cell death was due specifically to apoptosis. These data clearly indicate that the increase in cell death after UVB irradiation of keratinocytes treated with siRNA against RhoE is due to apoptosis. Propidium iodide staining of the UVB-irradiated cells was performed to confirm the DNA fragmentation results by quantitation of the sub-G1 cell population by FACS analysis (Fig. 2D). Similar data were obtained when these experiments were performed in HaCat cells treated with siRNA or stably transfected with shRNA against RhoE (data not shown). These data demonstrate that RhoE promotes cell survival in response to UVB irradiation.
4 P. P. Ongusaha and S. W. Lee, unpublished results.
The Pro-survival Function of RhoE Is Independent of p53 and ROCK I in Keratinocytes-RhoE was recently identified as a p53 target gene up-regulated in response to DNA damage (12), and p53 is known to be stabilized in response to UVB exposure (32-35). Therefore, we investigated the p53 dependence of RhoE-mediated survival in response to UVB irradiation. Although it was expected that the pro-survival role of RhoE would be due, in part, to its role in p53-mediated survival, the UVB induction of RhoE does not appear to be solely p53-dependent, since HaCat cells, which have a mutant, low-functioning p53 (23,36) with an increased half-life (37-40), still induce RhoE after UVB irradiation (Fig. 1A). To further investigate the p53 dependence of RhoE expression in keratinocytes, stable HaCat cells were constructed in which p53 was ablated completely with shRNA targeting p53 expressed in the pBabe-U6 vector. These cells were able to induce RhoE in response to UVB irradiation similarly to control pBabe (empty vector) knockdown cells (Fig. 3A). In fact, RhoE induction appeared to be increased in the p53-knockdown HaCat cells when compared with the vector control HaCat cells (Fig. 3A). While elevated doses of UVB result in PARP cleavage in both NHK and HaCat cells, HaCat cells, expressing mutant p53, are more sensitive to UVB irradiation and become apoptotic at lower doses of UV than NHK cells, as reported previously (18,41,42). Although the p53 knockdown HaCat cells are more susceptible to UVB-induced apoptosis when compared with the pBabe vector control cells, RhoE knockdown in these cells increased the sensitization in an additive manner, as measured by PARP cleavage and trypan blue exclusion assay (Fig. 3, A and B). These data clearly demonstrate that RhoE induction by UVB is not solely dependent on p53 activation in keratinocyte cells.
RhoE is well known to act as a negative regulator of RhoA-mediated ROCK I activation (9-11), and RhoE overexpression results in a transient loss of actin stress fibers (11,43). Recently, it was shown that knocking down RhoE in cells exposed to genotoxic stress resulted in the maintenance of stress fibers and increased apoptosis through enhancement of ROCK I-mediated apoptosis (12). However, depletion of RhoE in HaCat cells did not significantly alter the actin cytoskeleton between 12 (data not shown) and 16 h after UVB irradiation (Fig. 4, top panel). The same observations were made in RhoE siRNA-treated NHK cells (data not shown). Also, little change in ROCK I activation, through caspase-3 cleavage, was observed in the RhoE knockdown cells (data not shown). When RhoE was inhibited in the p53-depleted HaCat cells, changes in the actin cytoskeleton appeared following UVB irradiation (Fig. 4, bottom panel) similar to those observed by Ongusaha et al. (12), including an increase in the number of stress fibers and filopodia (Fig. 4, bottom panel). Therefore, although RhoE may function in cytoskeleton regulation, these functions appear to be related to another p53-dependent mechanism, while the pro-survival UVB stress response of RhoE appears to act through a p53- and ROCK I-independent mechanism.
RhoE Acts as an Upstream Mediator of ROS, p38, JNK, and Cyclin D1-UV irradiation is known to induce oxidative stress via production of ROS (44,45). To investigate the possible mechanism by which RhoE may be involved in the UVB response, we investigated whether RhoE plays any role in responding to ROS after UVB irradiation (Fig. 5A). NHK cells transfected with siRNA against RhoE or a control siRNA were irradiated with UVB, and the intracellular ROS was measured by monitoring the oxidation of the dye 2′,7′-dichlorofluorescein diacetate (DCF-DA) by FACS. A representative data set demonstrates that knocking down RhoE resulted in increased ROS after UVB irradiation when compared with control knockdown cells (Fig. 5A). The increase in ROS was reproducible with an average 1.4 ± 0.07-fold increase over the control cells in triplicate experiments. These data indicate that RhoE is important for cell survival in response to UVB irradiation, in part, by inhibiting ROS generation. We next aimed to gain further insight into the pro-survival function of RhoE by examining the expression patterns of several survival and stress response proteins in NHK cells. Depletion of RhoE resulted in a dramatic reduction in the levels of cyclin D1 and p21 (Fig. 5B). Cell cycle analysis confirmed this effect on cyclin D1 as demonstrated by an increase in G1 arrest of the RhoE knockdown cells (data not shown). Additionally, the pro-survival factor Cox-2 was not induced to the same extent in the RhoE knockdown cells when compared with that of control cells in response to UVB irradiation. There is also an earlier activation of p38 and JNK kinases in the RhoE knockdown cells (Fig. 5B). These results were consistent in HaCat cells expressing an alternate RhoE shRNA in the pBabe-U6 vector (data not shown). These data suggest an overall upstream effect of RhoE on several UV response pathways.
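As a rough illustration (not the paper's own analysis script), the fold-change arithmetic behind a reported "average ~1.4-fold increase over control in triplicate experiments" can be sketched as below; the fluorescence values are invented placeholders:

```python
from statistics import mean, stdev

# Hypothetical mean DCF-DA fluorescence (arbitrary units) after UVB,
# one value per independent experiment (n = 3).
dcf_control = [1000.0, 1200.0, 900.0]   # control-siRNA cells
dcf_rhoe_kd = [1330.0, 1680.0, 1323.0]  # RhoE-siRNA cells

# Fold increase in ROS of knockdown over its matched control, per experiment.
folds = [kd / ctrl for kd, ctrl in zip(dcf_rhoe_kd, dcf_control)]
fold_mean = mean(folds)  # average fold increase across the triplicate
fold_sd = stdev(folds)   # spread across the triplicate (shown here as SD)
```

With these placeholder numbers the sketch reproduces a 1.4 ± 0.07-fold increase; the real experiment would substitute the measured FACS values.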
RhoE Enhances Survival of UVB-irradiated Keratinocytes in Vivo-To determine whether RhoE expression can protect against UVB-induced cell death, transgenic mice expressing RhoE on the keratin 14 promoter (K14-RhoE) were generated as described under "Experimental Procedures." RhoE transgenic mice were healthy and were macroscopically indistinguishable from their wild-type littermates. UVB irradiation of control mice results in increased RhoE expression as seen in the in vitro studies on NHK and HaCat cells (Fig. 6A, upper panel). The transgenic K14-RhoE mice were clearly observed to express the HA-RhoE in the skin by α-HA Western blot and immunohistochemistry (Fig. 6, A, lower panel, and C). Although there is some background HA signal detected in the control littermates (Fig. 6A, lower panel), this is attributed to nonspecific detection since there is no positive staining in the skin (Fig. 6C). Overexpression of RhoE appeared to cause no defects in the growth and development of the mice as demonstrated by normal epidermal histology (Fig. 6B). K14-RhoE mice and control littermates were treated with 160 mJ/cm² of UVB and harvested 24 h after exposure. No quantifiable difference in the thickening of the epidermis or stratum corneum was observed after UVB irradiation (Fig. 6B). Cell death was identified and quantified by TUNEL analysis and imaged by confocal microscopy. The K14-RhoE mice demonstrated significantly fewer TUNEL-positive cells than their control littermates (Fig. 6, D and E), and the difference was determined to be statistically significant by a two-tailed t test (Fig. 6E). Therefore, we found that RhoE promotes survival by protecting skin cells from UV-induced cell death in vitro and in vivo. All of these findings imply that RhoE plays an important role in maintaining healthy skin and protecting against UV damage stress.
DISCUSSION
Apoptosis of UV-damaged keratinocytes is a vital step in preventing skin cancer (14). UV is known to induce damage through three concomitant mechanisms: direct DNA damage, death receptor activation, and ROS formation. However, the details of this complex mechanism have yet to be elucidated, due in part to the wavelength dependence of UV-induced damage (44,46). In this study, we demonstrate that RhoE is an important pro-survival factor for the normal response of both NHK and HaCat cells to UVB irradiation and appears to act independently of both ROCK I and p53.
UVB irradiation results in RhoE up-regulation in both NHK and HaCat cells and in mouse epidermis in vivo. When RhoE induction is inhibited by siRNA, we find a dramatic increase in apoptosis when compared with cells treated with control siRNA.
Although RhoE was recently shown to be a p53 target gene (12), RhoE induction in response to UV irradiation in keratinocytes is not solely dependent on p53 (Fig. 3). Other transcription factors, including the p53 homologues p63 and p73, may aid in RhoE induction in keratinocytes. However, the increased susceptibility of the p53 shRNA HaCat cells to UVB-induced apoptosis implies that the p53 present in HaCat cells does have some residual function. Thus, p53 may still act on the RhoE promoter in keratinocytes as we observed higher levels of RhoE expression in shRNA-mediated p53 knockdown HaCat cells; however, other transcription factors or stabilization mechanisms are clearly more important for the function of RhoE in response to UV irradiation. The p53 pro-survival target p21 has also been observed to be up-regulated in a p53-independent manner in response to UV irradiation (28,29), as seen in this study (Fig. 1). Given that UV irradiation induces signature mutations in the p53 gene found in carcinomas (47,48), p53-independent up-regulation of pro-survival factors may be a general survival mechanism in keratinocyte cells. Therefore, by targeting these survival factors, we may be able to prevent growth and propagation of UV-damaged cells.
Although the mechanism of the pro-survival function of RhoE remains unclear, we propose that RhoE acts upstream of JNK, p38, and cyclin D1. JNK (49,50) and p38 (51,52) are known to be activated in response to UV irradiation (51,53,54) and play a role in apoptosis. Therefore, inhibition of RhoE induction may affect the activation of p38 and JNK after UV irradiation and cause an increase in apoptosis. Keratinocytes undergoing apoptosis, as opposed to arrest and repair, display a significant decrease in p21 protein levels (32). Keratinocytes treated with high doses of UVB and in the siRNA-mediated RhoE knockdown cells also show a reduction of p21 coincident with apoptosis (Fig. 1). In addition, although Cox-2 is known to be downstream of p38 and JNK (55,56), it is also a pro-survival response gene in UV-irradiated keratinocytes (57), and the decrease in Cox-2 levels in RhoE knockdown cells may correlate with the loss of p21 expression and cell commitment to apoptosis. Although RhoE appears to act to inhibit p38 and JNK activation, it may also affect other upstream signaling pathways that culminate in altered Cox-2 expression and inhibition of apoptosis.
After UVB irradiation, cyclin D1 levels are known to decrease (58), as was observed in this study. However, the RhoE knockdown cells displayed a significant loss of cyclin D1 even in the absence of UV irradiation. Antisense cyclin D1 led to inhibition of squamous cell carcinoma cell growth (59) and the growth of other cancers (60,61). Moreover, fibroblasts derived from cyclin D1−/− mice display increased apoptosis in response to UV irradiation (62), and cyclin D1 is overexpressed in many tumor types (63,64). Therefore, the reduction in cyclin D1 levels in response to RhoE siRNA may play a role in the observed increase in apoptosis.
Cdc42 is a well studied member of the Rho-GTPase family and has been linked to the activation of p38 and JNK (54,65,66) as well as up-regulation of cyclin D1 (7,67–69). Also, constitutively active RhoA induces cyclin D1 (70), and Rac is known to activate JNK and p38 (71,72). Inhibition of Rho or ROCK blocked mitogen-activated protein kinase (MAPK) activity and led to cyclin D1 induction in response to mitogenic stimuli, acting downstream of Rac (7). Interestingly, RhoA has actually been proposed to have a dual role in regulating cyclin D1, blocking early G1 expression of cyclin D1 and promoting sustained extracellular signal-regulated kinase (ERK) activation and mid-G1 cyclin D1 expression (73). Although RhoE is currently only known to interact with ROCK I (10) and the regulatory protein RhoGAP5 (74), it may act on other GTPases, including Cdc42. It is also possible that RhoE may act in a similar fashion to RhoA and promote cyclin D1 expression, as is observed in this study, or block cyclin D1 expression as was observed by Villalonga et al. (11). This correlates with the postulation by Villalonga et al. that RhoE acts on the cell cycle through a RhoA- and ROCK I-independent mechanism (11). Similar to other Rho GTPases, RhoE most likely acts downstream of Ras to elicit a range of effects on kinase signaling and cyclin D1 levels.
FIGURE 6. Transgenic mice overexpressing RhoE demonstrate increased resistance to UVB-induced apoptosis. Transgenic K14-RhoE mice, age 6–12 weeks, and control littermates were shaved, and 72 h later, irradiated with UVB. Mice were sacrificed 24 h after irradiation, unless otherwise indicated, and the dorsal skin was taken for analysis. A, Western blot analysis of homogenized skin samples as follows: RhoE, HA tag, and β-actin (control). Control mice were treated with UVB and analyzed for RhoE expression after 24 and 48 h (upper panel). K14-RhoE mice and control littermates were irradiated with 160 mJ/cm² UVB and analyzed for HA-RhoE expression (lower panel). B, hematoxylin and eosin staining was used to observe the histology of RhoE mice and control littermates with or without UVB irradiation. WT, wild type. C, immunohistochemistry analysis of HA-RhoE expression in K14-RhoE mice and control littermates with or without UVB irradiation. HA expression is stained in red, and nuclear staining is blue. D and E, TUNEL staining was used to detect apoptotic cells in frozen sections from K14-RhoE mice and control littermates irradiated with 160 mJ/cm² UVB. TUNEL-positive cells are red, and nuclei are stained blue. TUNEL-positive cells were counted from at least five random fields of sections from two K14-RhoE mice and control littermates. The error bars represent the mean ± S.D. from three different TUNEL staining experiments. The difference between the control and K14-RhoE mice is statistically significant with a p value < 0.001 as determined by a Student's t test.
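The TUNEL quantification compared in Fig. 6E (counts of positive cells from random fields, two-tailed Student's t test) can be sketched as follows; the per-field counts are invented for illustration and scipy is assumed to be available:

```python
from scipy import stats

# Hypothetical TUNEL-positive cell counts per random microscope field
# (not the paper's data), after 160 mJ/cm^2 UVB.
wt_counts = [38, 42, 35, 40, 44]        # control littermates
k14_rhoe_counts = [12, 15, 10, 14, 11]  # K14-RhoE transgenics

# Two-tailed, two-sample Student's t test (equal variances assumed).
t_stat, p_value = stats.ttest_ind(wt_counts, k14_rhoe_counts)
significant = p_value < 0.001  # threshold reported in the figure legend
```

A positive t statistic here corresponds to more apoptotic cells in the wild-type fields than in the transgenics.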
Although the exact mechanism of the RhoE-mediated pro-survival effect is under investigation, this study clearly shows the importance of RhoE in the survival of UV-irradiated keratinocytes. The pro-survival function of RhoE may be important for overall skin homeostasis. Targeting RhoE in precancerous skin cells could provide an important therapeutic mechanism for the removal of damaged skin cells.
Biomarkers for Comorbidities Modulate the Activity of T-Cells in COPD
In smoking-induced chronic obstructive pulmonary disease (COPD), various comorbidities are linked to systemic inflammation and infection-induced exacerbations. The underlying mechanisms are unclear but might provide therapeutic targets. T-cell activity is central in systemic inflammation and for infection-defense mechanisms and might be influenced by comorbidities. Hypothesis: Circulating biomarkers of comorbidities modulate the activity of T-cells of the T-helper type 1 (Th1) and/or T-cytotoxic type 1 (Tc1). T-cells in peripheral blood mononuclear cells (PBMCs) from non-smokers (NS), current smokers without COPD (S), and COPD subjects (total n = 34) were ex vivo activated towards Th1/Tc1 and were then stimulated with biomarkers for metabolic and/or cardiovascular comorbidities (Brain Natriuretic Peptide, BNP; chemokine (C-C motif) ligand 18, CCL18; C-X3-C motif chemokine ligand 1, CX3CL1; interleukin-18, IL-18) or for asthma- and/or cancer-related comorbidities (CCL22; epidermal growth factor, EGF; IL-17; periostin), each at 10 or 50 ng/mL. The Th1/Tc1 activation markers interferon-γ (IFNγ), tumor necrosis factor-α (TNFα), and granulocyte-macrophage colony-stimulating factor (GM-CSF) were analyzed in culture supernatants by Enzyme-Linked Immunosorbent Assay (ELISA). Ex vivo activation induced IFNγ and TNFα without differences between the groups, but induced GM-CSF more in S vs. NS. At 10 ng/mL, the different biomarkers increased or reduced the T-cell activation markers without a clear trend for one direction across the categories of comorbidities or the different T-cell activation markers. At 50 ng/mL, there was a clear shift towards suppressive effects, particularly for the asthma- and cancer-related biomarkers and in cells of S and COPD. Comorbidities might suppress T-cell immunity in COPD. This could explain the association of comorbidities with frequent exacerbations.
Introduction
Chronic obstructive pulmonary disease (COPD) is mainly induced by tobacco smoking. It is a systemic inflammatory disease associated with various comorbidities that have a negative impact on prognosis and progression. The development and progression of comorbidities might be triggered by systemic inflammation [1]. Comorbidities are associated with the frequency of exacerbations, another major trigger of progression [2]. COPD subjects have an increased susceptibility to bacterial and viral infections, both of which are major causes of exacerbations [3]. Brain Natriuretic Peptide (BNP) is a strong biomarker for heart failure, for which coronary artery disease (CAD) is a major risk factor [22]. Its plasma levels are increased in patients with acute heart failure [23]. Serum CX3CL1, IL-18, and CCL18 are increased in stable COPD and are associated with disease progression and severity [24–26]. An association between BNP and stable COPD has, to our knowledge, not yet been observed.
Further common comorbidities in COPD are asthma, lung cancer, and non-pulmonary cancer types like bladder, breast, colorectal, ovarian, and prostate cancer, for example [27,28]. Asthma and lung cancer might also be associated with frequent exacerbations in COPD [2]. An increased serum periostin level is a strong marker for type 2 asthma [29] and might also have prognostic significance for non-small cell lung cancer (NSCLC) and various non-pulmonary cancer types including those mentioned [30–32]. Serum IL-17 is increased in obesity-associated asthma and might be indicative of severe phenotypes [33,34]. Serum CCL22 is increased in breast cancer and is indicative of progression and severity [35]. Serum epidermal growth factor (EGF) is suitable to distinguish NSCLC from healthy benign lung pathologies [36]. Increased serum EGF concentrations are also discussed as putative biomarkers for various non-pulmonary cancer types including gastrointestinal cancers [37]. Serum periostin and EGF but not IL-17 and CCL22 are increased in stable COPD [38–41].
We hypothesized that these biomarkers of comorbidities suppress the activation process of T-cells towards Th1/Tc1, thereby contributing to a deficit in immune responses to infections and to exacerbations in COPD. We used the human primary peripheral blood mononuclear cell (PBMC) culture model with ex-vivo activation of T-cells to address this question (1) because the circulating biomarkers likely have contact with the circulating immune cells at the beginning of recruitment and activation in vivo, and (2) because the influence of the biomarkers on T-cell activation might depend on accessory cells, for example, if T-cells do not functionally express the corresponding receptors. With the PBMC model, the influence of accessory cells is considered. Analysis parameters were markers for Th1/Tc1 activation, interferon-γ (IFNγ, a key marker), tumor necrosis factor-α (TNFα), and granulocyte-macrophage colony-stimulating factor (GM-CSF). We compared the data between non-smokers, current smokers without airway disease, and COPD subjects in order to analyze for specific effects of smoking or COPD in this context and to gain first evidence for possible effects in the biomarker-associated diseases in the absence of COPD.
Cytokine Release in Response to T-Cell Activation
The activation of T-cells in PBMCs with anti-CD3 and anti-CD28 antibodies and IL-12 induced the release of the Th1 and Tc1 activation marker IFNγ ( Figure 1A). Additionally, TNFα and GM-CSF, both of which are associated with Th1/Tc1 responses, were induced ( Figure 1B,C). The induction of GM-CSF was higher in cells of current smokers without respiratory symptoms (S) compared to non-smokers (NS) ( Figure 1C). IFNγ and TNFα were not different between the groups ( Figure 1A,B). We did not find correlations of IFNγ, TNFα, or GM-CSF concentrations after T-cell activation to age, pack-years, lung function parameters (forced expiratory volume in one second, FEV1 (% pred.), FEV1/forced vital capacity, FEV1/FVC (%)) or to differential blood count parameters (monocytes (% whole blood count, WBC), lymphocytes (% WBC), neutrophils (% WBC), eosinophils (% WBC)), neither by analyzing all subjects together nor by analyzing the groups S or COPD separately (data not shown).
We next tested for cytotoxic effects of CX3CL1, IL-18, CCL18, BNP, periostin, CCL22, IL-17, and EGF in this model. We did not find effects on the numbers of trypan blue positive cells for concentrations up to 50 ng/mL for each recombinant protein (data not shown). Therefore, we used 10 and 50 ng/mL in the following approaches. blue positive cells for concentrations up to 50 ng/mL for each recombinant protein (data not shown). Therefore, we used 10 and 50 ng/mL in the following approaches. Figure 1. T-cell activation towards Th1/Tc1 induced the release of IFNγ, TNFα, and GM-CSF in peripheral blood mononuclear cells (PBMCs) of non-smokers (NS), current smokers without respiratory symptoms (S) and COPD subjects. Peripheral blood mononuclear cells (PBMCs; 10 6 cells/mL) of NS (n = 10), S (n = 11), and COPD (n = 13) were stimulated with anti-CD3 and anti-CD28 antibodies (each at 500 ng/mL) and with IL-12 (10 ng/mL). After 72h, INFγ (A), TNFα (B) and GM-CSF (C) concentrations were measured in the cell culture supernatants by enzyme-linked immunosorbent assay (ELISA). The cytokine levels of the controls without T-cell activating reagents are artificial because they were below the detection limit of the ELISA. Data are presented as mean ± SEM. Differences between activated cells and non-activated controls within a group were analyzed with paired t-tests, differences between the groups were analyzed with one-way analysis of variance (ANOVA, p < 0.0001 in C) and post hoc Bonferroni test. *, p < 0.05; **, p < 0.01; ***, p < 0.001.
CX3CL1 Increased IFNγ, TNFα, and GM-CSF Release of T-Cells
When all subjects were analyzed together independent from disease status, CX3CL1 concentration-dependently further increased IFNγ, TNFα, and GM-CSF in PBMCs pre-treated with anti-CD3 and anti-CD28 antibodies and with IL-12 (Figure 2A-C). After grouping according to COPD and smoking status, this effect was without differences between NS, S, and COPD for IFNγ (Figure 2A). For TNFα, this effect was not observed in any subgroup (Figure 2B). For GM-CSF, this effect was observed in NS and S without differences but not in the COPD subgroup (Figure 2C). We did not find any correlation of the increase of IFNγ, TNFα or GM-CSF to the demographic, lung function or blood count parameters (data not shown). In culture supernatants of PBMCs that were not pre-treated with T-cell activating reagents but were stimulated with CX3CL1, the concentrations of IFNγ, TNFα or GM-CSF were almost always below the detection limit of the ELISA at the conditions used (data not shown).
Figure 2. CX3CL1 modulated IFNγ, TNFα and GM-CSF release from PBMCs with activated T-cells. PBMCs from nonsmokers (NS; n = 10), current smokers without respiratory symptoms (S; n = 11) and chronic obstructive pulmonary disease subjects (COPD; n = 13) were stimulated with anti-CD3 and anti-CD28 antibodies (each at 500 ng/mL) and with IL-12 (10 ng/mL). After 30 min, recombinant CX3CL1 was added at 10 or 50 ng/mL. After 72 h, IFNγ (A), TNFα (B) and GM-CSF (C) concentrations were measured in the cell culture supernatants by ELISA. Data were calculated as % change versus PBMCs that were stimulated with anti-CD3/anti-CD28 antibodies and IL-12. Data are presented as scatter with median. The effects of CX3CL1 on the cytokines were analyzed by Wilcoxon-signed rank test vs. a hypothetical value of 0 (= no change). *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001.
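The "% change versus activated control" readout and the Wilcoxon signed-rank test against a hypothetical value of 0 (no change) can be sketched as below; the donor values are invented and scipy is assumed. scipy's `wilcoxon` tests whether the supplied values are symmetric about zero, which matches testing % change against 0:

```python
from scipy import stats

# Hypothetical IFNγ (pg/mL) per donor: activated control vs. activated + CX3CL1.
ifng_activated = [800.0, 950.0, 700.0, 1100.0, 860.0, 920.0, 780.0, 1010.0]
ifng_cx3cl1 = [900.0, 1040.0, 810.0, 1180.0, 830.0, 1030.0, 880.0, 1100.0]

# % change of each donor versus its own activated control.
pct_change = [
    (treated - ctrl) / ctrl * 100.0
    for treated, ctrl in zip(ifng_cx3cl1, ifng_activated)
]

# Wilcoxon signed-rank test vs. a hypothetical value of 0 (= no change).
w_stat, p_value = stats.wilcoxon(pct_change)
```

Because each donor serves as its own control, the paired, non-parametric test is appropriate for these skewed percentage data.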
IL-18 Increased IFNγ but Reduced TNFα and GM-CSF Release of T-Cells
IL-18 concentration-dependently further increased IFNγ (Figure 3A) but decreased TNFα and GM-CSF in PBMCs pre-treated with anti-CD3 and anti-CD28 antibodies and with IL-12 (Figure 3B,C). After subgrouping, the effects of IL-18 on IFNγ and GM-CSF were without difference between NS, S, and COPD (Figure 3A,C). The reducing effect of IL-18 on TNFα was only observed in S (Figure 3B). We did not find any correlation to the demographic, lung function or blood count parameters (data not shown). In culture supernatants of PBMCs that were not pre-treated with T-cell activating reagents but were stimulated with IL-18, the concentrations of IFNγ, TNFα or GM-CSF were almost always below the detection limit of the ELISA at the conditions used (data not shown).
CCL18 Concentration-Dependently Increased or Decreased the Cytokine Release of T-Cells
At 10 ng/mL, CCL18 further increased IFNγ and GM-CSF but not TNFα in PBMCs pre-treated with anti-CD3 and anti-CD28 antibodies and with IL-12 (Figure 4A-C). After grouping, the effect on IFNγ was not observed in any group (Figure 4A). The effect on GM-CSF was observed in NS but not in S or COPD (Figure 4C). We did not find any correlation to the demographic, lung function or blood count parameters (data not shown). At 50 ng/mL, CCL18 reduced all three cytokines (Figure 4A-C). After grouping, the effects on IFNγ were observed in S and COPD but not in NS (Figure 4A). The effects on TNFα and GM-CSF were observed in all groups without differences (Figure 4B,C). In the COPD group, the reducing effect of CCL18 at 50 ng/mL on TNFα correlated positively to age (r² = 0.333; p = 0.038) and negatively to FEV1/FVC (%) (r² = 0.328; p = 0.04). The suppressing effect on GM-CSF correlated negatively to the monocyte content in the blood (% WBC) of COPD subjects (r² = 0.352; p = 0.033). We did not find any correlations for IFNγ (data not shown). In culture supernatants of PBMCs that were not pre-treated with T-cell activating reagents but were stimulated with CCL18, the concentrations of IFNγ, TNFα or GM-CSF were almost always below the detection limit of the ELISA in the conditions used (data not shown).
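The correlation statistics quoted above (e.g. r² = 0.333, p = 0.038 for age vs. the TNFα reduction in COPD) are consistent with a simple Pearson analysis, sketched here on fabricated data for 13 hypothetical COPD donors with scipy assumed:

```python
from scipy import stats

# Hypothetical COPD donors (n = 13): age (years) vs. CCL18-induced
# reduction of TNFα (% of the activated control).
age = [45, 50, 55, 60, 65, 70, 48, 58, 63, 52, 67, 61, 56]
tnfa_reduction = [10, 14, 19, 25, 28, 33, 12, 22, 27, 16, 31, 24, 20]

# Pearson correlation; squaring r gives the coefficient of determination
# (the r² values reported in the text).
r, p_value = stats.pearsonr(age, tnfa_reduction)
r_squared = r ** 2
```

These fabricated values are nearly collinear, so the sketch yields a much higher r² than the moderate correlations reported in the study; only the procedure, not the numbers, is meant to match.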
Figure 3. IL-18 modulated IFNγ, TNFα and GM-CSF release from PBMCs with activated T-cells. PBMCs from nonsmokers (NS; n = 10), current smokers without respiratory symptoms (S; n = 11) and chronic obstructive pulmonary disease subjects (COPD; n = 13) were stimulated with anti-CD3 and anti-CD28 antibodies (each at 500 ng/mL) and with IL-12 (10 ng/mL). After 30 min, recombinant IL-18 was added at 10 or 50 ng/mL. After 72 h, IFNγ (A), TNFα (B) and GM-CSF (C) concentrations were measured in the cell culture supernatants by ELISA. Data were calculated as % change versus PBMCs that were stimulated with anti-CD3/anti-CD28 antibodies and IL-12. Data are presented as scatter with median. The effects of IL-18 on the cytokines were analyzed by one-sample test vs. a hypothetical value of 0 (= no change). *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001.
Figure 4. CCL18 modulated IFNγ, TNFα and GM-CSF release from PBMCs with activated T-cells. PBMCs from nonsmokers (NS; n = 10), current smokers without respiratory symptoms (S; n = 11) and chronic obstructive pulmonary disease subjects (COPD; n = 13) were stimulated with anti-CD3 and anti-CD28 antibodies (each at 500 ng/mL) and with IL-12 (10 ng/mL). After 30 min, recombinant CCL18 was added at 10 or 50 ng/mL. After 72 h, IFNγ (A), TNFα (B) and GM-CSF (C) concentrations were measured in the cell culture supernatants by ELISA. Data were calculated as % change versus PBMCs that were stimulated with anti-CD3/anti-CD28 antibodies and IL-12. Data are presented as scatter with median. The effects of CCL18 on the cytokines were analyzed by one-sample test vs. a hypothetical value of 0 (= no change). *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001.
BNP Reduced TNFα and GM-CSF Release of T-Cells
When all study subjects were analyzed together, BNP concentration-dependently reduced GM-CSF but did not modulate IFNγ or TNFα in PBMCs pre-treated with anti-CD3 and anti-CD28 antibodies and with IL-12 ( Figure 5A-C). After grouping, BNP concentration-dependently reduced TNFα in NS and S but not in COPD ( Figure 5B) and GM-CSF in S and COPD but not in NS ( Figure 5C). BNP did not modulate IFNγ in any group ( Figure 5A). The suppressive effect of BNP on GM-CSF correlated positively to neutrophils (% WBC) (r 2 = 0.375, p = 0.026) and negatively to lymphocytes (% WBC) (r 2 = 0.35, p = 0.033). We did not find correlations for IFNγ or TNFα. In culture supernatants of PBMCs that were not pre-treated with T-cell activating reagents but were stimulated with BNP, the concentrations of IFNγ, TNFα or GM-CSF were almost always below the detection limit of the ELISA in the conditions used (data not shown). , current smokers without respiratory symptoms (S; n = 11) and chronic obstructive pulmonary disease subjects (COPD; n = 13) were stimulated with anti-CD3 and anti-CD28 antibodies (each at 500 ng/mL) and with IL-12 (10 ng/mL). After 30 min, recombinant BNP was added at 10 or 50 ng/mL. After 72h, INFγ (A), TNFα (B) and GM-CSF (C) concentrations were measured in the cell culture supernatants by ELISA. Data were calculated as % change versus PBMCs that were stimulated with anti-CD3/anti-CD28 antibodies and IL-12. Data are presented as scatter with median. The effects of BNP on the cytokines were analyzed by Wilcoxon-signed rank test vs. a hypothetical value of 0 (= no change). *, p < 0.05; **, p < 0.01. PBMCs from nonsmokers (NS; n = 10), current smokers without respiratory symptoms (S; n = 11) and chronic obstructive pulmonary disease subjects (COPD; n = 13) were stimulated with anti-CD3 and anti-CD28 antibodies (each at 500 ng/mL) and with IL-12 (10 ng/mL). After 30 min, recombinant CCL18 was added at 10 or 50 ng/mL. 
After 72h, INFγ (A), TNFα (B) and GM-CSF (C) concentrations were measured in the cell culture supernatants by ELISA. Data were calculated as % change versus PBMCs that were stimulated with anti-CD3/anti-CD28 antibodies and IL-12. Data are presented as scatter with median. The effects of CCL18 on the cytokines were analyzed by one-sample test vs. a hypothetical value of 0 (= no change). *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001.
Figure 5. BNP reduced TNFα and GM-CSF release from PBMCs with activated T-cells. PBMCs from nonsmokers (NS; n = 10), current smokers without respiratory symptoms (S; n = 11) and chronic obstructive pulmonary disease subjects (COPD; n = 13) were stimulated with anti-CD3 and anti-CD28 antibodies (each at 500 ng/mL) and with IL-12 (10 ng/mL). After 30 min, recombinant BNP was added at 10 or 50 ng/mL. After 72 h, IFNγ (A), TNFα (B) and GM-CSF (C) concentrations were measured in the cell culture supernatants by ELISA. Data were calculated as % change versus PBMCs that were stimulated with anti-CD3/anti-CD28 antibodies and IL-12. Data are presented as scatter with median. The effects of BNP on the cytokines were analyzed by Wilcoxon signed-rank test vs. a hypothetical value of 0 (= no change). *, p < 0.05; **, p < 0.01.
Periostin Did Not Modulate IFNγ, TNFα or GM-CSF Release of T-Cells from Current Smokers without Respiratory Symptoms and from COPD Subjects
When analyzing all subjects together, periostin did not modulate IFNγ, TNFα or GM-CSF in PBMCs pre-treated with anti-CD3 and anti-CD28 antibodies and IL-12 (Figure 6A-C). After grouping, periostin reduced TNFα exclusively in pre-treated PBMCs of NS in a concentration-dependent manner (Figure 6B). We did not find any correlation of the effects of periostin on the cytokines with demographic, lung function or blood count parameters (data not shown). In culture supernatants of PBMCs that were not pre-treated with T-cell activating reagents but were stimulated with periostin, the concentrations of IFNγ, TNFα or GM-CSF were almost always below the detection limit of the ELISA in the conditions used (data not shown).
IL-17 Suppressed GM-CSF Release of T-Cells
When analyzing all subjects together, IL-17 reduced GM-CSF but did not modulate IFNγ and TNFα in PBMCs pre-treated with anti-CD3 and anti-CD28 antibodies and with IL-12 (Figure 7A-C). After grouping, this effect was exclusively observed in S (Figure 7C). IL-17 concentration-dependently reduced TNFα in S but not in NS and COPD (Figure 7B). We did not find any correlation of the effects of IL-17 with demographic, lung function or blood count parameters (data not shown). In culture supernatants of PBMCs that were not pre-treated with T-cell activating reagents but were stimulated with IL-17, the concentrations of IFNγ, TNFα or GM-CSF were almost always below the detection limit of the ELISA in the conditions used (data not shown).
Figure 6. Periostin did not modulate IFNγ, TNFα or GM-CSF release from PBMCs with activated T-cells. PBMCs from nonsmokers (NS; n = 10), current smokers without respiratory symptoms (S; n = 11) and chronic obstructive pulmonary disease subjects (COPD; n = 13) were stimulated with anti-CD3 and anti-CD28 antibodies (each at 500 ng/mL) and with IL-12 (10 ng/mL). After 30 min, recombinant periostin was added at 10 or 50 ng/mL. After 72 h, IFNγ (A), TNFα (B) and GM-CSF (C) concentrations were measured in the cell culture supernatants by ELISA. Data were calculated as % change versus PBMCs that were stimulated with anti-CD3/anti-CD28 antibodies and IL-12. Data are presented as scatter with median. The effects of periostin on the cytokines were analyzed by Wilcoxon signed-rank test vs. a hypothetical value of 0 (= no change). **, p < 0.01.
CCL22 Suppressed IFNγ, TNFα and GM-CSF Release of T-Cells
CCL22 concentration-dependently reduced IFNγ, TNFα and GM-CSF in PBMCs pre-treated with anti-CD3 and anti-CD28 antibodies and with IL-12 (Figure 8A-C). After grouping, CCL22 reduced IFNγ only in COPD (Figure 8A) but TNFα and GM-CSF in all three groups (Figure 8B,C). The effect of CCL22 at 10 ng/mL on TNFα was stronger in NS than in COPD (Figure 8B). When analyzing all subjects together, the suppressive effect of CCL22 at 50 ng/mL on IFNγ correlated negatively and that at 10 ng/mL on TNFα correlated positively with FEV1 (% pred.). We did not find correlations for GM-CSF (data not shown). In culture supernatants of PBMCs that were not pre-treated with T-cell activating reagents but were stimulated with CCL22, the concentrations of IFNγ, TNFα or GM-CSF were almost always below the detection limit of the ELISA in the conditions used (data not shown).
Figure 8. CCL22 reduced IFNγ, TNFα and GM-CSF release from PBMCs with activated T-cells. PBMCs from non-smokers (NS; n = 10), current smokers without respiratory symptoms (S; n = 11) and chronic obstructive pulmonary disease subjects (COPD; n = 13) were stimulated with anti-CD3 and anti-CD28 antibodies (each at 500 ng/mL) and with IL-12 (10 ng/mL). After 30 min, recombinant CCL22 was added at 10 or 50 ng/mL. After 72 h, IFNγ (A), TNFα (B) and GM-CSF (C) concentrations were measured in the cell culture supernatants by ELISA. Data were calculated as % change versus PBMCs that were stimulated with anti-CD3/anti-CD28 antibodies and IL-12. Data are presented as scatter with median. The effects of CCL22 on the cytokines were analyzed by one-sample test vs. a hypothetical value of 0 (= no change). Comparisons between the groups were made with one-way ANOVA and post hoc Bonferroni test. *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001.
EGF Modulated IFNγ and GM-CSF Release of T-Cells
At 10 ng/mL, EGF increased IFNγ and GM-CSF but not TNFα in PBMCs pre-treated with anti-CD3 and anti-CD28 antibodies and with IL-12 (Figure 9A-C). After grouping, the EGF effect on IFNγ was exclusively observed in S and COPD (Figure 9A). The effect on GM-CSF was observed in NS and S but not in COPD (Figure 9C). We did not find correlations with the demographic, lung function or blood count parameters. At 50 ng/mL, EGF reduced GM-CSF but did not modulate IFNγ and TNFα. After subgrouping, the effect was exclusively observed in COPD. When all subjects were analyzed together, the suppressive effect on GM-CSF correlated negatively with FEV1 (% pred.) (r² = 0.169; p = 0.016) and with FEV1/FVC (%) (r² = 0.133; p = 0.029). The negative correlation with FEV1 (% pred.) was also observed in the COPD group (r² = 0.331; p = 0.040). In culture supernatants of PBMCs that were not pre-treated with T-cell activating reagents but were stimulated with EGF, the concentrations of IFNγ, TNFα or GM-CSF were almost always below the detection limit of the ELISA in the conditions used (data not shown).
Figure 9. EGF modulates the release of IFNγ and GM-CSF from PBMCs with activated T-cells. PBMCs from nonsmokers (NS; n = 10), current smokers without respiratory symptoms (S; n = 11) and chronic obstructive pulmonary disease subjects (COPD; n = 13) were stimulated with anti-CD3 and anti-CD28 antibodies (each at 500 ng/mL) and with IL-12 (10 ng/mL). After 30 min, recombinant EGF was added at 10 or 50 ng/mL. After 72 h, IFNγ (A), TNFα (B) and GM-CSF (C) concentrations were measured in the cell culture supernatants by ELISA. Data were calculated as % change versus PBMCs that were stimulated with anti-CD3/anti-CD28 antibodies and IL-12. Data are presented as scatter with median. The effects of EGF on the cytokines were analyzed by one-sample test vs. a hypothetical value of 0 (= no change). *, p < 0.05; **, p < 0.01; ***, p < 0.001.
Discussion
IFNγ and TNFα release of T-cells after activation towards Th1/Tc1 in PBMCs was not influenced by the smoking status of the subjects or by COPD and was also not associated with COPD-related lung function parameters. For IFNγ, this matches previous data with isolated CD4+ T-cells [9]. GM-CSF release from the T-cells of current smokers, however, was increased compared to non-smokers. Smoking induces GM-CSF and granulocyte levels in the lung, which contributes to smoking-induced lung inflammation [42-44]. Epithelial cells and/or macrophages have been discussed as the primary source of these increased GM-CSF levels, but our data indicate that recruited T-cells might contribute to this pathology. We did not observe increased GM-CSF production in T-cells of COPD subjects. This could be explained by the fact that about 60% of the COPD subjects in our cohort were ex-smokers. Moreover, our data did not provide any indication of an exhaustion of T-cells regarding the responsiveness to T-cell receptor and co-receptor stimulation specifically in COPD.
To address the question of mechanistic links between comorbidities, systemic inflammation and infection-induced exacerbations, we tested the effects of the respective circulating biomarkers on T-cell activity and T-cell activation towards Th1/Tc1 in the PBMC culture model. The model accounts for the presence of accessory cells that influence the T-cell activation process and for the possible influence of comorbidity biomarkers during recruitment from the circulation to the draining lymph nodes in response to acute infections in vivo.
An acute infection within the two months before sampling was an exclusion criterion. Thus, the presence of T-cells that had already been activated in vivo in response to acute infections was largely excluded in the PBMC cultures. When using cells of COPD subjects, our experimental model therefore reflects the activation of T-cells in stable COPD in response to an acute infection (which can cause an exacerbation) in the presence of comorbidities. The acute infection was mimicked by adding the T-cell activating reagents to the culture, and the comorbidities were represented by adding the respective biomarkers.
We have seen various smoking-, COPD- and concentration-dependent effects of the biomarkers on the Th1 activation process. The effects were in a range of up to a 30% increase or decrease in T-cell activity in terms of cytokine production and were not associated with cytotoxicity according to trypan blue staining in pre-experiments. Independent of disease and smoking status of the subjects, the effects of a single biomarker concentration may vary in strength and also in direction between IFNγ, TNFα and GM-CSF (Figure 10). For example, IL-18 at 50 ng/mL upregulated IFNγ but reduced GM-CSF in NS. This further confirms that the reductions of cytokine levels are not based on general cytotoxic effects of the recombinant proteins. Furthermore, some of the effects observed here match previous studies. The increasing effect of IL-18 on IFNγ is in line with data that have suggested IL-18 to be an enhancer of IFNγ-based Th1 cell activity in cooperation with IL-12 [45]. CCL22 reduced the activity of T-cells polarized towards Th1/Tc1. This matches data that have shown a reduced Th1 effector function of infiltrating CD4+ and CD8+ cells after exposure to CCL22 [46]. In animal studies, the inhibition of the NPRA signaling pathway, a BNP target, resulted in an increased TNFα expression, which suggests respective suppressive effects of BNP [47]. Our data confirm this conclusion and indicate that T-cells might contribute to the mechanism.
There are two general limitations of this model that have to be considered before discussing the data in the clinical context. First, the cytokines analyzed might have been released from cells other than T-cells, either exclusively or in addition. This issue can be neglected here because, in the control experiments without addition of T-cell activating reagents, the concentrations of all three target cytokines were almost always below the detection limit of the ELISA in the presence or absence of the biomarkers. Second, we cannot answer the question of whether the biomarker effects on T-cells were the result of a direct response or whether they were mediated by accessory cells. This issue can be neglected for the discussion regarding possible links between comorbidities, systemic inflammation and exacerbations, the primary goal of the study. However, it gains more importance when trying to deduce possible therapeutic targets from these data. Therefore, the differential expression of the receptors for these biomarkers should be considered. With the exception of the BNP receptor natriuretic peptide receptor A (NPR-A), which is expressed on accessory monocytes [48], the primary receptors for all other biomarkers used here are expressed on CD4+ and/or CD8+ T-cells and also on accessory cells of the PBMC fraction. The CX3CL1 receptor CX3CR1 is expressed on CD8+ T-cells, Th1 cells, monocytes and natural killer (NK) cells [18,49]. The IL-18 receptor (IL18R) is expressed on T-cells, B-cells, NK cells and dendritic cells (DCs) [50,51]. CCR8, the primary CCL18 receptor, is expressed at low levels on Th1 cells, CD8+ T-cells and NK cells [52]. The EGF receptor (EGFR) and the periostin receptors αV/β3 and αV/β5 integrins are expressed on T-cells and on monocytes [53,54]. CCR4, the CCL22 receptor, is found on Th1 and NK cells and on DCs [55-57]. The heterodimer IL17RA/RC, the IL-17 receptor, is abundantly expressed on monocytes and B-cells but at low levels also on inactive CD3+ T-cells [58].
Therefore, the effects of BNP on T-cell activity are indirect and might be mediated by monocytes, whereas the effects of the other biomarkers on T-cell activity might be direct, indirect or a combination of both. A third limitation is that the study was not powered to subgroup the COPD subjects for comorbidities. According to the study design, adding the biomarkers to the culture represents the respective comorbidity in this experimental model. A possible exposure of the cells to the biomarkers in vivo before sampling might have influenced their response to the reagents in culture. Nonetheless, the broad variation of comorbidities in the limited number of COPD subjects did not allow a corresponding stratification of the data.
At 10 ng/mL, the different biomarkers increased or reduced the T-cell activation markers without a clear trend for one direction in the different categories of comorbidities or for the different T-cell activation markers. However, increasing the biomarker concentrations clearly resulted in an increase of the suppressive effects, particularly for the biomarkers associated with asthma and cancer (Figure 10). Indeed, at 50 ng/mL, an up-regulation was only observed for IFNγ in response to the diabetes and CAD marker IL-18, whereas a reduction of at least one of the three T-cell activity markers was observed in response to all other biomarkers except CX3CL1 and periostin. We interpret a reduced T-cell activity caused by a comorbidity biomarker in our experimental model as a putative mechanistic reason for an increased likelihood of developing or prolonging an exacerbation in COPD with the respective comorbidity. This is because a reduced T-cell-dependent infection defense might result in a delayed clearance of the pathogen. In this context, it is important to observe the respective biomarker effects in the COPD group, but it is irrelevant whether there are differences to healthy subjects or active smokers without respiratory symptoms. Whether these T-cells with reduced activity share characteristics of exhausted T-cells remains to be investigated.
Provided that the severity of a comorbidity is associated with an increase of the respective biomarkers in the circulation, our data indicate that the progression of comorbidities in COPD is associated with a reduced activity of Th1/Tc1 cells in response to acute infections. This would suggest a reduced capacity of the immune response to acute respiratory tract infections and could contribute to the mechanisms underlying the association of comorbidities with frequent exacerbations.
With two exceptions, we did not detect statistically significant differences when comparing the three subject groups, NS, S and COPD, for the biomarker effects on T-cell activity. Nevertheless, we think that there is some evidence for an influence of active smoking and systemic COPD pathology on the biomarker effects. It is noteworthy that, at 50 ng/mL, statistically significant suppressive effects were more often observed in S and COPD than in NS, particularly again for those biomarkers associated with asthma and cancer (Figure 10). In NS, only TNFα was reduced by CCL22. In S, TNFα and GM-CSF were both reduced by CCL22 and by IL-17. In COPD, we did not find effects of IL-17, but all three T-cell activity markers were reduced by CCL22, and GM-CSF was also reduced by EGF. This provides a first indication that systemic consequences of active smoking as well as of the systemic COPD pathology might influence the effects of asthma- and cancer-related comorbidity biomarkers on T-cell activity. To a lesser extent, this also applies to the biomarkers of cardiovascular comorbidities, CCL18 and BNP. In summary, we cautiously conclude that smoking and the COPD pathology might enhance the suppressive effects of the biomarkers for comorbidities on T-cell immunity. However, because of the low number of significant differences between the groups, this conclusion requires further investigation. The observation that some of the biomarkers also modulate T-cell activity in cells of healthy never-smokers provides a first indication that the diseases associated with these biomarkers might also influence T-cell immunity in the absence of COPD. However, compared to the COPD group, the overall trend towards suppressive effects was less pronounced.
The biomarker effects could have been influenced by the activation status of the T-cells in the samples before starting the culture, despite our exclusion criteria. There is evidence for an increase in the number of activated circulating CD8+ T-cells and for dysregulated regulatory T-cells in stable COPD even in the absence of an acute infection; however, the data are controversial [59,60]. Our controls did not indicate any significant T-cell activity in the cultured PBMCs in the absence of T-cell activation reagents in any group (Figure 1). We cannot exclude the presence of CD4+ or CD8+ T-cells in the cultures that had been deactivated by regulatory T-cells in vivo before sampling.
This study adds another piece to the understanding of the complex systemic molecular pathology that underlies the increased susceptibility to respiratory infections in COPD. In contrast to the local innate immune cells, which show an overactivation in response to respiratory pathogens in COPD [11,61], the activation process of circulating innate and adaptive immune cells appears to be rather suppressed. We have previously shown that systemic defects in Toll-like receptor signaling prevent the full activation of T-cells and monocytes in response to respiratory bacteria [9,62,63]. Here, we add the information that the suppression of Th1 immunity might be amplified by comorbidities in COPD.
Study Subjects
The study population consisted of three groups: 10 healthy non-smokers (≥20 years of non-smoking; <1 pack-year) (NS); 11 current smokers (≥10 pack-years) without respiratory symptoms (S); and 13 patients with stable COPD, five current and eight former smokers (≥10 pack-years; Global Initiative for Chronic Obstructive Lung Disease (GOLD) stages II-IV) (Table 1). COPD was diagnosed according to the criteria recommended by the National Institutes of Health and according to the GOLD standard. The following comorbidities were recorded in the COPD subjects at sampling: hypothyroidism (n = 2), scoliosis (n = 1), hypertension (n = 2), type II diabetes and hypertension (n = 1), history of cervix cancer (n = 1), gastroesophageal reflux disease (n = 1), peripheral arterial disease (n = 1), multiple (aortic and mitral valve insufficiency, arthrosis, hypertension, gastroesophageal reflux disease, glaucoma, gout, osteoporosis; n = 1), none (n = 3). The sample size calculation was based on preliminary experiments with activated T-cells in PBMCs (n = 3 per group) for the biomarker CX3CL1 and the analysis parameter IFNγ (Th1/Tc1 activation marker). With α < 0.05 and a power (1 − β) of 0.8, the number of patients to be included was nine per group. We increased the number to 10-13 per group in order to compensate for putative errors in recruiting or in sample preparation, handling or analysis. Inclusion and exclusion criteria were defined according to previous studies [9]: age ≥40 years; no use of oral corticosteroids or immunosuppressive treatments; and no report of acute infection (bacterial, viral or parasitic) in the 2 months before sampling. Of the 37 subjects recruited, three were excluded because they did not match the inclusion or exclusion criteria or because of sample limitation resulting in missing endpoints after analyses.
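As an illustration of the reported power calculation (α < 0.05, power 1 − β = 0.8, yielding nine subjects per group), a minimal normal-approximation sketch for a two-group comparison is shown below. The standardized effect size is a hypothetical value chosen only to reproduce n = 9, because the underlying preliminary data are not reported.

```python
from math import ceil

# Standard-normal quantiles for a two-sided alpha of 0.05 and a power of 0.8.
Z_ALPHA = 1.96    # z(1 - 0.05/2)
Z_BETA = 0.8416   # z(0.8)

def n_per_group(effect_size: float) -> int:
    """Normal-approximation sample size per group for a two-sample comparison."""
    return ceil(2 * (Z_ALPHA + Z_BETA) ** 2 / effect_size ** 2)

# A hypothetical standardized effect size of ~1.35 reproduces the reported
# n = 9 per group (an exact t-distribution correction would add 1-2 subjects).
print(n_per_group(1.35))  # 9
```

In practice, such calculations are usually done with dedicated software; this sketch only makes the α/power inputs stated in the text explicit.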
Table 1 footnote: NS, non-smoker; S, current smoker without chronic obstructive pulmonary disease (COPD) (≥10 pack-years); COPD, chronic obstructive pulmonary disease. COPD subjects were Global Initiative for Chronic Obstructive Lung Disease (GOLD) stages II to IV. FEV1, forced expiratory volume in one second; FVC, forced vital capacity; WBC, whole blood count. Data are given as mean ± standard error of the mean (SEM); ** p < 0.01; *** p < 0.001 versus S and NS; ## p < 0.01 vs. NS; §§ p < 0.01 vs. S.
Enzyme-Linked Immunosorbent Assay for IFNγ, TNFα and GM-CSF
IFNγ, TNFα and GM-CSF concentrations in the supernatants of cultured PBMCs were measured by ELISA according to the manufacturer's instructions (R&D Systems, Minneapolis, MN; cat# DY285, DY210, DY215) and as described in [66].
Statistical Analysis
Statistical analyses were performed to investigate whether the biomarkers for comorbidities influence T-cell activation and whether there are differences between the groups. The distribution of the data was assessed by histogram analysis. Cytokine concentration data are presented as mean ± standard error of the mean (SEM). Data showing the influence of the biomarkers on cytokine release were normalized, expressed as % change (versus T-cell activation without biomarker stimulation), and presented as scatter plots with the median. The paired t-test, one-sample t-test, and Wilcoxon signed-rank test were used to test for differences between approaches within a group. One-way analysis of variance (ANOVA) with post hoc Bonferroni test or the Kruskal-Wallis test with post hoc Dunn's test was used to test for differences between the study groups. Correlations of cytokine concentrations in the cell culture supernatants with age, pack-years, lung function parameters (FEV1, % pred.; FEV1/FVC, %) and differential blood count parameters (monocytes, lymphocytes, neutrophils and eosinophils, each as % of the whole blood count (WBC)) were investigated by linear regression analysis. For all tests, a p-value below 0.05 was considered statistically significant. All tests were performed with GraphPad Prism 5.01 (GraphPad Software, San Diego, CA, USA).

Funding: This study was financed by the general funding for research and teaching of the Ruhr-University Bochum. We further acknowledge support by the Open Access Publication Funds of the Ruhr-University Bochum.
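The comparison workflow described above (group comparison, paired within-group test, and regression against clinical covariates) can be sketched in a few lines of Python. This is a hypothetical illustration with simulated data and invented group means, not the study's measurements or its Prism analysis.

```python
# Hypothetical sketch of the statistical workflow; all data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated cytokine release (pg/mL) for the three groups (NS, S, COPD);
# group sizes match the study, group means are invented.
ns = rng.normal(100, 20, 10)
s = rng.normal(95, 20, 11)
copd = rng.normal(70, 20, 13)

# Between-group comparison: parametric and non-parametric variants
# (the choice would follow the histogram-based distribution check).
f_stat, p_anova = stats.f_oneway(ns, s, copd)
h_stat, p_kw = stats.kruskal(ns, s, copd)

# Paired within-group comparison, e.g. with vs without biomarker stimulation
# (an invented 20% suppression is built in here for illustration).
baseline = rng.normal(100, 10, 13)
stimulated = baseline * rng.normal(0.8, 0.05, 13)
t_stat, p_paired = stats.ttest_rel(baseline, stimulated)

# Linear regression of cytokine release against a clinical covariate
# (e.g. FEV1 % predicted); covariate values are simulated.
fev1 = rng.uniform(30, 110, 13)
regression = stats.linregress(fev1, copd)
```

Whether the ANOVA or Kruskal-Wallis p-value is reported would depend on the distribution check; both are shown only to mirror the text.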
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Ethics Committee of the Ruhr-University Bochum (4257-12, 06.07.2012), Bochum, Germany.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The original data, the statistical analyses, and the data that were cited as "not shown" can be obtained from the corresponding author upon request.
Probabilistic, spinally-gated control of bladder pressure and autonomous micturition by Barrington’s nucleus CRH neurons
Micturition requires precise control of bladder and urethral sphincter via parasympathetic, sympathetic and somatic motoneurons. This involves a spino-bulbospinal control circuit incorporating Barrington’s nucleus in the pons (Barr). Ponto-spinal glutamatergic neurons that express corticotrophin-releasing hormone (CRH) form one of the largest Barr cell populations. BarrCRH neurons can generate bladder contractions, but it is unknown whether they act as a simple switch or provide a high-fidelity pre-parasympathetic motor drive and whether their activation can actually trigger voids. Combined opto- and chemo-genetic manipulations along with multisite extracellular recordings in urethane anaesthetised CRHCre mice show that BarrCRH neurons provide a probabilistic drive that generates co-ordinated voids or non-voiding contractions depending on the phase of the micturition cycle. CRH itself provides negative feedback regulation of this process. These findings inform a new inferential model of autonomous micturition and emphasise the importance of the state of the spinal gating circuit in the generation of voiding.
Introduction
The regulated production, storage and elimination of liquid waste as urine (micturition) plays a critical homeostatic role in maintaining the health of organisms. Like breathing, this involves precisely co-ordinated autonomic (parasympathetic and sympathetic) and somatic motor drives and has both voluntary and autonomous (involuntary) control mechanisms. The power of the autonomous drive is illustrated by the challenge faced by anyone 'caught short' away from a socially acceptable location for urination. Disorders of autonomous micturition (resulting in involuntary voiding) are seen in overactive bladder syndrome, enuresis and following frontal lobe lesions (Drake et al., 2010; Banakhar et al., 2012; Nevéus, 2017). Barrington's nucleus, also known as the pontine micturition centre, is a key site for the control of urination (Barrington, 1925). The prevailing concept of the neural control of micturition is that afferent information from the bladder is conveyed via the spinal cord to the brainstem and periaqueductal gray (PAG) in the midbrain where it is integrated with information from higher centres such as hypothalamus and cortex (Shefchyk, 2001; Drake et al., 2010; de Groat and Wickens, 2013). The synaptic drive from these centres is relayed to Barrington's nucleus, which appears to be a key command point for micturition (Valentino et al., 1994; Hou et al., 2016; Verstegen et al., 2019).
When the bladder is full, a threshold is reached and a neural command to void is relayed from Barrington's nucleus to the lumbosacral parasympathetic neurons and urethral sphincter motoneurons. Lesions of Barrington's nucleus (Barrington, 1925) or acute transection of the pons abolishes micturition (De Groat, 1975; Sadananda et al., 2011). In contrast, supra-collicular decerebration or transection of the PAG does not stop micturition in cats, rats or mice (Takasaki et al., 2010; Sadananda et al., 2011; Ito et al., 2018). Similarly, co-ordinated voiding is seen under anaesthesia (Hou et al., 2016; Ito et al., 2017; Keller et al., 2018; Verstegen et al., 2019) when the contextual element of volitional voiding is removed. This constitutes autonomous micturition. Electrical or chemical stimulation of Barrington's nucleus induces bladder contraction (Holstege et al., 1986; Noto et al., 1989; Mallory et al., 1991; Sasaki and Sato, 2013). Functional imaging studies in humans (Nour, 2000) and rats (Tai et al., 2009) found activity in the dorsal pons during voiding. Thus, Barrington's nucleus is pivotal in the voiding reflex, is part of the minimal spino-bulbospinal circuit that generates autonomous voids, and is believed to be the pre-parasympathetic control centre.
One of the largest populations of Barrington's nucleus neurons expresses corticotropin releasing hormone (CRH) in humans (Ruggiero et al., 1999) and rodents (Vincent and Satoh, 1984;Valentino et al., 1995;Valentino et al., 2011;Verstegen et al., 2017) and their axons terminate in the vicinity of the sacral parasympathetic neurons (Valentino et al., 2011;Hou et al., 2016;Verstegen et al., 2017). The role of these CRH-positive neurons in Barrington's nucleus (Barr CRH ) has recently been explored using CRH CRE mice to enable specific opto-and chemo-genetic manipulation of their activity (Hou et al., 2016;Keller et al., 2018). These studies indicated that Barr CRH neurons were glutamatergic, their activation caused bladder contraction (Hou et al., 2016;Keller et al., 2018) and increasing their excitability increased the probability of micturition (Verstegen et al., 2019). A second smaller subgroup of oestrogen receptor type-1 positive neurons in Barrington's nucleus (Barr ESR1 ) have been shown to be important for control of the urethral sphincter in voluntary scent marking with urine (Keller et al., 2018). Further a group of layer-5 pyramidal neurons in the primary motor cortex plays a role in the descending control of voluntary urination via their projections to Barrington's nucleus (Yao et al., 2018). A common feature of many of these functional studies is that they have focussed on volitional voiding behaviours (Hou et al., 2016;Keller et al., 2018), such as scent marking of males in the presence of females which depends on descending inputs to the brainstem to trigger the voiding behaviour. This has led to different hypotheses about the role of the Barr CRH neurons in mediating voids in conscious mice with several studies concluding that they play a supporting rather than a primary role in generating voids (Hou et al., 2016;Keller et al., 2018).
Single-unit recordings of micturition-related neurons in the vicinity of Barrington's nucleus in rats and cats showed multiple different patterns of activity with either increased or decreased firing during bladder contractions (de Groat et al., 1998; Sugaya et al., 2003; Tanaka et al., 2003; Sasaki, 2005b; Sasaki, 2005a). These results were thought to reflect the neural heterogeneity within Barrington's nucleus and/or be due to the complex neural circuits in nearby brainstem sites involved in the regulation of other pelvic visceral functions. More recent microwire recordings of the dorsal pons in rats reported neurons in the vicinity of Barrington's nucleus that have more homogeneous firing patterns, characterised by tonic activity with phasic bursts that were temporally associated with the voiding phase of the micturition cycle (Manohar et al., 2017), but these neurons also showed bursts of activity between voids that were not associated with increases in bladder pressure. A common technical limitation of these pontine neural recordings is the difficulty of identifying specific cell populations during or after recordings. This has been addressed for populations of Barrington's neurons through fibre-photometry of genetically encoded calcium indicators in mice (Hou et al., 2016; Keller et al., 2018; Yao et al., 2018) to show that Barr CRH and Barr ESR1 neuronal activity increases around the time of voiding/scent marking respectively. However, the limited temporal and spatial resolution of the indicator and technique limits the ability to address whether this activity drives or follows micturition behaviour and the associated increase in bladder pressure. Therefore, the exact role of the Barr CRH neurons in micturition and specifically in autonomous micturition remains unclear - although it has recently been suggested they may play a more prominent (but relatively weak) role in promoting voids in anaesthetised mice (Verstegen et al., 2019).
It is presumed that they act as a central control centre generating a pre-parasympathetic drive to the bladder but it is not known whether they are sufficient on their own to generate a co-ordinated void through their actions on spinal circuits.
Here, we study the role of Barr CRH neurons in the autonomous micturition cycle in anaesthetised mice using opto- and chemo-genetic interventions as well as recordings of the firing activity of identified Barr CRH neurons in vivo. This has informed the development of a model indicating that these Barr CRH neurons provide a probabilistic signal to spinal circuits that is gated to trigger either non-voiding bladder contractions, which enable inferences to be made about the degree of bladder fullness, or voiding if a threshold level of pressure has been reached.
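The gated, probabilistic scheme just described can be caricatured in a toy simulation. The sketch below is entirely illustrative: the function name, the response probability, the amplitude curve, and the 50% gating threshold are our assumptions chosen to loosely echo the observations reported later in the paper, not the authors' published model.

```python
# Toy model of a probabilistic, spinally gated response to one BarrCRH burst.
# All parameter values are illustrative assumptions, not measured quantities.
import random

def spinal_gate(fill, rng, p_base=0.5, void_fill=0.5):
    """Simulate the outcome of a single BarrCRH drive event.

    fill: bladder fill as a fraction of voiding capacity (0..1).
    Returns ('failure', 0.0), ('eNVC', amplitude) or ('void', amplitude).
    """
    # Probability of any contraction rises with fill (failures early in cycle)
    p_respond = min(1.0, p_base + 0.5 * fill)
    if rng.random() > p_respond:
        return ("failure", 0.0)
    # Evoked amplitude grows steeply as the cycle progresses
    amplitude = 1.0 + 16.0 * fill ** 2
    if fill >= void_fill:
        return ("void", amplitude)   # gate open: full co-ordinated void
    return ("eNVC", amplitude)       # gate closed: non-voiding contraction

rng = random.Random(1)
# 20 identical stimuli at each of 11 fill levels from empty to full
results = [(f / 10, spinal_gate(f / 10, rng)[0])
           for f in range(11) for _ in range(20)]
```

Run repeatedly, the same stimulus yields failures or eNVCs early in the cycle and voids only beyond the gating threshold, mirroring the phase dependence described in the Results.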
Results
Barr CRH neurons modulate micturition

Barr CRH neurons do not simply act as high-fidelity controllers of bladder pressure

To assess whether Barr CRH neurons act as a tightly-coupled, pre-motor drive to bladder parasympathetic neurons (Fowler et al., 2008; de Groat and Wickens, 2013), bladder pressure was recorded while parametrically opto-activating Barrington's nucleus unilaterally (9.5 ± 0.3 mW, 465 nm). In initial experiments, unilateral opto-activation of Barr CRH (20 ms x 20 Hz for 5 s) evoked non-voiding contractions of the bladder (eNVC, Figure 3A and B, with the bladder filled to half of its threshold capacity, n = 7 mice). These eNVC were similar to the transient bladder contractions triggered by opto-activation of Barr CRH neurons previously noted by Hou et al. (2016). Varying stimulus frequencies and pulse durations produced modestly graded changes in eNVC, with a 20 ms x 20 Hz protocol producing near-maximal responses (3.9 ± 0.8 mmHg, Figure 3C and D). The eNVC had a consistent latency to onset of 1.3 ± 0.1 s, a time to peak of 6.0 ± 0.3 s following stimulus onset, and an average duration of 8.2 ± 0.6 s. With each of the stimulus parameters there were 'failures' where there was no detectable bladder response (Figure 3B and E). The probability of eNVC increased with stimulation frequency (71.4 ± 7.6% at 2.5 Hz and 97.1 ± 2.5% at 20 Hz), with 20 Hz being the most reliable. Single light pulses of longer duration (1-3 s) were also able to reliably generate eNVC. In contrast, illumination in control mice (CRH CRE mice injected with AAV-DIO-hM4Di-mCherry instead of ChR2) had no effect on the bladder pressure (101.4 ± 0.6% compared to the pressure immediately before opto-activation at 20 Hz x 20 ms, n = 3).
Previous anatomical studies with retrograde tracing using pseudorabies virus have suggested that there is a route for communication between afferent neurons innervating the distal colon and Barrington's nucleus, and further that Barrington's nucleus is connected with the distal colon (Rouzade-Dominguez et al., 2003). This has led to the suggestion that Barrington's nucleus may control the lower gastrointestinal tract as well as the lower urinary tract. Indeed, defaecation was suggested to occur on occasion (but not quantitated) following opto-activation of Barr CRH neurons in mice, implying a role in motor control (Hou et al., 2016). To investigate a possible relationship between Barr CRH and activity in the distal colon, a balloon catheter was inserted to monitor pressure in pilot experiments (n = 2 mice). Distal colonic pressure was not synchronised with bladder pressure during the normal micturition cycle (Figure 3-figure supplement 1B). Furthermore, optogenetic activation of Barr CRH neurons did not alter distal colonic pressure (Figure 3-figure supplement 1C and D), despite the generation of bladder eNVC. This preliminary evidence suggests that this Barr CRH population of Barrington's neurons is not involved in motor control of the colon.

Figure 2. Chemogenetic inhibition of Barr CRH neurons prolongs the micturition cycle. (B) Administration of the DREADD ligand CNO (5 mg/kg, i.p.) slowed the frequency of micturition seen with continuous saline infusion to the bladder. (C) The chemogenetic inhibition of Barr CRH:hM4Di neurons caused an increase in the voiding threshold (129.7 ± 8.9%), volume infused before void (161.9 ± 16.9%) and micturition pressure (131.7 ± 6.9%) compared to baseline (RM-ANOVA with Holm-Sidak's post hoc, *p<0.05, **p<0.01), unlike control mice (Barr CRH:ChR2, n = 9 per group) where CNO was without significant effect. In each case this CNO effect peaked around 20 min after administration and reversed slowly. (D) Using an intermittent bladder infusion protocol (to a maximum bladder pressure of 15 mmHg), CNO administration inhibited voiding with (E) a large increase in the latency to void (time after start of infusion), equivalent to urinary retention (n = 5) (RM-ANOVA with Holm-Sidak's post hoc, *p<0.05, **p<0.01). Source data in Figure 2-source data 1.

Figure 3. Phasic optoactivation of Barr CRH evokes bladder contractions. (A) Bladder pressure recordings with unilateral opto-activation. (B) Phasic optoactivation (20 ms x 20 Hz, 5 s) of Barr CRH neurons evoked non-voiding contractions (eNVCs, with the bladder ~half full, static). These eNVCs had a stereotyped shape and a relatively constant latency. In addition, there were 'failures' where no response was evoked by an identical stimulus. (C) Parameters of Barr CRH evoked non-voiding contractions. Threshold was calculated at 20% of the amplitude; duration was measured at the threshold pressure. The latency was taken as the time from the start of stimulation for the pressure to reach threshold. (D) The amplitude of eNVC increased with stimulation frequency (pulse length 20 ms for 5 s) and pulse duration (at 20 Hz for 5 s) (n = 7 mice). Higher frequencies of stimulation (50 Hz x 10 ms) did not substantially increase eNVC amplitude. Single longer light pulses (1-3 s) could also generate graded eNVCs. (E) The probability of generating an eNVC increased with stimulation frequency (pulse length 20 ms for 5 s) and pulse duration (frequency 20 Hz for 5 s). Longer light pulses (1-3 s) also reliably generated eNVCs. (RM-ANOVA with Dunnett's post hoc or Friedman's test, *p<0.05, **p<0.01, ****p<0.0001). Source data in Figure 3-source data 1.
It was postulated that recruitment of a larger population of Barr CRH neurons in synchrony might be more effective in triggering larger or more reliable bladder contractions. This was tested with bilateral expression of ChR2 and a dual-fibre optical cannula allowing independent activation of one, the other, or both Barrington's nuclei (Figure 4A-C). Bilateral activation of Barr CRH produced larger eNVC (7.1 ± 2.0 mmHg at 20 ms and 20 Hz, n = 7 mice) than optoactivation of either side alone (2.9 ± 0.5 and 3.1 ± 0.8 mmHg for the right or left side, respectively), particularly at higher frequencies of stimulation (Figure 4D and E). The effect of bilateral stimulation on eNVC amplitude was additive rather than synergistic. The probability of generating eNVC was increased by bilateral stimulation, evident at lower stimulus frequencies (increased by 154 ± 18% for bilateral vs right alone and by 158 ± 17% for bilateral vs left alone at 2.5 Hz, Figure 4E). It was notable that, with the bladder filled to half of its threshold capacity, bilateral Barr CRH stimulation never triggered voids.
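As a back-of-the-envelope check on the additivity claim, one can compare the bilateral amplitude with the sum of the unilateral amplitudes, and the bilateral response probability with that expected for two independent unilateral drives. The amplitudes and the 71.4% probability are taken from the text; the independence calculation itself is our illustration, not the authors' analysis.

```python
# Amplitudes (mmHg) at 20 ms x 20 Hz, as reported in the text
right, left, bilateral = 2.9, 3.1, 7.1
additive_prediction = right + left        # simple sum of unilateral responses
excess = bilateral - additive_prediction  # small relative to the +/- 2.0 SEM,
                                          # consistent with additivity

# Response probability: if each side alone evokes an eNVC with probability p,
# two *independent* drives would respond with probability 1 - (1 - p)^2.
p_unilateral = 0.714                      # ~71.4% at 2.5 Hz (from the text)
p_independent = 1 - (1 - p_unilateral) ** 2
```

The bilateral mean sits close to the additive prediction, and the independent-drives probability (~0.92) gives a rough upper bound against which the reported bilateral reliability gain can be judged.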
These findings support the proposal that Barr CRH neurons can selectively generate bladder contractions and indicate that this is a probabilistic process, with failures, rather than being a simple high-fidelity pre-motor drive to the bladder.
Bladder pressure responses to Barr CRH drive augment with progress through the micturition cycle

This raised the question of whether the stage of the micturition cycle influences the bladder pressure response to Barr CRH opto-activation, as the cycle phase may modulate Barr CRH neuronal excitability. During continuous bladder filling, it was noted that the amplitude of Barr CRH eNVC increased progressively through the micturition cycle (increase of 17.0 ± 3.9 fold, comparing eNVC obtained during the 2nd versus 5th quintile of the micturition cycle, Figure 5A-D). This phenomenon was also observable, albeit not quantitated or commented upon, in the recordings of Hou et al. (2016) (see Figure 5B). Similarly, the probability of obtaining a bladder contraction with optoactivation also increased with progressive filling, with most 'failures' being seen when the bladder was <40% filled (Figure 5E). The same phase dependence of eNVC was also apparent with bilateral stimulation of Barr CRH (Figure 5-figure supplement 1A and B).
The phase-dependence of eNVC amplitude may, in part, be a consequence of bladder distension, leading to raised passive detrusor tension and an increase of length-dependent contractions. To test this proposition, the effect of pelvic nerve stimulation was assessed in the pithed, decerebrate, arterially-perfused mouse preparation (Ito et al., 2018; Ito et al., 2019). The amplitude of bladder contractions induced by pelvic nerve stimulation (4-20 Hz, 10 V, 3 s) increased with bladder filling (Figure 5-figure supplement 2), with a doubling (2.2 ± 0.34 fold at 20 Hz) of the pressure generated between the empty bladder and a 70 µl fill (close to voiding threshold in an intact mouse). However, this amplitude increase plateaued at a volume of ~50 µl and showed a much less steep relationship than that observed for Barr CRH eNVC in vivo, which increased by 17-fold over the same range of bladder distension. Additionally, this relationship did not account for the observed probabilistic nature of eNVC, as failures were never observed with pelvic nerve stimulation.
Barr CRH stimulation can conditionally trigger complete voids
Although tonic stimulation of Barr CRH increased voiding frequency, it was not possible to trigger full voiding contractions with phasic Barr CRH stimulation with the bladder up to 50% filled, even with bilateral stimulation. However, by applying stimuli systematically at points through the micturition cycle it was possible to trigger fully co-ordinated voids by activating Barr CRH neurons later in the cycle (>50% filled, Figure 5B-D). The pattern and amplitude of the evoked bladder contraction was similar to that seen with spontaneous voids and they occurred at a similar latency to eNVC. In addition, voiding was complete and the empty bladder relaxed to the basal pressure level after each void.
The mouse external urethral sphincter (EUS) shows bursting activity during spontaneous voids which facilitates urine expulsion (Ito et al., 2018; Keller et al., 2018). Injections of pseudorabies virus into either the bladder or EUS have shown labelling in the vicinity of Barrington's nucleus, suggesting it is part of the EUS control circuit (Nadelhaft et al., 1992; Nadelhaft and Vera, 1996; Marson, 1997). However, recent evidence suggests that it is the Barr ESR-1, rather than Barr CRH, neurons which project to local circuit interneurons in L4-5 that may regulate EUS motoneurons (Keller et al., 2018), analogous to the lumbar spinal coordinating centre (LSCC; Chang et al., 2007). Therefore, recordings were made from the EUS to investigate the relationship of the voiding-associated bursting to Barr CRH activation. The Barr CRH eNVC (irrespective of their magnitude) were never associated with EUS activity (Figure 5B and D). However, when Barr CRH activation evoked a voiding contraction then bursting EUS activity was always found (Figure 5D and F). These Barr CRH induced voids had EUS activity that was indistinguishable from spontaneous voids in terms of burst duration (spontaneous 4.4 ± 0.9 vs opto-induced 4.4 ± 1.0 s, n = 7, paired t-test, ns) and frequency (spontaneous 22.7 ± 2.7 vs opto-induced 21.5 ± 3.0 Hz, n = 7, paired t-test, ns). These results indicate that the Barr CRH neurons can trigger voids that are in all aspects similar to those seen spontaneously but that can be triggered to occur earlier in the normal micturition cycle.

Figure 5. Dynamics of Barr CRH evoked events through the micturition cycle (legend fragment): ... There is a substantial increase in the amplitude of the eNVC as the micturition cycle progresses (17.0 ± 3.9 fold comparing eNVC from the 2nd and 5th quintiles of the cycle) (RM one-way ANOVA followed by Dunnett's test, *p<0.05, **p<0.01). (D) Overlaid bladder pressure responses to the same optogenetic stimulus applied (x3) at different phases of the cycle can trigger either no response, eNVCs, or full voiding contractions that show a stereotyped morphology and latency. (E) Analysis of the stage of the voiding cycle where each type of response was triggered showed that voiding contractions were significantly more likely to be evoked later in the voiding cycle (each symbol represents the average position of such events in each mouse, n = 8) (RM one-way ANOVA with Tukey's test, ##p<0.01, ####p<0.0001). (F) The bursting pattern of EUS activity was similar with both Barr CRH-evoked and spontaneous voids. Source data in Figure 5-source data 1.
Spinal drive from Barr CRH neurons is sufficient to generate eNVC and voids
The axons from Barr CRH were noted to provide a specific innervation of the sacral parasympathetic neurons but not of the ventral horn at the level of Onuf's nucleus (Figure 6A, Figure 6-figure supplement 1). To investigate whether optogenetic stimulation of the spinal axons of Barr CRH is sufficient to directly generate eNVC, bladder pressure was recorded while light was applied from an optic fibre located above the spinal cord. Optogenetic stimuli (either 20 ms x 20 Hz for 5 s or a single 1 s pulse) applied to the spinal cord reliably induced bladder contractions (Figure 6B-D, p=0.025, bladder half filled). These eNVCs tended to occur with a shorter latency than those evoked directly from pontine stimulation (1.0 ± 0.2 s vs 1.26 ± 0.1 s, n = 5). Similarly, during continuous bladder filling, spinal activation could trigger full voids (Figure 6E). These data support the principle that the Barr CRH neurons can evoke both voiding and eNVC through their spinal projections.
Spinal CRH inhibits the bladder response to Barr CRH activation
It has been proposed that CRH released from Barrington's neurons at a spinal level augments bladder pressure responses (Klausner and Steers, 2004; Klausner et al., 2005), although others have reported the opposite action (Kiddoo et al., 2006; Wood et al., 2013) and genetic knockout of CRH expression in Barrington's neurons was without phenotype (Verstegen et al., 2019). If the release of CRH does increase during the micturition cycle, then this might be predicted to act as a positive feedforward mechanism augmenting the parasympathetic, and hence bladder pressure, responses to Barr CRH drive. To test this hypothesis, the effect of intrathecal Astressin (a broad-spectrum CRH antagonist, 5 µg in 5 µl) on Barr CRH eNVC was assessed through the micturition cycle. Counter to the prediction, Astressin significantly and reversibly increased the amplitude of eNVC, an action that was more pronounced as the bladder filled (333 ± 75%, p=0.008 (n = 7), 20 min after Astressin, Figure 7). Intrathecal Astressin also decreased the infused volume required to trigger a void (Figure 7D).
This indicates that CRH is providing a negative feedback signal to limit the extent of the spinal parasympathetic response to Barr CRH neuronal activity (in agreement with Pavcovich and Valentino, 1995;Kiddoo et al., 2006;Wood et al., 2013). Therefore, increased release of CRH cannot account for the augmented responses to Barr CRH activation with progression through the micturition cycle.
Barr CRH activity anticipates bladder pressure during the micturition cycle

Neural recordings from cats (Sasaki, 2005a) and rats (Manohar et al., 2017) indicate that some putative Barrington's neurons fire intermittently during the storage phase with an increase of firing that occurs around voiding, consistent with a role in mediating the drive to bladder parasympathetic neurons. Recent fibre photometric recordings of Barr CRH neurons, using the genetically encoded calcium indicator GCaMP6, indicate that the activity of these neurons is 'in phase' with the micturition cycle (Hou et al., 2016; Keller et al., 2018).

Figure 6. Spinal opto-activation of Barr CRH axons generates eNVC and voids. (A) Unilateral transduction of Barr CRH neurons with AAV-EF1a-DIO-ChR2-mCherry. A spinal L5 section had immunocytochemistry for mCherry (red) and choline acetyltransferase (green) to label filled Barr CRH axons and somatic and autonomic motoneurons. The Barr CRH axons show a lateralised distribution targeting the territory of parasympathetic preganglionic neurons at L5. (B) The spinal cord was exposed at the vertebral level of T11-12 and illuminated from an optic fibre placed above the cord. (C) Opto-activation (20 Hz x 20 ms for 5 s or a single 1 s pulse) generated eNVCs (related samples Friedman's test by ranks). (D) There was no difference in the eNVC in terms of amplitude or reliability between the two opto-stimulus patterns (n = 5 mice). (E) Opto-stimulation (20 Hz x 20 ms for 5 s) during continuous filling cystometry generated full voiding contractions as well as eNVCs. Source data in Figure 6-source data 1.
However, fibre photometry is unable to resolve the action potential discharge patterns from Barr CRH neurons in vivo. As such, it has not previously been possible to directly assess the functional relationship between Barr CRH firing and bladder pressure.

Figure 7. Spinal CRH inhibits the bladder response to Barr CRH neuronal optoactivation (legend fragment): ... showing that the augmentation of amplitude was particularly marked towards the end of the micturition cycle (related samples Friedman's test by ranks, #p<0.05, ##p<0.01). (D) Even without Barr CRH opto-stimulation, Astressin reversibly increased the frequency of voiding compared both to baseline and an intrathecal vehicle control group (n = 9) (vs baseline with related samples Friedman's test by ranks and vs vehicle with the Mann-Whitney U test, *p<0.05, **p<0.01). Source data in Figure 7-source data 1.
Neuronal activity was recorded in the vicinity of Barrington's nucleus using a 32-channel silicon probe to test whether changes in the excitability of Barr CRH neurons during the micturition cycle account for the observed variation in the evoked pressure responses of the bladder. An optic fibre was placed above Barrington's nucleus enabling optogenetic identification (Figure 8A). Recordings were made of cell activity during the normal micturition cycle (with simultaneous bladder pressure and EUS EMG activity) and in response to the application of light stimuli. A total of 113 individual neurons were identified by clustering from recordings made in the vicinity of Barrington's nucleus (n = 3 mice, Figure 8B-D). Definitive opto-identification of Barr CRH neurons (n = 12) was indicated by reliable short-latency spike entrainment to light (20 ms pulses, Figure 8C) with time-locked, maintained firing in response to longer light pulses (≥1 s, Figure 8C).

[Figure 8 legend] (A) … Barrington's nucleus with simultaneous bladder and EUS monitoring. (B) Immunohistochemistry (mCherry, magenta; TH, green) confirming the position of the recording electrode (shown to scale and with its tip at the end of the histological track). The spike waveforms of individual units are shown schematically adjacent to their probe recording site. Note that the Barr CRH neurons (yellow) are clustered in sites located within Barrington's nucleus whereas the non-identified neurons (green) lie above and below the level of Barrington's nucleus. A third population of non-optoidentified neurons, whose firing pattern closely resembled the Barr CRH neurons, is shown in blue (labelled Barr CRH-like). (C) Barr CRH neurons were optoidentified by a short-latency response to a brief light pulse (20 ms); data shown for a single representative unit, top left. The population response of identified Barr CRH neurons is shown below (n = 12, smoothed average firing rate curve generated by convolution of spikes with a Gaussian of SD 10 ms). The response to a 1 s light pulse is shown to the right with the same single unit and the population response from all Barr CRH neurons. Note that they showed an initial high-frequency response that decayed to a plateau of ~20 Hz, likely reflecting the kinetics of ChR2 currents. (D) Auto- and cross-correlations (1 ms bin size) of three optoidentified Barr CRH neurons with their average spike waveforms, showing isolation and a degree of cross-correlation at short latency.
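The 'smoothed average firing rate curve generated by convolution of spikes with a Gaussian of SD 10 ms' mentioned in the legend is a standard kernel-density rate estimate. A minimal sketch of that smoothing step (our illustration in Python/NumPy, not the authors' analysis code):

```python
import numpy as np

def smoothed_rate(spike_times, t, sigma=0.010):
    """Instantaneous firing rate (Hz): convolve spike times with a
    unit-area Gaussian kernel of SD `sigma` (s), evaluated at times `t`."""
    t = np.asarray(t, dtype=float)
    norm = 1.0 / (sigma * np.sqrt(2.0 * np.pi))   # unit area -> rate in Hz
    rate = np.zeros_like(t)
    for s in spike_times:
        rate += norm * np.exp(-0.5 * ((t - s) / sigma) ** 2)
    return rate

# Example: a regular 20 Hz train over 1 s, sampled at 1 kHz; the smoothed
# trace should average close to the underlying 20 Hz rate.
t = np.arange(0.0, 1.0, 0.001)
spikes = np.arange(0.025, 1.0, 0.05)   # 20 spikes, one every 50 ms
rate = smoothed_rate(spikes, t, sigma=0.010)
print(round(rate.mean()))
```

Because the kernel has unit area, the time-average of the smoothed trace recovers the mean spike rate, while the SD (10 ms here) sets the temporal resolution of the estimate.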
These Barr CRH neurons showed a characteristic pattern of activity during the micturition cycle with bursting at the time of voiding ( Figure 9A, 20.5 ± 4.1 Hz peak firing frequency). A second population of neurons was recorded with a similar pattern of activity (but which were not activated by light); these are henceforth termed Barr CRH-like (n = 32, Figures 9B and 10A). These Barr CRH-like neurons showed a degree of short-latency correlation with the Barr CRH neurons that was evident in cross-correlograms ( Figure 9-figure supplement 1). Both Barr CRH and Barr CRH-like neurons showed a clear temporal relationship to bladder pressure ( Figure 9C), with their firing preceding and ramping up with the pressure during voiding. For both the Barr CRH and Barr CRH-like neurons there was a strong sigmoid relationship between bladder pressure and neuronal firing ( Figure 10C), which was not seen in the non-identified group of neurons. The directionality of this influence was investigated by examining the cross-correlation between firing rate and bladder pressure - this indicated that the increases in firing frequency (for both Barr CRH and Barr CRH-like neurons) preceded increases in bladder pressure by ~3 s for both sets of neurons ( Figure 10D).

[Figure 9 legend] Barr CRH neuronal firing anticipates bladder pressure during the micturition cycle. (A) Barr CRH neurons showed a bursting pattern of discharge that aligned with bladder pressure. The z-scored responses of all Barr CRH neurons in this recording can be seen to have a similar pattern of activity (single representative firing rate plot shown above). (B) Within the same recording (and from adjacent probe sites) a further group of neurons (n = 4) was noted to exhibit a similar pattern of bursting discharge synchronized to the voiding cycle. These neurons were termed Barr CRH-like. Auto- and cross-correlations of the Barr CRH-like and Barr CRH neurons (see Figure 9-figure supplement 1) showed them to have similar properties and evidence of a degree of short-latency correlation to other Barr CRH-like neurons and also Barr CRH neurons. (C) The increase in firing activity (a) of both Barr CRH-like and Barr CRH neurons (same experiment) preceded and anticipated the change in bladder pressure (b) and occurred before the onset of voiding, marked by the sudden increase in EUS-EMG (c).

[Figure 10 legend] Population dynamics of Barr CRH and Barr CRH-like neurons. (A) Firing rate heat maps from probe recordings across mice (n = 3) with optoidentified Barr CRH neurons (n = 12; 8, 3 and 1 in each mouse) and Barr CRH-like neurons (n = 32; 9, 19 and 4 in each mouse) showed very similar patterns of firing in relation to the voiding cycle (shown below normalized for pressure and time across six cycles). (B) Rose plots of firing activity against phase of micturition cycle showing that both Barr CRH and Barr CRH-like neurons increase their firing in the phase decile leading up to the void, unlike the unidentified neurons (**-P < 0.01, one-way ANOVA followed by Tukey-Kramer test). (C) Plotting the relationship between firing rate and normalized bladder pressure showed a graded sigmoid relationship, with increased firing rate corresponding to higher bladder pressures. No such relationship was seen for the other neurons in the dorsal pons (dotted lines mark the 95% CI of curves; bars, SEM of firing rate). (D) The cross-correlation between Barr CRH (and Barr CRH-like) neurons and bladder pressure was strongest at a lag of 3 s, indicating that the bladder pressure follows the change in firing.
These data indicated that the pattern of firing of both Barr CRH and Barr CRH-like neurons anticipated changes in bladder pressure, as would be expected for a pre-motor population upstream of bladder parasympathetic neurons.
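The directionality analysis above rests on finding the lag at which the rate-pressure cross-correlation peaks. The following sketch shows one way to compute such a lag; the signals are illustrative synthetic data, not the recorded traces, with the 'pressure' constructed to trail the firing rate by 3 s:

```python
import numpy as np

def peak_lag(rate, pressure, dt):
    """Return the lag (s) at which the rate->pressure cross-correlation
    peaks; a positive lag means pressure follows the firing rate."""
    r = rate - rate.mean()
    p = pressure - pressure.mean()
    xcorr = np.correlate(p, r, mode="full")
    lags = (np.arange(xcorr.size) - (len(r) - 1)) * dt
    return lags[np.argmax(xcorr)]

# Illustrative signals: periodic firing bursts, with 'pressure' constructed
# to trail the firing rate by 3 s.
dt = 0.1
t = np.arange(0.0, 60.0, dt)
rate = np.exp(-0.5 * ((t % 20.0 - 10.0) / 1.5) ** 2)  # bursts every 20 s
pressure = np.roll(rate, int(3.0 / dt))               # same shape, delayed 3 s
print(round(peak_lag(rate, pressure, dt), 1))
```

Demeaning both signals before correlating avoids a spurious peak at zero lag from the shared baseline; with real recordings the peak location, not its height, carries the directionality information.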
Our recording probe trajectory passes close to the locus coeruleus (LC) on its lateral edge, raising the possibility that some of our recorded (non-optoidentified) neurons could be LC neurons, which have been shown in rats to increase their firing in anticipation of a void (Manohar et al., 2017) and as such could fall into the Barr CRH-like group. To examine this possibility, we made recordings from LC neurons under identical recording conditions in mice which had received injections of CAV2-PRS-ChR2-mCherry directly into the LC, causing selective expression of ChR2 in LC cells only (Li et al., 2016). A total of 29 opto-identified LC neurons were recorded (n = 3 mice). They showed a characteristic pattern of spontaneous firing and a phasic burst of activity with a paw pinch. LC cells increased their firing around the void - a pattern that was evident in individual firing rate plots and in the z-scored firing heatmap ( Figure 10-figure supplement 1A and B). However, this peri-void activation was significantly less pronounced than the increase in firing seen in the Barr CRH-like (and Barr CRH ) neurons ( Figure 10-figure supplement 1C). This suggests that the Barr CRH-like neurons are most likely to be part of the population of Barr CRH neurons, of which only a subset recorded by the probe are exposed to enough light to be formally opto-identified.
Barr CRH neuronal excitability is not altered during the micturition cycle

Analysis of spontaneous Barr CRH firing rates over the micturition cycle shows a pattern of activity that is consistent with what would be expected for a high-fidelity controller of bladder pressure. However, this is at odds with our optogenetic activation findings. To resolve this discrepancy, the relationship between cycle phase and the light-evoked Barr CRH activity and voiding was examined in more detail.
During all phases of the voiding cycle it was possible to opto-excite Barr CRH neurons ( Figure 11A) and the increase in firing frequency in both absolute and relative terms was independent of the phase of the micturition cycle ( Figure 11B, ranging from 22.4 ± 7.7 to 24.0 ± 6.5 Hz across micturition phases). These data indicate that the intrinsic excitability of the Barr CRH neurons does not vary across the micturition cycle and that augmentation of the bladder pressure responses (by 17.0 ± 3.9 fold) occurs downstream of the firing output from Barrington's nucleus.
To further explore this proposition, the relationship between spontaneous non-voiding contractions (sNVC) and Barr CRH neuronal firing was mapped. sNVC are defined as phasic increases of intravesical pressure seen during filling cystometry that are not associated with passage of urine; they have been seen in many studies of murine urodynamics (Hou et al., 2016; Ito et al., 2017; Keller et al., 2018; Verstegen et al., 2019). A burst of firing in the Barr CRH neurons preceded the sNVC by 1.5-3.0 s - suggesting that they were triggered by a signal from the pons ( Figure 11C and D). However, there was only a weak relationship between the magnitude of each Barr CRH burst and the amplitude of the associated sNVC (see Figure 11E). A linear fit of these data indicates that an increase in burst size of 20 spikes (close to the maximum observed range) would only account for a 0.5 mmHg difference in sNVC size (less than 20% of the observed range of amplitudes). Even this modest relationship was noted to be dependent upon a single outlier value of a large NVC occurring close to a void (circled). Again, this finding is consistent with Barr CRH providing a trigger signal rather than a pre-motor drive which determines the amplitude of the bladder contraction.
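The sensitivity of such a weak linear fit to a single high-leverage point can be checked simply by refitting with the point removed. A sketch with hypothetical burst-size/amplitude pairs (the ~0.03 mmHg/spike slope is taken from the text; the data themselves are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical burst-size / sNVC-amplitude pairs built around the reported
# weak slope of ~0.03 mmHg per spike, plus one high-leverage outlier
# (a large NVC occurring close to a void).
spikes = rng.integers(2, 20, size=30).astype(float)
amp = 0.03 * spikes + rng.normal(0.0, 0.3, size=30)
spikes_o = np.append(spikes, 25.0)   # outlier: unusually large burst...
amp_o = np.append(amp, 3.5)          # ...followed by a large contraction

slope_all = np.polyfit(spikes_o, amp_o, 1)[0]   # fit with the outlier
slope_trim = np.polyfit(spikes, amp, 1)[0]      # fit without it
print(f"slope with outlier: {slope_all:.3f}, without: {slope_trim:.3f}")
```

A single point far from the mean of the predictor inflates the least-squares slope, which is why excluding the ringed point in Figure 11E abolishes the already weak relationship.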
A spinal gate for the Barr CRH drive is opened by bladder distention
This indicates a model of autonomous micturition where a spinal circuit gates the output to the bladder (shown schematically in Figure 12A). The Barr CRH - parasympathetic - bladder afferent component of this circuit was modelled in NEURON using an existing preganglionic neuronal model (Briant et al., 2014) and a combination of a fast, excitatory synaptic drive descending from Barrington's nucleus plus a bladder afferent synaptic drive (based on recordings of pelvic nerve afferents from Ito et al., 2019). The incrementing frequency of afferent drive as the bladder fills leads to summation and a maintained membrane depolarisation that increases parasympathetic excitability ( Figure 12B). The resulting output from the parasympathetic neuron when driven by Barr CRH (with a mimicked 20 Hz optogenetic drive) was strongly dependent upon the phase of the micturition cycle, with a ~10-fold increase over the voiding cycle which closely parallels the experimental data. Note also that in the early phase of the voiding cycle the Barr CRH input is unable to evoke action potentials, thus producing 'failures'.

[Figure 11 legend, continued] … neurons (n = 6 across three mice), showing that there was no difference in firing (either the peak firing rate (upper) or the change in firing (lower, blue circles)) evoked by light across the phases of the micturition cycle. In contrast, the amplitude of the eNVC (red squares; see Figure 3) increases markedly across the micturition cycle. (C) Spontaneous NVCs were identified using a peak-finding algorithm (amplitude 0.1-4 mmHg, green dotted circles) and were noted to be preceded by a burst of Barr CRH activity. (D) Averaged firing rate plots of Barr CRH and Barr CRH-like neurons triggered off sNVCs (averaged bladder pressure trace at the bottom) showed a consistent burst of firing 1.5-3 s before the onset of sNVCs (unlike the unidentified population). Note this relationship was not seen in the shuffled data. (Mean firing rates ± SD, 0.5 s bins.) (E) Linear regression showed that the number of spikes in each Barr CRH burst had only a weak correlation (slope 0.03 mmHg/spike) with the amplitude of the following sNVC. This weak relationship was lost if a single outlier point (ringed) was excluded. Source data in Figure 11-source data 1 ('BarrCRH neuronal activity conditionally drives bladder pressure').
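The gating behaviour produced by the NEURON simulation, in which summating afferent EPSPs prime the preganglionic neuron so that a fixed descending burst is transmitted late in the cycle but 'fails' early, can be caricatured with a much simpler leaky-integrator sketch (all parameters are invented for illustration; this is not the published NEURON model):

```python
import numpy as np

def ppn_response(afferent_hz, barr_times, tau=0.05, w_aff=1.0,
                 w_barr=1.5, thresh=4.0, dur=1.0, dt=1e-4):
    """Toy spinal gate: regular bladder-afferent EPSPs (whose rate rises as
    the bladder fills) summate into a maintained depolarisation v; a
    descending Barr pulse fires the neuron only if v + w_barr reaches
    threshold, otherwise the pulse 'fails'. Spike reset is ignored."""
    barr_steps = {round(bt / dt) for bt in barr_times}
    v, spikes, next_aff = 0.0, 0, 0.0
    for i in range(int(dur / dt)):
        v -= v * dt / tau                       # membrane leak
        if i * dt >= next_aff:                  # afferent EPSP arrives
            v += w_aff
            next_aff += 1.0 / afferent_hz
        if i in barr_steps and v + w_barr >= thresh:
            spikes += 1                         # Barr pulse transmitted
    return spikes

barr = np.arange(0.05, 1.0, 0.05)                       # a 20 Hz descending burst
empty = ppn_response(afferent_hz=5, barr_times=barr)    # early in the cycle
full = ppn_response(afferent_hz=80, barr_times=barr)    # bladder distended
print(empty, full)
```

With sparse afferent input the EPSPs decay fully between arrivals and every descending pulse fails; once the afferent interval is shorter than the membrane time constant, summation holds the neuron near threshold and the same 20 Hz burst is transmitted in full.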
An inferential model of autonomous micturition
The observations described above provided the basis for a new integrated model of the autonomous micturition cycle which incorporates the observed drive from Barr CRH neurons and the known afferent feedback from the bladder ( Figure 13A and methods, including a summary of the evidence supporting the model). This afferent feedback governs both the excitability of the spinal parasympathetic neurons (demonstrated in the NEURON model above) and the output of a synaptic generator driving Barr CRH activity. The resulting feedback loop (depicted schematically in Figure 13B) closely reproduces the characteristics of the observed micturition cycle, with graded NVCs, periodic voids and patterns of Barr CRH firing.
The varying excitability of spinal parasympathetic neurons is represented in the integrated model by a pressure-modulated logistic relationship which determines the change in bladder pressure generated from a given level of Barr CRH firing. The afferent drive also determines the probability of a high-frequency Barr CRH discharge in a given epoch. The bladder pressure-dependent synaptic drive for Barr CRH is represented as a logistic relationship. This synaptic generator is commonly believed to be relayed via the PAG (Drake et al., 2010; de Groat and Wickens, 2013; de Groat et al., 2015); however, there is also evidence for direct spinal inputs to Barrington's nucleus (in the rat) that could also act as a generator (Ding et al., 1997; Blok and Holstege, 2000). In addition, elegant recent studies indicate there are also direct functional inputs from the cortex and hypothalamus (Hou et al., 2016; Yao et al., 2018; Verstegen et al., 2019).
This circuit organisation generates NVCs: dynamic perturbations whose magnitude and frequency increase with progress through the micturition cycle. As pressure increases these contractions become more frequent and higher in amplitude -eventually summating to cause sustained increases in bladder pressure. This in turn increases the rate of firing of Barr CRH , making further contractions more likely, and shifting the system into a positive feedback loop in which pressure rapidly increases. A void occurs when the pressure reaches 15 mmHg which is presumed to be effected via a spinal mechanism and the micturition cycle restarts.
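The feedback loop described above lends itself to a compact numerical caricature. In this sketch (our toy re-implementation, with invented logistic parameters and fill rate; only the 15 mmHg void threshold is taken from the text), pressure-dependent logistic terms set both the probability of a BarrCRH burst and the spinally gated amplitude of the resulting NVC:

```python
import numpy as np

def logistic(x, x0, k):
    """Logistic function used for both pressure-dependent gating terms."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def simulate(T=600.0, dt=0.1, fill=0.08, void_at=15.0, seed=1):
    """Toy micturition loop: the bladder fills passively; rising pressure
    raises both the probability of a BarrCRH burst and the gated amplitude
    of the NVC each burst evokes; summating NVCs enter positive feedback
    until the void threshold resets the cycle. Illustrative parameters."""
    rng = np.random.default_rng(seed)
    base, nvc, t = 0.0, 0.0, 0.0
    pressures, voids = [], []
    while t < T:
        base += fill * dt                       # passive filling (mmHg)
        nvc -= nvc * dt / 3.0                   # NVC transients decay (~3 s)
        p = base + nvc
        if rng.random() < dt * 0.5 * logistic(p, 10.0, 0.5):
            nvc += 0.2 + 4.0 * logistic(p, 12.0, 0.8)   # gated NVC amplitude
            p = base + nvc
        if p >= void_at:                        # void: empty and restart cycle
            voids.append(t)
            base, nvc = 0.5, 0.0
        pressures.append(p)
        t += dt
    return np.array(pressures), voids

pressures, voids = simulate()
print(f"{len(voids)} voids in {int(pressures.size * 0.1)} s")
```

Early in each cycle bursts are rare and their pressure transients small; as pressure rises, bursts become more frequent and larger, summate, and tip the system into the regenerative rise that ends in a void, qualitatively reproducing the NVC-then-void pattern described above.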
In line with experimental data, attenuation of the variance in Barr CRH firing (underpinning the NVCs) delays the time to void - indicating their importance in the process (Figure 13-figure supplement 1A). Similarly, augmenting the spinal parasympathetic sensitivity to the Barr CRH drive (as seen experimentally with intrathecal Astressin) increases the amplitude of the NVCs and shortens the inter-void interval (Figure 13-figure supplement 1C). We note that additional drive into the Barr CRH neurons (as is proposed to come from higher centres with voluntary voiding) would increase the variance and could trigger voiding earlier. This effect is demonstrated with the simulated optogenetic drive of Barr CRH neurons (20 Hz x 1 s, Figure 13-figure supplement 1B), which produces failures and eNVCs and triggers voids earlier in the cycle than would otherwise have happened.
Discussion
These findings indicate that Barr CRH neurons do play a critical role in micturition. However, the activity of Barr CRH neurons is not a simple switch mechanism for voiding, nor do they provide a direct drive to bladder pressure (as might be expected for an autonomic command neuron). Instead these neurons play a more nuanced, probabilistic role. Their influence on the bladder depends on the state of priming of the downstream parasympathetic motor circuit. This identifies the Barr CRH neurons as being the efferent limb of an inferential circuit that assays bladder state repeatedly during the storage phase of the cycle. When the threshold for voiding is reached, they generate a high-fidelity motor signal through a positive feedback loop that drives the bladder contraction required for voiding. This operating principle fits with a modular hierarchical hypothesis for the organisation of the micturition circuit (Fowler et al., 2008; Drake et al., 2010; de Groat and Wickens, 2013), with a primary spinal circuit providing a basic functionality, evident in the neonatal rodent (Kruse and De Groat, 1990; de Groat, 2002; Zvarova and Zvara, 2012) and indeed other mammals including humans, that has little context-sensitive control.

[Figure 13 legend] An inferential model of autonomous micturition. (A) Schematic of the descending input from Barrington's nucleus to the bladder parasympathetic neurons. The parasympathetic neurons receive excitatory input from bladder afferents - shown as being relayed via a segmental excitatory interneuron. Note that the Barr CRH neuron has both a fast, excitatory transmitter (presumed glutamate) as well as an inhibitory action mediated by spinally released CRH - possibly acting via local inhibitory interneurons (not shown). The inset boxes show the logistic relationships linking …
The timing of micturition in immature rodents is often triggered by maternal stimulation of the perineum (although interestingly this is unsuccessful if applied when the bladder is <50% full; Zvarova and Zvara, 2012). With development, the spinal micturition mechanism is believed to fall progressively under the descending control of Barrington's nucleus (both for voluntary and autonomous voiding). We suggest that such descending control provides an internalised signal, replacing the need for additional external peripheral sensory input, to trigger the void. Dysfunction of this descending control system, as is seen following spinal cord injury, results in a loss of voluntary control and initially in a complete loss of continence, but this tends to be restored as the spinal micturition reflex re-emerges (albeit in a poorly co-ordinated manner). This situation was mimicked experimentally herein by the chemogenetic inhibition of Barr CRH neurons - leading to a prolongation of the inter-void interval and retained volumes with progressive bladder distension - indicating that this is a necessary and critical component of the micturition circuit.
We used cystometry in anaesthetised mice to examine the role of Barr CRH neurons specifically in the core processes of autonomous micturition in the absence of behavioural influence. This has produced a number of findings that differ from previous optogenetic studies of the role of Barr CRH neurons in micturition (Hou et al., 2016; Keller et al., 2018; Verstegen et al., 2019), which we have ascribed to the contrast between volitional and autonomous micturition behaviour. However, an important caveat is that anaesthetic agents by definition alter neuronal function, typically suppressing activity, and they have been found to affect aspects of murine micturition by a number of authors (see review by Ito et al., 2017). Urethane has been adopted by many research groups as the agent producing the least autonomic suppression and for its stable plane of anaesthesia. The cystometric profile and the characteristic activity of the EUS seen in this study are akin to those seen in awake mice and in decerebrate arterially perfused preparations in the absence of anaesthesia (Ito et al., 2017; Ito et al., 2018). Additionally, the fact that we find an apparent increase in the role of Barr CRH neurons argues against urethane anaesthesia accounting for the differences between our findings and those in conscious mice. This remains to be definitively tested and we hypothesise that the role of Barr CRH neurons in autonomous voiding may be best demonstrated in sleeping rather than conscious, behaving mice.
Rodents also use social urine scent marking; for example, male mice use strategic urine 'spotting' to express their dominance and territorial ownership (Desjardins et al., 1973; Maruniak et al., 1974). Although autonomous micturition and scent marking with urine are related processes, likely with some shared physiology, there are also differences in the patterns of urination, with greater frequency (>10 urine spots per minute; Keller et al., 2018) and accordingly smaller volumes in each urine spot compared to a primary void (typically 80-120 ul) (Yu et al., 2014; Bjorling et al., 2015; Hill et al., 2018; Ito et al., 2018). A role has recently been described for Barr ESR1 neurons in social urine spotting evoked by female urine (Keller et al., 2018). These Barr ESR1 neurons preferentially target a spinal inter-neuronal circuit that is proposed to be involved in generating the bursting drive to the EUS as well as causing a bladder contraction. In contrast, in the same study, the Barr CRH neurons were reported to be relatively ineffective in generating voids under similar conditions (without active filling of the bladder) (Keller et al., 2018). A similar conclusion was reached by Verstegen and colleagues, who compared the effect of activating Barr CRH neurons with the activation of all the glutamatergic neurons in Barrington's nucleus (including the ESR1 neurons) and found that global glutamatergic activation produced obligatory voiding with the characteristics of incontinence. In contrast, activation of Barr CRH neurons only sporadically produced co-ordinated voiding, with a delay and with only 6% of activations, although this was a little higher at 17% in anaesthetised mice (Verstegen et al., 2019). This evidence led both groups to conclude that Barr CRH neurons only played a minor supporting, augmentative role in the generation of voids (Hou et al., 2016; Ito et al., 2017; Keller et al., 2018; Verstegen et al., 2019).
This is somewhat surprising given that the majority of spinally projecting Barr neurons are CRH positive (Verstegen et al., 2017) raising the question of why they have such an apparently minor role in voiding. With hindsight, a clue to this puzzle may have been offered by the finding of Verstegen et al. (2019) that genetic, diphtheria toxin-mediated, ablation of the Barr CRH neurons produced a phenotype of increased voided volume in awake mice and markedly delayed voiding during cystometry in anaesthetised mice.
Our study shows that Barr CRH neurons can effectively trigger co-ordinated voiding with the characteristic pattern of external urethral sphincter bursting when the bladder is sufficiently filled. This triggering is not just a simple consequence of the pressure rise produced by Barr CRH neuronal activation (i.e. a switch) - as prolonged activation of Barr CRH neurons leads to increased voiding frequency with a lowered threshold for voiding (and chemogenetic inhibition has the opposite effect). Indeed, the concept of the existence of a simple pressure threshold for triggering a void is challenged by the observation that large spontaneous non-voiding contractions occurred just prior to a void that exceeded any estimated threshold for a subsequent void. Rather, we believe that the drive from Barr CRH needs to be integrated at a spinal level with feedback from bladder afferents in order to generate a complete void - and thus the probability of evoking a void depends on the degree of bladder filling. When the bladder is partially filled, activation of the Barr CRH neurons evokes bladder contractions (eNVC) without any sphincter activity, suggesting that the drive to the sphincter is also dependent on the state of a downstream pattern generator and is not directly engaged by the firing of Barr CRH neurons at all stages of the micturition cycle. Our preliminary experiments found no evidence for the involvement of Barr CRH neurons in the control of the distal colon, suggesting they are bladder specific.
Given the mechanistic differences between the processes of autonomous micturition and voluntary scent marking in males, it is quite likely that there are distinct circuit drives for each type of urination. This would be consistent with the proposition that there are parallel pathways from Barrington's nucleus to the downstream spinal pattern generators: the Barr ESR1 neurons driving spotting behaviour which can be triggered irrespective of the degree of fullness of the male mouse bladder; and the other mediated by Barr CRH neurons which conditionally requires the bladder to be distended before a void can be generated. This may also explain why chemogenetic inhibition of the Barr ESR1 but not Barr CRH neurons blocked spotting whereas similar inhibition of Barr CRH neurons inhibited autonomous micturition in the current study. Our study did not set out to identify gender differences in Barr CRH neuronal function in micturition and we included mice of both sexes. Although female mice had smaller bladder capacities along with larger basal and micturition pressures on CMG there were no differences in their other parameters and their EUS-EMG trace was qualitatively similar in pattern during voiding. Equally both sexes showed similar responses to activation/inhibition of Barr CRH neurons which presumably relates to underlying commonalities in the processes of autonomous micturition.
In this context it is also important to acknowledge that the pattern of micturition varies to a degree across species; humans and cats do not show the same 'squirting' behaviour, as their EUS relaxes to allow voiding, and this likely involves some distinctive neuronal circuitry (Fowler et al., 2008). However, there are also many similarities including, surprisingly, in the time taken to void irrespective of body size (Yang et al., 2014), indicating that many of the fundamental principles of operation are conserved. Importantly, lesions or inhibition of Barrington's nucleus abolishes voiding in multiple species including humans, cats, rats and mice, consistent with it being a core circuit component (reviewed in Verstegen et al., 2017).
The pattern of firing activity seen here in optogenetically-identified Barr CRH neurons in anaesthetised mice is similar to that previously noted in recordings from Barrington's nucleus in the conscious rat (Manohar et al., 2017) and is also reminiscent of a subset of the neurons identified in anaesthetised or decerebrate rats and cats that showed a ramping activity with voiding (de Groat et al., 1998;Sugaya et al., 2003;Tanaka et al., 2003;Sasaki, 2005b;Sasaki, 2005a). The bursting activity seen in the Barr CRH recordings clearly precedes the changes in bladder pressure (and with a similar lag to bladder pressure response to that found from optogenetic activation of Barr CRH neurons) indicating that they are driving rather than responding to the changes in pressure. The Barr CRH neurons showed a pattern of spiking activity that is also consistent with that noted from the Ca 2+ imaging recordings seen with fibre photometry in mice (Hou et al., 2016;Keller et al., 2018;Verstegen et al., 2019) although we can see both the temporal precedence and that this activity decays promptly at the end of the void. It is also worth noting that in none of these recordings (from Barr CRH or indeed any of the neurons in the vicinity) was any pattern of activity seen that resembled the high frequency bursting of the urethral sphincter seen in mice and rats -to date such activity has never been observed in any of the recordings from Barrington's nucleus which is consistent with the idea that it is generated from a spinal motor pattern generator such as the LSCC (Chang et al., 2007).
The probabilistic nature of the influence of Barr CRH neurons on bladder pressure seems initially at odds with the clear relationship between their activity and bladder pressure noted herein (and previously in the E2 class of neurons recorded in rat Barrington's nucleus [Tanaka et al., 2003] and also in fibre photometry recordings [Hou et al., 2016; Verstegen et al., 2019]). However, this relationship only holds in the late stage of the micturition cycle, when Barr CRH neurons do indeed act as a tightly-coupled, direct command neuron. This is not the case during the early phases of the micturition cycle, when there is a weak relationship between the activity of Barr CRH neurons and the bladder pressure, in the extreme case leading to 'failures' of stimulation to evoke any contraction. Our recordings indicate that this happens downstream of Barr CRH at a spinal level, as the ability to optogenetically drive Barr CRH is unchanged and spinal activation of Barr CRH axons also shows failures. We propose a model for this action that integrates an incrementing and summating, but still sub-threshold, afferent drive from the bladder to the parasympathetic preganglionic neurons (PPN) that enables a phasic burst of activity from Barr CRH to generate progressively larger numbers of action potentials, and hence contraction, when the bladder is sufficiently filled. In support of this idea, previous electrical stimulation studies of Barrington's nucleus in the rat indicated that the degree of excitation of the parasympathetic motor outflow to the bladder was strongly dependent upon the degree of bladder filling (Noto et al., 1991). The mechanisms enabling such priming of the parasympathetic control circuit will merit further investigation at a spinal level.
The inhibitory action of CRH released from Barr CRH neurons on micturition at a spinal level initially appears counterintuitive given the overall excitatory effect of Barr CRH neurons (mediated via fast glutamatergic signalling, Hou et al., 2016) on micturition and the known excitatory effects at a cellular level of CRH receptor activation (Lovejoy et al., 2014). However, a similar inhibitory spinal action of CRH on micturition has been reported (Kiddoo et al., 2006; Wood et al., 2013). This inhibition may act to suppress the segmental excitatory activity in the spinal parasympathetic circuit. This process may also be involved in the transition from the immature spinal voiding circuit in the neonate that is supplanted by the top-down requirement for Barrington's nucleus signals. The mismatch between the differential time-course of action of CRH-mediated inhibition (metabotropic) and the fast, glutamatergic excitation (ionotropic) may enable the initial rapid excitation of parasympathetic preganglionic neurones at the onset of voiding, but may also in turn act to help terminate voids and facilitate the unopposed relaxation of the bladder. The mechanism of CRH actions at a spinal circuit level constitutes an intriguing target for therapeutic intervention, potentially allowing modification of the gain of the micturition reflex in disease states.
We noted that bursts of activity in Barr CRH neurons precede both voids and also NVCs. There has been considerable debate about the origin of NVCs, with respect to whether they are intrinsically generated by the bladder, and about their functional significance, although there is a suggestion that they provide a means to infer the degree of bladder filling (Drake, 2007). An increase in their frequency and amplitude has been linked to diseases of the lower urinary tract (LUT) (Vahabi and Drake, 2015) and also with loss of descending control from the brainstem (Sadananda et al., 2011), which could conceivably be related to the loss of CRH-mediated spinal inhibition. There is also evidence of their peripheral generation by the bladder early in development, which becomes less coordinated in the adult bladder (Kanai et al., 2007). We provide evidence that NVCs are generated by the 'noisy' probabilistic drive from Barrington's nucleus that repeatedly assesses the status of the spinal circuit during each micturition cycle and that the magnitude of the bladder pressure response reflects the phase of the micturition cycle. The resulting afferent signal provides an active way of inferring the degree of bladder fullness (analogous to the 'sampling' that assesses rectal fullness; Rao, 2004) and could prime the neural control circuits and indeed could conceivably provide a stream of information that may enable a conscious awareness of bladder fullness and the ability to make volitional predictions about the need to void. We also note the homology with the development of other motor systems, where spontaneous motor activity, initially generated in the periphery, becomes progressively embedded centrally as motor representations in the nervous system with developmental maturation (Llinas, 2001).
Our postulated model of such a circuit organisation with afferent feedback from the bladder both priming the spinal parasympathetic motor circuit and also determining the magnitude of the drive from Barr CRH neurons (perhaps via integration by the PAG which functions as a probabilistic firing switch) recapitulates many of the observed features of autonomous micturition. The generation of NVCs provides inference about the degree of bladder fullness and the afferent signal advances the progression through the cycle. The spinal priming mechanism enables a regenerative burst of activity from Barr CRH to drive the voiding contraction and the modelled release of spinal CRH that follows such a large discharge serves to reset the spinal circuit enabling passive filling to resume. A feature of this circuit organisation is that a direct volitional drive to Barr CRH from cortex as recently reported (Yao et al., 2018) would not be subject to the probabilistic firing switch at a supraspinal level and could therefore trigger a void earlier in the cycle if behaviourally appropriate albeit still contingent on the priming status of the spinal gate. Hypothetically, a parallel synaptic drive from the Barr ESR1 neurons (Keller et al., 2018) that was stronger than the Barr CRH neurons could also generate parasympathetic activity without requiring co-incident afferent activity -hence bypassing the gate to produce urine 'spotting' on behavioural demand.
On this basis we conclude that the Barr CRH neurons form a key component of the micturition circuit that generates a pre-motor drive to the bladder late in the cycle. The recording and stimulation data suggest that this drive is not generated by a burst generator residing within this cell population but is a product of the integration of inputs from bladder sensory afferents and from upstream centres such as the PAG, hypothalamus and motor cortex (Hou et al., 2016; Yao et al., 2018). We predict that failures of control at this key integrating locus are likely to be involved both in acute disorders of lower urinary tract function, such as retention, and in chronic diseases such as nocturnal enuresis, detrusor-sphincter dyssynergia and overactive bladder syndrome, where there is dysregulation of detrusor contractions and sphincter relaxation.
Experimental model and subject details
Mice
All experiments and procedures conformed to the UK Animals (Scientific Procedures) Act 1986, were approved by the University of Bristol Animal Welfare and Ethical Review Body, and were performed under licence (PPL3003362). Mice were group housed, with food and water available ad libitum, on a 12 hr/12 hr light/dark cycle. Gene expression was restricted to Barr CRH neurons using knock-in mice (of both sexes, aged 3-8 months) with an internal ribosome entry site (ires)-linked Cre-recombinase gene downstream of the CRH locus (CRH Cre mice; Taniguchi et al., 2011; Chen et al., 2015; Jax Laboratory #012704).
Quantification and statistical analysis
All data are presented as mean ± SEM (unless otherwise specified). Sample size was estimated from experience and is similar to other published studies (Ito et al., 2019;Hou et al., 2016;Verstegen et al., 2019).
Statistical tests used are specified in Figure legends and in the main text (see Results). Differences were considered significant at p<0.05. All experiments contain replications of the same experimental paradigm across different litters of animals and experimental runs. The number of replications (n) equals the number of mice for bladder pressure recordings, and/or the number of cells for electrophysiological experiments (as stated in the relevant Figure legends/main text). Mice of either sex were allocated to experiments from the breeding colony as available. Blinding of the experimenter to drug identity was used for the chemogenetic experiments; however, no blinding was possible for the optogenetic/cell recording studies.
Stereotaxic intracranial injections to Barrington's nucleus
To target Barr CRH neurons, homozygous CRH Cre mice were anaesthetised with ketamine (70 mg/kg) and medetomidine (0.5 mg/kg) and placed in a small animal stereotaxic frame (Kopf, USA) with a drill-injection robot attachment (Neurostar, Germany). After exposing the skull under aseptic conditions, a small burr hole was drilled and AAVs were injected (3 × 200 nl injections per side) unilaterally or bilaterally through a pulled glass pipette at a rate of 100 nl/min. Injection coordinates for Barrington's nucleus were 5.3 mm posterior to bregma, 0.70 mm lateral and 3.25, 3.5 and 3.75 mm below the brain surface. Injections for the locus coeruleus were identical but targeted 0.8 mm lateral. After surgical procedures, all mice were returned to their home cage for at least 21 days of recovery to maximise protein expression.
Optogenetic activation
To target Barrington's nucleus, adult CRH Cre mice had injections of AAV-DIO-ChR2-mCherry (1.6 × 10¹² vg/ml) or AAV-DIO-hM4Di-mCherry (as control). To target the locus coeruleus, adult C57Bl/6 mice had injections of CAV-PRS-ChR2-mCherry (2 × 10¹¹ pp/ml). Mice were used in experiments at least 3 weeks after vector injections. They were anaesthetised with urethane and prepared for cystometry as described below. Light from a 465 nm LED (Plexon, Dallas, USA) was delivered in pulses with a maximum duty cycle of 50%. The light train was delivered once every 60 s for fixed-interval stimulation, or at randomised intervals between 30 s and 90 s. The light power exiting the fibre tip was set at approximately 10 mW and was measured before and after each experiment. For unilateral opto-activation, light was delivered via a tapered optical fibre (Lambda-B, 0.39 NA, 17 mm long, 1.2 mm emitting length; Optogenix, Italy) with the fibre lowered down the original vector injection track. For bilateral simultaneous opto-activation, a dual fibre implant was used (Doric, Canada), coupled via a dual fibre-optic cable to two separate LEDs.
For light delivery to the spinal cord, soft tissue was removed between the T11 and T12 vertebral spines after skin incision. The exposed spinal cord was illuminated using a 473 nm laser (PhoxX, Omicron, Germany) via a bare-ended fibre (Thorlabs, 400 µm) positioned above the cord, with light delivered in 20 ms pulses at 20 Hz for 5 s or as a prolonged pulse of 1000 ms. The light power at the fibre tip was 29 ± 0.3 mW.
Chemogenetic inhibition
To inhibit Barr CRH neurons, CRH Cre mice were bilaterally injected with AAV-DIO-hM4Di-mCherry (7.0 × 10¹⁰ vg/ml) into Barrington's nucleus (as described above) and allowed at least 3 weeks of recovery (control mice were injected with AAV-DIO-ChR2-mCherry). They were anaesthetised with urethane and prepared for cystometry as described below. Intraperitoneal CNO (5 mg/kg, 1 mg/ml stock) or saline (as control) was applied after obtaining at least five baseline micturition cycles. In an initial set of experiments, saline was continuously infused into the bladder around the time of CNO injection to investigate the effects on micturition. Subsequently, to determine the effect of CNO on the threshold for micturition, a cyclical infuse-and-hold protocol was adopted whereby saline infusion was stopped at the threshold for voiding and then held at that volume for 10 min or until a void occurred, before emptying the bladder and restarting the infusion phase.
Cystometry, Electromyography and distal colonic manometry
Mice were anaesthetised with urethane (0.8-1.2 g/kg) and the bladder was exposed via a 2 cm midline abdominal incision. A flanged catheter (PE50) was secured with a purse-string suture into the bladder and connected to a syringe pump and pressure transducer. The infusion rate was adjusted on an individual mouse basis (10-40 µl/min) to produce an equivalent proportionate speed of fill to the threshold for voiding (typically 600 s), taking account of the differing bladder volumes of the mice. External urethral sphincter (EUS) activity was recorded with insulated stainless steel wires bared at the tip (0.075 mm, AISI 316, Advent), inserted through a 30 G needle bilaterally into the EUS just proximal to the pubic symphysis. A balloon catheter (2.5 mm diameter × 12 mm when fully distended; Medtronic Sprinter) was inserted into the distal colon with the tip of the balloon placed 40 mm from the anus. To monitor colonic pressure, the balloon catheter was filled with distilled water.
Once a regular rhythm of micturition cycles was established (typically ~1 hr after starting saline infusion into the bladder), the following variables were measured and averaged over at least three voiding cycles (see Figure 1-figure supplement 2):
- Basal pressure was taken as the lowest bladder pressure reached after a void.
- Voiding threshold was the bladder pressure when the EUS-EMG started bursting, indicating the initiation of voiding.
- Micturition pressure was the peak bladder pressure achieved during voiding (bursting phase of the EUS-EMG).
- Non-voiding contractions (NVCs) were identified as discrete increases in bladder pressure (>0.1 mmHg) observed during the filling phase in voiding preparations.
Pithed, Decerebrate Arterially-Perfused mouse (DAPM) preparation
The pithed DAPM preparation was used to examine the influence of bladder filling on pelvic nerve stimulation-evoked bladder contractions. The methods were as previously described (Ito et al., 2018; Ito et al., 2019) but, in brief, mice were terminally anaesthetised with isoflurane, disembowelled through a laparotomy and the bladder was cannulated. The mouse was then cooled, exsanguinated, decerebrated and its spinal column pithed to remove all central neuronal control. It was then moved to a recording chamber and perfused through the heart with warm (32°C) Ringer's solution (composition (mM): NaCl (125), NaHCO3 (24), KCl (3.0), CaCl2 (2.5), MgSO4 (1.25), KH2PO4 (1.25), glucose (10); pH 7.35-7.4 with 95% O2/5% CO2). Ficoll-70 (1.25%) was added as an oncotic agent to the perfusate. The flow rate was adjusted (from 15 to 20 ml/min) to achieve a perfusion pressure of 50-60 mmHg. The pelvic nerve was identified, traced proximally and cut, allowing the distal end to be aspirated into a bipolar stimulating electrode. Stimuli (10 V, 1 ms, 4-10 Hz for 3 s) were applied to the nerve. The bladder was filled with saline to perform cystometry as above, with filling limited to a ceiling pressure of 15 mmHg. The effect of pelvic nerve stimulation on bladder pressure was examined with different degrees of bladder filling (0-70 µl).
Extracellular recordings and signal acquisition
Recordings were made from Barrington's nucleus and the locus coeruleus in urethane-anaesthetised mice using a 15 µm thick silicon probe with 32 channels (NeuroNexus, Model: A1x32-Poly3-10mm-25s-177-A32). For recordings in Barrington's nucleus, the recording probe was lowered down a track using the same co-ordinates as the vector injection. An optical fibre was lowered on an intersecting track to target the nucleus from a caudal vector (bregma -8.8, ML 0.7 or 0.8 and 4.9 mm deep at an angle of 45° to the vertical). For recordings in LC, an identical configuration was used, but with ML coordinate 0.8 mm. Each channel (177 µm²) was spaced from the neighbouring channels by 50 µm. A reference electrode (Ag/AgCl) was inserted into the scalp. The probes were connected to an amplifier-digitising headstage (INTAN, RHD2132). The signals were amplified, filtered (100 Hz-3 kHz) and digitised at 30 kHz before being processed and visualised online within the Open Ephys system.
Anatomical tracing studies
To investigate the Barr CRH projection to the spinal cord, a Cre-dependent AAV (AAV-EF1a-DIO-ChR2-mCherry) was unilaterally injected into Barrington's nucleus in CRH Cre mice. After a minimum of four weeks the mice were killed and perfusion-fixed for immunohistochemistry. To examine the Barr CRH projection into the spinal cord, 40 µm transverse sections were taken from T11 to S2 and processed for mCherry and choline acetyltransferase immunohistochemistry (the latter to demarcate motoneurons), followed by confocal imaging (detailed below).
Immunohistochemistry of brain and spinal cord
Mice were killed with an overdose of pentobarbital (20 mg per mouse, i.p.; Euthetal, Merial Animal Health) and perfused trans-cardially with 4% formaldehyde (Sigma) in phosphate buffer (PB; pH 7.4, 1 ml/g). The brain and spinal cord were removed and post-fixed overnight before cryoprotection in 30% sucrose in phosphate buffer. Coronal tissue sections were cut at 40 µm using a freezing microtome and left free-floating for fluorescence immunohistochemistry. Tissue sections were blocked in phosphate buffer containing 0.3% Triton X-100 (Sigma) and 5% normal donkey serum (Sigma), then incubated on a shaking platform with primary antibodies for 14-18 hr at room temperature. After washing, sections were incubated for 3 hr with appropriate Alexa Fluor secondary antibodies. A Leica DMI6000 inverted epifluorescence microscope equipped with a Leica DFC365FX monochrome digital camera and Leica LAS-X acquisition software was used for widefield microscopy. For confocal and tile-scan confocal imaging, a Leica SP5-II confocal laser-scanning microscope with a multi-position scanning stage (Märzhäuser, Germany) was used. Primary antibodies were rabbit anti-mCherry (1:4,000; Biovision), sheep anti-tyrosine-hydroxylase (1:1,000; AB1542, Millipore) and goat anti-ChAT (1:250; AB144P, Millipore). Alexa Fluor 488-conjugated donkey secondary antibodies were used against goat IgG (1:500; Jackson ImmunoResearch) and sheep IgG (1:400; Jackson ImmunoResearch). An Alexa Fluor 594-conjugated donkey secondary antibody was used against rabbit IgG (1:1000; Invitrogen).
Parasympathetic preganglionic model design
A model of the integration of the synaptic drive to the bladder parasympathetic preganglionic neurons in the spinal cord was constructed using NEURON (Carnevale and Hines, 2006). The preganglionic neuron was based on an existing preganglionic neuronal model (Briant et al., 2014), modified slightly to have a resting potential of ~−65 mV by altering the leak conductance reversal potential. A synaptic drive from Barr CRH neurons was modelled by adding an ExpSyn to the soma, driven with trains of action potentials (20 Hz x 1 s) generated from a NetStim to mimic an optogenetic stimulus of Barr CRH. The synapse was subthreshold if triggered alone as an input, despite a modest degree of summation (~20%) when driven at 20 Hz. A second fast excitatory synaptic drive, modelling a bladder afferent, was added as an ExpSyn to a proximal medial dendrite. This was driven with an incrementing frequency of action potentials (from a NetStim, based on recordings of pelvic nerve afferents from Ito et al., 2019) to model the bladder distension-evoked increase in afferent firing over a period of 5 min, representing a typical micturition cycle. The model files are available from data.bris.ac.uk (DOI: 10.5523/bris.20l920gl27ufi204brn8ilonsf).
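The NEURON model files themselves are available at the DOI above. To illustrate the summation principle the model embodies (a drive that is subthreshold on its own but fires the cell when it coincides with afferent input), a minimal leaky integrate-and-fire sketch in Python is given here; all parameter values are hypothetical round numbers chosen for the illustration, not those of the published preganglionic model:

```python
import math

def run_pgn(barr_hz, aff_hz, t_stop=2.0, dt=1e-4):
    """Toy leaky integrate-and-fire stand-in for the NEURON preganglionic
    model. Two exponential synapses (a 'Barr CRH' input and a 'bladder
    afferent' input) each fire as a regular train. Returns the number of
    output spikes. All parameters are illustrative assumptions."""
    tau_m, tau_s = 0.02, 0.005     # membrane and synaptic time constants (s)
    v_rest, v_thr = -65.0, -50.0   # resting and threshold potentials (mV)
    w, gain = 6.0, 10.0            # synaptic weight and drive gain
    v, g, spikes, t = v_rest, 0.0, 0, 0.0
    next_barr = 1.0 / barr_hz if barr_hz else math.inf
    next_aff = 1.0 / aff_hz if aff_hz else math.inf
    for _ in range(int(t_stop / dt)):
        t += dt
        if t >= next_barr:
            g += w
            next_barr += 1.0 / barr_hz
        if t >= next_aff:
            g += w
            next_aff += 1.0 / aff_hz
        g *= math.exp(-dt / tau_s)              # exponential synaptic decay
        v += (-(v - v_rest) + g * gain) * dt / tau_m
        if v >= v_thr:                          # threshold crossing = spike
            spikes += 1
            v = v_rest
    return spikes
```

With either input alone at 20 Hz the cell stays subthreshold; when the two trains coincide, the summed EPSP crosses threshold and the cell fires, a crude analogue of the gating-by-summation behaviour described above.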
Autonomous micturition model design
Evidence informing the design of the model:
1. The activation of Barr CRH neurons can generate bladder contractions. This is a probabilistic process, with failures. Both the amplitude and the probability of generating an NVC are increased at higher stimulation frequencies (Figures 3, 4 and 11E).
2. The bladder pressure response to Barr CRH drive augments with progress through the micturition cycle. The amplitude of NVCs following identical Barr CRH stimulations increases with bladder filling (Figure 5), an effect not explained by detrusor muscle stretch (Figure 5-figure supplement 2).
3. Barr CRH stimulation triggers voids when the bladder is filled (Figure 5).
4. Barr CRH activity anticipates bladder pressure during the micturition cycle (Figures 9 and 10).
5. Barr CRH neuronal excitability does not alter during the micturition cycle (Figure 11). Changes in activity in Barr are likely due to greater synaptic drive into Barr CRH neurons, not higher responsiveness to a maintained drive (i.e. the intrinsic properties of the Barr CRH neurons do not generate the bursts).
6. A spinal gate for Barr CRH drive is opened by bladder distention. This is a hypothesised explanation for point 3, and also explains the experimental observation that the level of Barr CRH activity does not rise linearly with greater sNVC amplitudes (Figure 11), even though, as demonstrated in point 1, higher Barr CRH firing rates are capable of producing larger eNVCs when optogenetic stimulation is used. The localisation of this gate at a spinal level is indicated by optoactivation of the descending axons at a spinal level still producing failures, eNVCs and voids, indicating that this probabilistic gating does not occur in the brainstem (Figure 6).
7. The probability of Barr CRH neurons firing at high frequency increases at the end of the micturition cycle.
The probe recordings show that the firing of the Barr CRH neurons is only significantly elevated in the final 10-15% of the cycle (Figures 9E,F and 10B), as summarised in the sigmoid relationship (Figure 10C).
Features of the model
Towards the end of the cycle, Barr CRH neurons have a higher probability of increasing their firing rate above baseline levels (as per point 7). We hypothesise that this is due to a greater level of external drive (consistent with point 5) that is via a mechanism which translates bladder pressure into synaptic drive to Barr. This is represented in the model by the first logistic function, marked 'input to Barr' on Figure 13A which explicitly replicates the sigmoid seen in the recording data.
The bladder response to bursts of Barr CRH activity consists of a rise in pressure in the shape of a bell curve, of width ~2 s. This represents an NVC and is motivated by points 1 and 4, which show that bursts of Barr CRH activity cause NVCs. The shape of the response is modelled on the profile of responses seen in the recordings. These NVCs summate if triggered in quick succession (as seen towards the end of each micturition cycle). The amplitude of the NVC is set by the second logistic function (marked 'spinal modulation' in Figure 13A). This function generates the increased response to identical levels of Barr CRH firing as the baseline bladder pressure increases with progression through the micturition cycle (points 2 and 6). Again, these responses only become sizeable at the end of the cycle (Figure 5), motivating the logistic shape of the curve. Together, these logistic functions create a positive feedback loop between pressure and Barr CRH firing which generates a regenerative large Barr CRH burst, and hence a voiding contraction, closely mirroring the responses seen in the real data.
Voiding is considered to have happened when a pressure of 15 mmHg is reached (the model does not include a specific representation of the external urethral sphincter so needs this arbitrary mechanism).
Model implementation
The model of autonomous micturition employs an iterative algorithm in which bladder pressure and Barr CRH firing rate are updated in each cycle (assumed to last 1 s). At each time point, bladder pressure is incremented by 0.015 mmHg (modelling a continuous infusion). The level of pressure in the bladder, P(t), is used to determine the maximum level of Barr CRH firing via a logistic function (modelling an afferent input):

s_max(t) = f_min + (f_max − f_min) / (1 + e^(−k_s(P(t) − m_s)))

where s_max is the maximum spiking rate, and f_max and f_min represent the maximum and minimum levels of firing, here set to 25 and 4.3 Hz respectively, based on spiking rates seen in unit data recordings. The gradient and mean were set to k_s = 4 and m_s = 4.5. Actual spiking rates were then generated probabilistically by sampling from a random uniform distribution with a maximum level set by s_max.
The change in bladder pressure produced by the firing of Barr CRH neurons was determined by a logistic function modulated by bladder pressure (representing the level of parasympathetic neuron excitability at the level of the spinal cord):

ΔP(t) = ΔP_min + (ΔP_max − ΔP_min) / (1 + e^(−k_p(s(t) − m_p)))

where s is the Barr CRH firing rate, ΔP_max and ΔP_min are the maximum and minimum amplitudes of bladder contractions, and k_p = 0.5 and m_p = 6. The maximum change in pressure, ΔP_max, depends on the current bladder pressure. The output of these equations is shown in Figure 13 and Figure 13-figure supplement 1. Bladder contractions are modelled with a Gaussian bump function of amplitude ΔP_max and variance 2 s, with a duration of 6 timepoints (approximating the characteristics of non-voiding contractions observed experimentally). At the beginning of each cycle, the current pressure is calculated by summing the baseline pressure (including incrementing due to constant filling) with increases in pressure caused by Barr CRH-triggered bladder contractions generated in the current and past cycles.
Voiding occurs at a pressure of 15 mmHg, at which point bladder pressure is decremented at 2 mmHg per cycle (as the bladder empties) until a baseline pressure of <0.3 mmHg is reached and the cycle restarts. The model is coded in MATLAB.
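The iterative algorithm above can be sketched as follows (a simplified Python illustration, not the original MATLAB code). Parameter values follow the text (f_min = 4.3 Hz, f_max = 25 Hz, k_s = 4, m_s = 4.5, k_p = 0.5, m_p = 6, 0.015 mmHg per cycle filling, voiding at 15 mmHg, emptying at 2 mmHg per cycle); the exact pressure dependence of ΔP_max and the contraction width are assumptions, as those details are not fully reproduced in the text:

```python
import math
import random

def logistic(x, lo, hi, k, m):
    """Generic logistic, used for both the afferent input and the spinal gate."""
    return lo + (hi - lo) / (1.0 + math.exp(-k * (x - m)))

def run_micturition_model(n_steps=3000, seed=1):
    """Simplified sketch of the autonomous micturition model (1 s cycles).
    Returns the times (s) at which voids occurred. The scaling of dp_max
    with pressure is an assumed logistic, not the published equation."""
    random.seed(seed)
    p = 0.0                      # baseline bladder pressure (mmHg)
    contractions = []            # (time, amplitude) of Gaussian bumps
    voids = []
    emptying = False
    for t in range(1, n_steps + 1):
        if emptying:
            p -= 2.0             # bladder empties at 2 mmHg per cycle
            if p < 0.3:
                p, emptying = 0.0, False
            continue
        p += 0.015               # constant infusion
        # afferent input: pressure -> maximum Barr CRH firing rate (Hz)
        s_max = logistic(p, 4.3, 25.0, 4.0, 4.5)
        s = random.uniform(0.0, s_max)          # probabilistic firing
        # spinal gate: firing rate -> contraction amplitude, scaled by pressure
        dp_max = 1.0 + 4.0 * logistic(p, 0.0, 1.0, 1.0, 8.0)  # assumed scaling
        contractions.append((t, logistic(s, 0.0, dp_max, 0.5, 6.0)))
        # total pressure = baseline + summed Gaussian bumps (sigma = 1 s)
        bump = sum(a * math.exp(-((t - t0) ** 2) / 2.0)
                   for t0, a in contractions if t - t0 < 5)
        if p + bump >= 15.0:     # voiding threshold
            voids.append(t)
            emptying = True
            contractions.clear()
    return voids
```

Because the gate amplitude and the afferent drive both grow with pressure, contractions late in the cycle summate regeneratively and trigger a void, while early in the cycle the same stochastic drive produces only small, isolated NVCs.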
In cycles where optogenetic activation of Barr CRH firing was simulated, the firing rate s was set to 20 Hz. To simulate intrathecal Astressin, the mean of the logistic curve for ΔP(t) was reduced to m_p = 5. To simulate attenuation of NVCs, the variance in the level of Barr CRH firing was reduced by 80%, without any change in the mean level of firing.
Data analysis
Clustering of multiunit data, waveforms, auto/cross-correlations and calculation of firing rates

Multiunit data were recorded on a 32-channel silicon probe (NeuroNexus, Model: A1x32-Poly3-10mm-25s-177-A32) and clustered using the spike-sorting framework 'Kilosort' (Pachitariu et al., 2016). The sorting analysis was carried out using the facilities of the Advanced Computing Research Centre, University of Bristol - http://www.bristol.ac.uk/acrc/. Manual curation of clusters was performed in 'Phy' (https://github.com/kwikteam/phy-contrib; Rossant et al., 2019) in order to select only well-isolated units with clear refractory periods, and to remove artefacts. All further analysis of spike trains and cluster characteristics was carried out in MATLAB.
The centre channel of each cluster was defined as the probe channel on which the waveform was recorded with the maximum range. Representative waveforms were extracted on the centre channel for each cluster by sampling 2000 spikes from the group (if the cluster had fewer than 2000 spikes, all spikes were used; Figure 8B,D). Autocorrelations and cross-correlations were calculated by binning spike trains (1 ms bins) and using the MATLAB function 'xcorr' (Figure 8D). Smooth firing rates during laser stimulation events were calculated by convolving the spike train with a normalised Gaussian of standard deviation 10 ms, using the MATLAB function 'conv2' (Figure 8C). Where z-scored firing rates were required, the MATLAB function 'zscore' was used with binned spike counts.
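The Gaussian-smoothing step can be sketched in Python, with numpy standing in for MATLAB's conv2 (bin width and kernel sigma as described in the text; everything else is illustrative):

```python
import numpy as np

def smooth_firing_rate(spike_times, t_stop, bin_ms=1.0, sigma_ms=10.0):
    """Estimate a smooth firing rate (spikes/s) by convolving the binned
    spike train with a unit-area Gaussian kernel, mirroring the conv2
    step described in the text (a numpy sketch, not the original code)."""
    dt = bin_ms / 1000.0
    n_bins = int(round(t_stop / dt))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, t_stop))
    sigma = sigma_ms / bin_ms                      # kernel sigma in bins
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()                         # normalise to unit area
    return np.convolve(counts, kernel, mode='same') / dt
```

Because the kernel has unit area, the smoothed trace integrates to the total spike count, so a regular 20 Hz train yields a rate estimate averaging 20 spikes/s.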
Analysis of bladder pressure and spike trains
Bladder pressure and external urethral sphincter EMG were recorded in Spike2 (CED) and analysed in MATLAB. To compare changes in bladder pressure over multiple recordings, bladder pressure was normalised to between 0 and 1 in each recording, where 1 represents the maximum pressure recorded during the experiment (Figure 10A). The times of voids were identified using the MATLAB function 'findpeaks'; correct identification was verified by eye. Where the bladder pressure was split into phases of the voiding cycle, the period between successive voids was split into the required number of equal time intervals (100 phases in Figure 10A, 10 phases in Figure 10B). Spike counts during these phases were converted to firing rates by dividing by the width of the relevant time window. This enabled calculation of the mean firing rate in each phase of the voiding cycle (over multiple cycles and cells). In Figure 10A, firing rates were z-scored to enable comparison between multiple voiding cycles recorded in different animals. Where voiding/inter-void periods were used (Figure 10E), 'voiding periods' were defined as a window of 15 s either side of the peak bladder pressure during a void; 'inter-void periods' consisted of all remaining times.
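The phase-binning procedure can be sketched as follows (a Python illustration of the analysis described for Figure 10, not the original MATLAB code):

```python
import numpy as np

def phase_firing_rates(spike_times, void_times, n_phases=10):
    """Split each inter-void interval into n_phases equal windows and
    return the mean firing rate (Hz) per phase, averaged across cycles."""
    spikes = np.asarray(spike_times, float)
    rates = np.zeros((len(void_times) - 1, n_phases))
    for i, (t0, t1) in enumerate(zip(void_times[:-1], void_times[1:])):
        edges = np.linspace(t0, t1, n_phases + 1)   # equal time intervals
        counts, _ = np.histogram(spikes, bins=edges)
        rates[i] = counts / np.diff(edges)          # spikes/s in each phase
    return rates.mean(axis=0)
```

For example, with voids at 0, 10 and 20 s and three spikes in the last second of each cycle, every phase rate is zero except the final one (3 Hz), illustrating the late-cycle elevation described above.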
Cross-correlograms and Pearson correlation coefficients between spike count and bladder pressure were calculated by downsampling both signals to 1 Hz (i.e. 1 s bins for the spike count, 1 Hz sampling for the bladder pressure), z-scoring, and using the MATLAB functions 'xcorr' and 'corr' (Figure 10D,E). All curve fitting was carried out using the MATLAB curve fitting toolbox. In Figure 10C, data were fitted to the sigmoid relationship f(x) = a + b / (1 + e^(c(d − x))), where a, b, c and d were constants to be determined. 95% confidence intervals were then calculated in MATLAB using the 'confint' function.
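A minimal numpy equivalent of the z-score/cross-correlation step is sketched below. The lag convention and the biased 1/n normalisation are choices made here for the illustration; with that normalisation, the zero-lag value equals the Pearson coefficient:

```python
import numpy as np

def zscore(x):
    """Z-score using the population standard deviation."""
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std()

def pressure_spike_correlation(pressure, spike_counts, max_lag=30):
    """Cross-correlate two 1 Hz signals after z-scoring (a numpy sketch of
    the MATLAB xcorr/corr step). cc[k] measures the correlation between
    pressure(t) and spikes(t + k); positive lags mean spikes follow pressure."""
    p, s = zscore(pressure), zscore(spike_counts)
    n = len(p)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.dot(p[max(0, -k):n - max(0, k)],
                          s[max(0, k):n - max(0, -k)]) / n for k in lags])
    return lags, cc
```

Shifting one sinusoidal trace by 5 samples relative to the other produces a correlogram peaking at lag 5, which is the kind of lead/lag relationship the analysis quantifies.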
Analysis of urethral sphincter EMG
For the EMG data shown in Figure 9, artefacts of over 50 times the standard deviation of the recording were removed in MATLAB. The data were then RMS-filtered using a 5 s moving window and smoothed using the MATLAB function 'movmean' over 1000 samples (a window of 0.32 s).
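A Python sketch of this EMG pipeline, with numpy boxcar convolutions standing in for the MATLAB RMS filter and movmean (zeroing the artefact samples is an assumption; the original removal method is not specified):

```python
import numpy as np

def moving_rms(x, win):
    """Moving-window RMS via convolution of the squared signal with a
    boxcar kernel (edges tapered by 'same'-mode zero padding)."""
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(np.asarray(x, float) ** 2, kernel, mode='same'))

def process_emg(emg, fs, artefact_sd=50.0, rms_win_s=5.0, smooth_n=1000):
    """Sketch of the Figure 9 EMG pipeline: blank artefacts beyond
    artefact_sd standard deviations, RMS-filter over rms_win_s seconds,
    then apply a moving mean over smooth_n samples."""
    x = np.array(emg, dtype=float)                 # copy, leave input intact
    x[np.abs(x - x.mean()) > artefact_sd * x.std()] = 0.0
    rms = moving_rms(x, int(rms_win_s * fs))
    return np.convolve(rms, np.ones(smooth_n) / smooth_n, mode='same')
```

On a unit-amplitude sinusoid the interior of the output settles at the expected RMS value of 1/sqrt(2), which is a convenient sanity check for the window arithmetic.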
Analysis of non-voiding contractions (NVCs)
NVCs were identified in the inter-void periods only (Figure 11C). Pressure measurements during these periods were detrended (using the MATLAB function 'detrend') and NVC peaks were detected using the MATLAB function 'findpeaks', with parameters selecting only those peaks greater than 0.1 mmHg above baseline, between 1.5 and 8 s wide, and at least 5 s from any other such peak. These provided well-isolated examples of NVCs for analysis. Spike trains for each cell were binned (2 s bin width for the longer timescales shown in Figure 11C; 0.5 s bin width for Figure 11D) to extract estimates of the firing rate around each NVC and to examine the difference in spike counts during 1.5 s windows beginning 6 s and 3 s before each NVC. For shuffled data, a vector of random times was generated for each recording (taken from inter-void periods only) and examined in the same way. The number of shuffled NVC times and real NVC times was equal in each recording. To create the trace of bladder pressure shown in Figure 11D, bladder pressure was extracted during a 10 s window around each identified or shuffled NVC. Values were referenced to the initial pressure recorded during the window in order to extract the change in pressure around the NVC. Windowed measurements were then averaged to generate the mean change in bladder pressure around each identified or shuffled NVC.
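The detrend-and-findpeaks step can be sketched with numpy alone (the width criterion is omitted here for brevity, so this is a simplification of the MATLAB findpeaks parameters described above):

```python
import numpy as np

def detect_nvcs(pressure, fs, min_height=0.1, min_sep_s=5.0):
    """Detect candidate NVCs in an inter-void pressure trace: linear
    detrend, then local maxima above min_height (mmHg) separated by at
    least min_sep_s seconds. A numpy-only sketch, not the original code."""
    p = np.asarray(pressure, float)
    t = np.arange(len(p))
    slope, intercept = np.polyfit(t, p, 1)         # linear detrend
    d = p - (slope * t + intercept)
    # local maxima exceeding the height threshold
    idx = np.where((d[1:-1] > d[:-2]) & (d[1:-1] >= d[2:]) &
                   (d[1:-1] > min_height))[0] + 1
    # enforce minimum separation, keeping the larger peak of any close pair
    keep = []
    for i in sorted(idx, key=lambda j: -d[j]):
        if all(abs(i - k) >= min_sep_s * fs for k in keep):
            keep.append(i)
    return np.array(sorted(keep))
```

On a slowly rising baseline with two superimposed Gaussian bumps, the detrending removes the filling ramp and the detector returns just the two bump locations.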
Code availability
Custom MATLAB scripts used to analyse the data are available along with example data are at DOI: 10.5523/bris.20l920gl27ufi204brn8ilonsf along with the MATLAB code for the model of autonomous micturition and the Parasympathetic Preganglionic NEURON model.
The role of human immunodeficiency virus type 1 envelope glycoproteins in virus infection.
Enveloped viruses enter cells by a two-step process. The first step involves the binding of a viral surface protein to receptors on the plasma membrane of the host cell. After receptor binding, a membrane fusion reaction takes place between the lipid bilayer of the viral envelope and host cell membranes. This fusion reaction can occur either at the plasma membrane or in acidic endosomes following receptor-mediated endocytosis. In either case, the membrane fusion reaction delivers the viral nucleocapsid into the host cytoplasm, allowing the infection to proceed. Viral proteins embedded in the lipid bilayer of the viral envelope (known variously as surface, spike, or envelope proteins) catalyze receptor binding and membrane fusion reactions. The critical involvement of these viral proteins in receptor binding and membrane fusion has stimulated intensive investigation aimed at understanding the mechanisms by which these proteins function. In this article, we provide a brief overview of the roles envelope (Env) glycoproteins play in the human immunodeficiency virus type 1 (HIV-1) life cycle.
Potential N-linked glycosylation sites in gp120 from the HTLV-IIIB HIV-1 isolate, and at least three of the five sites in the ectodomain of gp41, appear to be utilized (Fig. 1; 4, 14). It has been suggested that HIV-1 Env also contains O-linked carbohydrates (15).
Proteolytic cleavage of gp160 in the Golgi is inefficient (12) and is catalyzed by a host protease at a Lys/Arg-X-Lys/Arg-Arg motif (where X is any amino acid) that is highly conserved among viral Env glycoprotein precursors (16 -19). Several studies have suggested that the host enzyme responsible for cleaving gp160 (and other viral Env precursors) is furin or a furin-like protease (20,21). Other enzymes may also be capable of mediating gp160 precursor processing, since cleavage can occur in a furin-deficient cell line (22), and a basic pair of amino acids at the cleavage site is not absolutely required for gp160 processing (18). Following gp160 cleavage, the oligomeric, noncovalently associated gp120-gp41 complexes are transported to the cell surface, where they are incorporated into budding virions.
Env Incorporation into Virus Particles
Because of the role HIV-1 Env plays in receptor binding and membrane fusion (see below), the virion incorporation of Env is essential for the formation of infectious virus particles. In certain virus systems (e.g. the alphaviruses), an interaction between the Env protein intracytoplasmic tail and the viral capsid has been demonstrated directly, and this interaction is required for virus release (23-25). In the case of retroviruses, which do not require Env expression for virus assembly and release (for review, see Ref. 26), the picture is less clear. Evidence derived from Env and Gag mutagenesis and pseudotyping studies has accumulated over the past decade both for and against the existence of a specific interaction between the TM cytoplasmic tail and the matrix protein (MA), which forms the membrane-proximal component of the retroviral core (27-35). In a recent study, it was demonstrated that mutations in HIV-1 MA that blocked the virion incorporation of full-length HIV-1 Env did not affect the incorporation of heterologous retroviral Env glycoproteins with short cytoplasmic tails or HIV-1 Env mutants containing large truncations in the gp41 cytoplasmic tail (36). This latter finding implies that the incorporation of Env glycoproteins with long cytoplasmic tails (i.e. lentiviral Env glycoproteins) depends upon a specific interaction between sequences in the cytoplasmic tail of the TM glycoprotein and the HIV-1 MA, whereas the incorporation of Env glycoproteins with short cytoplasmic tails into HIV-1 virions does not (36).
CD4 Binding
The initial step in HIV-1 infection involves the binding of virion-associated gp120 to the cell surface molecule CD4, which serves as the major receptor for HIV-1 and the related HIV-2 and simian immunodeficiency viruses (SIVs) (37-39). The Env determinants of CD4 binding map to gp120, in particular C3 and C4 (40-42). CD4 binding to gp120 induces conformational changes in both gp120 and gp41 that result in the exposure of Env domains (see below) that are thought to be involved directly in the membrane fusion reaction (43-46).
Following the identification of CD4 as the primary receptor for HIV, it was determined that soluble CD4 (sCD4) could neutralize virus infectivity (47-51). This neutralization was demonstrated to be primarily a result of enhanced shedding of gp120 from virions following treatment with sCD4 (52-54). Initially, it was suggested that the ability of sCD4 to neutralize HIV-1 might be exploited therapeutically. Unfortunately, however, primary, non-laboratory-adapted HIV-1 isolates are neutralized poorly by sCD4 (55, 56), in part as a result of the relative resistance of primary isolates to sCD4-induced gp120 shedding (57-60), thereby diminishing the utility of sCD4 as a therapeutic agent. In fact, in the related HIV-2 and SIV systems, sCD4 has been reported to actually enhance virus infectivity (61). It is currently unclear what role, if any, CD4-induced gp120 shedding plays in HIV-1 Env function (62, 63).
In addition to binding CD4 on the cell surface during the early phase of virus infection, HIV-1 Env associates with CD4 intracellularly soon after gp160 synthesis in the ER. Pulse-chase and transport inhibition studies suggested that within approximately 30 min of synthesis, gp160 adopts a conformation suitable for CD4 binding (64). The association of Env and CD4 early in the transport pathway leads to the down-regulation of CD4 expression from the surface of Env-expressing cells (65-68). This decrease in the level of cell surface CD4 may reduce the ability of Env-expressing cells to become infected with additional virions (66), a phenomenon, described for other retroviruses (69), known as superinfection interference. It has also been proposed that the association between Env and CD4 early in the transport pathway may be cytotoxic (70, 71).
Membrane Fusion
The ability to induce fusion between the lipid bilayer of the viral envelope and host cell membranes is a central feature of Env glycoprotein function. Env expression in an infected cell can also lead to cell-to-cell fusion, or syncytium formation, with neighboring CD4+ cells, a process that contributes to HIV cytopathicity in culture and possibly in vivo (72-74). In addition to domains required for gp160 proteolytic cleavage and CD4 binding (discussed above), a number of determinants in both gp120 and gp41 have been postulated to play a role in membrane fusion.
The first domain recognized as being directly involved in HIV-1 Env-induced membrane fusion was the highly hydrophobic sequence at the amino terminus of gp41. A number of studies involving the hemagglutinin protein of orthomyxoviruses (i.e. influenza A) and the F protein of paramyxoviruses had previously suggested that an analogous domain, referred to as the fusion peptide, plays a role in the membrane fusion function of these proteins (for reviews see Refs. 19 and 75-79). Analysis of lentiviral Env glycoproteins indicated that single amino acid changes in the highly hydrophobic amino termini of the HIV-1, HIV-2, and SIV TM glycoproteins blocked Env-induced syncytium formation (80-84). An extensive literature, reviewed elsewhere (76,78,79), details the irreversible conformational changes in the influenza virus hemagglutinin and paramyxovirus F proteins that lead to the exposure, or "activation," of the fusion peptide. As mentioned above, the HIV-1 Env glycoprotein also undergoes a series of conformational changes following CD4 binding, one outcome of which is the exposure of the gp41 fusion peptide (43). Several fusion peptides may act in concert to destabilize the lipid bilayer of the target membrane by forming a "fusion pore" between the two bilayers (77). The hypothesis that Env glycoproteins behave cooperatively to promote membrane fusion derives support from the finding that substitutions of polar amino acids in the fusion peptide of HIV-1 gp41 elicit a transdominant negative effect on syncytium formation and virus infectivity (85,86) and from studies demonstrating a synergistic effect of high levels of sCD4 on HIV-1 neutralization (87).
In addition to the amino-terminal fusion peptide, other domains within gp41 have been reported to play a role in the fusion process. Mutations in a putative leucine zipper motif in the gp41 ectodomain (88) blocked syncytium formation and virus infectivity without affecting Env oligomerization, transport, processing, or CD4 binding (89,90); a peptide based on the sequence of this region also inhibited syncytium formation and virus infection (91). Substitution of two charged residues in the membrane-spanning domain of gp41 also perturbed Env-induced membrane fusion (92).
In a number of studies, deletion, frameshift, or premature translation termination mutations were introduced into the cytoplasmic tails of the HIV-1, HIV-2, or SIV TM glycoproteins (40, 93-102), which, as noted above, are unusually long compared with those of other retroviruses. In some cases, these deletions enhanced Env-induced membrane fusion, suggesting that sequences in the gp41 cytoplasmic tail may modulate Env fusogenicity (94,96,97,99,101,102).
Two regions of gp120 are primarily involved in membrane fusion. A number of studies determined that antibodies to V3 were capable of neutralizing virus infectivity (103-109) without affecting virus binding to CD4 (108,109). Mutational analyses demonstrated that single amino acid substitutions within the HIV-1 V3 loop, and the analogous domain of HIV-2, blocked Env-induced syncytium formation (83,110,111) and virus infectivity (83,112). More recent studies have also implicated the V1/V2 region in membrane fusion. Mutations within V1/V2 were reported to block syncytium formation without affecting the gp120-gp41 interaction or CD4 binding (113), and the transfer of V2 sequences from syncytium-inducing Env glycoproteins conferred the ability to induce fusion on non-syncytium-inducing Env glycoproteins (114,115). Consistent with a role for V1/V2 in membrane fusion, antibodies to this region are capable of neutralizing virus infectivity (116,117).
It has been postulated for a number of years that molecules other than CD4 may be necessary for membrane fusion induced by HIV-1 Env. The following observations suggest that factor(s) provided by human cells are required for HIV-1 Env-induced membrane fusion: (i) expression of human CD4 in murine cells does not confer upon them the ability to support HIV-1 infection (39), (ii) in a cell-fusion reaction, the target cell must be of human origin, whereas the Env-expressing cell can be of non-human origin (118), and (iii) the formation of some somatic cell hybrids between human cells and CD4-expressing non-human cells can overcome the fusion defect observed in human CD4-expressing non-human cells, suggesting that the inability of CD4+ murine cells to support HIV infection is due to the absence of factor(s) on murine cells, rather than the presence of a mouse cell-specific interfering function (119-121). Although non-CD4 molecules have been reported to serve as alternative HIV-1 receptors on CD4− cells (122), no widely accepted CD4 co-receptor has been identified. It was suggested by Callebaut et al. (123) that CD26 (dipeptidyl-peptidase IV) conferred susceptibility to HIV-1 infection upon CD4-expressing murine (NIH 3T3) cells. A number of groups, however, failed to confirm a role for CD26 in HIV-1 infection or syncytium formation (124-128). Recent protease digestion data suggest that the factor(s) provided by human cells may be nonproteinaceous (129).
Tissue Tropism
An additional function of the HIV-1 Env glycoprotein is to determine the cell-type specificity, or tissue tropism, of virus infection. In culture, HIV-1 typically infects either cells of the monocyte/macrophage lineage or immortalized T-cell lines, but rarely both. Primary virus isolates obtained from infected individuals during the early, asymptomatic phase of infection are frequently non-syncytium-inducing and macrophage-tropic, and cells of the monocyte/macrophage lineage are thought to be important targets for virus infection in vivo (for review, see Ref. 130). HIV-1 isolates that are syncytium-inducing and capable of productively infecting T-cell lines tend to arise late in infection after the onset of AIDS-defining symptoms (131). In fact, it has been argued that the evolution in vivo of syncytium-inducing, T-cell line-tropic variants may play a causal role in disease development (74). As would be predicted for a property determined by Env, the block to infection in nonpermissive cells appears to be primarily at the level of entry, presumably resulting from a defect in membrane fusion (132,133). Interestingly, both macrophage-tropic and T-cell line-tropic isolates are capable of efficiently infecting primary human CD4+ T-lymphocytes.
Studies conducted in a number of laboratories have concluded that sequences within gp120 are responsible for determining the tissue tropism of HIV-1. The V3 loop, discussed above in the context of membrane fusion, plays a central role in tropism. The introduction of sequences encompassing the V3 loop from macrophage-tropic clones into T-cell line-tropic clones is able to confer macrophage tropism upon certain T-cell line-tropic clones (134-138). It is clear, however, that a combination of sequences both within, and outside, V3 is required for optimal macrophage infection. The exchange of additional sequences adjacent to V3, particularly amino-terminal to V3, greatly enhances the ability of chimeric viruses to infect macrophages (139,140). It appears that V3 loop conformation differs between macrophage-tropic and T-cell line-tropic Env glycoproteins (140,141), and that residues both within and outside V3 influence V3 conformation (142).
The mechanism by which the V3 loop affects cell-type tropism has not been elucidated. It has been suggested by some that a sequence near the tip of the V3 loop serves as the target for proteolytic cleavage, and that this V3 cleavage event activates the membrane fusion potential of HIV-1 Env (143,144). It was proposed that differences in tropism correlate with altered sensitivity of the V3 loop to proteolytic cleavage (141,145), although other investigators failed to find a correlation between these properties (146), and it has not been established that V3 cleavage plays any role in HIV-1 infection. Another model for V3 loop function invokes the existence of a cell-type specific, non-CD4 receptor molecule with which the V3 loop interacts. To date, however, no such receptor has been identified.
Env Interactions
In the discussion above, we have focused largely on the functions of discrete domains within the HIV-1 Env glycoprotein. It is becoming increasingly clear, however, that most Env functions require the interaction between nonadjacent sequences within gp120 or between gp120 and gp41. Although attempts to obtain a crystal structure of HIV-1 Env have thus far been unsuccessful, a variety of biological, biochemical, and immunological data have provided information about Env interactions. In an early demonstration of the importance of Env interactions, a debilitating mutation in C2 affecting infectivity could be reversed by changes in C1 and V3, suggesting a functional interaction between these domains of gp120 (147). More recently, the analysis of a revertant obtained from a V1/V2 envelope chimera demonstrated the existence of a functional interaction between V1/V2 and C4 (148). The analysis of chimeras between syncytium-inducing and non-syncytium-inducing Env glycoproteins also support an interaction between V1/V2 and sequences near the carboxyl terminus of gp120 (114).
Several groups have used antibody binding analyses to identify interacting regions within gp120. This approach has provided evidence for interactions between V1/V2 and C4; V3 and C1, C2, and C4; and C1, C2, and C5 (117, 149-153). An interaction between V3 and C4 is further supported by the observation that treatment of gp120 with soluble CD4 (which binds C4) enhances the binding of anti-V3 monoclonal antibodies (152), and that single amino acid mutations in C4 increase binding by a V3-specific antibody (153). Although some of these findings may be ascribed to indirect effects on protein folding rather than direct, functional interactions, these studies are consistent with the concept that while distinct domains within HIV-1 Env are involved in specific functions, complex interactions between, and within, these domains are essential for the full range of biological activities required for productive infection.
|
v3-fos-license
|
2024-05-19T16:01:28.280Z
|
2024-05-01T00:00:00.000
|
269868702
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2079-4991/14/10/855/pdf?version=1715679800",
"pdf_hash": "4a1ee8abe31d7127130f50b40ea6242fa49b795f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1141",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"sha1": "3643ecd7c216c986b093fcec7deb0afdd8e0d5d3",
"year": 2024
}
|
pes2o/s2orc
|
Review of Detection Limits for Various Techniques for Bacterial Detection in Food Samples
Foodborne illnesses can be infectious and dangerous, and most of them are caused by bacteria. Some common food-related bacteria species exist widely in nature and pose a serious threat to both humans and animals; they can cause poisoning, diseases, disabilities and even death. Rapid, reliable and cost-effective methods for bacterial detection are therefore of paramount importance in food safety and environmental monitoring. Polymerase chain reaction (PCR), lateral flow immunochromatographic assay (LFIA) and electrochemical methods have been widely used for this purpose. In this paper, the recent developments (2013–2023) covering PCR, LFIA and electrochemical methods for various bacterial species (Salmonella, Listeria, Campylobacter, Staphylococcus aureus (S. aureus) and Escherichia coli (E. coli)), considering different food sample types, analytical performances and the reported limit of detection (LOD), are discussed. It was found that the bacteria species and food sample type contributed significantly to the analytical performance and LOD. Detection via LFIA has a higher average LOD (24 CFU/mL) than detection via electrochemical methods (12 CFU/mL) and PCR (6 CFU/mL). Salmonella and E. coli in the Pseudomonadota domain usually have low LODs, and LODs are usually lower for detection in fish and eggs. Gold and iron nanoparticles were the most studied in the reported articles for LFIA, with average LODs of 26 CFU/mL and 12 CFU/mL, respectively. For the electrochemical method, the average LOD was highest for cyclic voltammetry (CV) at 18 CFU/mL, followed by electrochemical impedance spectroscopy (EIS) at 12 CFU/mL and differential pulse voltammetry (DPV) at 8 CFU/mL. The LOD usually decreases as the sample number increases, until it levels off. Exponential relations (R² > 0.95) between the LODs of Listeria in milk and the sample number have been obtained for both LFIA and the electrochemical method.
Finally, the review discusses challenges and future perspectives (including the role of nanomaterials/advanced materials) to improve analytical performance for bacterial detection.
Introduction
Foodborne illnesses can be dangerously infectious, and they are predominantly caused by pathogens (e.g., bacteria, fungi, viruses, parasites, etc.) or toxins (e.g., dioxins, heavy metals, mycotoxins, etc.) entering the body through contaminated food [1]. Most of the pathogens that can cause foodborne diseases are bacteria [2]. Bacteria can cause acute poisoning, long-term diseases, serious disabilities and even deaths [3]. Among all the bacteria species, Salmonella causes the most serious illnesses and deaths related to contaminated food [4,5]. Salmonella is commonly found in birds, vegetables and also in natural water [6]. Its symptoms include fever, vomiting, pain and dehydration, etc. [7]. Salmonella can be divided into over 2600 species. Among them, Salmonella enterica and Salmonella typhimurium are the most commonly found [8]. Listeria usually exists in processed products such as milk and meat and can grow in refrigerators [9]. Listeria is shown to cause miscarriages in pregnant women or deaths of infants, although the chance is low [10]. Around 20 species in Listeria can cause human diseases, and Listeria monocytogenes causes the most harm to humans [11]. Most Campylobacter infections in humans are acquired by eating and touching contaminated poultry and seafood [12]. More than 20 species of Campylobacter have been implicated in human disease, and the most well-known ones are Campylobacter jejuni and Campylobacter coli [13]. The most common symptoms of Campylobacter infections are diarrhea, fever, vomiting and stomach cramps [14]. S. aureus is normally found in birds, meat and milk [15]. S. aureus is one of the common bacteria species that display antimicrobial resistance to antibiotics like methicillin and vancomycin [16]. Common symptoms of S. aureus are shown on the skin, such as painful red welts and sores [17]. Generally, E. coli can be found in contaminated meat, milk and vegetables [18]. E. coli can be divided into three main groups: Enteropathogenic, Enteroinvasive and Enterohemorrhagic. A strain of Enterohemorrhagic E. coli is the most toxic variant [19]. Although E. coli does not cause any symptoms in most healthy humans, it can lead to diarrhea, vomiting and sometimes fever [20].
As many bacteria species currently pose a major threat to humans, a quick, accurate and cheap method to detect bacteria in the environment is essential, especially for food samples [21,22]. The traditional method to detect bacteria is through culturing, which includes isolating the bacteria and monitoring the growth of the colonies [23]. During the culture process, the bacterial colonies are fixed and stained on a glass slide and confirmed by microscopy in order to identify different bacteria species. This process is usually very time-consuming and labor-intensive [24]. Other methods are more complex and can overcome some limitations of bacterial culture. Another common detection method is high-performance liquid chromatography (HPLC), which has high sensitivity [25,26]. However, when the concentration of bacterial colonies is very low but still cannot be ignored for human health, it poses a challenge for these methods [27]. Researchers have developed many alternative methods to overcome these problems [21]. One technology that has been widely used more recently is the enzyme-linked immunosorbent assay (ELISA), which is available as a commercial test kit for bacterial detection [28,29]. However, it has disadvantages, such as low sensitivity and the need for very low storage temperatures [30]. As a result, it is very difficult to meet the demand for large-scale bacterial detection in food samples with current technologies.
PCR is a widely used method for bacterial detection in food [31]. It can make millions to billions of copies of a DNA sample rapidly, and the LOD can be measured using copies of that DNA sample [32]. It has a high sensitivity and a relatively lower LOD than other common detection methods [33]. Commonly used nanomaterials in PCR are gold nanoparticles (AuNPs) and magnetic nanoparticles, which can speed up the PCR process and enhance its efficiency because they have good thermal conductivity [34]. LFIA, another widely used method for bacterial detection in food, is quick, cost-effective and simple to use [35]. LFIA usually provides qualitative and semi-quantitative, and sometimes quantitative, results by measuring the color darkness of the test region on a strip [36]; the concentration of bacteria is inferred from the darkness of the color, with conjugated nanoparticles on the porous membrane serving as the indicator [37]. AuNPs and magnetic nanoparticles are the most-used nanoparticles in LFIA because of their low toxicity, and their particle size and shape can be controlled by many factors [8,38]. An alternative well-known method for bacterial detection in food is the electrochemical method [39,40]. This method mainly measures the changes in electrochemical properties caused by bacteria introduced into the solution [41]. Carbon-based nanomaterials and metal nanoparticles are usually integrated onto the electrodes to capture bacteria efficiently and increase signal amplification [42].
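As a rough illustration of how a semi-quantitative LFIA readout can be turned into a concentration estimate, the sketch below fits a log-linear calibration to test-line darkness values. All numbers are invented for illustration; real assays calibrate against their own standards, strip lot by strip lot.

```python
import numpy as np

# Hypothetical calibration: normalized test-line darkness measured for known
# bacterial standards (CFU/mL). These values are invented for illustration.
standards_cfu = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
line_darkness = np.array([0.08, 0.21, 0.35, 0.50, 0.63])  # 0..1 scale

# LFIA signal is roughly linear in log-concentration over the working range,
# so fit darkness = a * log10(CFU) + b.
a, b = np.polyfit(np.log10(standards_cfu), line_darkness, 1)

def estimate_cfu(darkness: float) -> float:
    """Invert the calibration to estimate concentration from a strip reading."""
    return 10 ** ((darkness - b) / a)

print(f"estimated concentration: {estimate_cfu(0.42):.0f} CFU/mL")
```

A reading between two standards maps back into the calibrated range; readings outside the working range of the strip would need extrapolation and should be treated with caution.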
Many reviews have been published on bacterial detection via different methods [43][44][45][46][47][48][49][50]. A review by Nnachi and co-authors compares nine methods for bacterial detection [43]. A review by Oh and co-authors discusses the influence of four pre-treatment methods on bacterial detection via PCR [44]. Furthermore, a review by Dey and co-authors discusses bacterial detection via LFIA for 10 bacteria species [45]. Moreover, a review by Vidic and co-authors discusses bacterial detection using an electrochemical method across eight sensor types [46]. However, these reviews were not organized by a clear category (e.g., bacteria species, LOD, etc.) [43][44][45][46]. The LOD can be influenced by different conditions, and none of these articles has investigated systematically how different conditions influence the LOD [47][48][49][50]. As a result, it is necessary to study the relationship between the LOD and different conditions comprehensively. In this review, PCR, LFIA and electrochemical methods, and their efficiency in the detection of different bacteria species in different food sample types, are summarized. Recent developments (2013-2023) covering PCR, LFIA and the electrochemical method for the detection of various bacterial species (Salmonella, Listeria, Campylobacter, S. aureus and E. coli) are discussed from 150 peer-reviewed articles, considering the different food sample types, analytical performance and the reported LOD. Current challenges and future avenues to further improve analytical performance for bacterial detection are also discussed.
Research Methods
Information was collected from ScienceDirect with these keywords: bacteria, PCR, LFIA, electrochemical method, LOD. A total of 150 peer-reviewed articles from 2013 to 2023 were compared to identify the LOD for different bacteria species via PCR, LFIA and the electrochemical method: bacteria in the Pseudomonadota domain, which includes Salmonella and E. coli; the Campylobacterota domain, which includes Campylobacter; and the Bacillota domain, which includes Listeria and S. aureus. Each combination of one bacteria species and one detection method is represented by 10 articles in this review. Notably, the bacterial detection with the lowest LOD was selected when more than one bacteria species or detection method was investigated in an article, since it is difficult to track detection efficiency, performance and LOD simultaneously. The multiplex detection capability was included as an important category in this review. The data collected were the bacteria species, year of article, multiplex detection capability, food sample type, sample number (the number of samples tested for that bacteria species and food sample type in the article) and LOD (CFU/mL), shown in Tables 1-3. The sample number shows the repeatability of the experiment, which is important since it reflects successive measurements under the same conditions.
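The per-category averaging used throughout the Results can be sketched as follows. The records below are a small hypothetical subset, not the review's actual tables:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (bacteria species, detection method, LOD in CFU/mL) records;
# the review's real data span 150 articles (Tables 1-3).
records = [
    ("Salmonella", "PCR", 3), ("E. coli", "PCR", 2),
    ("Salmonella", "LFIA", 20), ("Listeria", "LFIA", 30),
    ("E. coli", "electrochemical", 5), ("Listeria", "electrochemical", 8),
]

# Group LODs by detection method and average them, mirroring how the review
# reports per-method average LODs.
by_method = defaultdict(list)
for species, method, lod in records:
    by_method[method].append(lod)

averages = {method: mean(lods) for method, lods in by_method.items()}
print(averages)
```

The same grouping key can be swapped for species, food sample group, year or nanoparticle to reproduce the other aggregates discussed below.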
Results
Tables 1-3 provide a breakdown of the analysis of the 150 peer-reviewed articles used in this review, based on the bacteria species, multiplex detection capability, food sample type, detection method and reported LOD. The food sample types were divided into eight groups for easier analysis: mammals (including beef, pork and sheep), birds (including chicken, duck, poultry and turkey), fish, egg, milk, plants (including lettuce, soybean, rice, cabbage and apple), natural water and bacterial solution (a solution that contains the target bacteria species, prepared via a conventional boiling method). The distribution of the number of articles is further shown in Figure 1.
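The eight-group categorization above amounts to a simple lookup; the dictionary below mirrors the categories listed in the text, while the helper function and the "unknown" fallback are illustrative additions of our own:

```python
# Lookup table mirroring the eight food sample groups defined in the review.
FOOD_GROUPS = {
    "beef": "mammals", "pork": "mammals", "sheep": "mammals",
    "chicken": "birds", "duck": "birds", "poultry": "birds", "turkey": "birds",
    "fish": "fish", "egg": "egg", "milk": "milk",
    "lettuce": "plants", "soybean": "plants", "rice": "plants",
    "cabbage": "plants", "apple": "plants",
    "natural water": "natural water", "bacterial solution": "bacterial solution",
}

def food_group(sample: str) -> str:
    """Map a specific food sample to its group, or 'unknown' if unlisted."""
    return FOOD_GROUPS.get(sample.strip().lower(), "unknown")

print(food_group("Pork"), food_group("apple"))
```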
Figure 1a shows that the number of articles with multiplex detection capability differed among the detection methods: it was highest for PCR (26 articles), followed by LFIA (11 articles) and the electrochemical method (5 articles). Although most articles with multiplex detection capability involved the simultaneous detection of only two bacteria species, or two strains of one species, some involved the simultaneous detection of five species or five strains (Tables 1-3). Figure 1b shows the number of articles by food sample group for each detection method. Milk was studied in the highest proportion of articles for every method (13 articles for PCR, 17 for LFIA and 22 for the electrochemical method). The second most-studied group was mammals for PCR (12 articles) and LFIA (11 articles), and plants for the electrochemical method (11 articles).
The distribution of the number of articles and the average LODs by year is presented in Figure 2, which shows the research trend for each detection method from 2013 to 2023.
Figure 2a shows that the annual number of articles differed between detection methods and years. At least one article was published each year for each detection method from 2013 to 2023, and more articles were published from 2019 to 2023 than in the earlier years. Figure 2b shows that the average annual LOD also differed between detection methods and years. The annual average LOD was usually the highest for LFIA, except for the electrochemical method in 2018, and usually the lowest for PCR, except for the electrochemical method in 2016 and 2022. In PCR, the average LOD was 4 CFU/mL in 2013, increased gradually to 18 CFU/mL in 2016 and then dropped sharply to 3 CFU/mL in 2017. After 2017, it decreased overall to 2 CFU/mL in 2021 before rising to 6 CFU/mL in 2022. In LFIA, it was 10 CFU/mL in 2013, decreased to 7 CFU/mL in 2014 and increased tremendously to 75 CFU/mL in 2016. It then fell sharply to 17 CFU/mL in 2017, rose again to 40 CFU/mL in 2019, decreased gradually to 8 CFU/mL in 2021 and increased again to 25 CFU/mL in 2023. In the electrochemical method, it was 4 CFU/mL in 2013 and increased gradually to 10 CFU/mL in 2015. After a decrease to 7 CFU/mL in 2016, it increased again to 35 CFU/mL in 2018, then decreased gradually to 3 CFU/mL in 2022.
Figure 3 shows the number of articles and the average LODs for the various nanoparticles reported in the LFIA articles. Figure 3a shows that the number of articles differed between nanoparticles: gold was the most studied nanoparticle (33 articles), followed by iron (8 articles). Figure 3b illustrates that the average LODs also differed between nanoparticles. The average LOD was highest for articles using silicon, followed by palladium and carbon; the average LODs for silicon, palladium, carbon, gold, iron, cobalt, manganese and europium were 50 CFU/mL, 41 CFU/mL, 40 CFU/mL, 26 CFU/mL, 12 CFU/mL, 10 CFU/mL, 9 CFU/mL and 4 CFU/mL, respectively.
The relationship between the number of articles, the techniques used and the average LOD for the electrochemical method is presented in Figure 4. Figure 4a shows that the number of articles differed between techniques: EIS was the most studied (20 articles), followed by CV (14 articles) and DPV (13 articles). Figure 4b illustrates that the average LODs differed between techniques. The average LOD was highest for articles using CV, followed by ASV; the average LODs for CV, ASV, EIS, DPV and SWV were 18 CFU/mL, 15 CFU/mL, 12 CFU/mL, 8 CFU/mL and 6 CFU/mL, respectively.
Figure 5 shows the average LOD for each bacteria species and food sample group via the different detection methods, based on the 150 articles in the tables, indicating which detection method is the most suitable for each bacteria species and food sample group. Figure 5a shows that the overall average LOD was the lowest for PCR and the highest for LFIA. The average LOD was higher for articles with multiplex detection capability than for those without in PCR, but lower in LFIA and the electrochemical method. PCR has the lowest average LOD among all detection methods for Salmonella, Campylobacter and E. coli, which are all gram-negative (−), while the electrochemical method has the lowest average LOD for Listeria and S. aureus, which are both gram-positive (+). On the other hand, LFIA always has the highest average LOD for each bacteria species. In PCR and LFIA, the LODs for bacteria species in the Pseudomonadota domain were usually lower than those for species in the Bacillota domain, whereas they were similar for the electrochemical method. Among the Pseudomonadota, the average LOD for E. coli was lower than that for Salmonella in PCR and the electrochemical method, but higher in LFIA. Among the Bacillota, the average LOD for Listeria was lower than that for S. aureus in PCR and the electrochemical method, but higher in LFIA. The average LOD of Campylobacter was usually the highest among all the bacteria species for each detection method, except that it was lower than that of S. aureus in PCR.
Figure 5b shows that PCR had the lowest average LOD for birds, fish and milk, while the electrochemical method had the lowest average LOD for mammals, egg and plants. LFIA usually had the highest average LOD for each food sample group, except that in egg the average LOD was highest for PCR. Among all the food sample groups, egg had the lowest average LOD for LFIA and the electrochemical method, while fish had the lowest average LOD for PCR. In contrast, birds had the highest average LOD for PCR, followed by mammals and milk. Natural water and bacterial solution were only involved in detection via PCR, and their average LODs were lower than those of all other food sample groups except fish.
EU limits for Salmonella, Listeria, Campylobacter, S. aureus and E. coli in food for the general population are 100, 100, 1000, 1000 and 100 CFU/mL, respectively [18]. All the LODs compiled in this review are well below these limits. Although some foods intended for vulnerable groups, such as infants and sick people, require the complete absence of bacteria, at least one article with an LOD of 0.3 CFU/mL or lower was included for each bacteria species for PCR [18].
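The comparison above can be expressed as a simple screening check. This is a minimal sketch using the EU limits quoted in this review; the helper name and the example LOD values are illustrative, not part of any regulatory tool:

```python
# EU microbiological limits quoted in this review (CFU/mL) [18]
eu_limits = {"Salmonella": 100, "Listeria": 100, "Campylobacter": 1000,
             "S. aureus": 1000, "E. coli": 100}

def within_eu_limit(species, lod_cfu_per_ml):
    """Return True if a method's LOD is below the EU limit for a species,
    i.e., the method can resolve contamination at the regulated level."""
    return lod_cfu_per_ml < eu_limits[species]

# Average LODs reported in this review (e.g., 6 CFU/mL for PCR,
# 24 CFU/mL for LFIA) all pass this check.
assert within_eu_limit("Salmonella", 6)
assert within_eu_limit("Campylobacter", 24)
```

A method whose LOD sits above the limit could report a food as bacteria-free while it is already non-compliant, which is why every LOD in the tables being below the limits matters.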
In order to find an accurate quantitative relationship between the LOD and different parameters, the LOD should be analyzed against each parameter individually. Exponential relations with relatively high Pearson correlation coefficients (R² > 0.95) were obtained between the LOD of Listeria in milk and the sample number for both LFIA and the electrochemical method. Figure 6 shows both the original LODs and the regressions for Listeria in milk via LFIA and the electrochemical method. For articles involving detection of Listeria in milk via LFIA, LOD (CFU/mL) = 5 + 1451/exp(0.5053 × sample number), R² = 0.9867. For articles involving detection of Listeria in milk via the electrochemical method, LOD (CFU/mL) = 0.3 + 6.885/exp(0.05958 × sample number), R² = 0.9591. These results show that the LOD usually decreases as the sample number increases for detection of the same bacteria species and food sample group using the same detection method. However, the rate of decrease reduces gradually with increasing sample number until the LOD reaches its lowest value and remains unchanged thereafter. When the sample number is very large, the LOD of Listeria in milk via LFIA and the electrochemical method approaches about 5 CFU/mL and 0.3 CFU/mL, respectively. This rule may also apply to other bacteria species and food sample groups, but more articles need to be collected and analyzed before a possible regression can be achieved.
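The two fitted regressions can be evaluated directly. The sketch below implements the reported formulas (function names are ours) and illustrates the behaviour described above: the LOD falls monotonically with sample number and plateaus at roughly 5 and 0.3 CFU/mL:

```python
import math

def lod_lfia(n):
    """Reported fit for Listeria in milk via LFIA (R^2 = 0.9867)."""
    return 5 + 1451 / math.exp(0.5053 * n)

def lod_electrochemical(n):
    """Reported fit for Listeria in milk via the electrochemical method (R^2 = 0.9591)."""
    return 0.3 + 6.885 / math.exp(0.05958 * n)

# The LOD decreases monotonically with sample number...
assert lod_lfia(5) > lod_lfia(10) > lod_lfia(20)
# ...and approaches the reported plateaus of ~5 and ~0.3 CFU/mL
assert abs(lod_lfia(60) - 5) < 1e-6
assert abs(lod_electrochemical(400) - 0.3) < 1e-6
```

The plateau terms (5 and 0.3) are the additive constants in each formula, so the asymptotic LODs quoted in the text follow directly from the fitted equations.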
Discussion
The review of the LODs for PCR, LFIA and the electrochemical method has revealed a trend in this research area that will inform food safety and public health experts.Figure 1a illustrates that the number of articles with multiplex detection capability is the highest in PCR, followed by LFIA and the electrochemical method.That is the main reason that PCR is considered a reliable standard detection method for bacterial detection under many circumstances.However, PCR does have the disadvantages of being a high-cost, time-consuming and complex procedure.As a result, PCR cannot replace LFIA and the electrochemical method for bacterial detection completely.Milk is the most popular food sample for bacterial detection via each detection method.
Figure 2 shows that although bacterial detection has attracted more attention from researchers in recent years, the LODs in the published articles have not decreased continually. The main reason is that the LODs in this review are already all below the EU limits.
Figure 3a shows that only three articles for LFIA involved non-metal nanoparticles (silicon: two, carbon: one).The majority of articles with LFIA involved metal nanoparticles.Figure 3b illustrates that average LODs for non-metal nanoparticles are usually higher than for metal nanoparticles, except that the average LOD of palladium is a little higher than that for carbon.In addition, four articles in detection of bacteria via LFIA involved combined detection (one article involved combined detection with the electrochemical method; three articles involved combined detection with PCR).For the same bacteria species, the LODs in articles involving combined detection with other methods were usually lower than the LODs in articles without.In the detection of Salmonella, the LOD in Ref. [101] was 1 CFU/mL, and it was lower than all LODs in other articles without combination with the electrochemical method.In the detection of Listeria, the LOD in Ref. [101] was 7 CFU/mL, and it was lower than all LODs in other articles without combination with PCR.In the detection of S. aureus, the LOD in Ref. [133] was 3 CFU/mL, and it was higher than the LODs in Ref. [131] and Ref. [132] without combination with PCR.However, the nanoparticle used in Ref. [133] was silicon while the nanoparticle used in the other two articles was gold, and the average LOD with silicon was higher than the LOD with gold.In addition, the LOD in Ref. [135] was 10 CFU/mL, and it was higher than the LODs in Ref. [131], Ref. [132] and Ref. [134] without combination with PCR.However, Ref. [125] involves the detection of five bacteria species simultaneously while the other three articles only involve the detection of S. aureus.It is very difficult to keep high detection efficiency and a low LOD simultaneously.These four articles combine the advantages of both detection methods, which can be a choice for further development of detection methods.
Figure 4 shows that a few articles on the detection of Campylobacter with extremely high LODs (only Campylobacter includes articles with LODs over 20 CFU/mL via the electrochemical method) raise the average LODs for the electrochemical method.
In Figure 5a, the average LOD is the lowest for PCR in gram (−) bacteria species and for the electrochemical method in gram (+) bacteria species. Campylobacter is gram (−), and its average LOD is usually the highest among all bacteria species for each detection method, except that the LOD is highest for S. aureus via PCR. A possible reason for the higher average LODs of Campylobacter and S. aureus is that the EU limits for them are higher than for the other bacteria species [18]. The average LODs for Salmonella and E. coli (both gram (−)) in the Pseudomonadota phylum are usually lower than those for Listeria and S. aureus (both gram (+)) in the Bacillota phylum in PCR and LFIA, but similar to the latter in the electrochemical method. The difference between two bacteria species in the same phylum is much smaller than the difference between the phyla.
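The pattern in Figure 5a (PCR lowest for the gram-negative species, the electrochemical method lowest for the gram-positive ones) can be captured as a simple lookup. This is a sketch of the review's empirical finding for these five species only, not a universal selection rule:

```python
# Gram stain of the five species covered in this review
gram_stain = {"Salmonella": "-", "Campylobacter": "-", "E. coli": "-",
              "Listeria": "+", "S. aureus": "+"}

def lowest_lod_method(species):
    """Method with the lowest average LOD for a species, per Figure 5a:
    PCR for gram-negative species, the electrochemical method for gram-positive."""
    return "PCR" if gram_stain[species] == "-" else "electrochemical"

assert lowest_lod_method("E. coli") == "PCR"
assert lowest_lod_method("Listeria") == "electrochemical"
```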
Furthermore, the sample number plays an important role in controlling the LOD in each method. Figure 6 shows that the exponential formulas fit the original LOD data for Listeria in milk via LFIA and the electrochemical method as a function of sample number, drawn from already published research articles. These exponential regressions relate the LOD of Listeria to the sample number for LFIA and the electrochemical method in milk samples. Milk lends itself to this analysis because it is the most common food sample for each detection method in this review, and its composition is simpler than that of meat samples [201].
This review also shows that the average LOD for articles with multiplex detection capability is higher than for articles without it in PCR, but lower in LFIA and the electrochemical method. One possible reason is that PCR usually has a lower LOD than LFIA and the electrochemical method, and it is difficult to maintain both detection efficiency and sensitivity when the LOD is already low. This could be a promising focus for the development of bacterial detection in the future. This review also indicates that fish and egg have the lowest average LODs among all food sample groups; the complexity of the food sample composition can increase the LOD. To address such limitations and challenges, sample enrichment and improvement in the device properties of detection are needed. PCR, LFIA and the electrochemical method have been used in the detection of different bacteria species, and many studies involve multiplex detection. Bacteria species often coexist in a single food sample, so multiplex detection that fulfils the requirements of a low LOD and high efficiency simultaneously is needed. These detection methods can also be combined with other technologies to obtain better detection performance.
Challenges and Future Perspectives
Sensitivity and Specificity: Enhancing sensitivity and specificity poses a significant challenge.The integration of specific aptamers or DNA strands enhances PCR-based bacterial detection in terms of sensitivity and specificity.For LFIA, lateral-flow design and the integration of monoclonal antibodies and nanomaterials are crucial for enhancing specificity and LODs.For the electrochemical method, electrode modification with diverse nanomaterials has emerged as a prevalent technique, amplifying signals and improving sensitivity.MALDI-TOF mass spectrometry is widely used alongside electrochemical methods for bacterial detection to increase reliability, accuracy and efficiency.Microfluidic platforms offer seamless integration with LFIA and the electrochemical method.Colorimetric and fluorescent sensing methods can be used with PCR, LFIA and electrochemical methods to achieve lower LODs and wider linear ranges.In addition, PCR, LFIA and electrochemical methods can all be used in bacterial drug resistance tests [202,203].
Sample Complexity: Addressing the challenges related to sample complexity, matrix effects and cost is crucial for the development of efficient bacterial detection systems.Sample complexity can lead to a higher LOD, and the LOD is also affected by the pretreatment of food samples.As a result, complex biosensing systems necessitate pretreatment of food samples, with different food samples requiring varied treatments and techniques.Obtaining data under similar sample treatments and identical testing conditions is challenging but important.
Analysis Time: The total time required for analysis varies across different bacterial detection methods, including PCR, LFIA and electrochemical methods.LFIA and electrochemical methods are well known for their rapid analysis and multiplex detection capability.
Role of nanomaterials and advanced materials for future developments: The integration and successful utilization of various materials and nanomaterials for bacterial detection in food is well reported in recent years.Nanomaterials offer unique properties, including high surface area, tunable physical characteristics and enhanced reactivity, which makes them ideal candidates for improving sensitivity, specificity and overall performance.
PCR: Nanomaterials find a major application in PCR-based bacterial detection methods, contributing to the sensitivity and efficiency of the amplification process.Nanoparticles such as AuNPs, silicon and magnetic nanoparticles are often utilized in PCR assays.One significant application is in the extraction/purification of nucleic acids from bacterial samples.Magnetic nanoparticles coated with specific ligands can bind to bacterial DNA or RNA selectively, enabling the isolation from food matrices.This enhances purity and subsequently improves the reliability of PCR amplification.Additionally, nanoparticles, as labels for detection, can help in facilitating the visualization of PCR products.Quantum dots, for instance, provide a fluorescent signal which can be quantified, enhancing the sensitivity and specificity of bacterial detection via PCR [204].
LFIA: Nanomaterials play a crucial role in enhancing the performance of LFIA for bacterial detection in food.Carbon nanotubes, magnetic nanoparticles and quantum dots are among the commonly utilized nanomaterials.These materials are employed for conjugation with antibodies specific to the targeted species.Nanomaterials are normally integrated into the test strip; e.g., AuNPs are frequently utilized as labels for bacterial detection due to their distinct color change properties.The immobilization of antibodies on the surface of these nanoparticles facilitates specific binding to bacterial antigens, thereby enabling the quantitative detection of the target bacteria species.Moreover, the use of nanomaterials in LFIA is reported to help in signal amplification and improved sensitivity (and lower LODs) [205].
Electrochemical method: Nanomaterials play a crucial role in enhancing the performance of electrochemical methods for bacterial detection.Carbon-based nanomaterials, metal nanoparticles and nanocomposites are commonly integrated onto electrode surfaces to improve the response and signal amplification.Nanomaterials provide an increased surface area for the immobilization of specific recognition elements (antibodies or aptamers), which ensures efficient capture of the target bacteria species and thereby improves sensitivity.In addition, nanomaterials modify the electrode surface to promote electron transfer kinetics, resulting in rapid and reliable electrochemical signals and detection.The unique properties of nanomaterials, such as size, structure, conductivity and catalytic activity, contribute to the overall performance of electrochemical biosensors for bacterial detection [206][207][208][209].A comparison of PCR, LFIA and electrochemical methods for bacterial detection is given in Table 4.
In summary, the integration of nanomaterials in PCR, LFIA and electrochemical methods for bacterial detection in food represents a promising strategy to overcome the challenges associated with sensitivity, specificity, overall performance and LODs.The exploration of novel nanomaterials and their tailored applications would help us to further lower the LODs and advance the capability of bacterial detection in food safety.
Conclusions
The development of detection technology for monitoring the quality and safety of foods has provided promising tools for improved quantitative performance.In order to improve the accuracy and precision of the different detection methods (PCR, LFIA and the electrochemical method), parameters such as bacteria species, year of article, multiplex detection capability and food sample type have been considered as determinants of the LOD.The results show that bacteria species and food sample type contribute strongly to predicting the LOD.The average LOD is the highest for detection using LFIA (24 CFU/mL), followed by the electrochemical method (12 CFU/mL) and PCR (6 CFU/mL).Salmonella and Escherichia coli in the Pseudomonadota phylum usually have lower LODs than the other bacteria species.LODs are usually lower for detections in fish and egg than in the other food samples analyzed.Most articles about LFIA involve metal nanoparticles, especially gold and iron; the average LOD of articles involving gold (26 CFU/mL) is higher than that of iron (12 CFU/mL).EIS, CV and DPV are the three major techniques among articles about the electrochemical method; CV has a higher average LOD (18 CFU/mL) than EIS (12 CFU/mL) and DPV (8 CFU/mL).The LOD usually decreases as the sample number increases until it reaches its lowest point for detection of the same bacteria species, food sample group and detection method.The LODs of Listeria in milk using LFIA and the electrochemical method follow exponential regressions with the sample number, with relatively high Pearson correlation coefficients (R² > 0.95).Sample enrichment, improvement in the device properties of detection and the possibility of combination with other detection technologies are needed to lower the LOD and improve detection performance further.This review provides guidance for future developments in bacteria monitoring technologies based on the enrichment of bacteria from samples and the development of multiplex detection methods that can increase the detection efficiency while keeping the LOD low.The integration and exploration of novel nanomaterials will help to further lower the LOD and advance the capability of bacterial detection technologies in the realm of food safety.
Methods
PRISMA Statement (Preferred Reporting Items for Systematic Reviews and Meta-Analyses): We completed the PRISMA 2020 checklist and constructed a flowchart following the PRISMA guidelines and registration information.The selection process was based on the PRISMA 2020 statement [210], and the flowchart is shown in Figure 7.
Research Process: Most foodborne diseases are caused by bacteria in food, and they can be infectious and dangerous, so it is essential to detect bacteria in food quickly and accurately.The material for this systematic review was gathered through a literature search of online databases.Relevant articles were searched on Google Scholar and the Scopus database to identify the LODs of common detection methods (PCR, LFIA and electrochemical methods) in bacterial detection in food.The Boolean operators "AND" and "OR" were used to broaden the search.The keywords used for searching were "LOD", "Salmonella", "Listeria", "Campylobacter", "S. aureus", "E. coli", "PCR", "LFIA" and "electrochemical method".The citations were collected from recent studies (2013-2023).To further ensure that we had assembled a comprehensive list of studies, we asked researchers with relevant knowledge of the topic to review and suggest keywords.The search focused on scientific research articles using the following protocol:
i. Publication years were between 2013 and 2023.
ii. The keywords "("LOD")" AND "("Salmonella" OR "Listeria" OR "Campylobacter" OR "S. aureus" OR "E. coli")" AND "("PCR" OR "LFIA" OR "electrochemical method")" had to appear in the title and/or abstract.
iii. They had to be scientific indexed papers reporting the lowest LODs only.
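The Boolean protocol above can be assembled programmatically. This sketch joins keyword groups with AND and alternatives within a group with OR, matching the layout of criterion ii; any database-specific field syntax (e.g., Scopus field codes such as TITLE-ABS-KEY) is an assumption and would need adapting per database:

```python
def build_query(groups):
    """Join keyword groups with AND; alternatives within a group with OR."""
    return " AND ".join(
        "(" + " OR ".join(f'"{kw}"' for kw in group) + ")" for group in groups
    )

# Keyword groups from the search protocol in this review
query = build_query([
    ["LOD"],
    ["Salmonella", "Listeria", "Campylobacter", "S. aureus", "E. coli"],
    ["PCR", "LFIA", "electrochemical method"],
])
# Produces: ("LOD") AND ("Salmonella" OR ...) AND ("PCR" OR ...)
assert query.startswith('("LOD") AND (')
assert query.count(" AND ") == 2
```

Generating the query string from the group lists keeps the protocol reproducible and makes it easy to rerun the search with added species or methods.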
The results were screened against the inclusion criteria, and articles not relevant to the study were excluded.The full text of every article that fit the inclusion criteria was retrieved.

Screening: Strict criteria were used to determine the relevant articles for inclusion.For example, articles were excluded if they were published in languages other than English or if only an abstract was available; each remaining search result was then grouped as one of the following:
i. "Primary articles" were research papers that appeared in the peer-reviewed literature and reported original data or results based on observations and experiments.
ii. "Review" papers summarized the understanding of the LODs of the five bacteria species using the three detection methods.
Throughout the screening process, the number of publications excluded in each stage and their reasons for exclusion were noted based on the guidelines outlined in the PRISMA statement 2020 in Figure 7.
Figure 1. Number of articles (a) with multiplex detection capability and (b) by food sample group for the different detection methods.
Figure 1a shows that the number of articles with multiplex detection capability differed among detection methods.It was the highest for PCR, followed by LFIA and the electrochemical method: 26 articles, 11 articles and 5 articles, respectively.Although most articles with multiplex detection capability only involved simultaneous detection of two bacteria species or two strains of one bacteria species, some involved simultaneous detection of five bacteria species or five strains of one bacteria species (Tables 1-3).Figure 1b shows the numbers of articles according to food sample groups for the different detection methods.Milk was studied in the highest proportion of articles for each detection method.The second most-studied food sample group was mammals for PCR and LFIA and plants for the electrochemical method.The number of articles for milk for PCR, LFIA and the electrochemical method was 13 articles, 17 articles and 22 articles, respectively.The number of articles for mammals was 12 for PCR and 11 for LFIA, and 11 articles covered plants for the electrochemical method.

The annual distribution of the number of articles and the average LODs is presented in Figure 2, which shows the research trend of the different detection methods from 2013 to 2023.Figure 2a shows that the annual numbers of articles differed between detection methods and years.At least one article was published each year for each detection method from 2013 to 2023, and more articles were published from 2019 to 2023 than from 2013 to 2018 for each method, indicating increased research interest.For PCR, the annual number of articles was five in 2013; it then decreased to two in 2014, increased gradually to seven in 2018, decreased to three in 2019, reached its highest point of ten in 2020 and dropped to a low of two in 2021.For LFIA, there was only one article in 2013; the number increased gradually to eight in 2018, reached eight again in 2019 and decreased to a low of four in 2021.For the electrochemical method, there was also one article in 2013; the number increased sharply to three in 2014, decreased to two in 2016, then increased gradually to nine in 2022, followed by a large decline to five in 2023.
Figure 2. Timeline of the annual number of articles collected and average LODs in different years via different detection methods.(a) Number of articles.(b) Average LODs.
Nanomaterials 2024, 14, x FOR PEER REVIEW 9 of 25

The average LOD was 7 CFU/mL in 2014 and increased tremendously to 75 CFU/mL in 2016.Then, it decreased sharply to 17 CFU/mL in 2017, followed again by an increase to 40 CFU/mL in 2019.After that, it decreased gradually to 8 CFU/mL in 2021 and increased again to 25 CFU/mL in 2023.In the electrochemical method, it was 4 CFU/mL in 2013, and it increased gradually to 10 CFU/mL in 2015.After a decrease to 7 CFU/mL in 2016, it increased again to 35 CFU/mL in 2018.Then, it decreased gradually to 3 CFU/mL in 2022.
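The per-method and per-species averages discussed above amount to a grouped mean over the tabulated LOD values. A minimal sketch of that aggregation, using hypothetical placeholder LODs rather than the actual figures from Tables 1–3:

```python
from statistics import mean

# Hypothetical (method, species, LOD in CFU/mL) records standing in for Tables 1-3.
records = [
    ("PCR", "Salmonella", 10), ("PCR", "E. coli", 5), ("PCR", "Listeria", 50),
    ("LFIA", "Salmonella", 1e4), ("LFIA", "E. coli", 1e5), ("LFIA", "Listeria", 1e4),
    ("Electrochemical", "Listeria", 20), ("Electrochemical", "S. aureus", 30),
]

def average_lod(records, by="method"):
    """Group LOD values by detection method (or by species) and average each group."""
    groups = {}
    for method, species, lod in records:
        key = method if by == "method" else species
        groups.setdefault(key, []).append(lod)
    return {key: mean(vals) for key, vals in groups.items()}

print(average_lod(records))  # on these placeholder values, PCR lowest, LFIA highest
```

The same function with `by="species"` reproduces the per-species comparison in Figure 5a; real use would substitute the LOD values extracted from the reviewed articles.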
Figure 6. Exponential regressions of LODs of Listeria in milk with sample number, via (a) LFIA and (b) the electrochemical method.
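An exponential regression of the kind shown in Figure 6 fits LOD = a·exp(b·n) against sample number n, which is commonly done as a least-squares fit on log(LOD). A sketch with made-up data points (not the values underlying Figure 6):

```python
import math

def fit_exponential(ns, lods):
    """Fit LOD = a * exp(b * n) by ordinary least squares on log(LOD)."""
    logs = [math.log(y) for y in lods]
    n_mean = sum(ns) / len(ns)
    l_mean = sum(logs) / len(logs)
    # Slope and intercept of the log-linear regression line.
    b = sum((n - n_mean) * (l - l_mean) for n, l in zip(ns, logs)) / \
        sum((n - n_mean) ** 2 for n in ns)
    a = math.exp(l_mean - b * n_mean)
    return a, b

# Made-up sample numbers and Listeria-in-milk LODs (CFU/mL), decaying exponentially.
ns = [1, 2, 3, 4, 5]
lods = [1000.0 * math.exp(-0.8 * n) for n in ns]
a, b = fit_exponential(ns, lods)
print(round(a), round(b, 2))  # recovers a ≈ 1000, b ≈ -0.8 on this exact synthetic data
```

A negative fitted b, as sketched here, corresponds to the downward trend of reported LODs as more studies accumulate; noisy real data would simply make the recovered parameters approximate rather than exact.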
Figure 7. PRISMA flow diagram for the literature search; na = not applicable.
Table 1. PCR: papers with LODs for five common bacteria species.
Table 2. LFIA: papers with LODs for five common bacteria species.
Table 3. Electrochemical method: papers with LODs for five common bacteria species.
Table 4. Comparison of PCR, LFIA and electrochemical methods for bacterial detection.
Exploring economic dimensions of social ecological crises: A reply to special issue papers
In this paper I consider various shifts in my research and understanding stimulated by seeking how to combat social ecological crises connected to modern economies. The discussion and critical reflections are structured around five papers that were submitted to Environmental Values in an open call to address my work. A common aspect is the move away from neoclassical environmental economics, and its reductionist monetary valuation, to a more realist theory and multiple methods. This relates to my work on environmental ethics, plural values, stated preference validity and deliberative monetary valuation. Expanding beyond the narrow confines of mainstream orthodoxy has involved exploring a range of other disciplines (e.g. applied philosophy, social psychology, human geography, political science, social anthropology, history of thought and philosophy of science) and learning from this literature to rethink economics and develop social ecological economics. A broad range of subjects are covered here, including: personal responsibility, social practice, psychology of the individual, participatory processes, value (intrinsic, instrumental and relational), Nature–society relationships and interdependencies, critical realism and the conduct of unifying interdisciplinary science. I end with a series of comments concerning the failings of orthodox economics and the conduct of scientific research for social ecological transformation.
Introduction
Over 40 years ago, when I began formally training as an economist, the connection of the economy to environmental problems was pursued by relatively few in the profession, and they were rather marginalised. If you introduced yourself as studying the environment and economics, people were generally mystified and uninterested. Ecology and environmentalism had been headlined in the early 1970s, and a decade later nobody seemed much concerned about pollution, species loss or resource depletion. Environmental economics had lost its radical edge and become preoccupied with optimal control modelling and efficient pricing via a tax set to reflect damages converted into money values. Externality theory was erroneously attributed to Arthur C. Pigou and deployed by Chicago school economists to remove blame for pollution from industrial polluters and the capitalist market system (Spash, 2021b). Environmental pollution and resource depletion were primarily regarded as problems concerning intergenerational equity and discounting, that is, low-priority things affecting some distant future generations.
Well, the future has arrived! A younger generation is deeply disturbed by the state of the world both socially and ecologically. On the environmental side, human induced climate change tops the crisis list for most people, followed by mass extinction of species and biodiversity loss, and then a long line of pollutants and human activities (noise, light, surface sealing, genetic modification, species introduction and manipulation, land use change). On the social side are increasing inequity, exploitation, inhumanity, violence and violation of human rights, securitisation and wars. The structural reality of social ecological economic systems was never addressed by orthodox economists' idealised abstractions. In fact, the problems were always as much spatial as intertemporal, as evident in cost-shifting (Kapp, 1978 [1963]), unequal exchange (Hornborg, 1998, 2017) and the colonial exploitation required to maintain supply chains (Brand and Wissen, 2017, 2021).
Today, the systemic failures of economic structures create angst, fear and depression. One set of reactions is to maintain business-as-usual based on denial that problems even exist and, when that fails, to blame others, who are then vilified. Amongst those who accept that the problems are real there have arisen various forms of apologia for maintaining capital accumulating economies (Spash, 2020a). This includes what Brand (2016) has termed a 'new critical orthodoxy' that undertakes a radical diagnosis of the ecological crises, but then recommends 'transformation' as a process embedded in current institutions without systemic change. A set of more seriously reformist reactions has seen the revival of open criticism of capitalism, a rise in civil disobedience, protest and calls for direct action (e.g. Malm, 2021). Movements like degrowth have arisen that critique economics and seek to establish fundamentally alternative economic systems.
While seeking to identify and address the real causes of harm to the innocent, both human and non-human, a disturbing finding is the extent to which economics itself has been a causal mechanism preventing change for the better. Comprehensively rethinking economics – its definition, aims and conduct – forms a connecting thread across the five papers I will review in the following sections. These were compiled by the editors of Environmental Values in response to an open call for contributions relating to or inspired by my work. Claudia Carter provides a good starting point by summarising the process of my changing ideas in a chronological sequence of stages: problem identification, developing alternative evaluation methods, philosophy of science and social ecological economics. This tracks my move from naïvely believing in reform via economic incentives to realising the necessity for new institutions and radical systemic change.
Contrary to the fact-value dichotomy, explanatory descriptive social science entails criticism of ideas in society. Research exposes erroneous and bad causal claims and associated ideas, as well as the institutions that promote them. Critical realism explains how this then entails a responsibility on the part of the researcher to act in ways consistent with their findings. For social ecological economists this means acting in accord with established warranted beliefs about social and ecological reality, and related environmental and social value commitments (Spash, 2012, 2024). Iana Nesterova reflects upon the personal implications of taking seriously the requirement to align one's scientifically informed value commitments and daily practices. Academic pursuits can then align with activism. The consequences are far reaching and involve an ongoing process with challenging psychosocial dimensions and self-identity issues.
Individual psychology, socialisation and behaviour formed a large part of my research and built from my interest in understanding environmental values and ethics. This latter aspect inevitably entailed extensive criticism of orthodox economics, as evident in the remaining three papers of this special issue. Rachelle Gould et al. review my criticisms of monetary valuation, which include economics being value laden, denying incommensurability and imposing value monism. They argue in favour of adding to the dichotomy of instrumental and intrinsic values a relatively new category called relational values. I critically reflect on their presentation of my work and the meaning of relational value. The case study by Lina Isacs et al. explores the values of respondents who answer stated preference surveys concerning environmental change. I note how this reaffirms my (and others') arguments for regarding economists' stated preference studies as misrepresentations of people's actual values. In a second stated preference case study, Jacob Ainscough et al. criticise the design and appropriateness of mainstream economic attempts to add deliberation to legitimise monetary valuation. They contrast this with an alternative arbitrated group deliberation built on participatory and democratic principles. I discuss their contribution in relation to my work on deliberative monetary valuation and, more generally, value articulating institutions.
In the following sections, I explore these five papers in the order just given: Claudia Carter, Iana Nesterova, Rachelle Gould et al., Lina Isacs et al. and Jacob Ainscough et al. Before proceeding, let me thank all the authors for their contributions and the anonymous reviewers and editors for their invisible work. I deeply appreciate the opportunity, offered by this special issue, to reflect upon and clarify some aspects of my research and positions on environmental values, economic science and social ecological transformation.
Economics, public policy and the ecological crisis
Claudia Carter provides a good and insightful chronological overview of my work as a professional economist, which she divides into three phases covering my move from the mainstream orthodoxy of environmental economics to developing the alternative paradigm of social ecological economics. Here I add to her account some personal anecdotes and reflections. I very much appreciate the overview she has provided but will start by making one qualification.
Claudia Carter states that my 'attempts to bring different factions and disciplines together (even within heterodox economics) have encountered difficulties and been limited', and she thinks these have mainly concerned economists and ecologists. I would not deny difficulties and limits in interdisciplinary endeavours, but feel this is misleading. There are a variety of ways in which I have brought together 'different factions and disciplines', and I believe with some noted success, including several activities in which Claudia Carter was herself heavily involved. There were a range of international research projects that were inherently interdisciplinary and undertaken at a time when seriously applying such approaches was rare (as Claudia Carter notes). Similarly, a decade of running the European Society for Ecological Economics (ESEE) involved troubleshooting across disciplines and national interests and established the ESEE as the most progressive regional society in the field. This also included organising two international conferences that brought the entire community together with long-lasting impacts. I also helped Dick Norgaard in establishing the constitutional foundations for the International Society for Ecological Economics (ISEE), international regional society participation and preventing disintegration in light of domination from the United States (Spash, 2023). Then there is, of course, my work for Environmental Values as Editor-in-Chief, which expanded its interdisciplinarity, both in terms of board members and papers published. My own research activities also reach across more than just ecological and economic sciences. For example, working on biodiversity loss involves engaging with a range of natural and social scientists covering disciplines such as botany, zoology, ecology, conservation, planning, economics, philosophy, social psychology, sociology and politics. I have been involved in projects, conference participation and publications in a variety of communities (e.g. conservation, degrowth, philosophy, history of thought, political economy and social psychology). Then there is my teaching, which for the past 12 years has included running an interdisciplinary and heterodox master's programme in social ecological economics and policy that involves a dozen different research institutes and at any one time between 100 and 120 students from diverse disciplinary backgrounds. Having made this qualification, let me turn to the substantive presentation by Claudia Carter of the three phases she identifies in my work.
The first is what she refers to as 'problem identification', and involved my emerging recognition of the extent to which economics fails to address reality. My focus on environmental policy is evident in all my university dissertations, which relate to air pollution problems – B.A. sulphur and nitrous oxides (acid rain), M.Sc. tropospheric ozone and Ph.D. greenhouse gases. In terms of recognising public policy failures, Claudia Carter cites my early work on human induced climate change via enhancement of the greenhouse effect published some 35 years ago (Spash and d'Arge, 1989). She correctly mentions that 'the need for action by the current generation on global warming seemed already clear'. Indeed, the back-tracking on action ever since the late 1980s has been phenomenal.
The Intergovernmental Panel on Climate Change (IPCC) became a talking shop and an excuse for too many politicians to opt out of mitigation by claiming 'more evidence' was required. Capitalist business as usual, euphemistically termed 'the economy', had to be protected, and showing that action was costly and the benefits small became part of the game (e.g. Nordhaus, 1991a, 1991b, 1991c). Here the promotion of 'evidence-based science' can also be seen as problematic. Requiring empirical evidence prior to action means awaiting actualisation of extreme climatic disaster events, by when preventative action is by definition too late. An increasing frequency of catastrophic climatic events has left denialist politicians unmoved because they have managed the threat to their political base of support by pouring doubt on science. Even within the pro-action community, climate science-policy saw conversion from precautionary mitigation based on scenario analysis to adaptation based on risk management. Converting strong uncertainty to probabilistic risk assessment contradicts the simple fact that unique climatic events cannot be given a statistical probability of occurrence because they are not regular or repeatable events (Spash, 2002a). The fallback position is to get experts to provide spurious subjective probabilities. Such science-policy fails to recognise the necessary role of judgement based on understanding of biophysical and social-economic structure, and the need for ethically committed decision making, as well as the need for broader inclusive deliberative participation in such policy.
Over the 35 years since NASA scientist James Hansen testified to the Senate in the United States about the disastrous consequences of greenhouse gas emissions from fossil fuels, predicted by structural understanding of biophysical reality, trillions have been spent on fossil fuel exploration, extraction, pipelines and new sources (e.g. fracking and tar sands). That climate denialism and 'think tanks' have been funded by the oil and gas corporations has been well documented (Spash, 2014b; Oreskes and Conway, 2010). Each time serious policy initiatives have been on the agenda, serious reversals were created by misinformation campaigns and orchestrated personal attacks on scientists. This arose because greenhouse gas mitigation means the end of fossil capitalism and requires extensive government intervention to achieve peaceful social ecological transformation of modern economies. In 1989, Hansen was a politically naïve scientist who seemingly believed that the Senate would change everything based on NASA's scientific structural understanding of the climate. Today he is a climate activist who has been arrested several times.
There are many lessons in the failure of public policy on greenhouse gas mitigation, and not least amongst them is the role of an unscientific and deeply flawed climate economics (Spash, 1993a, 1994a, 1994b, 1996, 2002a, 2007b, 2007c, 2007d, 2008b; Baer and Spash, 2008; Spash and Gattringer, 2017). There has been slow, but increasing, recognition that climate economists who focus public policy on growth, investment, rates of return to capital and discounting are acting without any regard to the actual phenomena they are supposedly addressing. Recently a group of climate economists (e.g. William Nordhaus, Richard Tol, David Anthoff, Francisco Estrada, Simon Dietz, James Rising, Thomas Stoerk and Gernot Wagner) have been called savant idiots, climate simpletons or just simply idiots (see Ketcham, 2023). There is something unethical in treating climatic catastrophe as an optimisation problem for economic growth modellers, fiddling with calculus while the planet burns. The longer this practice persists, the more it appears as a form of insanity. Unlike some, I do not see the likes of Nicholas Stern and Joseph Stiglitz as the good guys here, because they similarly deny reality and seek to maintain a development model based on colonial capital accumulating growth and social ecological exploitation.
Such realisations took time. While I had been trained to think that getting monetary values into the system could solve environmental problems, I was well aware that the methods of cost-benefit analysis (CBA) were problematic and employed a narrowly defined value. I was researching just how narrow, and working on refusals to trade off, inviolable rights, intergenerational ethics and deontology. As Claudia Carter remarks, I did not pursue a second edition of a book on CBA and the environment (Hanley and Spash, 1993), despite the publisher, Edward Elgar, telling me himself that it was 'a minor classic'. There might have been an opportunity to bring critical positions into the environmental CBA community, but I concluded this was training people in fundamentally flawed methods. I saw first-hand how applications were conducted and the ease with which numbers were created, manipulated and even fabricated.
A classic example is a study valuing the externalities from aggregates extraction in order to justify a new tax. This was commissioned by the then recently elected Labour government of Tony Blair. After completion of an open-ended format contingent valuation study, in which I was involved, the Treasury Department decided a second study was required. The economists brought in were prepared to 'incentivise' respondents to produce numbers via a closed format (dichotomous) choice approach. I was not involved but was kept on to attend consultation meetings. At one of these, a Treasury official asked the new experts how many respondents could be cut while still claiming a stated preference survey was valid. The answer, they agreed, was up to 30%! The state of the art involved various techniques to prevent large numbers and protest bids. Despite this, the final report used a 25% discount rate to reduce the money values. When I was asked for my final input before publication, I requested that they remove my name from the report, which they did. This kind of experience informed my critique of the contingent valuation method (CVM) (Spash, 2008a), and recognition that the newer dichotomous choice approach was even worse. The anecdote appears like an episode of 'Yes, Minister', but exemplifies how stated preference methods can operate in purely strategic terms to justify a predetermined policy requirement, while respondents' real values are purposefully ignored (see Isacs et al., 2023).
Getting inside policy circles reveals how scared people are to rock the boat or point out failures and inadequacies in evidence. During the various projects cited by Claudia Carter, our research community engaged with government agencies and civil servants and held workshops with them that produced open reflections and discussions. Some, such as Ronan Palmer (2000) from the Environment Agency, were prepared to put fairly candid views in print. Others heavily qualified their positions and protected their agencies in light of our project findings. For example, showing the array of environmental valuation methods available deconstructed the position that 'there is no alternative' to CBA, but led to a pragmatic pluralist argument to allow its continued use (Burney, 2000). A basic contradiction here is supporting value pluralism while using methods based on value monism. Unfortunately, the existence of multiple methods is too often used as an excuse for 'anything goes'.
Developing alternative evaluation methods, as a means for change, is the second phase in Claudia Carter's account. My aim, in collaboration with fellow researchers, became to firmly establish the diversity of methods and explore their different uses and qualities. I planned to write an evaluation textbook that placed CBA in context, as I was doing in my teaching, but only managed a couple of co-edited volumes (e.g. O'Connor and Spash, 1999; Getzner et al., 2005). I was designing and coordinating large projects, and managing an interdisciplinary team of some 20 or so researchers, which engaged my time in administration, facilitation of others' work and networking. Modern managerialist metrics misguide and fail to value much of what constitutes research – unsuccessful grant applications, workshops, meetings, discussions, project reports and grey literature. The team work and international networks were highly productive and innovative, and I was able to display some of this later when compiling the Handbook of Ecological Economics (Spash, 2017a). There I brought together a full range of alternative methods in a substantive section with multiple authors covering: multi-criteria mapping (White, 2017), Q methodology (Davies, 2017), participatory approaches (Blackstock, 2017), the Numeral, Unit, Spread, Assessment and Pedigree (NUSAP) framework (van der Sluijs, 2017), multi-criteria evaluation (Greco and Munda, 2017), deliberative monetary valuation (Kenter, 2017), participatory modelling (Videira et al., 2017), input-output analysis (Erickson and Kane, 2017) and sustainability indicators (Roman and Thiry, 2017).
A major part of the teamwork in this period was on participatory approaches. We explored ideas and worked around the topics of public engagement and the problems and potentials of participatory institutions (e.g. citizens' juries) compared with stated preference methods (O'Neill and Spash, 1998). We engaged with the work of human geographers Jacque Burgess, Judy Clark and Carolyn Harrison on deliberative inclusive participatory approaches (Burgess et al., 1988a, 1988b) and their critiques of CVM (e.g. Burgess et al., 1998). There was interaction with political scientists, such as John Dryzek (2000) and his then doctoral student, Simon Niemeyer, who came to study with me for a year, leading to a joint publication (Niemeyer and Spash, 2001). Exploring the implications of participation for economic valuation led me to introduce the term deliberative monetary valuation (Spash, 2001b); see the discussion of Ainscough et al. below. Around the same time, I coordinated an international team working on social psychology and economics in environmental research (Spash and Biel, 2002). I was exploring the role of institutions, ethics and attitudes in the context of individual valuation and choice, and this led to a series of articles (Spash, 2002b; Ryan et al., 2009; Ryan and Spash, 2011, 2012). Ideas were developing of value articulating institutions (O'Neill and Spash, 2000), and this was stimulated by engaging with Arild Vatn (2005: see chapter 12).
However, as Claudia Carter explains: 'even if improved valuation studies introduce more meaningful measures and results, this does not necessarily lead to better decisions because political, power and profit interests may sideline or trump social ecological interests and values'. The role of scientific knowledge in society is intertwined with how vested interests operate. As noted above, over 30 years of international climate research, and consensus under the auspices of the IPCC, has failed to result in effective curtailment of rising greenhouse gas emissions, let alone reduce them. There have been piles of reports, and endless meetings to produce more reports, and funding for more research, as if lack of information were the issue, rather than lack of political will to address the social ecological structure of modern economies. As Claudia Carter explains, 'The trouble is not a lack of data, knowledge or insights, but an erosion of principled decision-making for the long-term wellbeing of society and ecosystems'.
Such concerns were reflected in the collective research efforts of European social ecological economists on science and technology studies (STS) and post-normal science relating to sustainability and valuation (O'Connor, 2000; O'Connor et al., 1998). This stood in opposition to claims that monetary valuation produced facts about people's true preferences, and also to the pragmatic position that representing Nature's value in simple monetary numbers appearing in Science or Nature would change public policy. Post-normal science emphasises the process, such as NUSAP, by which valid scientific knowledge of high quality can be established (Funtowicz and Ravetz, 1990). Under a set of recognisable circumstances – strong uncertainty, indeterminacy, high stakes, potential catastrophe, value conflicts – the need is for public engagement. A core concern is to challenge top-down, expert techno-optimism and unjustifiable quantified risk management, while exposing the biased values informing environmental policy. I was engaged in various exploratory projects, addressing these topics, that ran for about a decade from the mid-1990s. I also found post-normal science particularly useful for analysing the epistemic failings of the Stern review on climate economics (Spash, 2007b). Post-normal science and STS engagement further stimulated my interest in philosophy of science.
Philosophy of science constitutes part of the third phase that Claudia Carter highlights, with the other part being social ecological economics. When writing new foundations for ecological economics (Spash, 2012), I incorporated critical realism, which I first came across in the 1990s attending workshops, seminars and conferences run by Tony Lawson. My renewed interest was stimulated by discussions with Tone Smith and later meeting Armin Puller and Andrew Sayer after a conference presentation (Spash, 2017b). Claudia Carter notes my explorations of philosophy of science in terms of ontology, epistemology and value theory (axiology), and that this work has supported my development of social ecological economics (Spash, 2024). These aspects were first brought together with my coverage of the conflicting paradigms and pragmatic strategies being employed under the broad banner of ecological economics (Spash, 2011, 2013), which led to my proposing a scientific approach that emphasised ontology and epistemology in contrast to the dominant focus on pluralist methodology (Spash, 2012). Strong, critical and principled argument certainly does not accept all positions as equally valid but rather seeks to judge between ideas and theories. An 'anything goes' pluralism is rejected and instead the role of judgemental rationalism is recognised as necessary for deciding between theories on epistemic grounds. Normally (scientific) collectives build on an understanding held in common and unifying problems of concern, and, if emancipatory, then something they oppose and seek to change. Social ecological economics uses critical realism as under-labourer to clarify and justify the grounds on which such unity of science can operate to address ongoing crises and actualise transformation (Spash, 2024).
Social ecological economics combines a critical perspective that rejects mainstream economics with the potential for unity across a range of heterodox schools and, more generally, unity and integration in science (Spash, 2024). The breadth, heterodoxy and interdisciplinarity of social ecological economics is evident in the 50-chapter Handbook of Ecological Economics (Spash, 2017a), where I brought together 63 authors from a variety of disciplines, besides heterodox economics (ecological, institutional, post-Keynesian, Marxist and feminist), including: architecture, development studies, ecology, philosophy, political ecology, political science and sociology. In recent years, I have been outlining the positive directions and research agenda of social ecological economics (Spash and Guisan, 2021; Spash and Smith, 2019; Spash, 2020c, 2024). Claudia Carter provides an insightful overview of the developing transformative agenda. In summary, social ecological economics emphasises and supports critical thinking and discussion, descriptive realism and causal explanation. A social ecological economic system requires that people accept shared responsibility for addressing the ongoing multiple crises and that the response is not merely placed on individual action but seen as a shared social responsibility. In public policy terms, dealing with the structural causes of crises is emphasised over end-of-pipe 'solutions' and adaptation to an ever degraded environment. Systemic change is sought to address, rather than patch up, impacts from a range of practices, for example, competitive cost-shifting, inequity, exploitation, injustice, colonialism and economic growth. Social ecological transformation of capital accumulating competitive economic systems is a central issue and objective.
Social ecological transformation and the individual
The critical realist concept of explanatory critique, and its deconstruction of the fact-value dichotomy, provides a good foundation for social ecological economics as a revolutionary transformative science (Spash, 2024). Social science research is not value free, even when attempting to be purely descriptive as to causal explanation, because it reveals what is wrong with prevalent ideas and identifies individuals and organisations that spread falsehoods. As Collier (1998: 446) argues, scientific explanation of social institutions is a precondition of criticising and changing them, but sometimes entails beginning the work of their subversion. There is emancipatory potential arising from revealing the practices and beliefs that reproduce unsustainable and unjust social relations (Puller and Smith, 2017: 18). That places a responsibility on the researcher to do something about reforming problematic social structures and belief systems, but also about their own behaviour.
Iana Nesterova addresses how undertaking scientific research relates to our self-understanding and has implications for personal practices, which constitute our identity and way of being. Referencing critical realism and citing Roy Bhaskar, she notes that scientists should aim for unity of theory and practice. That is, how we conceptualise the world, including our value commitments, should align with how we operate in the world. Her article provides a brave and honest personal account of the challenges of being true to one's beliefs about the causes of social ecological crises. Her transformative journey started from recognising the divorce between her radical academic commitments and conformity to prevalent norm-driven behaviours. Personal transformation aims to address the value conflicts involved in these mismatches. For Iana Nesterova this led to a process of ongoing self-reflection on daily practices and ways of being.
We are all familiar with the value-action gap, such as anti-corporate campaigners using Gmail accounts, Marxists drinking Coca-Cola or climate activists flying. The result is cognitive dissonance, which people try to reconcile by appealing to various justifications for maintaining practices that contradict their stated values, even where changing behaviour would appear rather straightforward. Inconvenience and personal cost are typical explanations for inaction. Consequentialist arguments, which make individual actions appear meaningless and so unnecessary, are also employed and would dismiss many of the actions that Iana Nesterova discusses. For example, posing the rhetorical question 'if I stop flying, what difference does it make to climate change?', and concluding that there is no individual responsibility for such things as greenhouse gas mitigation (e.g. Kasperbauer, 2016). Such arguments contrast with deontological ethics, based on the right action, but I think closer to Iana Nesterova's position is virtue ethics, or a neo-Aristotelian ethics of eudaimonia, where acts are constitutive of achieving a good or meaningful life. What she describes is a process of empowerment through personal practices, taking back control via many very simple actions of caring that appear inconsequential but embody an expression of one's values and persona. They can be the start of deeper transformation.
What Iana Nesterova then recounts is her experience confronting tensions and structural problems when, as an informed social ecological economist, one seeks to tread lightly on the planet and make allowance for others, both human and non-human. Anyone who has tried knows the value conflicts and difficulties that arise in both biophysical and social terms. Modern production involves multiple polluting and socially exploitative activities, and the complexity of modern supply chains works against simple action. Avoiding insecticides and pesticides by buying organic fruit and vegetables confronts the purchaser with products shipped around the world (e.g. Chilean organic apples in UK supermarkets). A vegan avoiding leather shoes is confronted with plastic products from a notoriously polluting petrochemical industry. Economists' ideas of individuals making trade-offs between known consequences with given costs and benefits simply fail to even approximate the issues involved.
Focussing on individuals and their isolated practices easily treats the world as if already in a Panglossian perfect state, because it appears to be the world people have chosen (Mishan, 1971). For example, all environmental problems can be explained away as due to consumer preferences, that is, failing to buy 'green' products. The argument typically appeals to unquestionable exogenous preferences and the concept of consumer sovereignty as found in neoclassical and Austrian economics with roots in classic liberalism (Fellner and Spash, 2015). Neoliberals assume market structures provide individuals with the greatest freedom from coercion and reject challenging markets politically or by collective action (e.g. unions). An anti-regulatory position here is based on defining freedom negatively as non-domination and emancipation as liberating individuals from the coercive power of State intervention and interference in personal choice by other individuals.
That capitalist markets fail to produce or supply what is ethical, just and equitable is part of the structural problem that such liberal theories fail to recognise. In contrast, Marxist theory identifies capitalism as a social structure that forms material and social conditions (e.g. power, class relations and property rights) backed by legal institutions acting on behalf of capitalists. Structure and technology go hand-in-hand to create lock-in and are purposefully used by capitalists to do so. The Coronavirus crisis provides a good example where hi-tech corporations exploited the opportunity to promote surveillance, home schooling, telehealth, smart cities, exclusively electronic money commerce (removing physical currency), driverless vehicles and 5G super connectivity (Klein, 2020). Governments committed to growth economies supported such corporate lobbying to 'save the economy' (Spash, 2021c).
Structural lock-in occurs physically, socially and through institutional practices. For example, mobile phone use has become socially normalised, although avoidable. However, avoidance of technology changes once certain practices become impossible, as with the introduction of security procedures requiring text messaging for everything from buying a train ticket to banking and government services. Similarly, the attempts to move to digital currencies and the spread of QR codes are premised on ownership of smart phones. Technologically driven norms change to make dissenters from technological change abnormal and eventually lead to their becoming ostracised (a common intergenerational issue).
What then becomes evident is the need to understand how different social structures restrict or enable the potential for self-emancipation and deliberate social transformation. Methodological individualism fails to recognise that the social is more than the sum of its parts and that group dynamics have emergent properties. Individual humans do not exist in isolation but as social beings whose actions relate to others either negatively (e.g. conspicuous consumption) or positively (e.g. care). Those dynamics create feedback loops that affect behaviour and become institutionalised (e.g. racism and consumerism), as explained by the concept of cumulative causation (Kapp, 2011). This reveals the ongoing tension between theories devoted to either structure or agency rather than their interconnection. As Castoriadis (1991: 166) explains, an individual can neither be free on their own, nor under all forms of social structure.
Iana Nesterova recounts the social and psychological challenges that deep transformation entails in a largely unchanging society. Social structure and norms of behaviour are institutionalised in expectations at work, such as using a smart phone, car or flying. If group norms are violated then social disapproval is likely to follow. Changing personal practices is not just challenging personally but confronts institutional barriers and challenges others' practices simply by exemplifying alternative ways of being. Consider the case of climate scientist Kevin Anderson, who travelled by train from the UK to China to attend a climate conference, but was criticised by, and publicly defended himself against, fellow climate researchers who flew. Challenging unsustainable practices occurs simply by non-conformity to those practices, and it inevitably challenges others engaged in those practices, whether colleagues, family or friends.
The relationships humans form during their lifetimes create associations and attachments that are constitutive of their identity and help them live with vulnerability and uncertainty. An individual's identity develops in the context of social and ecological relationships. Social ecological transformation requires policy interventions that involve changing manifestly unsustainable practices, but unsustainable practices are maintained because of a person's psychosocial biography. Removing practices can therefore create vulnerability. Successful intervention will then need to recognise and address people's vulnerabilities and attachments. The fact that individuals and their practices are embedded in societal structures and institutions means both must change together to achieve social ecological transformation (Spash, 2016).
Living in modern consumerist capitalist society means being subject to the creation of one's identity via products and the ensuing creation of social relations via in-group/out-group consumer behaviour. As Iana Nesterova explains, one result of actions like avoiding the latest fashions and gadgets is that people engaged in these activities no longer regard you as in their group. Corporate marketing purposefully targets this mechanism in its advertising, product differentiation and pricing policies. Conspicuous consumption is encouraged as a means of showing social status (Veblen, 1991 [1899]), but is inherently limited and continuously undermined by others' consumption (Hirsch, 1977).
In his book on behavioural economics, Peter Earl (2022: 431-447) explores the challenges of environmental transformation. He contrasts materialism with environmentalism using two hypothetical Australian couples: one united in status-driven consumerism, including designer babies, and the other divided over what constitutes being 'green'. His deep green character is described as inspired by my writings and facing value conflicts and challenges rather similar to those of Iana Nesterova in reality. His account of modern materialism exemplifies the extent to which it has become embedded in people's behaviour and constructs their identities. As Peter Earl notes, 'there has been a tendency for consumption to resemble an arms race between status-hungry consumers that manifests itself in larger and larger products' (Earl, 2022: 443).
Despite the rhetoric of (neo)liberalism and Austrian economists, consumerism is not freedom. Being free requires a sustained effort against internal and external forces that constantly try to impose their meanings on individuals, holding them captive and impeding alternative possibilities. I believe that Iana Nesterova's approach to a particular mode of being matches Cornelius Castoriadis' ideas that personal freedom involves rejecting being a passive product of one's psyche and history. Being an active co-author of one's own life involves practices of introspection, critical reflection and deliberation in order to distance oneself from internalised behaviours, routines, beliefs and desires, and requires critically evaluating practices as to their meaning, validity and desirability. An individual aspiring to be free is engaged in a continuous struggle in which they aim to attain an active relationship to, and engagement with, their own psyche and societal influences, while accepting that they are unable to fully control them. Practices of self-engagement, emancipation and self-empowerment are important mechanisms for achieving freedom (Windegger and Spash, 2023).
Valuing relations between society and Nature
Our sense of who we are involves interactions with the natural world as mediated via cultural practices. Engaging in interdisciplinary projects on environmental values made this evident while stimulating my interest in the combination of ethics, attitudes, norms and institutional context. For example, as I learnt from Jacquie Burgess, sense of place is very much about a person's psychosocial biography. When I went to work at the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in 2006, my research proposal included investigating Aboriginal relations with Nature via the environmental values held by different social groupings (e.g. from the outback to city, within and across tribal groups), and the extent to which their conceptualisations matched or diverged from Western ones. That project never materialised, primarily because of my fight with the CSIRO over emissions trading (see Spash, 2014b). However, the proposal indicates the direction that emerged from the European networks and projects on environmental values in which I was involved.
Within that research community, John O'Neill and Alan Holland were key to the philosophical reflections on why the environment matters to humans and summarised this in terms of living from, in and with the world (O'Neill et al., 2007: 1-4). The idea of 'living in the world' has aspects of sense of place, local aesthetics and connections to social and cultural meanings of Nature as constitutive of our worldly experiences and so our personas and communities. This appears to correspond to what Gould et al. call relational values, while 'living from' and 'living with' correspond broadly to what they term instrumental and intrinsic values, respectively.
Relational values form a recent conceptual category. In Gould et al., all works referenced that use the term in the title have appeared since 2017, and 60% of those in the past 3 years. The concept was adopted by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) under an approach named 'Nature's contribution to people' (Díaz et al., 2018). In part this derived from concerns over the restrictive perspectives on the values of Nature imposed in public policy and, more specifically, the exclusion of Indigenous peoples and local communities (IPLCs). As Spash and Smith (2022) note, these IPLCs challenge the hegemonic discourse of market value and economic growth, and represent non-Western forms of non-instrumental reasoning. The development seems to align with the aforementioned direction taken towards psychosocial aspects of environmental values.
Indeed, Gould et al. argue that relational values can provide a response to some of my critical reflections on evaluation methods, specifically with respect to environmental CBA and the role of (preference) utilitarianism in economics. They identify three broad critical arguments in my work: the ethical basis of economics, which typically claims to be value free; problems of commensuration, which they frame as 'aggregation of substitutable preferences'; and the limited scope of permitted values (e.g. monetary and quantifiable). Relational values are then described as taking this agenda of value concerns forward. To date, I have only briefly commented on relational values a few times and have remained somewhat sceptical as to their potential contribution (Spash, 2020b, 2022; Spash and Smith, 2022). My substantive concerns have been the lack of clarity as to what constitutes the distinctiveness of a relational value, as opposed to other values, and, leading on from this, the role such a value is expected to play in policy. I note the concept is used in Gould et al. as a set of values (plural), but what exactly constitutes the distinct value here remains unclear.
Paraphrasing Gould et al., relational values are defined as preferences, principles and virtues associated with relationships that go beyond means to an end and that exist between humans and nature, and among humans spatially and temporally through nature. They are regarded as describing a set of values missing from existing approaches to environmental valuation. However, while contrasted with instrumental and intrinsic values, and framed as a break from them, what constitutes a relational value is actually defined in terms of these two categories. Relational value might then appear as a form of synthesis, because Gould et al. attribute it with 'instrumental threads' as well as 'non-substitutability in principle' like intrinsic value. In Table 1 of Gould et al. (see their paper), intrinsic value is defined as concerning 'Entities that are ends themselves, whose value is expressed independently of reference to people', while instrumental value is 'What entities provide to people as a means to an end'. Elsewhere, they state that, 'This dichotomy represents two understandings of nature's value: "nature for its own sake" (regardless of reference to people's needs or preferences) and "nature for us" (as provider of benefits and services to society)'.
The dichotomy here appears problematic because it is presented as equating intrinsic value exclusively with the non-human and instrumental value exclusively with the human. In terms of instrumental values, the means-ends definition given by Gould et al. is equally applicable to non-human entities. That is, there can be instrumental values in the absence of humans. Non-humans can live and flourish having their own ends and means to meet them. At the same time, intrinsic value is not limited to non-humans, and the standard versions of all three major Western ethical theories include claims about intrinsic values. As Spash and Clayton (1997: 152) note: 'A utilitarian philosophy sees only instrumental value in acts but intrinsic value in the consequences of those acts. Human welfare, or happiness, is then seen as the only intrinsically valuable thing: an anthropocentric value system'. Deontologists take conformity to principles of right action to be intrinsically valuable, and virtue ethicists take virtue or eudaemonia to be intrinsically valuable. Intrinsic value is present, but plays different roles, in each of these theories (McShane, 2017). As this indicates, intrinsic value has a variety of forms, but does not stand in opposition to human ethical theories. The debate is rather whether intrinsic value is also applicable to non-humans and, if so, in what form.
I also find Gould et al. are too eager to classify my work within their own dichotomous framework (ecocentric versus anthropocentric, intrinsic versus instrumental, deontological versus utilitarian), with the overall aim of promoting relational values as a third way. More specifically, they claim that: 'Spash often relies on a dualistic, oppositional framework (intrinsic versus utilitarian (monetary) values; non-anthropocentrism vs. anthropocentrism). He largely neglects alternative articulations of value […]'. I feel this is rather misleading and, because there is no reference to my work here, I am not sure where the dualism is meant to occur. Thirty years ago I did contrast intrinsic and utilitarian values in conservation (Spash and Simpson, 1993, 1994) and in my critical reflections on natural capital and sustainability (Spash and Clayton, 1997). However, my empirical work on values had no simple dichotomy, as claimed here, and intrinsic value does not appear in such work, even going back to my earliest publication on biodiversity valuation (Spash and Hanley, 1995). This, as well as later work, is formulated around non-compensatory choice and rights-based beliefs (see overview in Spash, 2000b).
Several of the papers Gould et al. reference in connection with my use of intrinsic value are also misleadingly cited. One does not even include the term intrinsic value (Spash, 2000b), while others recognise intrinsic value in humans and/or acts, not just non-humans and their collectives (Spash, 1997), or do not use the term as implied and relate to the misuse of the term by economists (Spash, 1999, 2006a), and/or briefly mention it as supporting variety in individual motives and ethical approaches (Spash, 2008c, 2015, 2022). The most recent of these references actually connects to relational values and states the following: "The intrinsic vs instrumental value debate is also something that, while fading in and out of focus, has never gone away. Most recently it has reappeared in the discussion of a relational environmental value (Neuteleers, 2020; Norton and Sanbeg, 2021; Deplazes-Zemp and Chapman, 2021). While different positions and interpretations exist as to the meaning of intrinsic value in Nature, there remains at core something that, while challenging to conceptualise, is hard to dismiss (Vetlesen, 2015: Ch. 3). As McShane (2007) has argued, it is something that appears central to environmental ethics, and also has relevance for valuation as (mis)conceptualised by environmental economists (McShane, 2017). There is in this the concern for understanding and connecting to the otherness in Nature (Hailwood, 2000)." (Spash, 2022: 8) Gould et al. state that: 'Spash's critiques of utilitarian valuation typically assume that intrinsic values of nature are morally justified, i.e.
that they articulate moral reasons for promoting or protecting nature'. They then contrast this as a 'normative' claim with relational value research as being 'more situated in literature that aims to describe how people actually express values and why'. I am again somewhat mystified as to the interpretation. My empirical research has always been about investigating and describing 'how people actually express values and why'. For example, my survey work in the 1990s included allowing respondents to place human interests above those of Nature if they chose. Over 20 years ago I had already highlighted the occurrence of multiple and plural values as an actual empirical phenomenon (Spash, 2000c).
Similarly, Gould et al. offer a misleading interpretation when they state that: 'Spash discusses anthropocentrism and economic utilitarianism together (Spash 2015), tacitly implying that using and transforming nature reduces it to a mere means to human ends'. In fact, the article Spash (2015), which they cite, only uses the word 'anthropocentric' once and does not discuss it in relation to economic utilitarianism. What I stated was that: "A shift is perceptible in conservation from the protection of Nature for non-instrumental and ecocentric reasons such as duty of care, prevention of harm and protection of non-humans to the anthropocentric, instrumental and economic. Matching the rise of neoliberal political economy, the role of Nature has become exclusively that of value provision in the global economy." (Spash, 2015: 550) Gould et al. follow on from this critique by recommending the alternative relational value approach as 'weak anthropocentrism' that 'highlights the value of non-instrumental, respectful and responsible relationships with other-than-humans'. Non-instrumental, respectful and responsible relationships with non-humans do not appear inconsistent with my arguments, nor absent from them (see, e.g. Spash and Smith, 2022: which presents three modes of society-Nature interaction). I suspect where differences may lie is in consideration of non-human autonomy.
I think briefly explaining what I did in my research may help clarify some of the misleading interpretations, especially with respect to my work on utilitarianism and intrinsic values. My Ph.D. supervisor, Ralph d'Arge, was interested in intergenerational ethics and explored different rule-based ethics (e.g. Benthamite, Rawlsian, Pareto criterion and elitist), although still all placed within a utilitarian framing (e.g. d'Arge et al., 1982). So, I became familiar with such aggregative utilitarian theories. However, I worked on intergenerational rights in contrast to welfarism, and specifically within the context of the enhanced greenhouse effect, exploring the ethics of compensation for deliberate harm (Spash and d'Arge, 1989; d'Arge and Spash, 1991). My work on rights did not originate with intrinsic value, nor was it simply equated with that value approach. Neither did my original work on utilitarianism restrict itself so narrowly as Gould et al. imply when they state (footnote 1) that 'Utilitarianism is a rich and complex moral philosophy that is greatly simplified in Spash's analysis'. The simplification to preference utilitarianism (a term they do not employ but is appropriate) is not mine but that of my object of study, namely the economics profession! Similarly, my empirical research referencing utilitarianism and rights (deontology) was motivated by environmentalism as the object of study and investigating the hypothesised prevalence of these alternatives, that is, how people actually express values and why.
From my Ph.D. onwards, I was interested in values beyond those found in economics. Intrinsic value as a concept formed a challenge to the position in environmental economic valuation, but more generally to the rising neoliberalism and the price-making market trade-off approach. As a researcher, I was interested in whether people actually held that such values exist in Nature and the implications for refusals to make trade-offs. This led me into the work on lexicographic preferences, because while refusals to trade off are excluded in economics this form of preference allowed them into the theory, even if relegated to being an anomaly. So, my research 30 years ago oriented around the complex of rights, refusals to trade, lexicographic preferences and intrinsic values in Nature. This research related to the use of environmental CBA and preference utilitarianism using CVM to survey respondents about their actual values and not simply to obtain a monetary willingness to pay/accept. I explored that potential both in my Ph.D. research on climate change (Spash, 1993b) and in initial work on biodiversity (Spash and Hanley, 1995). My later work then built from these foundations.
While I can see connections to the concerns for plural values that then arose, I feel the dichotomous approach presented by Gould et al. misses some important aspects of the debate. The meaning of a relational value also remains unclear. For example, instrumental values are in essence relational for a given entity, involving its means for achieving a given end. Yet relational value is stated to go beyond means to an end, so what exactly is the additional element? A phrase like 'Nature's contribution to people' can easily be taken to be exclusively instrumental, as appears to be the case in practice (e.g. see Brauman et al., 2020); despite claims of difference from ecosystem services it is 'still rooted in the MA [Millennium Ecosystem Assessment] ecosystem services framework' (Díaz et al., 2018: 271), and relational values are rather easily classified as such (Helseth et al., 2023). Relational values would then, at best, appear to be just a larger category of value into which instrumental values fall. In this case, the approach would merely be a new dichotomy, that is, relational (subcategory instrumental) versus intrinsic values. An argument against this position is the definition of instrumental values as 'always substitutable in principle', while relational values are 'non-substitutable in principle'. However, this definitional distinction appears flawed because something uniquely instrumental for a given end is by definition non-substitutable in principle. So, substitutability is not the defining feature of instrumental value. My point here is that the concept of value remains indistinct.
Pinning down exactly what is meant by relational value is far from easy. Gould et al.'s definition, borrowing from the IPBES, involves 'preferences, principles and virtues associated with relationships'. This would imply the value could be preference based, related to some principle or embedded in virtue ethics. How are such diverse approaches meant to relate to the same category of value? For example, is the value subjective as in preferences or objective as in a virtue, and how could it be both? In my previous readings on the topic I found there seemed to be a close proximity between relational values and the Aristotelian concepts of eudaemonia, virtues, flourishing and the good life, but what then is the distinction from virtue ethics? There are many loose ends here, but, as mentioned earlier and discussed by Gould et al., there is a line of reasoning that does connect to concerns I and others have raised. Overall, I maintain the conclusion of Spash and Smith (2022: 329) that 'Even though relational values seem to suffer from vagueness, complex definitions and potentially having confusing overlaps with instrumental and intrinsic values, they are an important expression of the discontent with current policies and of the ongoing struggle to (conceptually) capture the complex relationships humans have with Nature.' At the end of their discussion, Gould et al.
raise issues of validity and accept the need for avoiding 'anything goes' pluralism, but what criterion of validity might be employed is left unspecified. As occurs in the IPBES typology, there remains a tension between a supposedly neutral empirical description, where whatever values arise are meant to be equally valid, and identifying what is necessary to avoid social ecological crises and exploitation. A clear rejection of the fact-value dichotomy then occurs because research can identify institutions denying the existence or relevance of certain values. Not just any institutions will achieve the kind of goals, such as environmental justice, that Gould et al. reference as desirable. More specifically, capitalism, with its commodification and financialisation, is a denial of Nature and human psychosocial interdependence with non-humans (Vetlesen, 2015).
Value suppression and value articulating institutions
O'Neill and Spash (2000) note that environmental economics tries to measure preference intensity but ignores the reasons for stated preferences. The plurality of reasons behind people's values is noted by Gould et al. to raise questions as to their role in 'decision making', or what I would refer to as decision processes. They remark that my work 'stops short of showing how we might get there', although they then note I have worked on institutional design. In fact, the role of deliberative institutions, processes and their design has been something explored in my research for over 20 years (Spash, 2001a, 2001b, 2007a, 2008a; Niemeyer and Spash, 2001) and in collaboration with others via various research grants and projects (e.g. Kallis et al., 2006). This has concerned a variety of reflections on deliberative inclusive participatory processes in relation to a model of human psychosocial behaviour involving ethics, attitudes and norms, set within an institutional context.
In developing this approach, I was originally looking behind what people are actually doing when answering a stated preference survey. This revealed scientific flaws in environmental economics, including its framing of choice within the institutions of market capitalism and its interpretation of Nature as only being valuable if expressed as a money metric. Relating to this, Lina Isacs et al. use qualitative methods to explore what people actually think they are valuing when engaged in stated preference value elicitation. Revealing the failings of the stated preference approach to environmental values leads to the need for expanding upon the means by which people are able to express and articulate their actual values. This is addressed by Jacob Ainscough et al., who conduct a deliberative process that investigates valuation as a communal activity, in contrast to it being a matter of tapping into isolated, individual, exogenously given and preformed preferences. Both studies are something of a retrospective for me on the failings of mainstream economic valuation.
The study by Isacs et al. immediately brought to mind the research I and colleagues were doing into environmental valuation some 25 years ago. In particular, I was reminded of Jacquie Burgess and Judy Clark and their Pevensey Levels case study on CVM (Burgess et al., 1998, 2000; Clark et al., 2000), and was glad to see that work referenced by Isacs et al. The parallel with their study is the concern with actually probing respondents as to their own understanding of the willingness-to-pay figures they have given, and their knowledge of the valuation process into which they are feeding, with its related public policy implications. The question is what people actually want to articulate rather than what they are incentivised and manipulated to do (i.e. preference economisation and moralisation, see Lo and Spash, 2013). Environmental economists have then persistently ignored participants' actual values because their stated preference methods have successfully provided a money number assumed to reflect unquestioned and fully (or at least well) informed preferences.
The claim that there are 'true preferences' has long been undermined, and orthodox dissenters in economics often talk of endogenous preferences. However, as with much else in mainstream theory, the logic is never taken to its ultimate conclusions (choices are not preferences, preference utilitarianism is fundamentally flawed, appeals to truth lack any substance) and the core theory remains intact. Isacs et al. reference environmental economists, like my former colleague Nick Hanley, still persisting with such mainstream dogma as 'true preferences', despite the evidence and the lack of meaning given to truth in such a concept.
Isacs et al. conducted follow-up interviews with respondents to a stated preference survey. They identified various aspects of respondents' perceptions and understanding of their willingness-to-pay: its relationship to the environmental improvement being evaluated, the extent of conformity to economic theory, and the legitimacy attributed to the approach and its use. They found willingness-to-pay had little connection to respondents' concepts of valuing the natural world. The CBA process into which this might feed was unfamiliar to respondents and so opaque and hidden. Once this was revealed, there were general retractions by respondents from the valuation process.
The contrast with environmental economists' claims is stark. Isacs et al. summarise this orthodox position on respondents as follows: 'the decision to pay (or not) is assumed to be a conscious, intentional act of commensuration of the relative welfare obtained by the things they choose (a so-called trade-off), and their WTP is taken to reveal the strength of their "true" preferences (e.g. Hanley and Czajkowski, 2019)'. The divorce from reality could hardly be larger. Isacs et al. find the use being made of stated preferences and CBA both undemocratic and unethical. They exemplify how economists perversely employ monetary values by referencing a recent study claiming to measure the existence value of the Indigenous American Hopi tribe's culture! In light of such treatment of Indigenous communities' values one can see why relational values have arisen. What Isacs et al. point to is the need to address valuation within an appropriate institutional context. The desire of their interviewees is to express their values and contribute to worthy projects, but not as environmental economists want them to do. This is exactly why many of us working on environmental values moved to exploring deliberative inclusive participatory processes. Lina Isacs has also worked with Jasper Kenter to highlight the communicative rationality of group deliberation and the failure of economists' trade-off approach to relate to participant concerns over value conflicts and ethics (Isacs et al., 2023). They have identified the need for processes that allow for compromise and respect incommensurabilities, rather than assuming them away or regarding them as irrational. Jasper Kenter has been particularly engaged in researching deliberation in relation to monetary valuation, as evident in authoring or co-authoring 13 papers referenced by Ainscough et al. in their contribution to this special issue.
Ainscough et al. discuss a theory of deliberative monetary valuation that parallels the framework I put forward synthesising and contextualising such practice in stated preference work (Spash, 2008a). My research related to alternative value articulating institutional designs based on group deliberation. In fact, I think that work can help clarify some aspects of the paper by Ainscough et al. and what their case study is actually doing. Their stated aim is to conduct a deliberative monetary valuation process in a research project 'exploring participants' preferences between fair prices […] as opposed to […] conventional individual willingness-to-pay'. The former is meant to be elicited under what is termed deliberative democratic monetary valuation (DDMV), while the latter adds deliberation to obtain a shadow exchange price, termed a deliberative preference (DP) approach. I am not convinced that this terminology adds much over my earlier classification, but appreciate that what is being highlighted by DDMV is a specifically democratic aspect that they relate to my work with Alex Lo (Lo and Spash, 2013). Unfortunately, the case study adds two more approaches (deliberative value formation and deliberative mini-publics) absent from, and so unrelated to, their theoretical discussion. However, what concerns me is the potential loss of useful distinctions due to the reduction to the dichotomous categorisation into DP versus DDMV. In my reviews and resulting synthesis of the literature on deliberative monetary valuation I proposed a four-way classification (Spash, 2007a, 2008a), which I believe still has advantages and is reproduced here in Table 1.
A particular aspect relevant for the study by Ainscough et al. is the understanding given to a fair price, which as noted is central to their research question. Fair prices are defined as 'an appropriate price to expect those in society to pay', but then used for both the outcomes of DP and DDMV (see Table 1 in Ainscough et al. and also Kenter, 2017). In their Scottish case study this is elicited as a percentage of council tax, which is a local government tax based on the value of domestic property and used to fund local public services. On this basis, the approach appears closer to participatory budgeting than setting a price. While Ainscough et al. recognise this divorce from market prices and exchange value, they still maintain the result expresses 'the monetary worth' of an environmental change. However, it seems to share characteristics of what I termed an arbitrated social willingness-to-pay. As they state: 'The valuation derives its legitimacy as an expression of an agreed position through inclusive process and reasoned, non-coercive debate, or a workable compromise between those who would have to live with the consequences'. The issue is how to fairly raise government taxation to achieve a public project, and not how much is fair for an individual to pay (per unit) in order to receive a good or service, the amount of which they can choose. The distinction is, I think, one worth making and actually developing further.
Historically the concept of fair or just prices applied to a market transaction, but one where the exchange price had a moral pegging. Such a difference in instituted processes is discussed by Polanyi (1957) in terms of exchange at set prices versus bargaining/haggling in an antagonistic process in price-making markets. The latter he believed was universally banned for food and foodstuffs in primitive and archaic society (Polanyi, 1957: 255). Whether this was universally the case or not, what seems clear is that the occurrence of unjust prices, such as arose by withholding staples such as grain to inflate prices for profiteering, repeatedly led to riots as capitalism spread. The crowd tried to enforce a moral economy (Thompson, 1993). Capitalism and the legitimation of price-making markets for everything diminished the idea of fair or just pricing. For Polanyi, the process of arriving at a set price offers the potential for social integration, as opposed to the antagonism of price-making markets.
Besides raising the concept of a fair price, my four-way classification, as shown in Table 1, also recognised that environmental economists' adoption of deliberation to legitimate stated preferences resulted in charitable contributions, rather than a welfare theoretic value of an environmental change, species or ecosystem. The contrast is between buying and expecting to receive a range of benefits as opposed to merely supporting a good cause, where the amount given is liable to be invariant with factors economists regard as key to determining an economic value and variant with supposedly (economically) irrelevant psychosocial factors (Spash, 2000b). I identified such charitable contributions as arising under standard non-deliberative stated preference approaches. Under the preference economisation approach (Lo and Spash, 2013), which elicits an individual stated preference within a group deliberative setting, a charitable contribution seemed even more likely and institutionally incentivised.
A key point here is how different institutional designs elicit different forms of non-equivalent money amounts. This can be related to the formation of preferences during a value elicitation process (Spash, 2002b), but actually goes well beyond this and the focus on preferences themselves, which I have criticised (Spash, 2008c). Thus, I reject the idea that stated preference approaches actually elicit a value in accord with economic theory or that '[i]ntegrating deliberation into valuation is a potential solution' to its problems, as stated by Ainscough et al. The best efforts of neoclassical economists to force respondents into preference economisation cannot address the ethical and value conflict issues, or those raised by Isacs et al. Instead, the form of value articulating institution chosen actually relates to different ways in which public policy is itself being formulated and how provision of a given environmental change is intended to be organised: for example, a market with set prices, a local government project, a trust fund or charity, or a socially regulated commons. The money value being elicited is dependent on the institutional arrangement for providing the environmental change in question. Beyond this, value articulating institutions do not need to be monetary at all, any more than does social ecological provisioning (Spash and Ryan, 2023).
Concluding remarks
As under a dogmatic paradigm, the orthodox mainstream operates a system of conformity to ideas of prices as efficient allocators, competitive markets, freedom as preference based choice, a narrow utilitarian ethics, and productivist growth. There is also a prescriptive approach to epistemology that defines what makes a real economist: mathematical formalism, modelling, arithmomorphism and quantification. Outside of their own circle, mainstream orthodox economists simply ignore social science research findings and operate in a totally closed paradigmatic community that reproduces their unscientific claims. Here we could place the old guard, including environmental economists such as David Pearce, Partha Dasgupta, Karl Goran Maler and William Nordhaus. Despite the inadequacies of their work it persists. For example, Dasgupta and Maler have been repeatedly honoured by factions of the ecological economics community, most recently by the Indian Society at their conference in 2021. The ideological loading and scientific inadequacies of Nordhaus' climate economics work have long been recognised (Daily et al., 1991; Spash, 2002c), but have failed to affect his being lauded and given the highest international prize in economics (namely the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel). That the term savant idiot is now being applied to such climate economists is a reflection of wider recognition of the problems with orthodox mainstream economics (Ketcham, 2023).
A more subtle position is orthodox dissent, where economists claim to recognise heterodox and broader social science critiques, and may even criticise the orthodoxy, but still maintain most of its core tenets (Spash, 2024). Here we find many of the environmental economists regarded as being critical and progressive. A prominent example is Nicholas Stern, who recognises a range of problematic issues, and whose highly publicised report on climate economics Claudia Carter references as using 'a multi-criteria approach rather than narrowly CBA'. However, as I have explained, he actually focussed on and headlined the CBA chapter from the report in public talks, press releases and debates, and this remains as flawed as the work of others whom he has criticised (Spash, 2006b, 2007b, 2007c). He has also consistently promoted growth as good for the climate (Spash, 2014a). Along with Stiglitz, he criticises Nordhaus and Richard Tol (Ketcham, 2023), while having employed the same unreconstituted growth and value theory. Orthodox dissent is a popular position found in a range of supposedly alternative literature, including doughnut economics, circular economies and wellbeing economics, let alone green growth (see Spash, 2021a).
I have argued for a more radical rejection of mainstream thought and theory (Spash, 2024). Clearly, science progresses by rejecting ideas and theories, using epistemic criteria to support that judgement, and rationalising grounds for rejection informed by philosophy of science. As this special issue shows, the paucity of some economic research does not require a lot of angst over its validity. However, the persistence of bad theories and approaches in economics is deeply disturbing, and is why papers like those of Isacs et al. and Ainscough et al. are still necessary despite similar work in the past.
In critically reflecting on and rejecting theories, some worry that unity is lost and any new theory becomes intolerant in the same way as the dogmatic paradigm it rejects. Here I believe there is some confusion. The search for unity through common understanding lies at the heart of science. Science seeks knowledge of causation and, given biophysical and social reality, there is the possibility for recognising concepts held in common. While this involves socially constructed thought objects, they should be based on social ecological reality, which then provides a basis for grounding knowledge claims. More generally, communication requires that we understand the concepts being used by others, but that does not mean all concepts and theories are equally valid, which is akin to eclecticism. Achieving interdisciplinary engagement is something I take seriously but reject as being achieved via eclecticism. There is a prevalent tendency to jump from rejecting dogmatic systems of naïve objectivism to accepting 'anything goes', including that which was rejected. A process of post-modern influence in the social sciences has seen the rise of anti-realism, promotion of diversity as inherently good and a radical relativism. Yet, concern for real social problems inevitably results in wanting to identify and conceptualise the real common causes of social ecological crises. Talk of multiple ontologies quickly leads to contradictions, because its advocates simultaneously seek to claim that there are no universal concepts while identifying and proposing universal concepts of social causation, such as colonialism, capitalism, growth, commoning, sufficiency, justice, equity, gender and so on. Avoiding dogmatism does not require rejecting science or realism. Instead, this requires being both critical and reflective while aiming to identify and validate claims about biophysical and social reality, which means claims of others will be invalidated, and researchers should make that explicit and act upon their findings.
In a recent interview I was asked, by Olivier Petit, 'Isn't it dangerous or counterproductive to stigmatise the work of certain colleagues?' (Spash et al., 2023: 6). Of course this is a rhetorical question that implies not only that it is most certainly wrong to criticise others in this way but that this is actually what I do. Actually, I regard my work as having sought common understanding while remaining critical and not being afraid of exposing flaws, whether in my own or others' approaches. Some years ago, I was trying to publish an updated study addressing biodiversity and ecosystem valuation (Spash, 2000a); that paper critically reviewed my own earlier work, which one referee took to be an indication of the paucity of my approach. Apparently, one should neither be self-critical nor seek to learn nor improve. A similar issue is the difficulty of publishing studies that fail in some way, such as valuation studies or participatory approaches that do not produce the expected results. The problem with economics is not its failings but its failure to learn from them. Recognising this and seeking to learn is exactly why I moved away from environmental and resource economics, despite having established good standing in the field early on, and also why I am highly critical of various positions held in ecological economics, such as new resources economics and new environmental pragmatism (Spash, 2013, 2024).
Indeed, what Olivier Petit had in mind when asking his question was my classification of ecological economics into seven different positions based on the hypothesised mixed adoption of what I termed three camps, consisting of two opposing paradigms (mainstream and heterodox economics) and a political strategy based on a simplistic naïve pragmatism (Spash, 2013; Spash and Ryan, 2012). The contradictory position of many in ecological economics is to adopt an eclectic pluralism, sometimes confused with political toleration, while simultaneously wanting to reject orthodox mainstream theory (see responses to my critiques in Spash, 2024). My scientific aim has rather been to look at people's claims and evaluate them, rejecting what fails to match reality and/or is inadequate, judged by a variety of epistemic criteria (e.g. coherence, descriptive realism, non-contradiction and practical adequacy), while accepting I may be wrong in my judgements. This requires being open to revising positions, but not without being convinced by rational argument and debate.
What often goes unrecognised is that debates about economics go well beyond the narrow confines of the economics profession. There are multiple disciplinary contributions and contributors from a range of backgrounds, such as political scientists like Ulrich Brand, human geographers like David Harvey and Andrew Sayer, anthropologists such as Alf Hornborg and David Graeber, philosophers like Alan Holland and John O'Neill, and feminists such as Ariel Salleh and Corinna Dengler. In developing social ecological economics, one aim is to openly recognise and engage with these different perspectives and seek common understanding. What unifies such diverse researchers is the desire to improve the world and contribute to positive ethical transformation away from its current trajectory. Interdisciplinary engagement and critique leads to better understanding of causal mechanisms and open systems reality. The challenge is to convert that into action at multiple spatial scales to achieve transformation to social ecological provisioning systems that are constituted of institutions that make for meaningful and worthwhile human lives and allow for non-human flourishing.
Navigating through these times of social ecological crises, with the hope of achieving progressive transformation, both in economics as a discipline and in actual economies, has been my journey as an academic economist and environmental activist. Reading through and reflecting on the papers in this issue shows both the challenges and the progress. There is clear recognition of the failings of economics, the need for alternatives and the importance of interdisciplinarity. There are unifying ideas about the need for value articulating institutions, while recognising that their design can bring forward or suppress values and is never neutral. Perhaps most interesting, there is recognition of the complex psychosocial aspects of human interactions with Nature and how this involves the creation of self-identity. In the end, social ecological transformation will require structural change and personal transformation.
Table 1. Realms of Value Under DMV.
GERMINATIVE PERFORMANCE OF MULUNGÚ SEEDS (Ormosia grossa RUDD) AFTER DORMANCY OVERCOMING
Ormosia grossa Rudd is an Amazonian species that presents bicolor seeds, allowing its exploration for handicraft and decoration making. This paper aimed to analyze the influence of different methods to overcome dormancy on the germinative performance of Ormosia grossa seeds. To conduct the experiment, the following treatments were established: T1 = scarification with 80-grit sandpaper and water immersion at room temperature for 24 hours; T2 = puncturing and water immersion at room temperature for 24 hours; T3 = scarification with 80-grit sandpaper; T4 = puncturing; T5 = immersion in water heated to 80 °C for five minutes; and T6 = control, seeds without any treatment. The seeds germinate slowly and irregularly. Depending on the treatment, germination started between 10 and 32 days and, without an adequate pre-germinative treatment to overcome dormancy, it can exceed such time. The phytomass performance and seedling lengths were superior in the T1 and T2 treatments. The scarification by abrasiveness and puncturing treatments are efficient to overcome dormancy, increasing the germination speed index (3.76 and 3.12) and germination percentage (98% and 96%) after ten days; the control reached a GSI of 0.01 and germination of 37%. Therefore, the method of scarification with sandpaper followed by seed imbibition in water at room temperature for 24 hours is recommended, as it provides the best seedling performance and germination.
INTRODUCTION
In some species, seeds do not germinate even when environmental conditions are favorable (Gama et al., 2011). This happens due to the impermeability of the integument, associated with several botanical species, more frequently those of the Fabaceae family (Carvalho and Nakagawa, 2012). This characteristic is associated with the hardness of the seeds and the tegument histology (Venier et al., 2012). The seed coat's histological characteristics are related to the epidermal cells compacted in palisades and various chemical substances (lignin, calluses, lipids, phenolic deposits, cutin, wax, and suberin) in any layer of the coat (Jayasuriya et al., 2007). Besides, hormones such as abscisic acid (ABA) and gibberellic acid (GA) can influence the type of dormancy and seed germination (Kang et al., 2015) because they act as integrators between environmental signals and molecular signals for the regulation of gene expression. Therefore, the balance between ABA and GA content and sensitivity is critical in regulating seed dormancy and germination status (Tognacca and Botto, 2021).
Among the methods to overcome physical dormancy, mechanical scarification (the partial rupture of the seed integument) affects the seed's metabolic process and, consequently, its dormancy, since such a method provides better conditions for water absorption, gas permeability, and light and temperature sensitivity (Basqueira et al., 2011).
The Ormosia Jacks genus is part of the Fabaceae family and comprises 130 species, 80 of which occur in Central and South America, whilst the remaining can be found in Asia and Australia. In this context, Ormosia grossa seeds are commonly used to make handcrafted products due to their stand-out coloring, as they are red with black spots. This species presents pod-like fruits and disperses its seeds in the Amazonian summer, between June and September. However, there is still no detailed information on its germination process, nor on seedling production. Indeed, no information is available about seed handling and analysis for most native forest species to characterize their physical and physiological attributes. Basic information on the germination, cultivation, and potentiality of native species is needed to improve seedling production in forest nurseries and to ensure good seed emergence in forest restoration projects through techniques such as direct sowing (Araújo et al., 2012).
For seeds of Ormosia arborea and Ormosia nitida Vog., Lorenzi (2010) and Lopes et al. (2006) recommend mechanical scarification before sowing to increase germination. Ormosia grossa seeds, although there is no specific information, present several obstacles to germination, as they are covered by a hard integument that restrains water flow. Therefore, this paper aimed to analyze the influence of different methods to overcome dormancy on the germinative performance of Ormosia grossa seeds.
MATERIALS AND METHODS
The experiment was conducted at the Didactic Laboratory of Seed Analysis of the post-graduation program at the Federal University of Pelotas, located in Capão do Leão, RS. The authors used freshly harvested seeds of Ormosia grossa from the Humaitá forest reserve, in the research area of the Acre Federal University in Porto Acre, AC. The seeds were dispersed from June to September and were collected from the soil.
To accomplish the dormancy overcoming experiment, the seeds were submitted to the following treatments: T1 = scarification with 80-grit sandpaper and 24-hour immersion in water at room temperature; T2 = puncturing and 24-hour immersion in water at room temperature; T3 = scarification with 80-grit sandpaper; T4 = puncturing; T5 = immersion in water heated at 80 ºC for five minutes; and T6 = control, seeds without any treatment to overcome dormancy.
The seed mechanical scarifi cation was made 180 degrees from the seed hilum (in the opposite direction from the hilum). The puncturing was performed by a perforation in the lateral medial portion of the seed until it surpassed the 0.10 mm thickness of the integument.
One hundred seeds were used per treatment, divided into four repetitions of 25 seeds, sown on three sheets of Germitest® paper moistened with distilled water in an amount corresponding to 2.5 times the weight of the dry paper (MAPA, 2009). The seeds were kept in a germination chamber at 30 °C under continuous light exposure (artificial fluorescent lamps). Germinated seeds were counted daily, considering as germinated those that presented epicotyl emission and development of the first pair of leaves (germination from the technological point of view). The duration of the experiment was 90 days.
Germination percentage (G%), mean germination time (MGT), mean germination speed (MGS), the relative frequency of germination (RF), and the germination speed index (GSI) were evaluated. G%, MGT, and MGS were calculated according to equations cited by Labouriau and Valadares (1976):

Germination percentage (Eq. 1): G% = (N/A) × 100, in which N = number of germinated seeds and A = total number of seeds set to germinate.

Mean germination time (Eq. 2): MGT = Σ(ni·ti)/Σni, in which ni = number of seeds germinated at the i-th count and ti = number of days from sowing to the i-th count.

Mean germination speed (Eq. 3): S = 1/t, in which S = mean germination speed and t = mean germination time (days).

The relative frequency of germination and the germination speed index were estimated according to Lopes and Franke (2011):

Relative frequency (Eq. 4): RF = ni/Σni, in which RF = relative germination frequency; ni = number of seeds germinated per day; Σni = total number of germinated seeds.

Germination speed index (Eq. 5): GSI = G1/N1 + G2/N2 + … + Gn/Nn, in which G1, G2, …, Gn are the numbers of seeds germinated at the first, second, and last counts, and N1, N2, …, Nn are the numbers of days after sowing at the first, second, and last counts.
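The five indices above can all be computed from the daily counts of newly germinated seeds. A minimal Python sketch (the function name and example figures are illustrative, not taken from the paper):

```python
def germination_indices(counts, days, total_seeds):
    """Germination indices from daily counts.

    counts[i] = seeds newly germinated on day days[i];
    total_seeds = number of seeds set to germinate (A).
    """
    n_germ = sum(counts)
    g_pct = 100.0 * n_germ / total_seeds                     # Eq. 1: G%
    mgt = sum(n * t for n, t in zip(counts, days)) / n_germ  # Eq. 2: mean germination time (days)
    mgs = 1.0 / mgt                                          # Eq. 3: mean germination speed
    rf = [n / n_germ for n in counts]                        # Eq. 4: relative frequency per count
    gsi = sum(n / t for n, t in zip(counts, days))           # Eq. 5: germination speed index
    return g_pct, mgt, mgs, rf, gsi

# Hypothetical repetition of 25 seeds, germination observed on days 10-13
g, t, s, rf, gsi = germination_indices([5, 10, 6, 4], [10, 11, 12, 13], 25)
```

Note that GSI grows with both the number and the earliness of germinations, which is why the fast, uniform treatments (T1, T2) dominate it.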
At the end of the evaluation, the seedlings were measured, assessing mean root length (RL), mean shoot length (SL), and total mean length (TL) using a millimeter ruler; results were expressed in centimeters. The fresh mass of the aerial part, root, and total was determined with the aid of an analytical balance (precision ~0.0001 g), and then the dry mass of the aerial part (SDM), root (RDM), and total (TDM) was also determined. For dry mass determination, the plant material was placed in a forced-air drying oven at 75 ºC until a constant mass, expressed in grams, was obtained. The experimental design was completely randomized, with six treatments and four repetitions. The obtained data were submitted to analysis of variance and, when the F test was significant, means were compared using the Tukey test at 5% probability. The software used for the analysis was winStat (Machado et al., 2001).
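The variance-analysis step can be illustrated with the F statistic for a completely randomized one-way design. A hypothetical plain-Python sketch (the authors used the winStat software, not this code, and the example data below are invented):

```python
def one_way_anova_f(groups):
    """F statistic and degrees of freedom for a one-way ANOVA.

    groups = list of lists, one list of observations per treatment.
    """
    k = len(groups)                        # number of treatments
    n = sum(len(g) for g in groups)        # total observations
    grand = sum(sum(g) for g in groups) / n
    # between-treatment sum of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-treatment (residual) sum of squares
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Three hypothetical treatments with three repetitions each
f_stat, df1, df2 = one_way_anova_f([[1, 2, 3], [2, 3, 4], [7, 8, 9]])
```

If the F statistic exceeds the critical value for (df1, df2) at the 5% level, a multiple-comparison procedure such as Tukey's test is then applied to separate the treatment means.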
RESULTS
Ormosia grossa seeds germinate slowly and irregularly, depending on the treatment used. When submitted to dormancy overcoming, germination began 10 days after sowing; when no method was applied, germination began in 21 days. Such a late germination process occurs due to dormancy caused by integument impermeability, which is common in most species belonging to the Fabaceae family.

Table 1 - Mean values of the germination speed index (GSI), mean germination time (MGT), mean germination speed (MGS), and germination percentage (G%) of Ormosia grossa seeds.
Germination speed index (GSI), mean germination time (MGT), mean germination speed (MGS), and germination percentage (G%) characterize the germinative behavior of the species and allow further understanding regarding reproductive aspects ( Table 1).
The mean values for seeds scarified with sandpaper and soaked in water for 24 hours were higher than in the other treatments, indicating a better result in GSI, yet there was no difference in MGS among the first three treatments (sandpaper scarification plus 24-hour soaking, puncturing plus soaking, and sandpaper scarification). There was no difference in MGT among the four initial treatments (sandpaper scarification + H2O/24 h, puncturing + H2O/24 h, sandpaper scarification, and puncturing). Germination percentage did not differ among scarification with 80-grit sandpaper plus 24-hour water immersion at room temperature, puncturing plus 24-hour water immersion at room temperature, and 80-grit sandpaper scarification, demonstrating high germination (≥86%) when such methods for dormancy overcoming are applied (Table 1).
Germination began on the tenth day after the experiment had been installed for all treatments except the control, in which germination began after the twentieth day (Table 1). The distribution of germination frequency evidenced polymodality for the puncturing, immersion in water heated at 80 °C for five minutes, and control treatments, as the polygonal line touches the horizontal axis more than once, indicating several germination peaks (Figures 1e and 1f). As for sandpaper scarification plus water immersion at room temperature for 24 hours and puncturing plus water immersion at room temperature for 24 hours, unimodality was shown, characterizing germination homogeneity (Figures 1a and 1b).
Based on the daily germination frequency distribution, the following observations were made: in the sandpaper scarification with water immersion, puncturing with water immersion, and sandpaper scarification treatments, the highest germination rate occurred between 10 and 13 days after sowing, with the entire germinative process completed within at most eight days after the first evaluation, quickly and regularly. For puncturing alone, the highest peak also occurred between days 10 and 13, but with fewer germinated seeds per day, showing several peaks during the evaluation and persisting for another 11 days.
The treatment of soaking in water heated at 80 °C for 5 minutes yielded few germinated seeds on any observed day; germination started on the tenth day and extended slowly for a further 33 days with several germination peaks. Seeds in the control treatment began germinating on the twentieth day after sowing and continued for a further 40 days, showing that the natural germinative process is slow and irregular.
During the evaluation of seedling length, a difference was observed between the scarification with 80-grit sandpaper plus water immersion at room temperature for 24 hours treatment and the control for the analyzed variables (Figure 2). Seedlings originating from seeds that received scarification followed by water immersion for 24 hours showed greater total length than those derived from the puncturing, hot-water soaking, and control treatments. There was no statistical difference among the evaluated treatments for root length (P > 0.05).
For total fresh mass and shoot fresh mass, the sandpaper scarification with water soaking and puncturing with water soaking treatments again stood out, showing results superior to the control (Figure 3). The greater biomass accumulation in the treatments with higher phytomass values may be associated with the high vigor of seeds that express their maximum performance after germination.
For seedling dry mass, the best results were also observed for the sandpaper scarification with water soaking for 24 hours treatment for all variables, which was superior to the other methods for TFM and SDM and was the only treatment that differed from the control in all evaluated variables (Figure 3). The higher values were observed in this treatment because the seeds present a higher germination speed, resulting in the longest dry-mass accumulation period until the evaluation day.
The lowest mean values were found for the immersion in water heated at 80 °C for five minutes and control treatments, which were not adequate for seedling establishment due to an uneven and slow germination process. On the other hand, abrasive scarification and perforation favor seedling establishment and a higher germination speed and should be recommended to achieve uniform germination. Breaking down the seed coat and soaking the seeds in water speeds up metabolic activation, allowing them to germinate simultaneously. The treatment of sandpaper scarification with immersion in water for 24 hours at room temperature showed good performance (Figure 4).
DISCUSSION
The methods used to overcome dormancy in Ormosia grossa seeds showed that sandpaper scarification with water immersion (T1) provided better germination uniformity, a greater number of normal seedlings, and a smaller number of hard seeds among the evaluated treatments. According to Nascimento et al. (2021), scarification with sandpaper allows more homogeneous and synchronous germination, which is desirable in seedling production, in addition to causing no damage to the environment. In the present study, immersion in water for a certain period further contributed to the germination process through faster metabolic activation.
High germination is also associated with high vigor and seed germination speed (GSI, MGT, and MGS). These parameters are indicated to detect differences in vigor between lots - lots with the highest germination speed are also the most vigorous (Krzyzanowski et al., 1999) - and can also be used to evaluate different treatments applied to the same lot of seeds. In this context, all metabolic processes for germination are activated to a lesser or greater degree, which is reflected in the germination time and in the differences in vigor between treatments, as measured by the GSI and MGS. Vigor is therefore not an easily measured characteristic, but a concept that gathers a set of characteristics associated with seed performance (ISTA, 2011).
The pre-germination treatments that determined the highest germination percentages and mean germination times were sandpaper scarification, water soaking, and water soaking followed by sandpaper. Although the methods of abrasion and perforation combined with water soaking may show more effective results in overcoming dormancy, these treatments require greater care not to damage the embryo (Lopes et al., 1998).

Source: own elaboration.

Figure 3 - Seedling mass results (TFM = total fresh mass, SFM = shoot fresh mass, RFM = root fresh mass, TDM = total dry mass, SDM = shoot dry mass, RDM = root dry mass) of Ormosia grossa. CV = coefficient of variation. T1 = scarification with 80-grit sandpaper and immersion in water at room temperature for 24 hours; T2 = puncturing and immersion in water at room temperature for 24 hours; T3 = scarification with 80-grit sandpaper; T4 = puncturing; T5 = immersion in water heated at 80 °C for five minutes; T6 = control (seeds with no treatment). Mean values followed by the same letters did not differ significantly by the Tukey test (P < 0.05). * indicates a difference between variables. Bars represent the standard error of the mean of four repetitions.
Source: first author's photos.
The treatment with immersion in water heated to 80 °C for five minutes demonstrates that this method could not fully overcome seed dormancy of Ormosia grossa, although the seeds exposed for five minutes remained viable. This method did not significantly overcome seed dormancy, since germination in this treatment did not reach that of the treatments in which the seed coat was opened. This may occur because the mother plant develops control mechanisms in progeny seeds: the seed receives information from the mother plant when it detects high temperature, silencing the genes that prevent germination. The seed can perceive temperature variation in the environment as small as 1 °C (Chen et al., 2014). Immersion in water at high temperatures may or may not kill the embryo; in the case of Ormosia grossa seeds exposed to 80 °C for five minutes, no dead seeds were verified. Maternal processes, together with gene expression in the zygote that blocks the seeds' metabolic activity, may explain why some seeds germinate and others do not. Penfield (2017) reports that environmental signals are perceived by the mother plant and the developing zygote and are used to control germination in progeny seeds. As a result, a mother plant can transmit seasonal information to its progeny and also use environmental changes to generate variation in the progeny's dormancy states. The temperature the mother plant experiences throughout its life cycle, including whether or not it undergoes vernalization, has a significant impact on its progeny's seed dormancy (Springthorpe and Penfield, 2015).
Ormosia grossa occurs in a place with temperatures ranging from 30 to 37 °C, allowing these environmental perceptions between the mother plant and the zygote. Under these conditions, seed growth may be blocked, and reserves build up. In most species, the acquisition of tolerance to low water content allows the seed to survive in humid or dry places for long periods in the environment (Penfi eld, 2017).
In studies with Enterolobium contortisiliquum (Vell.) Morong., Silva et al. (2020b) applied dry-heat treatments at 60, 80, and 105 °C for 5 minutes and found them insufficient to overcome seed dormancy. Heat treatments can be quite advantageous due to their practicality, allowing large numbers of seeds to be processed at once; mechanical scarification, on the other hand, yields the highest germination values but is slow, since it handles few seeds, or even one, at a time. The germination frequency distribution tending to the right showed a shorter mean germination time than that predicted in the Forest Species Seed Analysis (MAPA, 2013) for species of the same genus (Ormosia), which indicates 21 days for the first count and 28 days for the final count. In Ormosia nitida, germination time was also reduced by a dormancy-overcoming treatment with mechanical scarification, demonstrating germination uniformity (Lopes et al., 2006).
According to Pinheiro et al. (2017), when the displacement of the polygonal line to the right does not touch the horizontal axis, germination occurs daily. Otherwise, peaks represented by non-collinear lines that touch (or approach) the horizontal axis indicate unequal germination peaks, with no germination on some observed days in some of the repetitions. Thus, through the frequencies, it is possible to observe that, over time, germination increases until reaching a maximum value and then declines (Santana and Ranal, 2004).
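The criterion above (the polygonal line touching the horizontal axis more than once indicates polymodality) can be sketched as a simple run count over daily frequencies. The counts below are synthetic, not the study's data:

```python
# Hypothetical sketch of the peak-counting idea: a germination frequency
# polygon that returns to zero between peaks is polymodal, while a single
# contiguous run of nonzero daily frequencies indicates unimodality.

def germination_modality(daily_freq):
    """daily_freq: germinated seeds per evaluation day, in chronological order."""
    runs = 0
    in_run = False
    for n in daily_freq:
        if n > 0 and not in_run:
            runs += 1          # a new germination peak begins
            in_run = True
        elif n == 0:
            in_run = False     # the polygon touches the horizontal axis
    if runs <= 1:
        return "unimodal"
    return "polymodal ({} peaks)".format(runs)

# Synthetic examples
unimodal = germination_modality([0, 3, 7, 5, 2, 0])      # one contiguous run
polymodal = germination_modality([0, 4, 0, 2, 0, 1, 0])  # three separate runs
```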
The seedling length results were quite similar among treatments, showing that even with pre-germinative treatment the differences in seedling growth were not highly expressive. Considering such characteristics (total length, root length, and shoot length) is important for transplanting forest-species seedlings because, depending on the size class, it supports decisions on taking seedlings to the field and succeeding in their establishment, achieving a higher survival rate (Viani and Rodrigues, 2007).
The fresh mass and dry mass of seedlings are some of the patterns to evaluate the plant's growth ( Figure 3). However, it is possible to accurately determine the transfer from the organic material from the reserve tissues to the embryonic axis by the evaluation of the dry mass of the seedlings (Krzyzanowski et al., 1999).
Perforation of the integument by puncturing and sandpaper scarification eliminate integumentary dormancy and accelerate and unify seed germination and seedling emergence in Schizolobium amazonicum Herb. (Dapont et al., 2014). According to Pacheco et al. (2014), the pre-germinative treatments of mechanical sandpaper scarification and water soaking for 24 hours allow better expression of seed performance and seedling vigor in Combretum leprosum Mart., which was also verified in the results of this study.
CONCLUSION
Ormosia grossa seeds present dormancy due to integument impermeability. Treatments with scarification by abrasion are efficient in overcoming dormancy, increasing germination speed and percentage. Therefore, the method of sandpaper scarification followed by seed imbibition in water at room temperature for 24 hours is recommended, as it provides the best seedling performance and germination.
AUTHOR CONTRIBUTIONS
Pinheiro RM: data analysis and text writing; Soares NS and Almeida AS: research supervision and text review; Gadotti GI: technical review; Silva EJS: text review and translation.
ACKNOWLEDGEMENTS
The presented study was performed with the support of the Coordination for the Improvement of Higher Education Personnel -Brazil (CAPES) -Financing Code 001.
Observations of loading‐unloading process at Saturn's distant magnetotail
Using in‐situ measurements from the Cassini spacecraft in 2013, we report an Earth substorm‐like loading‐unloading process at Saturn's distant magnetotail. We found that the loading process is featured with two distinct processes: a rapid loading process that was likely driven by an internal source and a slow loading process that was likely driven by solar wind. Each of the two loading processes could also individually lead to an unloading process. The rapid internal loading process lasts for ~ 1–2 hours; the solar wind driven loading process lasts for ~ 3–18 hours and the following unloading process lasts for ~1–3 hours. In this letter, we suggest three possible loading‐unloading circulations, which are fundamental in understanding the role of solar wind in driving giant planetary magnetospheric dynamics.
Introduction
The energy loading-unloading process in a magnetosphere has been reported at Earth (Akasofu, 1964;McPherron et al., 1973), Mercury (Slavin et al., 2010;Sun WJ et al., 2015), Jupiter (Kronberg et al., 2005) and Saturn (Mitchell et al., 2005). The loading-unloading concept was originally introduced to describe Earth substorm. The loading process is associated with a growth phase of a substorm, when the magnetospheric current is enhanced, current sheet thins and the lobe magnetic field increases. The unloading process is responsible for the substorm expansion phase, when the magnetospheric currents divert into the ionosphere; current sheet expands in north-south direction and the lobe magnetic field decreases. An unloading process is usually much more rapid than a loading process (see a recent review paper by Akasofu (2017)).
There are various timescales of loading-unloading processes at different planets. At Mercury, a loading-unloading process usually lasts for a few minutes (Slavin et al., 2010). At Earth, this process lasts for tens of minutes to a few hours (e.g., Akasofu, 1964; Lui, 1996; Pu ZY et al., 2010; Yao ZH et al., 2012). At Jupiter and Saturn, the unloading process has been found to last for a few hours to tens of hours (Kronberg et al., 2005; Mitchell et al., 2005). A loading process is usually much longer than the unloading process. For example, Ge YS et al. (2007) showed that the growth phase of a Jovian substorm lasts for about 3 days, which is also consistent with the occurrence rate of energetic particles (Kronberg et al., 2007; Krupp et al., 1998). We need to be aware that most previously reported loading-unloading processes are based on measurements from the co-rotating magnetosphere, suggesting that internally driven processes significantly contribute to, or even dominate, them. In addition, planetary periodicities exist in almost the whole magnetosphere (inner, middle, and outer), although their mechanisms are still under debate (Arridge et al., 2011; Carbary et al., 2007; Espinosa et al., 2003; Southwood and Kivelson, 2007).
The internally driven unloading processes and their auroral consequences have been widely identified at Saturn (Mitchell et al., 2005, 2016; Radioti et al., 2013; Russell et al., 2008), showing features different from those at Earth. For example, Hill et al. (2005) found that energetic particle injections in Saturn's inner magnetosphere are almost randomly distributed, which is significantly different from the local-time-dependent substorm injections at Earth (Birn et al., 1997). It is poorly understood how a solar wind driven loading-unloading process would differ from the internally driven process at Saturn.
In this letter, we investigate the loading-unloading process in the magnetotail using Cassini measurements from mid-November 2013, when the spacecraft was at ~60 R S (1 R S = 60268 km), at ~1.7 LT and close to the plasma sheet in the northern hemisphere. Specifically, we aim to understand the contributions of solar wind and internal sources to the loading of Saturn's nightside distant magnetotail.

Observations

Figure 1 shows 1-min resolution magnetic field data from the Cassini magnetometer (Dougherty et al., 2004) in Kronographic Radial-Theta-Phi (KRTP) coordinates during 13-16 November 2013. From top to bottom, the panels show the magnetic components and the magnetic strength. During this period, Cassini was located near midnight, at ~60 R S. Previous studies have shown that signatures of a tailward reconnection site (e.g., B θ < 0) are often observed within 60 R S (Jackman et al., 2014), suggesting that the open-closed field line boundary is usually around this distance. We thus call this region the distant magnetotail, where the most distant closed field lines are located. The magnetic field at the distant magnetotail is less affected by planetary rotation, as no clear planetary spin modulation was observed for this event. A spin modulation signature shows a periodic oscillation of the current sheet, which is very different from the measurements presented in this letter; see examples of spin modulation in previous literature (Arridge et al., 2009; Carbary and Mitchell, 2013; Yao ZH et al., 2017a). Yao ZH et al. (2017b) identified two types of dipolarization using measurements from multiple Cassini instruments. A transient dipolarizing flux bundle (TDFB) generated by localized reconnection shows simultaneous discontinuity-like enhancements in both B θ and |B r |, whereas an Earth substorm-like current redistribution dipolarization (CRDD) is featured by a B θ increase accompanied by a |B r | decrease.
This is because the TDFB front boundary is a discontinuity, while a CRDD, caused by current sheet expansion, corresponds to a reconfiguration of the magnetic topology. Five current sheet expansions (green shadow) during this period are identified from variations of the magnetic field components, a B θ increase and |B r | decrease (mostly from |B r | and B T decreases), which we call the unloading process (labeled at the top of Figure 1). Prior to each current sheet expansion, there was a longer period with the opposite trend (blue and pink shadow), increasing |B r | and B T, which we call the loading process.
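The trend-based identification described above can be sketched as follows. The slope thresholds and the synthetic series are illustrative assumptions, not the authors' actual procedure or data:

```python
import numpy as np

# Hypothetical sketch of the identification criterion: an unloading interval
# shows B_theta increasing while |B_r| decreases; a loading interval shows
# the opposite trend (field-line stretching).

def classify_interval(b_theta, b_r):
    """Classify a time interval from the linear trends of B_theta and |B_r|."""
    t = np.arange(len(b_theta))
    theta_slope = np.polyfit(t, b_theta, 1)[0]       # trend of B_theta
    br_slope = np.polyfit(t, np.abs(b_r), 1)[0]      # trend of |B_r|
    if theta_slope > 0 and br_slope < 0:
        return "unloading"   # current sheet expansion
    if theta_slope < 0 and br_slope > 0:
        return "loading"     # field-line stretching
    return "unclassified"

# Synthetic example: B_theta rises while |B_r| falls
b_theta = np.linspace(0.1, 0.6, 60)
b_r = np.linspace(1.0, 0.4, 60)
label = classify_interval(b_theta, b_r)
```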
During each unloading period, a B Φ perturbation is also detected, which usually suggests the formation of a field-aligned current (Sergeev et al., 1996; Yao ZH et al., 2013). Field-aligned current formation is also a key phenomenon in a substorm (Boström, 1964; Lui, 1991).
The loading process can clearly be divided into two periods: a rapid one marked by the blue shadow and a slow one marked by the pink shadow. Since, as introduced above, the loading process at Earth is usually much slower than the unloading process, "rapid" and "slow" here are defined relative to the timescale of the unloading process. For the five loading-unloading events in Figure 1, the rapid loading processes last for ~1-2 hours, the slow loading processes last for ~3-18 hours, and the unloading processes last for ~1-3 hours.
The pink shadow marked loading processes are much slower than the unloading processes; we thus suggest that these loading processes were mainly driven by solar wind, as at Earth. The rapid loading process is significantly different from the loading process at Earth, which we suggest to be driven by internal source. We will discuss the detailed relation between the two loading processes and the unloading process in next section.
Discussion and Summary
A loading process at Earth that is driven only by solar wind usually lasts for a few hours, while the unloading process usually lasts for tens of minutes. Given Saturn's much larger magnetosphere and greater distance from the Sun, we would expect solar wind driven energy loading at Saturn to be slower than at Earth. At Earth and Mercury, the loading-unloading process is driven only by solar wind, while at the fast-rotating Saturn and Jupiter, internal sources are suggested to dominate these loading-unloading processes. In this letter, we examine the energy loading-unloading process at Saturn's distant magnetotail, where solar wind has maximum impact in driving magnetotail dynamics, and we find that solar wind can play a very crucial role in driving the loading-unloading process at this distance. We also note that there was one other period, in 2006, when Cassini travelled into a similar region and observed multiple enhancements of negative B θ, usually considered a signature of magnetic reconnection (Jackman et al., 2007). The enhancement of positive B θ in this paper suggests a more tailward-extended magnetotail plasma sheet, and thus we observe Earth substorm-like magnetic dipolarization. The continual positive B θ and its multiple positive enhancements all suggest that the spacecraft was on closed field lines, so we suggest that during a quasi-steady state the open field line region is beyond 60 R S.
We present three possible loading-unloading circulations in Saturn's distant magnetotail (near midnight, beyond 60 R S) in Figure 2. Figure 2a shows the initial magnetic topology (the red curve). Figures 2b and 2c show the stretching process (from red to black curves) driven by an internal source and by the solar wind, respectively. The green arrows show the motion of the magnetic field for the two processes. Figure 2d shows the unloading process associated with Saturn's distant magnetic reconnection, which drives a dipolarization towards the planet and a plasmoid towards the tail. We point out that the global magnetic topology change is caused by the magnetospheric current redistribution associated with reconnection, not as a direct consequence of the reconnection process itself; this is understandable from Ampère's law, since the electrical current directly determines the magnetic field. The three possible loading-unloading circulations are described below.
(1) As indicated by the red arrows (a→b→c→d), an internal loading process (blue periods in Figure 1) rapidly stretches the field lines in the distant magnetosphere, followed by a solar wind driven slow loading process (pink periods in Figure 1). This type of loading process preceded the 1st, 3rd, and 4th energy unloading in our event.

(2) The black arrows (a→b→d) show the unloading process following a single loading process from the internal source. The 2nd unloading process belongs to this category.

(3) The blue arrows (a→c→d) show the solely solar wind driven loading process followed by an unloading process. In our event, the 5th unloading is of this type.
Since the solar wind loading process is much slower than the internally driven loading process, we expect a much longer timescale for a loading process driven only by solar wind. This is consistent with the fact that the loading process prior to the 5th unloading event lasted much longer (~18 hours) than the other four loading processes.
It is interesting to notice that a rapid internal loading process immediately follows an unloading process (except for the 4th unloading process). We suggest a potential physical explanation for this phenomenon. For the stretched distant magnetosphere (Figure 2b or 2c), a dynamic balance exists between the tailward transport driven by the centrifugal force and the planetward transport associated with the Dungey cycle. The current disruption (Figure 2d) initiated by reconnection would thicken the current sheet and thus depress the Dungey cycle reconnection; consequently, the centrifugally driven tailward transport from the inner side would dominate and rapidly load magnetic energy into the distant magnetotail. The 4th unloading process was much less dramatic than the 1st, 2nd, and 3rd unloading processes; we thus suggest that it did not produce a highly imbalanced condition in the radial direction, so that no significant internal loading process was initiated afterwards.
In conclusion, we report an Earth-like magnetic energy loading-unloading process at Saturn's distant magnetotail, where the plasma does not co-rotate with the planet. The rapid loading processes last for ~1-2 hours, the slow loading processes last for ~3-18 hours, and the unloading processes last for ~1-3 hours. The loading-unloading duration is ~5-10 times longer than that of such a process at Earth. Unlike at Earth, the loading process is not fully controlled by solar wind; in contrast, the internal source can provide a much more rapid loading process. Considering the two distinct contributors to the loading process, we propose three types of loading-unloading circulations for Saturn's distant magnetotail. Each of the proposed circulations is supported by at least one event during the time period presented in Figure 1.
|
Reconfigurable Reflectarray Antenna: A Comparison between Design Using PIN Diodes and Liquid Crystals
This work presents the design and analysis of active reflectarray antennas with slot-embedded patch element configurations within an X-band frequency range. Two active reflectarray design technologies have been proposed: digital frequency switching using PIN diodes and analogue frequency tuning using liquid crystal-based substrates. A waveguide simulator has been used to perform scattering parameter measurements in order to practically compare the performance of reflectarrays designed with the two active design technologies. The PIN diode-based active reflectarray unit cell design is shown to offer a frequency tunability of 0.36 GHz with a dynamic phase range of 226°. On the other hand, the liquid crystal-based design provided a slightly lower frequency tunability of 0.20 GHz with a dynamic phase range of 124°. Moreover, higher reflection loss and slow frequency tuning are demonstrated to be the disadvantages of liquid crystal-based designs as compared to PIN diode-based active reflectarray designs.
Introduction
A reflectarray, as suggested by the name, is a flat reflecting array of resonant patch elements that can be used for a number of applications where high-gain antennas are required. Some of the main characteristics of a reflectarray antenna are its lower cost, lower mass, and smaller stowage volume, which are generally demanded in spacecraft antennas in order to reduce payload weight and required shroud space, minimizing overall launch cost. Conventionally, high-gain applications have counted on parabolic reflectors and phased arrays [1]. Nevertheless, due to the curvature of their surface, parabolic reflectors are challenging to manufacture in many cases at higher microwave frequencies [2]. The shape of the parabolic reflector also increases the weight and size of the antenna. Moreover, it has also been established in [3] that wide-angle electronic beam scanning cannot be achieved using a parabolic reflector. On the other hand, high-gain phased array antennas offer the opportunity to electronically scan the main beam over wide-angle positions provided that they are equipped with controllable phase shifters. However, the main shortcoming of phased array antennas is their large hardware footprint, as each element of an array or subarray needs to be connected to a dedicated transceiver module. These modules are usually high profile, thus making phased array antennas a costly solution for high-gain applications.
Direct Broadcast Satellites (DBS) and Multibeam Antennas (MBA) are also considered potential applications of reflectarrays, apart from recent investigations of their applicability in 5G communication systems [4-7]. Reflectarrays can also be used as amplifying arrays by adding an amplifier in each of the unit cells [8]. Despite much potential, the main shortcomings of a reflectarray antenna are its limited bandwidth and high loss performance as compared to parabolic reflector antennas [9-11]. Researchers have proposed a number of configurations in the past few years for the bandwidth and loss performance improvement of reflectarrays [12-16]. However, considerable efforts are still required to improve the bandwidth performance of reflectarrays. In order to steer the main beam of an active reflectarray, the reflected phase from each of the resonant elements can be controlled; the reflected beam can therefore be directed in the desired direction, which makes a reflectarray capable of wide-angle electronic beam scanning. Such a beamforming approach can have many advantages over traditional tunable antenna array architectures, including a significant reduction in hardware required per element and increased efficiency [17]. There has been considerable research into beam steering of reflectarray antennas, such as the use of nonlinear dielectric materials [18-20], the integration of Radio Frequency Micro-Electro-Mechanical Systems (RF MEMS) as switches [21, 22], the use of aperture-coupled elements where the tuning circuit can be located on the non-resonating surface of the element in order to control the contributed phase from each element [23], and mechanical movement of the antenna [24].
In this work, slot-embedded patch element configurations have been proposed for active reflectarray designs. PIN diodes have been proposed to be incorporated directly on the resonant elements for frequency switching in reflectarrays, while liquid crystals have been proposed in separate designs as a substrate for tunable reflectarray design. The results from the 3D EM simulation software CST MWS and Ansoft HFSS have been verified using waveguide scattering parameter measurements. Detailed comparisons between different performance parameters of the two design technologies are discussed in this work. Commercially available computer models of CST Microwave Studio and Ansoft HFSS were used to design unit cell patch elements with proper boundary conditions in order to analyze the scattering parameters of an infinite reflectarray. Initially, a reflectarray with a rectangular patch element was designed to resonate at 10 GHz using Rogers RT/Duroid 5880 (εr = 2.2 and tan δ = 0.0010) as a substrate with a thickness of 0.3818 mm. Then, rectangular slot configurations were introduced in the patch element, and the effect on the performance of the reflectarray was observed. Reflectarray unit cells consisting of two patch elements were used for the waveguide scattering parameter measurements [25].
Frequency Switchable Reflectarray Design Using PIN Diodes
In this work, apart from the rectangular slot, a vertical gap was introduced in the resonant slot embedded patch element for the practical implementation of PIN diodes. This gap provides an option to connect the diode such that it can have different potentials at its two connecting ends. Moreover, the vertical gap does not affect the resonance performance of the unit cell, as the maximum surface currents were observed to be along the width of the patch and slots. PIN diodes were integrated in the gap introduced on the slot embedded patch element. Scattering parameter measurements were carried out for a unit cell comprising two patch elements with dimensions of Lp × Wp = 9.4 mm × 10 mm each, printed on a substrate of Ls × Ws = 15 mm × 30 mm (Lp and Wp are the length and width of the patch element, while Ls and Ws are the length and width of the substrate). The slot length was kept at 0.6 mm, while the slot width was 0.5 Wp. The vertical gap was introduced with a 0.6 mm width in order to fit the PIN diode. Figure 1 shows the unit cell of the PIN diode-based planar reflector with its equivalent circuit representation. Equivalent circuits based on lumped components can also be used for the characterization of the planar reflector unit cells. In this design, Lp, Cp, and Rp represent the inductance, capacitance, and resistance of a passive planar reflector unit cell, respectively, while Cd, Ro, and Rd are used to represent the on and off states of the PIN diodes. For the electronic switching, a GaAs MA4GP907 PIN diode was used. Figure 2 shows the fabricated unit cell, the biasing circuit connected to the unit cell, and the complete setup for scattering parameter measurements. As shown in Figure 2, the PIN diodes were soldered on the surface of the patch element and powered by a power supply through a biasing circuit.
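To illustrate the equivalent-circuit view, the sketch below uses a simple LC resonance model with purely illustrative L and C values (not extracted from the actual design) to show how the extra diode capacitance in the on state pulls the resonance downward, qualitatively matching the measured frequency drop between diode states.

```python
import math

def resonant_frequency_hz(L_h, C_f):
    """Resonant frequency of an LC tank: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_h * C_f))

# Hypothetical lumped values chosen only to place f0 near X-band; the
# actual Lp/Cp of the unit cell would be extracted from full-wave simulation.
L_p = 0.5e-9      # patch inductance (H), illustrative
C_p = 0.565e-12   # patch capacitance (F), illustrative

f_off = resonant_frequency_hz(L_p, C_p)

# In the on state the diode adds capacitance across the gap (C_d), which
# increases the total capacitance and lowers the resonant frequency.
C_d = 0.05e-12    # diode capacitance (F), illustrative
f_on = resonant_frequency_hz(L_p, C_p + C_d)

print(f"f_off ≈ {f_off / 1e9:.2f} GHz, f_on ≈ {f_on / 1e9:.2f} GHz")
```

This reproduces only the qualitative trend (switching the diode on shifts the resonance down, here by a few hundred MHz); the measured shift depends on the real diode parasitics and patch geometry.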
A high-accuracy SMT fabrication facility was used to solder the PIN diodes on the resonant patch structure as accurately as possible. A supply of 1.33 V with a 100 Ω series resistor was used to forward bias the diodes, while no voltage (0 V) was required to reverse bias them. The RF choke was implemented using quarter-wavelength segments and a radial stub on the biasing circuit. DC block capacitors were not required in this case because there is no physical connection between the RF source (network analyzer) and the DC source (power supply).
Reflection loss and reflection phase were measured within an X-band frequency range, and a close agreement between measured and simulated results was observed. Figure 3 shows a comparison between measured and simulated reflection loss curves for fabricated samples. It can be observed from Figure 3 that in the off state of the PIN diode, the measured resonant frequency is close to the simulated resonant frequency. The fabricated unit cell resonated at 9.40 GHz with a reflection loss of 2.60 dB while the simulations for the off state of the PIN diode provided a resonant frequency of 9.38 GHz with 1.61 dB reflection loss. When the PIN diodes were switched on, a clear change in frequency was observed for the fabricated samples. In the on state, the measured resonant frequency was observed to be 9.04 GHz with a reflection loss of 3.91 dB. In comparison, the simulation results for the on state of the PIN diode exhibited a reflection loss of 2.88 dB at a resonant frequency of 8.99 GHz.
The highest discrepancy between measured and simulated reflection loss was 0.99 dB in the off state and 1.03 dB in the on state of the PIN diodes. Moreover, extra noise or ripples with a maximum level of 0.25 dB were observed. This discrepancy is attributed to the fabrication quality, including the soldering of the PIN diodes, and to the difference between the actual material properties and the properties given in the datasheet. Furthermore, the diode was intentionally placed tilted in order to optimize the reflection loss and reflection phase results in simulations, with the optimization carried out keeping in mind the maximum current distribution on the surface of the patch. In measurements, however, it was not possible to place the diode at this exact position, which contributes to the differences seen in Figure 3. The discrepancy can be minimized with more careful fabrication of the unit cells and a thorough investigation of the actual material properties of the substrate after it goes through the fabrication process. Figure 4 shows a comparison between the measured and simulated reflection phases. A close agreement between the measured and simulated phases can be observed from Figure 4, except for the ripples found towards the edges of the measured curves. These ripples can be linked to the same sources that caused the discrepancy in the reflection loss curves. Table 1 provides a comparison between simulated and measured results for the frequency tunability and dynamic phase range. The dynamic phase range was calculated at the central frequency of the two resonant curves in the off and on states of the PIN diodes, as shown in Figure 4. It can be observed from Table 1 that a maximum frequency tunability of 0.36 GHz and a dynamic phase range of 226° were demonstrated by the PIN diode-based unit cell measurements. The results are in close agreement with those obtained by the 3D EM simulators CST MWS and Ansoft HFSS, which practically validates the proposed design.
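As a quick consistency check, the figures quoted above can be reproduced with a few lines of arithmetic (values taken directly from the text):

```python
# Measured and simulated (resonant frequency in GHz, reflection loss in dB),
# as quoted in the text for the off and on states of the PIN diodes.
meas = {"off": (9.40, 2.60), "on": (9.04, 3.91)}
sim = {"off": (9.38, 1.61), "on": (8.99, 2.88)}

# Frequency tunability: shift of the measured resonance between diode states.
freq_tunability_ghz = round(meas["off"][0] - meas["on"][0], 2)  # 0.36 GHz

# Measured-vs-simulated reflection-loss discrepancy per state.
disc_off_db = round(meas["off"][1] - sim["off"][1], 2)  # 0.99 dB
disc_on_db = round(meas["on"][1] - sim["on"][1], 2)     # 1.03 dB
```

The computed 0.36 GHz tunability and the 0.99 dB / 1.03 dB discrepancies agree with the values stated in the text and Table 1.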
Reconfigurable Reflectarray Design Using Liquid Crystal Substrates
The change in the molecular orientation of liquid crystals (LC) can be achieved by applying a bias voltage [26,27]. This change in molecular orientation gives rise to the dielectric anisotropy (Δε) of LC, which makes them suitable for use as a tunable dielectric substrate in reflectarrays. Δε can be expressed as Δε = ε∥ − ε⊥, where ε⊥ and ε∥ are the magnitudes of the dielectric constant measured perpendicular and parallel to the applied electric field, respectively. The reflection phase and resonant frequency of reflectarrays can be tuned over various values with the help of an external tunable bias voltage [16]. The basic design topology of a unit cell reflectarray with periodic boundary conditions has been used in Ansoft HFSS to represent a single patch element as an infinite array. The resonant patches, shown in Figures 5(a) and 5(b), were designed for resonance within an X-band frequency range. It can be observed from Figures 5(c) and 5(d) that the E-fields are sinusoidally distributed with maxima at the corners of the resonant patch element. Therefore, the surface currents will be maximum in the centre of the patch element along the direction of field excitation (x-axis). Figure 6 shows the equivalent circuit representation of an LC-based unit cell planar reflector design. Apart from the basic RLC circuit, extra inductance, capacitance, and resistance have to be considered because of the introduction of the liquid crystal under the patch element within the solid substrate cavity.
In order to design a frequency tunable reflectarray unit cell, the properties of K-15 nematic LC have been exploited. For this type of LC, a voltage variation from 0 V to 20 V can be applied to change the orientation of the K-15 nematic LC molecules from perpendicular (εr = 2.7 and tan δ = 0.04) to parallel (εr = 2.9 and tan δ = 0.03). Different rectangular slot embedded unit cell patch elements have been fabricated for X-band frequency range operation, as shown in Figure 7(a). An aluminium encapsulation, shown in Figure 7(b), has been used to keep the different parts of the unit cell intact, and a connecting wire has been used to electrically short the two patches in order to apply the desired voltage. Figure 8 shows the measurement procedure and the LC filling inside the cavity constructed under the resonant patch element. The complete assembly of unit cell patch elements filled with LC has been inserted in the aperture of the waveguide, and scattering parameter measurements have been carried out using a waveguide simulator with a vector network analyzer, while the voltage from 0 V to 20 V has been supplied to the resonant patch elements by a function generator.
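From the permittivity values above, the dielectric anisotropy and a crude upper bound on the resulting frequency shift can be estimated. The sketch below assumes a simple f ∝ 1/√εr scaling for a patch fully loaded by the LC, which deliberately overestimates the shift because the fields only partially fill the LC cavity:

```python
import math

# K-15 nematic LC permittivity at 0 V (perpendicular) and 20 V (parallel),
# as given in the text.
eps_perp, eps_par = 2.7, 2.9
delta_eps = round(eps_par - eps_perp, 2)  # dielectric anisotropy Δε = 0.2

f0_ghz = 10.0  # resonant frequency at 0 V (from the measurements)

# Upper-bound estimate: full LC loading would scale f by sqrt(eps_perp/eps_par).
f_20v_bound_ghz = f0_ghz * math.sqrt(eps_perp / eps_par)
shift_bound_ghz = f0_ghz - f_20v_bound_ghz  # roughly 0.35 GHz
```

The measured shift (10 GHz to 9.88 GHz, i.e. 0.12 GHz) is well inside this bound, consistent with the LC occupying only part of the region where the patch fields are concentrated.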
The scattering parameter measurements of the LC-based rectangular slot embedded patch element unit cells have been carried out, and a comparison between simulated and measured results is presented in Figure 9. The simulated and measured results are in close agreement, with the measured resonant frequency varying from 10 GHz to 9.88 GHz as the voltage increases from 0 V to 20 V. Moreover, a dynamic phase range of 103°, measured from the reflection phase curve, has been demonstrated by the proposed reconfigurable LC-based unit cell design.
In order to further investigate the proposed design, different voltage levels have been applied to the K-15 nematic LC and the effect on reflection loss and resonant frequency has been observed, as shown in Figure 10. It can be observed that each increment in voltage level contributes a small frequency shift, and the tunability reaches 180 MHz at the 20 V level. It can also be observed from Figure 10 that the reflection loss decreases from 8.5 dB at 10 GHz to 6.2 dB at 9.88 GHz as the voltage increases from 0 to 20 V. This decrease in reflection loss is due to the decrease in the loss tangent of the K-15 nematic LC material from 0.04 to 0.03 over the same voltage range.
Comparative Summary
Active planar reflectors using PIN diodes and liquid crystals provided interesting results for the design of switchable planar reflectors for beam shaping in an X-band frequency range. Furthermore, the PIN diode embedded planar reflector unit cell exhibited a maximum measured reflection loss of 3.91 dB, which is much lower than the 8.56 dB reflection loss observed for the liquid crystal-based unit cell design. The tunable loss factor, measured as the difference in reflection loss between the two extreme tunable frequencies, is also higher (1.91 dB) for the liquid crystal-based design than for the PIN diode-based design (1.43 dB). The higher tunable loss factor can be attributed to the properties of the liquid crystals used in the design. Table 2 provides the measured performance comparison for the planar reflector designs using the two technologies.
Additionally, as far as the complexity of design is concerned, a uniform deposition layer is required in the case of LC-based design in order to achieve full anisotropy of LC molecules. Moreover, keeping the liquid crystal fully filled inside the cavity is a challenging task and requires a perfect design of the encapsulator. On the other hand, because of the tiny size of the PIN diodes, it was difficult to handle and solder the diodes on the resonant patch elements. However, professional skills and equipment can help to resolve these problems. A general summary of the outcomes of the two design techniques is provided in Table 3.
Conclusions
Slot embedded patch element configurations have been identified as promising for the improved design of passive and reconfigurable reflectarray antennas.
The slot embedded patch elements also provide an extra parameter, the slot dimensions, for controlling the resonant frequency and reflection phase of the reflectarray antenna. The PIN diode-based design provides a number of advantages over the liquid crystal-based design in terms of frequency tunability, dynamic phase range, static phase range, and tunable loss factor. However, the liquid crystal-based design has an edge over the PIN diode-based active reflectarray design because of its analogue control, which provides tunability over a continuous range of frequencies. It can be concluded from this investigation that there is a trade-off between the performance parameters and the continuous tunability achieved by the LC-based design. Further investigations are required to improve the frequency tunability and reflection loss performance of reconfigurable reflectarray antennas by studying the properties of the materials and the applied electronic components.
Data Availability
Data is available on request. The corresponding author can be contacted for any relevant data.
Conflicts of Interest
The authors declare that they have no conflicts of interest. Table 3: General comparison between the PIN diode and liquid crystal-based planar reflector designs ("+," "0," and "-" symbols refer to good, neutral, and poor, respectively).
|
v3-fos-license
|
2024-06-24T15:10:38.307Z
|
2024-06-01T00:00:00.000
|
270692158
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/264797/20240622-29405-1dlki8p.pdf",
"pdf_hash": "a4021d8344aecaf282277a5df4f13ab2a12b222a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1152",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "a4c8a3f491cb40bf4c84ab60ff9661f85c78b54f",
"year": 2024
}
|
pes2o/s2orc
|
A Case Report and a Literature Review of Traumatic Abdominal Aortic Dissection: An Uncommon Complication Following Vertebral Fractures Due to Blunt Trauma
Aortic dissection is a rare consequence of blunt trauma with potentially fatal consequences requiring prompt identification and management. The most common site for dissection or transection is the thoracic aorta due to anatomical fixation points. Current literature describes four cases of traumatic abdominal aortic dissection with intimal tear associated with vertebral fractures due to falling. We present a 30-year-old gentleman who attended following a fall from a first-floor window. Whole-body computerised tomographic imaging demonstrated superior endplate fractures of L1-L3 vertebral bodies and an acute infra-renal abdominal aortic dissection. He was transferred to the regional tertiary vascular centre and managed conservatively. Clinicians should be conscious of potential aortic dissection in trauma, especially where there is evidence of vertebral fractures. Imaging should be evaluated at the time to specifically exclude such injuries.
Introduction
Acute aortic syndrome (AAS) describes a syndrome of four acute aortic pathologies: penetrating atherosclerotic aortic ulcer, intramural aortic haematoma, classical aortic dissection and, more recently, incomplete aortic dissection, which is characterised by an intimal tear without an intimal flap or haematoma [1]. Classical acute aortic dissection (AAD) is characterised by an intimal flap separating the true and false lumen of the vessel wall. Population studies estimate an annual incidence of AAD of between 2.6 and 7.2 cases per 100,000 people, with a higher incidence in men than in women [2]. Aortic dissection resulting from blunt trauma is a rare occurrence, with 1.79% of patients with blunt trauma experiencing a traumatic aortic dissection (TAD), most commonly in the thoracic aorta [3].
TAD most frequently results from a high energy impact due to rapid deceleration and high energy transfer through tissues including the aorta, such as in high-speed road traffic accidents (RTAs). Up to 80% of patients with TAD die before reaching the hospital, and those who reach the hospital alive have a high mortality rate [4]. The major complications of aortic dissection are extension of the tear, thrombosis of the false lumen of the dissection, periaortic haematoma and aortic rupture, which is most often fatal [1]. When TAD does occur, the tear is most likely to occur at the points of greatest hydraulic stress in the right lateral wall of the ascending aorta or the proximal segment of the descending thoracic aorta; rarely does TAD occur in the distal descending aorta [2,5]. The proposed mechanism for the formation of an intimal tear at the aortic isthmus is the stretching of the isthmus between the moveable ascending aorta and the fixed descending aorta, which is attached to the posterior chest wall by the ligamentum arteriosum [5].
TAD due to a fall from height is rare compared to RTAs, with few case reports describing this potentially life-threatening pathology [5][6][7][8]. Consequently, this is an important pathology for clinicians to be aware of in order to initiate prompt investigation and management and avoid sequelae that can be rapidly fatal. This case report describes TAD due to vertebral fractures following a fall from height.
Case Presentation
We present a 30-year-old man brought in by ambulance to our emergency department (ED) following a fall from a first-floor window of a building (three metres high). He landed on his feet and subsequently hit his head and chest on the ground. He complained of facial and chest injuries and was found at the scene by an ambulance crew walking unaided. This gentleman was previously fit and well, with no past medical history of note. On examination, his airway was patent, and he was talking in full sentences. There was equal bilateral chest rise and air entry. Mild tenderness was noted on the right side of the chest wall. Saturations were 98% on room air. He was well perfused peripherally, with a blood pressure of 123/77 and a pulse of 88 beats per minute. Glasgow Coma Scale (GCS) was 15/15, although he was restless lying in bed. A three-centimetre laceration was noted above the left eye. He was moving his neck freely without pain and moving all four limbs. No obvious chest bruising was seen. His abdomen was soft and non-tender on palpation. There was no long bone tenderness, although he was complaining of pain in both feet.
Full body trauma series computerised tomography (CT) was performed, which included a non-contrast CT head, CT cervical, thoracic, lumbar and sacral spine and CT of the chest, abdomen and pelvis with contrast. X-ray imaging was performed on the left knee, both feet and calcanei. CT imaging of the head, chest, abdomen and pelvis showed no acute abnormalities. CT imaging of the spine showed non-displaced fractures of the superior endplates of the L1, L2 and L3 vertebral bodies with no retropulsion into the spinal canal. Of note, an intimal flap in the infra-renal abdominal aorta was seen, demonstrating an aortic dissection. This is demonstrated in Figures 1-2. There was no evidence of retroperitoneal haematoma and no active contrast extravasation. The patient was given analgesia and referred to a tertiary care centre specialising in vascular surgery, where he received conservative medical management for the traumatic intimal flap in the infra-renal abdominal aorta. Observations were monitored with a target systolic blood pressure of <110 mmHg to reduce the risk of progression of the dissection [7]. The patient was monitored for signs of mal-perfusion with regular checks of peripheral pulses, abdominal examination and blood tests, particularly for lactate and renal function monitoring. Fluid input and output were closely monitored. Calcaneal tuberosity fractures and L1-L3 superior end-plate fractures were managed conservatively.
He was discharged after seven days of in-patient treatment. He was followed up with a repeat CT angiogram of the aorta two weeks after discharge, which showed a stable, unchanged abdominal aortic dissection flap.
Discussion
Isolated TAD of the abdominal aorta is rare, with dissection most commonly occurring in the ascending aorta and proximal descending thoracic aorta [2,5]. TAD is often a consequence of high-impact and rapid deceleration events, such as RTAs, resulting in high energy transfer through the body [2,5]. Rarely does TAD result from falling, a mechanism of far lower velocity impact than RTAs [5][6][7][8].
Current literature describes a few case reports of patients suffering TAD following a fall with co-existing vertebral fractures. Table 1 shows a summary of case reports of blunt TAD with intimal tears associated with vertebral fractures. Our literature search yielded only four cases of TAD with intimal tear associated with vertebral fractures due to falls, with the lowest height of fall being five metres. The majority of cases resulted from RTAs. The mechanism of aortic injury due to trauma involving vertebral column fractures is thought to be a combination of high energy transfer through the bodily tissues and stress on the aorta; aortic dissection can occur through shear, rotation, flexion and flexion-distraction movements of the spine due to trauma [14,16].
Studies have demonstrated associations between thoracic spine fractures and aortic rupture, as well as between displaced fractures involving the thoraco-lumbar junction (T11-L2) and abdominal aortic injury [11,19,20]. AO Spine classification C-type displaced fractures have been found in more than 70% of cases of TAD with vertebral fractures [11]. Our patient had undisplaced L1-L3 superior endplate fractures. We theorise that in our patient the energy causing the vertebral fractures was transferred to the aorta, resulting in an intimal tear. This case builds on the evidence of an association between blunt TAD and vertebral fractures, even in low-impact falls. Low-velocity trauma does not exclude TAD. Additionally, this case demonstrates the rare entity of traumatic abdominal, rather than thoracic, aortic dissection occurring without displacement of the vertebral fractures.
Conclusions
Traumatic abdominal aortic dissection following trauma is a rare entity that clinicians should be aware of, especially where there is evidence of vertebral fractures. TAD can be associated with non-displaced vertebral fractures in low-impact mechanisms of injury, such as a fall from a low height.
FIGURE 1 :
FIGURE 1: Axial section computed tomography (CT) imaging with contrast showing an intimal tear and aortic dissection (A) in the infrarenal abdominal aorta.
FIGURE 2 :
FIGURE 2: Coronal section computed tomography (CT) imaging with contrast showing an intimal tear and aortic dissection (B) in the infrarenal abdominal aorta.
|
v3-fos-license
|
2020-06-11T09:09:36.775Z
|
2020-06-01T00:00:00.000
|
219606557
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2079-9721/8/2/19/pdf",
"pdf_hash": "7fd5a42bf866a2055a8ba3e0c6a5b64f87aeaf72",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1154",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "0db51c845915318c0fdc6e10f6507031c09e42ff",
"year": 2020
}
|
pes2o/s2orc
|
The m.9143T>C Variant: Recurrent Infections and Immunodeficiency as an Extension of the Phenotypic Spectrum in MT-ATP6 Mutations?
Pathogenic variants in the MT-ATP6 gene are a well-known cause of maternally inherited mitochondrial disorders associated with a wide range of clinical phenotypes. Here, we present a 31-year-old female with insulin-dependent diabetes mellitus, recurrent lactic acidosis and ketoacidosis, recurrent infections with suspected immunodeficiency with T cell lymphopenia and hypogammaglobulinemia, as well as proximal tetraparesis with severe muscle and limb pain and rapid physical exhaustion. Muscle biopsy and respiratory chain activities were normal. Single-exome sequencing revealed a variant in the MT-ATP6 gene: m.9143T>C. Analysis of further specimens of the index patient and mother (segregation studies) revealed the highest mutation load in muscle (99% level of mtDNA heteroplasmy) of the index patient. Interestingly, acute metabolic and physical decompensation during recurrent illness was documented to be a common clinical feature in patients with MT-ATP6 variants. However, it was not mentioned as a key symptom. Thus, we suggest that the clinical spectrum might be expanded in ATP6-associated diseases.
Introduction
Pathogenic mitochondrial DNA (mtDNA) variants are associated with a wide range of clinical phenotypes, often involving multiple organ systems. Mitochondria are known as the "powerhouse of the cell", providing cellular energy in the form of adenosine triphosphate (ATP). One indispensable component of this process is complex V (CV, ATP synthase), which is composed of 15 structural and two assembly subunits encoded by both the mitochondrial DNA (mtDNA; ATP6 and ATP8) and the nuclear genome (15 subunits) [1]. The MT-ATP6 gene encodes a subunit of the key enzyme complex V [2].
A recent cohort study reported novel findings associated with MT-ATP6 variants, e.g., overlapping non-syndromic neurological manifestations in some patients and in asymptomatic individuals. Furthermore, patients across all subtypes harbouring an MT-ATP6 variant tend to have recurrent acute metabolic and physical decompensation during illness [3].
Here, we present a novel variant in the MT-ATP6 gene in a 31-year-old female with the clinical leading symptoms of proximal tetraparesis, insulin-dependent diabetes mellitus, recurrent lactic acidosis, and ketoacidosis during recurrent infections. As in most cases with MT-ATP6 variants, muscle biopsy and respiratory chain activities were normal. Single-exome sequencing revealed a homoplasmic variant in the MT-ATP6 gene: m.9143T>C. Segregation studies were performed in different samples of the index patient and her mother, who did not show any evidence of a neuromuscular disorder. Interestingly, the m.9143T>C variant is reported on MITOMAP and GenBank (GQ119047) as a variant [11]. However, no clinical information was given, and the variant is not reported as pathogenic.
The presented case extends the phenotypic spectrum of reported MT-ATP6 mutations, adding recurrent infections and immunodeficiency as possible key symptoms.
Clinical Description
Here, we present a 31-year-old female who developed insulin-dependent diabetes mellitus at the age of 26 years. From then on, she suffered from recurrent lactic acidosis and ketoacidosis and recurrent infections, sometimes requiring ventilation, as well as severe muscle and limb pain and rapid physical exhaustion. Her medical history included bronchial asthma as well as suspected polyglandular autoimmune syndrome with type I diabetes mellitus on insulin pump therapy and autoimmune thyroiditis. She had a pulmonary artery embolism, suspected immunodeficiency with T cell lymphopenia and hypogammaglobulinemia, an intervertebral disc herniation (L5/S1, left) requiring surgery, as well as bilateral cataract operations in her 30th year of life.
Histopathology and Activities of Respiratory Chain Complexes
Cryostat sections were cut from transversely orientated muscle blocks from the vastus lateralis muscle of the patient and subjected to standard histological and histochemical analysis, including COX (cytochrome c oxidase), succinate dehydrogenase (SDH) and combined COX-SDH oxidative enzyme reactions. Respiratory chain complex activities were determined spectrophotometrically according to standard protocols [12].
Molecular Genetic Studies
Exome sequencing and mtDNA analysis using off-target reads were performed as previously described [13]. For this purpose, DNA was extracted from peripheral whole blood. In brief, exome enrichment was done using an Agilent SureSelect Human All Exon Kit V6 (Santa Clara, CA, USA), and libraries were sequenced on an Illumina (San Diego, CA, USA) NovaSeq 6000. The mtDNA reads were aligned to the revised Cambridge Reference Sequence (rCRS) with the Burrows-Wheeler Aligner (BWA) 0.7.5a using the mem algorithm. Variant calling was carried out with GATK 3.8. Variant filtering included a filter for putative biallelic non-synonymous variants (missense, frameshift, nonsense, stop-loss and splice variants) with a minor allele frequency (MAF) < 0.01, as well as heterozygous variants (MAF < 0.001). The latter were ranked based on a phenotype filter. For this purpose, an Online Mendelian Inheritance in Man (OMIM) full-text search was conducted with the search term "mitochondrial", and the respective genes from the results were queried for variants. The phenotype-based search could not identify (likely) pathogenic variants in genes associated with mitochondrial disorders in OMIM.
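A minimal sketch of this MAF-based filtering step is shown below. It is plain Python of ours, not the authors' pipeline: the field names are hypothetical, and "putative biallelic" is simplified here to homozygous calls.

```python
# Consequence classes counted as non-synonymous in the filter described above.
NONSYNONYMOUS = {"missense", "frameshift", "nonsense", "stoploss", "splice"}

def keep_variant(variant):
    """Apply the MAF thresholds from the text to one annotated call.

    variant: dict with 'effect' (consequence class), 'zygosity'
    ('hom' or 'het'), and 'maf' (population minor allele frequency).
    """
    if variant["effect"] not in NONSYNONYMOUS:
        return False
    if variant["zygosity"] == "hom":   # stand-in for "putative biallelic"
        return variant["maf"] < 0.01
    return variant["maf"] < 0.001      # stricter cutoff for heterozygous calls

# Toy annotated calls (made-up data) to exercise the filter:
calls = [
    {"effect": "missense", "zygosity": "het", "maf": 0.0005},    # kept
    {"effect": "nonsense", "zygosity": "hom", "maf": 0.02},      # too common
    {"effect": "frameshift", "zygosity": "hom", "maf": 0.004},   # kept
    {"effect": "synonymous", "zygosity": "het", "maf": 0.0001},  # not non-syn
]
kept = [c for c in calls if keep_variant(c)]
```

In a real pipeline the same predicate would run over annotated VCF records, with the phenotype-based gene ranking applied downstream of this frequency filter.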
Determination of mtDNA Heteroplasmy Levels (RFLP Analysis)
For restriction fragment length polymorphism (RFLP) analysis, total DNA was extracted from different samples using the peqGOLD Tissue DNA Mini Kit (Peqlab Biotechnologie GmbH, Erlangen, Germany) according to the manufacturer's instructions. The presence of the variant m.9143T>C was determined by restriction digestion of the 220 bp PCR-amplified product, obtained using a forward 5′ ACC ATT AAC CTT CCC TCT ACA C 3′ and a reverse 5′ GAG GTC ATT AGG AGG GCT GAG A 3′ primer, with TseI (New England Biolabs GmbH (NEB), Frankfurt am Main, Germany). The digested products were separated on agarose gels. The amplified fragments containing the mutation yielded two fragments of 67 and 153 bp, while the amplified wild-type fragment remained undigested (due to the absence of the restriction site). Triplet analysis of mtDNA heteroplasmy (signal intensity of the bands) was carried out with ImageJ software.
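The heteroplasmy estimate from the gel amounts to the digested (mutant) fraction of the total band intensity. The following is a sketch of that calculation with made-up densitometry values, not the actual ImageJ measurements:

```python
def heteroplasmy_percent(i_67, i_153, i_220):
    """Mutant mtDNA fraction from RFLP band intensities.

    The mutant allele carries the TseI site and is cut into 67 bp and
    153 bp fragments; the wild-type allele stays uncut at 220 bp.
    """
    digested = i_67 + i_153
    return 100.0 * digested / (digested + i_220)

# Sanity check: the two digestion fragments add up to the full amplicon.
assert 67 + 153 == 220

# Illustrative intensities (arbitrary densitometry units), not real gel data:
level = heteroplasmy_percent(i_67=480.0, i_153=510.0, i_220=10.0)
```

Averaging such values over the triplicate lanes, as described above, yields the per-tissue heteroplasmy means reported in Table 1.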
Ethical Statement
The index patient and the mother gave written informed consent for study inclusion prior to analysis. The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of the University Ulm (Project identification code 20/10) and of the Technical University in Munich.
Clinical Findings
Clinical examination revealed mild proximal tetraparesis (upper extremities: MRC 4/5, lower extremities: MRC 3/5) with a positive Gowers' sign and residual sensorimotor L5/S1 radiculopathy of the left leg with mild paresis of foot lifting (MRC 4/5) and lowering (MRC 4/5). No ptosis or ophthalmoplegia was found. The Achilles tendon reflex could not be elicited on the left; the remaining deep tendon reflexes were normal and symmetrical, and pathological reflexes could not be provoked. Muscle MRI (thigh muscles) showed normal findings. CK (creatine kinase) was normal (33 U/l; norm < 145 U/l), even during metabolic decompensation and infections.
Neuropsychological examination revealed substantial cognitive impairment with reduced cognitive processing speed and dysfunction of attention, memory and word-finding. There were indications of relevant depressive symptoms. Comprehension, spontaneous speech, and social behaviour were inconspicuous.
Neurography of the median nerve, visually evoked potentials, and somatosensory evoked potentials of the median and tibial nerves revealed normal findings on both sides. Electromyography detected a slightly increased number of turns at standard amplitude potentials, which is most compatible with a primary myopathic process. Transthoracic echocardiography was inconspicuous.
The mother did not report any symptoms of muscle weakness and did not show any evidence for a neuromuscular disorder during clinical examination nor any recurrent deterioration. Family history was inconspicuous for neuromuscular disorders and metabolic diseases. The patient has one brother who declined neurological examination and genetic testing.
Histopathology
Muscle biopsy of the vastus lateralis muscle showed no myopathic or neurogenic changes. There was no evidence of COX-deficient or ragged-red fibres and only partial evidence of subsarcolemmal accentuation.
Biochemical analysis of the muscle from the index patient showed normal activities of respiratory chain complexes I, II/III, and IV as well as normal citrate synthase activity.
Molecular Genetic Studies and Determination of mtDNA Heteroplasmy Levels
Exome sequencing revealed a variant in the MT-ATP6 gene, m.9143T>C (variant coverage: 22×; average mtDNA coverage: 52×), in a homoplasmic state, which has not been published as pathogenic before. The amino acid position is highly conserved across species, reflected by a PhastCons score of 0.978 and a PhyloP score of 3.94. The variant is predicted to disturb the C-terminal helical transmembrane domain. The highest level of heteroplasmy in the index patient was found in muscle (99%), with lower levels present in urinary epithelial sediment (99%), buccal epithelial cells (96%), hair shafts (96%) and blood (95%), as determined by RFLP (Figure S1). Levels of mtDNA heteroplasmy detected in the patient's mother are shown in Table 1. Table 1: Segregation study to determine the level of mtDNA heteroplasmy (%) of the novel MT-ATP6 variant in the index patient and mother; values given as means (%) of triplet analysis ± SD.
Discussion
A disease-causing variant in the MT-ATP6 gene was first described in 1992 [4]. Since then, several cohort studies have expanded the phenotypic spectrum, which ranges from asymptomatic carriers to fatal early-onset and multisystemic disease [2,3]. Several variants of unknown significance (VUS) have been reported in the MT-ATP6 gene. In the absence of clinically available CV activity testing, it remains challenging to fully assess whether these variants contribute to clinical disease manifestations.
Interestingly, the reported m.9143T>C variant is listed in MITOMAP and GenBank (GQ119047) as a variant found in the Philippine population [11]. However, no clinical information was given, and the variant is not reported as disease-causing.
It is already known that histochemical analysis of muscle biopsies and respiratory chain analysis are usually normal in patients with pathogenic MT-ATP6 mutations, adding to the diagnostic challenges in these patients [1,3]. Indeed, definite pathogenic MT-ATP6 variants have been reported in which standard biochemical findings were subtle or inconsistent [1].
Generally, pathogenicity of mtDNA variants is supported by showing that the variant is present in symptomatic patients in a heteroplasmic rather than homoplasmic state, and that the heteroplasmy level in the affected patient is higher than in asymptomatic relatives [1]. Interestingly, it has already been shown that these approaches may be particularly problematic in the specific case of MT-ATP6 variants [1]. In a recently published cohort study [3], the majority of patients presented with variable non-syndromic features, including ataxia, neuropathy, and learning disability. Maternal inheritance was confirmed in 39 families, and tissue segregation patterns and phenotypic thresholds were shown to be variant dependent [3]. The difference in mutant heteroplasmy levels between tissues (blood, urinary epithelial cells, and buccal mucosal cells) was typically <10% for the majority of MT-ATP6 variants reported in that study [3], which is in accordance with the abovementioned study and our results. In another cohort study, heteroplasmy levels were high in both clinically affected (mean 95%) and unaffected (mean 73%) individuals [2]. These findings are again consistent with our results, where the index patient and the mother showed >90% heteroplasmy in all tissues tested. However, it is already known that the level of heteroplasmy does not clearly correlate with disease severity in several mitochondrial diseases [2,14]. For MT-ATP6 variants, homoplasmic variants have even been found in asymptomatic probands [2]. Furthermore, owing to the rapid shifts that may occur in MT-ATP6 mutation load, pathogenic variants may appear homoplasmic, and carriers may express symptoms that can be rather subtle [1]. As a result, apparently unaffected relatives may have MT-ATP6 variant heteroplasmy levels as high as those seen in clinically affected members of the same family [1].
Moreover, mutation loads of m.8993T>G [15-17], m.8993T>C, and m.9185T>C overlap in some patients with non-syndromic neurological manifestations and in asymptomatic individuals [3]. This is fully concordant with our findings, where the mother and the index patient showed >90% heteroplasmy in all tested specimens.
Acute metabolic and physical decompensation during intercurrent illness was documented in 27 patients harbouring an MT-ATP6 mutation [3], as was also seen in the presented patient. Interestingly, episodic metabolic decompensation was significantly more common in patients with LS than in those without LS (21/23 versus 6/36, p < 0.001) [3]. It is already known that immunity and mitochondria (e.g., mitochondrial metabolism) are interlinked [18,19]; for example, mitochondria can regulate the activation and transcription of immune cells as well as their differentiation and survival. Furthermore, there is emerging evidence that both autophagy and mitophagy play definitive roles in the control of mitochondrial homeostasis and the regulation of innate and inflammatory responses [20]. Fittingly, it was reported that in a cohort of 221 paediatric patients with mitochondrial disease, the global mortality rate was 14%, with sepsis (55%) and pneumonia (29%) being the two most common causes of death [21]. In addition, several animal models support the connection between immunological dysfunction and mitochondrial disease [22,23]. The presented index patient had a history of recurrent lactic acidosis and ketoacidosis, and an immunodeficiency with T cell lymphopenia and hypogammaglobulinemia was suspected. However, it remains unclear whether recurrent illness is part of the MT-ATP6 mutation-associated disease spectrum.
Conclusions
The present study provides important insights into the phenotypic spectrum of disease-causing MT-ATP6 variants. This is especially relevant given that mitochondria are known to play a key role in immunity and in the regulation of innate and inflammatory responses. It can be concluded that recurrent infections and immunodeficiency may be a possible key symptom of MT-ATP6-related variants.
Limitations
Since the proband's asymptomatic mother showed very similar, nearly homoplasmic heteroplasmy values, the pathogenicity of this variant is not easy to ascertain. Further functional studies are needed to assign definitive pathogenicity.
Low-intensity pulsed ultrasound enhances neurite growth in serum-starved human neuroblastoma cells
Introduction: Low-intensity pulsed ultrasound (LIPUS) is a recognized tool for promoting nerve regeneration and repair; however, the intracellular mechanisms of LIPUS stimulation remain underexplored. Method: The present study delves into the effects of varying LIPUS parameters, namely duty cycle, spatial average-temporal average (SATA) intensity, and ultrasound amplitude, on the therapeutic efficacy using SK-N-SH cells cultured in serum-starved conditions. Four distinct LIPUS settings were employed: (A) 50 mW/cm2, 40%; (B) 25 mW/cm2, 20%; (C) 50 mW/cm2, 20%; and (D) 25 mW/cm2, 10%. Results: Immunochemistry analysis exhibited neurite outgrowth promotion in all LIPUS-treated groups except for Group D. Further, LIPUS treatment was found to successfully promote brain-derived neurotrophic factor (BDNF) expression and enhance the phosphorylation of extracellular signal-regulated kinase (ERK)1/2, protein kinase B (Akt), and mammalian target of rapamycin (mTOR) signaling pathways, as evidenced by western blot analysis. Discussion: The study suggests that the parameter combination of LIPUS determines the therapeutic efficacy of LIPUS. Future investigations should aim to optimize these parameters for different cell types and settings and delve deeper into the cellular response mechanism to LIPUS treatment. Such advancements may aid in tailoring LIPUS treatment strategies to specific therapeutic needs.
Keywords: low-intensity pulsed ultrasound, ultrasound parameters, neurite outgrowth, serum-starved cell model, SK-N-SH cells

Highlights
- Our study reveals the capacity of LIPUS to promote nerve regeneration and repair.
- By systematically exploring the effects of varying LIPUS parameters on SK-N-SH neuroblastoma cells, our research uncovers crucial mechanisms that underscore the therapeutic efficacy of LIPUS.
- These findings are profoundly important for neurological treatment, particularly considering the pervasive challenge of nerve regeneration in various neurodegenerative conditions.
Introduction
Neurological disorders are often associated with severe consequences, impacting the affected individuals and exerting a substantial burden on the healthcare system and society (D'Andrea et al., 2003; Winkler et al., 2011; Wen and Huse, 2017; Lizarraga-Valderrama, 2021). Prevalent conditions such as traumatic brain and spinal cord injuries, cerebrovascular incidents, Alzheimer's disease, and peripheral nerve injuries significantly reduce a patient's quality of life (Seddon, 1942, 1943; McKhann et al., 1984; Bramlett and Dietrich, 2007; Bains and Hall, 2012; Smajlović, 2015; Scheltens et al., 2016). Numerous studies have highlighted the importance of promoting nerve regeneration and repair as a solution to recover impaired nerve functionality (Schwob, 2002; Steward et al., 2013). Consequently, considerable interest has converged toward investigating effective therapeutic strategies to enhance neural repair and regeneration.
A range of potential interventions is emerging with advances in neurobiology and related technologies. These encompass, but are not limited to, stem cell therapy (Lavorato et al., 2021), gene therapy (Müller et al., 2006), utilization of biomaterials (Subramanian et al., 2009; Joung et al., 2020), and electrical stimulation (Gordon and English, 2016; Zhang et al., 2021). Supported by experimental evidence, these innovative therapeutic approaches show promise for managing neurological disorders. A further step is understanding the molecular and cellular mechanisms associated with these therapeutic approaches that control nerve repair and regeneration, which is critical to developing treatment protocols (Scheib and Höke, 2013).
Low-intensity pulsed ultrasound (LIPUS) has recently become a safe and effective method in non-invasive physical therapy, advancing significantly in various treatment areas (Khanna et al., 2009; Zhao et al., 2012; Shaheen et al., 2013). It is postulated that the therapeutic efficacy of LIPUS is based on the mechanical and non-thermal influences of ultrasound waves, leading to biologically beneficial effects within the intra- and extracellular environment (Khanna et al., 2009). Evidence shows that LIPUS can help improve bone healing and bone density recovery in cases of fractures. Similarly, LIPUS has demonstrated commendable therapeutic outcomes on soft tissue injuries (Lai et al., 2021), wound healing (Iwanabe et al., 2016), inflammation (Nakao et al., 2014), tooth-root healing (Ang et al., 2010), and others (Zhao et al., 2014; Jiang et al., 2019; Huang et al., 2020).
In addition to the aforementioned applications, accumulated evidence has revealed the vital role of LIPUS in promoting nerve regeneration and repair. For instance, a study conducted by Zhao et al. (2016) demonstrated the efficacy of combining LIPUS and nerve growth factor (NGF) in promoting neurite outgrowth via mechanotransduction-mediated extracellular signal-regulated kinase (ERK)1/2-CREB-Trx-1 signaling pathways. In addition, Han et al. (2020) reported that LIPUS can enhance the regeneration of injured dorsal root ganglion neurons through mTOR upregulation. Interestingly, mTOR has been recognized as a vital regulator in neuronal development and plasticity by participating in multiple signaling pathways, whereby disturbed mTOR signaling correlates with abnormal neuronal function and failure of many cellular processes (Licausi et al., 2010; Archer et al., 2018). Therefore, these findings underscore the considerable promise of LIPUS as a potential therapeutic strategy for neural regeneration and repair.
Despite the numerous advancements made by LIPUS in treating neurological disorders, several key issues must be addressed before clinical translation. One of these challenges is the determination of specific LIPUS parameters, including the ultrasound fundamental frequency (UFF), pulse repetition frequency (PRF), spatial average-temporal average (SATA) intensity, and duty cycle (DC). Optimal parameter settings may differ according to specific neurological conditions. Moreover, the molecular and cellular mechanisms underlying the therapeutic effects of LIPUS in neurological disorders remain inadequately explored, and conclusive assertions have yet to be established (Miller et al., 2012). Therefore, future research endeavors should focus on identifying the optimal ultrasound parameters for various neurological disorders and exploring the therapeutic mechanisms of LIPUS.
To explore the effects of LIPUS on the nervous system, we conducted an in vitro study utilizing SK-N-SH cells in a serum-starved environment as the experimental model. The SK-N-SH cells are derived from a human neuroblastoma cell line. Extensively employed in neurobiological research, they serve as an archetypal model for the nervous system (Green et al., 1996; Wang et al., 2007; Zhou et al., 2021; Journal of Healthcare Engineering, 2023). Our study aimed to evaluate the influence of LIPUS on neural cell growth and its interaction with protein signaling pathways. The serum-deprivation model is frequently employed as an in vitro injury model due to its ability to generate oxidative stress and disrupt protein expression (White et al., 2020). It has become a standard approach in numerous prior studies focusing on therapeutic development and investigating the mechanisms of recovery (Zhao et al., 2016). Building upon existing research, we utilized the serum-starved model to examine the positive influences of LIPUS on neuronal growth and uncover the related biochemical mechanisms. Specifically, we focused on proteins related to the growth and proliferation of neurons, encompassing the mammalian target of rapamycin (mTOR), ERK1/2, protein kinase B (also known as Akt), and brain-derived neurotrophic factor (BDNF). mTOR is a serine/threonine protein kinase, functioning as a core regulator of cell growth, metabolism, and protein synthesis, and playing an essential role in neural development, synaptic plasticity, and memory formation (Saxton and Sabatini, 2017). ERK, a subset of mitogen-activated protein kinases (MAPKs), governs a range of cellular processes such as cell survival, proliferation, and differentiation, and is integral to neuronal plasticity and long-term memory formation (Roskoski, 2012). Akt is a serine/threonine kinase implicated in regulating cell survival, growth, and metabolism, and its dysregulation is linked to various neurological disorders (Manning et al., 1995).

Figure 1. The customized miniature LIPUS driver system. (A) The circuit block diagram of the customized miniaturized LIPUS driver system and (B) the prototype.
BDNF, a neurotrophin, promotes the survival of existing neurons and stimulates the growth, differentiation, and synaptic plasticity of new neurons (Bramham and Messaoudi, 2005). In addition to investigating these signaling pathways, we also examined different ultrasound parameters to identify optimal conditions for LIPUS treatment. Overall, this research sought to explore the therapeutic mechanisms of LIPUS and its effective parameters.
Customized miniaturized LIPUS driver system
In this study, we engineered a customized miniaturized LIPUS driver system. Figure 1A depicts the overall circuit block diagram, while Figure 1B displays the prototype. The solution for the Bluetooth communication and microcontroller unit (MCU) control module was implemented using the ESP32-PICO-KIT V4 development board [Espressif Systems (Shanghai) Co., Ltd., China]. The MCU output signal controls the output amplitude of the buck-boost converter (5-40 V), UFF, PRF, and DC. The buck-boost converter is based on the synchronous 4-switch buck-boost DC/DC controller IC chip LT8390A. The system employs a half-bridge driver to drive the transducers, composed of a half-bridge gate driver IC chip DGD05463FN-7 and a dual N-channel MOSFET IC chip NTTFD4D0N04HLTWG. This device facilitates precise adjustment of ultrasound amplitude, UFF, PRF, and DC, thus allowing for quick and simple configuration and delivery of a wide array of LIPUS parameters.
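How UFF, PRF, and DC combine into concrete pulse timing can be sketched with a few lines of arithmetic. The helper below is illustrative and not taken from the driver firmware described above; only the parameter values themselves come from the study.

```python
# Sketch of how the pulse-timing quantities follow from the LIPUS
# parameters (UFF, PRF, DC); the function and units are illustrative.

def pulse_timing(uff_hz: float, prf_hz: float, duty_cycle: float):
    """Return (pulse period, on-time, carrier cycles per burst)."""
    period_s = 1.0 / prf_hz                # one pulse-repetition period
    on_time_s = period_s * duty_cycle      # transducer is driven this long
    cycles_per_burst = on_time_s * uff_hz  # full carrier cycles per burst
    return period_s, on_time_s, cycles_per_burst

# Example with group C's settings: 1.5 MHz carrier, 1 kHz PRF, 20% DC
period, on_time, cycles = pulse_timing(1.5e6, 1e3, 0.20)
```

With these settings each 1 ms pulse-repetition period contains a 200 µs burst of 300 carrier cycles.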
LIPUS exposure system
The well-on-transducer method was employed for the in vitro ultrasound therapy, where each well was coupled with a planar transducer via a gel medium. This setup is prevalent in cell and tissue sample ultrasound studies due to its simplicity (Hensel et al., 2011). The coupling layer, composed of gel or water, facilitates acoustic matching, ensuring optimal energy transfer from the transducer to the sample. The customized ultrasound transducers for this experiment have a single resonant frequency of 1.5 MHz and a 25 mm diameter. Before each use, the SATA intensity of the ultrasound transducer was quantified utilizing the ultrasound power meter UPM-DT-1000PA (OHMIC Instruments, MO, USA). The ultrasound coupling agent and the base of the cell culture well were attached to the ultrasound transducer so that the penetrating loss of ultrasound energy would be included in the measurement.
A 12-well cell culture plate, arranged in a 3 × 4 matrix, was used for the cell culture. Each well had a diameter of 22 mm, smaller than the ultrasound transducer, which exposed all cells within the well to the LIPUS. The cells were seeded in the wells located at the four corners. This enabled simultaneous LIPUS treatment across four wells, enhancing experimental efficiency. The unoccupied central wells minimized ultrasound crosstalk between neighboring wells. To assure efficient ultrasound transmission, an ultrasound gel (Wavelength® MP Blue Multi-Purpose Ultrasound Gel, ON, Canada) was used as the coupling medium between the ultrasound transducer and the cell culture plate, as depicted in Figure 2A.
To measure the ultrasonic field intensity distribution at the well bottom, a customized 3D-printed jig was developed for accurate hydrophone (HNR-1000, Onda Corporation, CA, USA) placement. The sampling procedure commenced from the center of the well, systematically moving outward at 30° intervals and sampling every 1 mm in the radial direction. Each sampled point represented the local pressure distribution, which was subsequently converted into power intensity using the equation I = p0^2/(2z), where I, p0, and z stand for the power intensity, ultrasonic pressure, and acoustic impedance, respectively. During testing, the ultrasound system was adjusted to output a constant sound field SATA intensity of 125 mW/cm2 at a duty cycle of 100%, as measured by the ultrasound power meter UPM-DT-1000PA. This resulted in a SATA intensity of 25 mW/cm2 when the duty cycle was adjusted to 20%. The outcome of our measurements is presented in Figure 2B, where the red dot signifies the location of the test point. Although some variations in the sound field distribution were observed, leading to higher (200 mW/cm2) and lower (50 mW/cm2) intensity areas, the majority of the tested regions remained between 100 and 150 mW/cm2.
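Two pieces of arithmetic behind this field map are easy to sanity-check: the plane-wave conversion I = p0^2/(2z), and the linear scaling of SATA intensity with duty cycle at fixed amplitude (125 mW/cm2 at 100% DC giving 25 mW/cm2 at 20% DC, as stated above). In the sketch below, the acoustic impedance value for water and the example pressure reading are assumptions, not values from the study.

```python
# I = p0^2 / (2 z) converts a hydrophone pressure reading to intensity;
# SATA intensity then scales linearly with duty cycle at fixed amplitude.

Z_WATER = 1.48e6  # nominal acoustic impedance of water, Pa*s/m (assumed)

def intensity_mw_per_cm2(p0_pa, z=Z_WATER):
    """Plane-wave intensity (mW/cm^2) from peak pressure p0 (Pa)."""
    i_w_per_m2 = p0_pa ** 2 / (2 * z)  # W/m^2
    return i_w_per_m2 * 0.1            # 1 W/m^2 = 0.1 mW/cm^2

def sata(continuous_intensity, duty_cycle):
    """Temporal averaging: continuous intensity scaled by duty cycle."""
    return continuous_intensity * duty_cycle
```

Under these assumptions, a reading of roughly 61 kPa in water corresponds to about 125 mW/cm2, and a 20% duty cycle brings that down to 25 mW/cm2, matching the calibration described in the text.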
Cell culture and LIPUS treatment
The overall LIPUS stimulation protocol is shown in Figure 3. The SK-N-SH cell line was kindly provided by Dr. Tom Hobman, Department of Cell Biology, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada. SK-N-SH cells were cultured in Dulbecco's Modified Eagle Medium (DMEM; 319-005-CL, WISENT INC.) with 10% fetal bovine serum (FBS; 090150, WISENT INC.) within a humidified 37 °C incubator. This culture medium is referred to as complete media. For seeding, the SK-N-SH cell suspension was adjusted to a concentration of 10,000 cells/mL, and each well was loaded with 1 mL of the suspension, resulting in an initial seeding density of 10,000 cells per well. Approximately 6 h later, following adherence of the majority of cells to the well surface, the culture medium was aspirated and the wells were washed 3 times with DMEM without FBS. The culture medium was then replaced with a low-serum media, composed of DMEM supplemented with 1% FBS, creating a serum-starved cell model.
Two control groups were included: (1) SK-N-SH cells cultured in complete media (10% FBS) without LIPUS stimulation were used as a healthy control group, and (2) SK-N-SH cells grown in low-serum media without LIPUS stimulation were used as the control for serum starvation. Four LIPUS treatment groups grown in low-serum media (1% FBS) were tested. To investigate the effect of various SATA intensities (mW/cm2) and DC (%) on cellular responses, four distinct LIPUS treatment parameter configurations were tested: (A) 50 mW/cm2, 40%; (B) 25 mW/cm2, 20%; (C) 50 mW/cm2, 20%; and (D) 25 mW/cm2, 10%. Groups A and C shared the same ultrasound intensity but varied in duty cycles. This pattern was mirrored in groups B and D. Notably, the ultrasound amplitude in groups B and D was approximately 0.707 times that of groups A and C. The UFF was set at 1.5 MHz and the PRF at 1 kHz in all four groups and remained constant. LIPUS treatment began 18 h post-transition to the low-serum media (on day 2) and lasted 10 min. To ensure a uniform ultrasound distribution, the transducer was rotated 180° after the initial 5-min treatment period, and treatment then continued for another 5 min. The 10-min LIPUS treatment was repeated every 24 h. Ten minutes after the fourth LIPUS treatment (on day 5), the supernatants were collected for cell viability assessments, and the cells were either collected for protein extraction or imaged using immunocytochemistry (ICC).
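The 0.707 amplitude factor mentioned above follows from intensity scaling with the square of pressure amplitude (I proportional to p^2): halving the SATA intensity from 50 to 25 mW/cm2 implies an amplitude factor of sqrt(1/2). A minimal check, assuming that square-law relation:

```python
import math

# Intensity scales with the square of pressure amplitude (I ~ p^2),
# so halving the SATA intensity implies an amplitude factor of sqrt(1/2).

def amplitude_ratio(i_low, i_high):
    """Pressure-amplitude ratio implied by two intensities."""
    return math.sqrt(i_low / i_high)

ratio = amplitude_ratio(25, 50)  # groups B/D relative to A/C
```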
Cell cytotoxicity quantification
The LDH-Cytotoxicity Assay Kit II (ab65393) was utilized to quantify lactate dehydrogenase (LDH) release as an indicator of cytotoxicity. Cell supernatants were collected and cleared of debris via centrifugation (600×g). Following the manufacturer's instructions, a water-soluble tetrazolium (WST) substrate mix was added to the cleared supernatant and mixed thoroughly in a 96-well plate. Following a 30-min incubation at room temperature, the absorbance was measured at 450 nm, with a reference wavelength of 650 nm, using a colorimetric microplate reader (FLUOstar Omega Microplate Reader, BMG). A low LDH control, composed of cell-free media, and a high LDH control, composed of the supernatant of SK-N-SH cells cultured in complete media for 5 days, were used to calculate cell cytotoxicity. Cytotoxicity (%) was calculated using the following formula: Cytotoxicity = (Test Sample - Low Control)/(High Control - Low Control) × 100%. Results were normalized to the cytotoxicity of the healthy control group.
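The cytotoxicity formula and the normalization step can be expressed in a few lines; the absorbance values below are illustrative, not measurements from the study.

```python
# Cytotoxicity = (test - low) / (high - low) * 100%, then normalised to
# the healthy control, per the assay description. Values are illustrative.

def cytotoxicity_pct(test, low_ctrl, high_ctrl):
    """Percent cytotoxicity relative to the low/high LDH controls."""
    return (test - low_ctrl) / (high_ctrl - low_ctrl) * 100.0

low, high = 0.08, 1.28                       # cell-free and max-LDH controls
healthy = cytotoxicity_pct(0.68, low, high)  # hypothetical healthy well
sample = cytotoxicity_pct(0.50, low, high)   # hypothetical starved well

normalised = sample / healthy                # ratio relative to healthy
```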
Cell visualization
ICC was employed to visualize cellular structures to observe cell morphology. For ICC experiments, glass coverslips were washed, sterilized, and placed in the 12-well cell culture plate before SK-N-SH seeding. Cells were treated as outlined above. On the fifth day of the experiment (after the fourth LIPUS treatment), cells were rinsed once with phosphate-buffered saline (PBS; 311-010-CL, WISENT INC.) and then fixed in a solution of 4% paraformaldehyde (PFA; 441244, Sigma-Aldrich) diluted in 1× PBS. This fixation process was carried out at room temperature for 10 min. Subsequently, cells were rinsed twice with 1× PBS. Cell membranes were permeabilized using 0.2% Triton X-100 (A16046, Thermo Fisher Scientific) diluted in 1× PBS for 5 min at room temperature, followed by three washes with 1× PBS. The coverslips were then transferred onto a strip of parafilm in a humidifying chamber and incubated with blocking buffer (0.5% bovine serum albumin (BSA; A2134, Biomatik) and 6% normal goat serum (ab7481, Abcam) diluted in 1× PBS) for 1 h at room temperature. Following this, primary antibodies (Tubulin, 1:500, MAB1637, Sigma-Aldrich) diluted in a 1:1 mixture of blocking buffer and 1× PBS were added to the samples and incubated either for 2 h at room temperature or overnight at 4 °C. The coverslips were then washed three times for 5 min in 1× PBS and incubated with the secondary antibody solution [goat anti-mouse (1:1000, Alexa Fluor 594, A-11032, Invitrogen)] diluted in 1× PBS for 1 h at room temperature. Subsequently, the secondary antibody solution was aspirated, and the coverslips were rinsed three times with 1× PBS. The coverslips were then mounted using ProLong Gold antifade reagent with DAPI (P36935; Invitrogen) and were allowed to dry overnight at room temperature, protected from light. Finally, the ICC imaging was conducted using the EVOS M5000 Imaging system (Thermo Fisher Scientific Inc., MA, USA).

Figure 3. LIPUS stimulation protocol. Serum-starved SK-N-SH cells were treated with 10 min of LIPUS stimulation daily for four consecutive days (UFF: 1.5 MHz; PRF: 1 kHz; SATA intensity: either 25 or 50 mW/cm2; DC: either 10, 20, or 40%).
Statistical analysis
Statistical analysis was conducted on data for LDH quantification results, neurite length, soma size, and western blot quantification outcomes using GraphPad Prism software (GraphPad Software, MA, USA, version 9.4). The comparison among the healthy control, serum-starved control, and LIPUS-treatment groups was based on a one-way analysis of variance (ANOVA) followed by a post hoc multiple comparison test. The observed power for each analysis was calculated to ascertain that the sample size was sufficient to substantiate the findings. Statistical significance was designated at a p-value threshold of less than 0.05.

Figure 4. Normalized percent cytotoxicity from SK-N-SH cells treated with LIPUS. LDH levels from the supernatant of untreated (healthy control and serum-starved control) and LIPUS-treated SK-N-SH cells were quantified using a commercially available LDH assay. All results were normalized to the healthy control group. N = 5/group. ns: no statistical significance; ****p < 0.0001.
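The core of the one-way ANOVA used here, the F statistic, can be computed by hand; the sketch below uses toy data purely for illustration (the study used GraphPad Prism, not this code).

```python
# One-way ANOVA F statistic in plain Python: between-group variability
# (group means vs the grand mean) relative to within-group variability.

def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of sample lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group means vs the grand mean
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares: observations vs their own group mean
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Toy data: three groups of three observations each
f, dfb, dfw = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```

A large F (here, well above 1) indicates that group means differ more than within-group scatter would explain; the p-value and the post hoc comparisons would then follow from the F distribution.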
Cytotoxicity results
To determine whether the LIPUS treatment altered cell viability, LDH levels from all treatment groups were evaluated as a measure of cytotoxicity. Dying cells release LDH, which can be measured using a commercially available kit. Analysis of the cytotoxicity assay revealed that LDH levels in all serum-starved groups (serum-starved control and LIPUS treatment groups A, B, C, and D) were significantly lower than the healthy (complete media) control, with a value of roughly 0.7 (p < 0.0001), as presented in Figure 4. However, no statistical significance was observed among the serum-starved control group and the LIPUS-treatment groups A, B, C, and D, suggesting that the LIPUS treatments did not alter cell survival in serum-deprived conditions.
ICC results
An in-depth analysis of neurite lengths in the serum-starved control and LIPUS-treatment groups was performed to understand how different LIPUS settings can influence nerve cell growth. Representative ICC images of each group are shown in Figure 5. The lengths of the neurites were measured using NeuronJ, an ImageJ plugin tailored for neurite tracing and analysis. In the analysis, we exclusively focused on the longest neurite of each cell, typically considered the axon. The length of the axon is crucial for nerve signal transmission and the overall function of the nerve cell (Debanne et al., 2011). Additionally, we measured the width of the cell body at its widest point. Any neurite lengths less than the diameter of the cell body were excluded, as they were considered inadequately developed neurites. As the healthy control group exhibited a high cell density with most of the neurites densely interwoven, neurite length was not quantified for this group. To facilitate statistical analysis, neurite length was quantified from the edge of the cell nucleus (DAPI) to the distal end of the cell skeleton (Tubulin) using NeuronJ. The measurements were conducted following the guidelines provided by NeuronJ, and default parameters were employed for the neurite length measurements.
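The selection rule described above (keep only each cell's longest neurite, and discard it if it does not exceed the soma diameter) can be sketched as a small filter; the measurements below are illustrative, not taken from the study's images.

```python
# Per-cell neurite selection: keep the longest neurite (treated as the
# axon) only if it exceeds the soma diameter. Values (um) are illustrative.

def usable_neurite_lengths(cells):
    """cells: list of (soma_diameter_um, [neurite_lengths_um]) tuples."""
    kept = []
    for soma, neurites in cells:
        if not neurites:
            continue                 # cell with no traced neurites
        longest = max(neurites)      # longest neurite, treated as the axon
        if longest > soma:           # discard underdeveloped neurites
            kept.append(longest)
    return kept

cells = [(20.0, [55.0, 12.0]), (22.0, [18.0]), (19.0, [130.0, 40.0])]
lengths = usable_neurite_lengths(cells)
```

Here the middle cell is dropped because its longest neurite (18 µm) is shorter than its soma diameter (22 µm), leaving two usable axon lengths.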
No significant differences were observed among the cell body/soma diameters between SK-N-SH cells in the control or LIPUS treatment groups under low-serum conditions, as presented in Figure 6A. However, LIPUS-treatment groups A, B, and C exhibited a significant enhancement in neurite growth compared to the serum-starved control group (p < 0.005), whereby the mean ± standard error length for each group was as follows: serum-starved control group, 117.1 ± 57.0 µm; Group A, 138.9 ± 63.5 µm; Group B, 137.7 ± 64.1 µm; and Group C, 138.7 ± 70.5 µm, as shown in Figure 6B. The LIPUS-treatment group D (25 mW/cm2, 10%) did not show a significant difference in neurite length (mean ± standard error: 118.6 ± 60.8 µm) compared to the serum-starved control group.
Western blot results
Western blot analyses were conducted to evaluate BDNF levels and the activation of the BDNF signaling pathway, namely by assessing the phosphorylation status of ERK, Akt, and mTOR, in response to the LIPUS treatments. Serum starvation significantly decreased BDNF levels compared to cells grown in complete media. However, this decrease was ameliorated by three of the four LIPUS treatment parameters (p < 0.01, Figure 7). SK-N-SH cells treated with a SATA of 50 mW/cm2 (Groups A and C) or with a SATA of 25 mW/cm2 and the higher duty cycle of 20% (Group B) exhibited increased BDNF levels. Notably, while the Group D treatment parameters (SATA 25 mW/cm2, 10% DC) significantly upregulated BDNF levels, these parameters failed to elicit significant elevation in downstream signaling events. Increased BDNF levels were associated with an increase in the phosphorylation levels of Akt, ERK1/2, and mTOR for LIPUS-treatment groups A, B, and C, though not group D (Figure 6), suggesting that these pathways were activated in the SK-N-SH cells upon treatment.
Discussion
Previous research has demonstrated that LIPUS can stimulate nerve regeneration and neurite outgrowth through the ERK1/2 and mTOR signaling pathways (Miller et al., 2012; Lv et al., 2015; Jiang et al., 2016; Sato et al., 2016; Zhao et al., 2016, 2017; Han et al., 2020). This study sought to determine the optimal intensity and duty cycle of ultrasound stimulation on serum-deprived SK-N-SH cells and to explore the underlying molecular mechanisms of LIPUS stimulation.

Figure 7. Western blot analysis of BDNF growth factor signaling pathway activation upon LIPUS stimulation.
In cell culture, serum is an important source of nutrients and growth hormones. Decreasing the serum level in media can induce a starvation state, which can alter cell signaling and growth. In this study, SK-N-SH cells were cultured in serum-starved conditions for 4 days, leading to limited nutrient availability. This resulted in a reduced metabolic state and slower cell division due to nutrient deprivation. Compared to cells in complete media (healthy control), those in serum-starvation conditions exhibited slower growth, as evident from the ICC analysis and cytotoxicity levels. In addition, lower cell growth rates were observed alongside reduced BDNF levels and decreased activation of the Akt, ERK1/2, and mTOR pathways compared to healthy controls. These findings confirm the successful establishment of the experimental starvation model. Interestingly, LIPUS stimulation under the tested conditions had no significant impact on cell proliferation or cytotoxicity when compared to the serum-starved control group. Furthermore, LIPUS treatment did not affect the soma size of SK-N-SH cells. Altogether, these results suggest that the LIPUS treatment does not impact neuronal proliferation, either negatively or positively.
While the LIPUS treatment appeared to have no impact on cell proliferation, enhanced neurite growth was found in three out of the four tested LIPUS treatments. In addition, the three LIPUS conditions (A, B, C) leading to increased neurite length were associated with higher BDNF expression and increased phosphorylation of ERK1/2, Akt, and mTOR compared to the serum-starved control. Previous studies have highlighted the pivotal roles of the ERK1/2 and Akt signaling pathways in mediating the effects of growth factors such as nerve growth factor and BDNF on neuronal growth under LIPUS treatment (Zhao et al., 2016; Guo et al., 2021). Additionally, evidence supports the role of LIPUS in promoting the regeneration of injured dorsal root ganglion neurons through activation of the mTOR pathway (Han et al., 2020). This study replicates these findings, demonstrating the upregulation of these proliferation-related proteins in nerve cells stimulated by LIPUS, and proposes a comprehensive mechanistic pathway, as depicted in Figure 8. The discrepancy in p-Akt levels among conditions A, B, and C can be attributed to variable responses induced by different LIPUS parameters. While condition B showed lower p-Akt levels than A and C, it still elevated p-Akt relative to the serum-starved control. Moreover, the consistent expression of p-mTOR across conditions suggests that ERK1/2 may play a predominant role in mTOR regulation during LIPUS therapy.
Figure 8 caption: Hypothesized mechanism by which LIPUS stimulates neurite outgrowth. LIPUS stimulation increases the release of the growth factor BDNF, activating the mTOR pathway downstream of Akt and ERK1/2, leading to neurite outgrowth.
This study investigated the effects of four distinct ultrasound parameter settings on SK-N-SH cells cultured in a low-serum environment. It is widely believed that the therapeutic effects of LIPUS arise mainly from non-thermal effects, including cavitation and mechanical effects (Snehota et al., 2020). Our experimental results corroborated this perspective, underscoring that the therapeutic efficacy of LIPUS is not primarily attributable to its thermal effects. Within the LIPUS treatment groups, groups B and D shared identical SATA intensities, indicating a similar thermal influence. Nonetheless, while group B exhibited enhanced neurite growth associated with increased BDNF signaling, group D displayed no significant deviation from the serum-starved control group. Notably, although group B had only half the thermal effect of groups A and C, it showed a similar promoting effect on neurite outgrowth. Overall, these findings suggest that the non-thermal effects of LIPUS, rather than the thermal effects, play the more significant role in promoting neurite growth and altering protein signaling pathway expression. Moreover, from a safety perspective, group B, possessing only half the SATA power of groups A and C, consistently demonstrated a comparable enhancement in neurite growth. This observation underscores the potential of parameter set B (25 mW/cm², 20%) as a safer choice for both future research and clinical applications.
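The intensity bookkeeping behind these comparisons can be made explicit. As a sketch (our arithmetic, not a measurement from the study): if the spatial-average temporal-average intensity (SATA) equals the pulse intensity scaled by the duty cycle, and the thermal load is assumed to scale with the time-averaged intensity, the four groups compare as follows:

```python
# Intensity arithmetic for the four LIPUS groups (illustrative sketch).
# Assumption: SATA = pulse intensity x duty cycle, and thermal load scales
# with time-averaged intensity (SATA); neither was measured directly here.

groups = {
    # name: (SATA in mW/cm^2, duty cycle in %)
    "A": (50, 40),
    "B": (25, 20),
    "C": (50, 20),
    "D": (25, 10),
}

# Pulse (on-time) intensity implied by each SATA / duty-cycle pair.
pulse_intensity = {g: sata * 100 / dc for g, (sata, dc) in groups.items()}

# Relative thermal load, normalised to group A (proportional to SATA).
thermal_rel = {g: sata / groups["A"][0] for g, (sata, _) in groups.items()}

print(pulse_intensity)  # A and B share 125 mW/cm^2; C and D share 250 mW/cm^2
print(thermal_rel)      # B and D carry half the time-averaged power of A and C
```

Under this assumption, B and D deliver half the time-averaged power of A and C, which is consistent with the "half the thermal effect" comparison made above.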
Future studies are required to systematically investigate the effects of different LIPUS parameters, encompassing duty cycle, SATA intensity, and ultrasound amplitude, on therapeutic efficacy. The comparison between group D and the other groups reveals that slight adjustments in these parameters can significantly alter the therapeutic effect of LIPUS. Understanding the contribution of each parameter to the overall treatment effect is crucial for developing more effective LIPUS treatment strategies. In addition, in-depth studies of the molecular and cellular processes that control the observed therapeutic effects are needed. Expanding knowledge of the underlying mechanisms and interactions among LIPUS parameters would enable researchers and clinicians to tailor LIPUS therapies to specific therapeutic needs.
Conclusion
This study investigated the effects of various LIPUS parameters on SK-N-SH cells cultured in serum-starved conditions. Four parameter settings were studied, altering either the SATA (mW/cm²) or the duty cycle (%): A (50 mW/cm², 40%), B (25 mW/cm², 20%), C (50 mW/cm², 20%), and D (25 mW/cm², 10%). ICC results revealed that parameter groups A, B, and C stimulated neurite outgrowth, associated with increased BDNF expression and enhanced phosphorylation of the ERK1/2, Akt, and mTOR signaling pathways. The investigation also revealed that the combination of SATA intensity, duty cycle, and ultrasound amplitude critically determined the therapeutic efficacy of LIPUS, which appears unrelated to any thermal effects of ultrasound. Future research is required to optimize these parameters for various cell types and experimental settings, and to explore the in-depth mechanisms of the cellular response to LIPUS treatment. These advancements will help researchers and clinicians tailor LIPUS treatment strategies to specific treatment needs.
FIGURE 1
FIGURE 2 (A) Sketch of the LIPUS exposure setup. A 12-well culture plate (well diameter of 22 mm) was placed on a 3D-printed base. Four 25 mm transducers placed in the four corners of the plate contacted the wells through ultrasound (US) gel medium. The transducers were connected to the customized miniaturized LIPUS driving device. (B) The ultrasonic sound field distribution measured from the bottom of the well (SATA intensity 125 mW/cm² with a duty cycle of 100%).
Thymoquinone Prevents Dopaminergic Neurodegeneration by Attenuating Oxidative Stress Via the Nrf2/ARE Pathway
Studies have indicated that oxidative stress plays a crucial role in the development of Parkinson’s disease (PD) and other neurodegenerative conditions. Research has also revealed that nuclear factor erythroid 2-related factor 2 (Nrf2) triggers the expression of antioxidant genes via a series of antioxidant response elements (AREs), thus preventing oxidative stress. Thymoquinone (TQ) is the bioactive component of Nigella sativa, a medicinal plant that exhibits antioxidant and neuroprotective effects. In the present study we examined whether TQ alleviates in vivo and in vitro neurodegeneration induced by 1-methyl-4-phenylpyridinium (MPP+) and 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) by acting as an activator of the Nrf2/ARE cascade. We showed that TQ significantly reduced MPP+-mediated cell death and apoptosis. Moreover, TQ significantly elevated the nuclear translocation of Nrf2 and significantly increased the subsequent expression of antioxidative genes such as Heme oxygenase 1 (HO-1), quinone oxidoreductase (NQO1) and Glutathione-S-Transferase (GST). The application of siRNA to silence Nrf2 led to an abolishment in the protective effects of TQ. We also found that the intraperitoneal injection of TQ into a rodent model of PD ameliorated oxidative stress and effectively mitigated nigrostriatal dopaminergic degeneration by activating the Nrf2-ARE pathway. However, these effects were inhibited by the injection of a lentivirus wrapped Nrf2 siRNA (siNrf2). Collectively, these findings suggest that TQ alleviates progressive dopaminergic neuropathology by activating the Nrf2/ARE signaling cascade and by attenuating oxidative stress, thus demonstrating that TQ is a potential novel drug candidate for the treatment of PD.
INTRODUCTION
Parkinson's disease (PD) is an irreversible, age-linked chronic neurodegenerative condition that is typified by the depletion of nigrostriatal dopaminergic neurons. The resulting depletion of dopamine causes resting tremor, postural instability, rigidity, and bradykinesia. Although the precise cause of PD has yet to be elucidated, an accumulating body of evidence suggests that oxidative stress has a significant influence on the pathogenesis of PD (Grayson, 2016). Multiple biomechanisms have been proposed to affect the mitochondria of dopaminergic neurons, resulting in increased production of reactive oxygen species (ROS) (Schapira and Jenner, 2011). ROS can cause covalent oxidative modifications, such as the oxidation of RNA, and can induce mutations in mitochondrial DNA (mtDNA), thus affecting the stability of nucleic acids (Angelova and Abramov, 2018). Moreover, oxidative modifications are known to interfere with protein homeostasis by expediting the aggregation of α-synuclein and parkin, and by dissociating the proteasome (Scudamore and Ciossek, 2018). These modifications may cause cellular dysfunction and even apoptosis. The mitochondrial-dependent caspase pathway is known to play an essential role in apoptosis (Van Opdenbosch and Lamkanfi, 2019). Research has shown that stimulation of this cascade induces the release of proapoptotic factors, including cytochrome c (Cyc), into the cytosol, activating caspase-9 and caspase-3 and thus triggering cellular apoptosis (Green and Llambi, 2015). Therefore, antioxidant pathways that regulate mechanisms to ameliorate oxidative damage may also exhibit neuroprotective effects (Izumi et al., 2018).
The most significant of the endogenous antioxidant cascades is the Nrf2/ARE signaling cascade (Buendia et al., 2016). When exposed to ROS or other exogenous toxicants, cytoplasmic nuclear factor erythroid 2-related factor 2 (Nrf2) becomes activated, disengages from Kelch-like ECH-associated protein 1 (Keap1), and translocates into the nucleus. Within the nucleus, activated Nrf2 interacts with antioxidant response elements (AREs), which are distinct regulators of antioxidant molecules (Zhang et al., 2016). Subsequently, ARE-mediated processes induce the activation of a range of antioxidative and detoxifying enzymes, including heme oxygenase 1 (HO-1), NAD(P)H quinone oxidoreductase 1 (NQO1), and glutathione-S-transferase (GST). Collectively, these factors play critical roles in sustaining cellular function (Buendia et al., 2016). An accumulating body of evidence now suggests that the Nrf2-ARE cascade plays a significant role in the development of PD (Fão et al., 2019; Gureev and Popov, 2019). For example, clinical studies have reported that patients with PD showed reduced expression levels of 31 genes that contained the ARE sequence in their promoters; these patients also expressed increased levels of Nrf2 (Wang et al., 2017). Moreover, research has shown that the Nrf2-ARE axis forms a crucial antioxidant defense pathway that demonstrated neuroprotective effects in an experimental model of PD by inhibiting oxidative stress and neuroinflammation (Wang et al., 2017). Collectively, these findings indicate that drugs that activate Nrf2 could impede the pathogenesis of PD.
Thymoquinone (TQ) is the primary active ingredient of the volatile oil of Nigella sativa and is known to exhibit antioxidant and anti-inflammatory properties (Darakhshan et al., 2015). A number of studies have demonstrated that TQ can protect primary dopaminergic cells against apoptosis triggered by exposure to 1-methyl-4-phenylpyridinium (MPP + ) and rotenone (Radad et al., 2009; Radad et al., 2015). Another study reported that TQ specifically averted rotenone-triggered motor defects and variations in the levels of TH, dynamin-related protein-1 (Drp1), dopamine, and Parkin (Ebrahimi et al., 2017). In the hemi-parkinsonian rat model induced by exposure to 6-hydroxydopamine (6-OHDA), TQ has been reported to reduce the levels of malondialdehyde (MDA) and prevent the degeneration of dopaminergic neurons, suggesting that TQ has an antioxidant effect (Sedaghat et al., 2014). Evidence therefore indicates that oxidative stress, as well as dysregulation of the Nrf2-ARE signaling cascades, plays a key role in the development of PD (Gureev and Popov, 2019). However, a precise mechanism linking TQ, oxidative stress, and the Nrf2-ARE signaling cascades has yet to be identified in PD. Further research should be conducted to establish whether TQ represents a promising therapeutic drug with which to modulate the Nrf2-ARE signaling pathways and confer strong neuroprotective effects against the development of PD.
In the present study, we established an MPP + -induced cytotoxicity model and a mouse model of 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-induced PD to investigate the biomechanisms that may underlie the neuroprotective effects of TQ. Our findings suggested that TQ activates the Nrf2 signaling cascade in PD and therefore represents a potential drug candidate for the treatment of PD.
Cell Growth and Treatment
Human neuroblastoma SH-SY5Y cells were obtained from a cell bank held by the Chinese Academy of Sciences and were cultured in a 1:1 mixture of minimum essential medium (MEM) and F12 containing 100 IU/ml of penicillin, 0.1 mg/ml of streptomycin, and 10% heat-inactivated fetal calf serum. Cells were cultured in a humidified atmosphere with 5% CO2 at a temperature of 37°C. Neuronal differentiation was triggered by adding 10 μM retinoic acid (RA) in MEM/F12 containing 10% FBS and allowing the cells to culture for a further 7 days. A 10 mM stock solution of TQ was prepared in DMSO. We ensured that the final concentration of DMSO added to MEM/F12 did not exceed 0.01%. We performed siRNA interference by transfecting cells with Nrf2 siRNA or negative control (NC) siRNA in a 6-well plate using Lipofectamine 3000 reagent for 24 h in accordance with the manufacturer's guidelines. After 24 h of transfection, the cells were treated with MPP + with or without TQ.
Frontiers in Pharmacology | www.frontiersin.org January 2021 | Volume 11 | Article 615598
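Assuming the stated 0.01% ceiling refers to the DMSO vehicle fraction (v/v), simple arithmetic shows why the working TQ concentrations sit comfortably below the 1 μM maximum that a 10 mM stock can deliver under that limit. This is an illustrative check of the dilution math, not a calculation reproduced from the paper:

```python
# Vehicle-limit check: how much TQ can a 10 mM DMSO stock deliver
# if the final DMSO fraction must stay at or below 0.01% (v/v)?
# Illustrative arithmetic only; the 0.01% limit is taken from the text.

stock_uM = 10 * 1000            # 10 mM stock expressed in µM
dmso_limit = 0.01 / 100         # 0.01% v/v as a fraction

# Ceiling on the achievable TQ concentration under the DMSO limit:
max_tq_uM = stock_uM * dmso_limit

# DMSO fraction implied by each working concentration used in the study:
working_uM = [0.25, 0.5, 0.75]
dmso_fraction = [c / stock_uM for c in working_uM]

print(max_tq_uM)                              # ceiling in µM
print(all(f <= dmso_limit for f in dmso_fraction))
```

The three working concentrations correspond to DMSO fractions of 0.0025-0.0075%, all within the stated limit.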
The cells were then separated into seven groups: 1) a control group in which RA-differentiated SH-SY5Y cells were administered MEM/F12 containing 10% fetal calf serum; 2) a model group in which RA-differentiated SH-SY5Y cells were administered 1 mM MPP + for 24 h; 3) a TQ-treated group in which RA-differentiated SH-SY5Y cells were pre-treated with TQ (0.25, 0.5, and 0.75 μM) for 2 h and then exposed to 1 mM MPP + for 24 h; 4) a negative control (NC) siRNA group in which RA-differentiated SH-SY5Y cells were administered NC siRNA for 24 h followed by 1 mM MPP + for 24 h; 5) a TQ-treated NC siRNA group in which RA-differentiated SH-SY5Y cells were administered NC siRNA for 24 h followed by TQ with MPP + ; 6) an Nrf2 siRNA group in which RA-differentiated SH-SY5Y cells were treated with Nrf2 siRNA for 24 h followed by 1 mM MPP + for 24 h; and 7) a TQ-treated Nrf2 siRNA group in which RA-differentiated SH-SY5Y cells were administered Nrf2 siRNA for 24 h followed by TQ with MPP + .
Animals and Drug Administration
All in vivo experiments involved C57BL/6 mice (Beijing Vital River Laboratory Animal Technologies Co. Ltd, Beijing, China). All mice (age: 4-5 months; weight: 25-30 g) were housed in standard laboratory cages (three to five mice per cage) under typical laboratory conditions (12-h light/dark cycle, a temperature of 20-22°C, and a humidity of 50-60%) and were provided with food and water ad libitum. The mice were handled in strict accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. For gene knockdown, lentivirus-wrapped negative control siRNA (NC), lentivirus-wrapped Nrf2 siRNA with GFP (siNrf2-GFP), or lentivirus-wrapped Nrf2 siRNA (siNrf2; GenePharma, Shanghai, China) was injected into the tail vein (20 μl/mouse, 10⁹ TU/ml). After 1 week, mice received a daily intraperitoneal (i.p.) injection of 25 mg/kg MPTP for 5 days. For the TQ treatment studies, mice received a daily injection (i.p.) of 10 mg/kg body weight TQ or normal saline for 1 week, starting on the day before each dose of MPTP. At the end of TQ or saline treatment, the mice were culled, and their brains were harvested for analysis. We randomly divided the mice into eight groups: a control group receiving only the vehicle; a TQ group receiving TQ treatment; an MPTP group to act as a model for PD; an MPTP + TQ group in which the PD model was treated with TQ; an NC group in which the PD model was injected with negative control siRNA; an NC + TQ group in which the PD model was injected with NC siRNA and TQ; an siNrf2 group in which the PD model was injected with Nrf2 siRNA; and an siNrf2 + TQ group in which the PD model was injected with Nrf2 siRNA and TQ.
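The per-animal quantities implied by this protocol follow from simple unit arithmetic. The sketch below is ours (body weights span the stated 25-30 g range) and is only meant to make the dosing concrete:

```python
# Per-mouse dose arithmetic for the in vivo protocol (illustrative only).

# Lentivirus: 20 µl per mouse at 1e9 transducing units (TU) per ml.
titer_tu_per_ml = 1e9
tu_per_mouse = 20 * titer_tu_per_ml / 1000   # µl -> ml conversion folded in

# TQ: 10 mg/kg i.p. for mice weighing 25-30 g.
dose_mg_per_kg = 10
tq_mg_range = [dose_mg_per_kg * grams / 1000 for grams in (25, 30)]

print(tu_per_mouse)   # transducing units delivered per mouse
print(tq_mg_range)    # mg of TQ per injection across the weight range
```

Each mouse therefore received 2 × 10⁷ TU of lentivirus and roughly 0.25-0.30 mg of TQ per injection.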
Assessment of Cell Viability
We seeded SH-SY5Y cells (2 × 10⁴ cells/well) into 96-well culture plates. Following adherence, we pre-treated the cells with TQ (0.25, 0.5, and 0.75 μM) for 2 h, followed by exposure to 1 mM MPP + for 24 h. At the end of the experiment, we used the MTT assay to assess cell viability.
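MTT readouts are conventionally expressed as viability relative to untreated controls after blank correction. The paper does not give its exact formula, so the helper below is a generic sketch with invented absorbance values:

```python
# Generic MTT viability calculation: percent of control after blank correction.
# The optical densities below are hypothetical, for illustration only.

def viability_percent(od_sample, od_control, od_blank=0.0):
    """Blank-correct both readings and express the sample as % of control."""
    return 100.0 * (od_sample - od_blank) / (od_control - od_blank)

control, blank = 1.20, 0.10
mpp_only = viability_percent(0.65, control, blank)   # MPP+ alone
mpp_tq = viability_percent(0.98, control, blank)     # MPP+ after TQ pre-treatment
print(round(mpp_only), round(mpp_tq))
```

With these invented values, MPP+ alone drops viability to about half of control, while TQ pre-treatment partially rescues it, mirroring the qualitative pattern reported in the Results.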
Biochemical Examination
SH-SY5Y cells were incubated in culture medium containing TQ (0.25, 0.5, and 0.75 μM) for 2 h and then treated with MPP + for 24 h. The content of MDA and the activities of SOD and GSH-Px were measured in accordance with the manufacturer's guidelines. Briefly, cells and brain tissues were homogenized, sonicated, and centrifuged. The concentration of protein in the supernatants was measured using a BCA protein quantification kit. We then measured the levels of MDA and the activities of SOD and GSH-Px using commercially available kits, following the instructions provided by the manufacturer. A multimode microplate reader was used to determine the content of MDA and the activities of SOD and GSH-Px in the samples at 532, 560, and 340 nm.
TUNEL Staining
Apoptotic neuronal cells were quantified using the TUNEL staining method, following the instructions provided by the manufacturer and as previously described (Wu et al., 2017). The samples were visualized under an upright fluorescence microscope by an experienced pathologist blinded to the experimental condition, and TUNEL-positive cells were counted.
Estimation of Reactive Oxygen Species Production
Levels of ROS were determined using a ROS assay kit containing dichlorofluorescein diacetate (DCFH-DA). SH-SY5Y cells were grown in 6-well plates and treated with TQ and/or MPP + . On the day of the experiment, we removed the medium and incubated the cells with 10 μM DCFH-DA for 20 min at 37°C. We then washed off excess DCFH-DA with PBS and determined the relative fluorescence levels by flow cytometry at an excitation wavelength of 480 nm and an emission wavelength of 525 nm.
8-OHdG Evaluation
Brain tissues were removed from culled mice and ground gently in an automatic grinder for 1 min at room temperature. Next, each sample was diluted 5-fold and the concentration of 8-OHdG was determined by enzyme-linked immunosorbent assay (ELISA) in accordance with the manufacturer's guidelines. Absorbance was measured at a wavelength of 450 nm on a microplate reader (BioTek, United States).
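ELISA concentrations are read off a standard curve and then corrected for the stated 5-fold dilution. The minimal sketch below uses piecewise-linear interpolation (commercial workflows often fit a four-parameter logistic curve instead), and the standard values are invented for illustration:

```python
# Minimal ELISA back-calculation: interpolate a sample absorbance on a
# standard curve, then multiply by the 5-fold dilution factor.
# The 8-OHdG standards below are hypothetical: concentration (ng/ml) vs A450.

std_conc = [0.0, 1.0, 2.0, 4.0, 8.0]
std_abs = [0.05, 0.20, 0.38, 0.70, 1.30]

def conc_from_abs(a450, dilution=5.0):
    """Piecewise-linear interpolation between bracketing standards."""
    pairs = list(zip(std_abs, std_conc))
    for (a0, c0), (a1, c1) in zip(pairs, pairs[1:]):
        if a0 <= a450 <= a1:
            frac = (a450 - a0) / (a1 - a0)
            return dilution * (c0 + frac * (c1 - c0))
    raise ValueError("absorbance outside the standard range")

print(conc_from_abs(0.54))  # dilution-corrected sample concentration, ng/ml
```

A reading of 0.54 falls halfway between the 2 and 4 ng/ml standards, giving 3 ng/ml in the well and 15 ng/ml after the 5-fold dilution correction.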
Immunofluorescence and Immunohistochemistry
Immunofluorescence (IF) and immunohistochemical (IHC) staining were performed as described previously (Han et al., 2020). In brief, mouse brains were perfused transcardially with 4% paraformaldehyde (PFA) and then fixed in PFA overnight. Brain tissues were then sectioned through the region of interest at a thickness of 20-30 μm. Antigen retrieval was carried out by incubating sections in sodium citrate at pH 6.0 for 15-30 min at 80°C. The sections were then rinsed three times with 1× PBS, and endogenous peroxidase was blocked using 3% H2O2 at room temperature for 15 min. The sections were then blocked with 5% BSA in 0.3% Triton X-100 in PBS for 1 h at room temperature. Next, we incubated the sections overnight with an anti-TH antibody (1:1,000, Abcam, United Kingdom) and an anti-α-synuclein antibody (1:1,000, Abcam) at 4°C. The following morning, the sections were washed three times in PBS and then incubated with a secondary antibody for 1 h at room temperature. Positive immunoreactivity was developed using DAB and examined with an Olympus BX53 microscope (Olympus, Tokyo, Japan). For fluorescent staining, the sections were blocked with 3% BSA for 30 min and then incubated overnight with anti-TH antibody (1:250; Abcam) and anti-α-synuclein antibody (1:250; Abcam, United Kingdom) at 4°C. The following morning, the sections were rinsed three times in PBS and then incubated with fluorescent secondary antibodies for 1 h in the dark at room temperature. Subsequently, the sections were rinsed in PBS, counterstained with DAPI (Life Technologies), and mounted using ProLong Gold anti-fade (Servicebio, China). The samples were then evaluated using an Olympus BX53 microscope, and images were analyzed using ImageJ (NIH, Bethesda, MD, United States).
Fluoro-Jade B Staining
Brain slides were incubated with FJB working solution (in 0.1% acetic acid) at 4°C overnight, rinsed with distilled water, and dried in an oven at 50-60°C for 15 min. Finally, the sections were visualized, and the number of FJB-positive cells was counted under a BX53 microscope.
Western Blot Analysis
Nuclear protein was extracted using a nuclear and cytoplasmic protein extraction kit. SH-SY5Y cells were grown to 70-80% confluency in a culture flask and then cultured at 37°C in 5% CO2 with TQ and/or MPP + for 24 h. Subsequently, cells were washed in PBS and lysed with cytoplasmic protein extraction agent for 15 min. The samples were then centrifuged for 10 min at 700×g at 4°C, and the supernatants were collected for further analysis. Nuclear protein extraction agent was then added to the pellet. After vortexing 15 times over 30 min and centrifuging at 14,000×g for 10 min at 4°C, the supernatants were collected as nuclear extracts. We also harvested striatum tissue lysates by supplementing the lysis buffer with proteinase and phosphatase inhibitors. BCA assays were used to quantify the protein concentrations of samples acquired from each experimental group. For each sample, we separated 30 µg of protein per lane on denaturing sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) gels and transferred the separated proteins onto nitrocellulose membranes. Next, membranes were blocked with non-fat dried milk for 1 h at room temperature and incubated overnight at 4°C with primary antibodies against Bax (1:1,000), Bcl-2 (1:1,000), caspase-3 (1:1,000), Nrf2 (1:1,000), HO-1 (1:2,000), NQO1 (1:1,000), GST (1:1,000), TH (1:5,000), α-synuclein (1:1,000), β-actin (1:1,000), and Lamin B1 (1:500). The following morning, membranes were washed and incubated with appropriate secondary antibodies (1:1,000). Positive signals were then visualized by ECL (Thermo, United States) and imaged using an Image Quant chemiluminescence system (Tanon, China). Densitometric analysis was performed using ImageJ.
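Densitometric quantification typically normalizes each band to its loading control (here, β-actin for whole-cell or cytosolic fractions, Lamin B1 for nuclear fractions) and expresses treatment lanes relative to the control lane. The sketch below uses invented band intensities to show the arithmetic; it is not the study's data:

```python
# Densitometry normalization sketch: band / loading control, then fold-of-control.
# All intensity values are invented for illustration.

def fold_of_control(band, loading, band_ctrl, loading_ctrl):
    """Normalize a band to its loading control, relative to the control lane."""
    return (band / loading) / (band_ctrl / loading_ctrl)

# Hypothetical nuclear Nrf2 bands normalized to Lamin B1:
ctrl = fold_of_control(800, 1000, 800, 1000)    # control lane is 1.0 by definition
mpp = fold_of_control(400, 1000, 800, 1000)     # reduced nuclear Nrf2
tq = fold_of_control(1200, 1000, 800, 1000)     # increased by TQ
print(ctrl, mpp, tq)
```

Dividing by the loading control corrects for unequal protein loading, and dividing by the control lane makes the values directly comparable across blots.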
Statistical Analysis
GraphPad Prism 8 (GraphPad, La Jolla, CA, United States) was used for all data analysis and to prepare graphs. Data are presented as the mean ± standard error of the mean (SEM). The Student's t-test was used to analyze single variables. One-way or two-way analysis of variance (ANOVA) was used to compare differences between the means of multiple groups. p ≤ 0.001 signified high significance (***), p ≤ 0.01 signified moderate significance (**), p ≤ 0.05 signified significant differences (*), and p > 0.05 indicated non-significant differences.
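The same kind of multi-group comparison can be reproduced with standard tools. The sketch below runs a one-way ANOVA with SciPy and applies the star convention stated above; the group readings are simulated, not the study's data:

```python
# One-way ANOVA with the paper's significance-star convention (simulated data).
from scipy.stats import f_oneway

def stars(p):
    """Map a p-value to the star notation used in the figures."""
    if p <= 0.001:
        return "***"
    if p <= 0.01:
        return "**"
    if p <= 0.05:
        return "*"
    return "ns"

# Simulated viability readings for three hypothetical groups:
control = [100, 98, 102, 101]
mpp = [55, 60, 52, 58]
mpp_tq = [82, 85, 80, 84]

f_stat, p_value = f_oneway(control, mpp, mpp_tq)
print(f_stat, p_value, stars(p_value))
```

In practice the omnibus ANOVA would be followed by post hoc pairwise tests (e.g. versus the MPP+ group) before assigning stars to individual comparisons, as the figure legends do.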
RESULTS
Thymoquinone Prevented MPP + -Induced Cell Death in SH-SY5Y Cells
SH-SY5Y cells were administered a series of MPP + concentrations over a known time period. Our analyses showed that MPP + reduced cell viability in a time- and concentration-dependent manner. Relative to the control group, SH-SY5Y cells showed significantly reduced viability (Figure 1A). Next, we investigated the impact of TQ on cell viability. When administered at concentrations of 0.25-2.0 μM for 24 h, TQ treatment had no cytotoxic effects on SH-SY5Y cells (Figure 1B). Next, we investigated the neuroprotective effects of TQ in SH-SY5Y cells. Treatment with 1 mM MPP + significantly reduced the viability of SH-SY5Y cells, while pre-treatment with different concentrations of TQ (0.25, 0.5, and 0.75 μM) efficiently blocked MPP + -induced cell death (Figure 1C). Compared to the control group, the number of TUNEL-positive cells in the MPP + group was significantly elevated; however, TQ treatment caused a significant reduction in the number of TUNEL-positive cells (Figure 1D). In addition, we used western blotting to determine the levels of caspase-3 in SH-SY5Y cells; this is a standard technique that is commonly used to assess apoptosis. We found that the expression levels of Bax were upregulated after treatment with MPP + for 24 h, while the levels of Bcl-2 were downregulated. Pretreatment with TQ suppressed both the increase in Bax expression in SH-SY5Y cells and the reduction in Bcl-2 expression (Figure 1E). Similarly, TQ treatment reduced the MPP + -mediated increase in caspase-3 expression, thus suggesting that TQ inhibited MPP + -mediated apoptosis (Figure 1E).
Thymoquinone Suppressed MPP + -Induced Oxidative Stress and Activated the Nrf2-ARE Signaling Pathway in SH-SY5Y Cells
Considering the balance between oxidation and antioxidation in a normal healthy cell, we next evaluated the effect of TQ on ROS generation and determined the levels of MDA and the activities of SOD and GSH-Px. A H2DCF-DA probe was used to determine the specific rate of ROS generation. MPP + increased the rate of ROS production; however, TQ treatment significantly reduced this increase in ROS generation (Figure 2A). Following treatment with 1 mM MPP + , there was a significant rise in MDA levels relative to the control group; furthermore, TQ reduced MDA levels in a concentration-dependent manner (Figure 2B). In the model group, the bioactivities of certain antioxidant enzymes (SOD and GSH-Px) were significantly reduced. However, TQ treatment increased the activities of SOD and GSH-Px in MPP + -induced SH-SY5Y cells (Figures 2C,D). We next examined the effect of TQ on the nuclear translocation of Nrf2. Relative to the model group, the expression of nuclear Nrf2 was markedly upregulated following treatment with 0.5 and 0.75 μM of TQ (Figure 2E). Furthermore, immunofluorescence staining showed that TQ treatment increased the nuclear translocation of Nrf2 in MPP + -induced SH-SY5Y cells (Figure 2F). These observations suggested that TQ promotes Nrf2 nuclear translocation. Following the nuclear translocation of Nrf2, we also investigated the expression of several proteins downstream of the Nrf2-ARE cascade (HO-1, NQO1, and GST) by western blotting following the pretreatment of MPP + -induced SH-SY5Y cells with TQ. We found that the levels of these proteins were significantly upregulated in SH-SY5Y cells that were pre-treated with TQ (Figure 2G). These data demonstrate that TQ might attenuate oxidative stress caused by MPP + in SH-SY5Y cells.
Figure 1 (E) caption: Bcl-2/Bax ratio and levels of caspase-3 in MPP + -induced SH-SY5Y cells with or without TQ pretreatment, quantified by western blotting (n = 3). Data are presented as the mean ± SEM; one-way ANOVA, #p < 0.05, ##p < 0.01, ###p < 0.001 relative to the control group; *p < 0.05, **p < 0.01, ***p < 0.001 relative to the MPP + group.
Thymoquinone Suppressed MPP + -Induced Cytotoxicity in SH-SY5Y Cells in a Manner That Was Dependent on Nrf2
To explore the potential role of the Nrf2/ARE signaling cascade in the TQ-mediated prevention of MPP + -induced cytotoxicity in SH-SY5Y cells, we transfected cells with either NC siRNA or Nrf2 siRNA in order to elucidate whether an Nrf2-dependent cascade was responsible for the neuroprotective effects of TQ against MPP + -induced oxidative apoptosis. Western blotting and RT-PCR results indicated that Nrf2 siRNA markedly silenced the expression of Nrf2 in cells, while the NC siRNA did not (Figures 3A,B). The expression of nuclear Nrf2 was significantly elevated after pretreating SH-SY5Y cells transfected with NC siRNA with TQ. Nevertheless, Nrf2 silencing suppressed the TQ-induced increase in the nuclear translocation of Nrf2 in SH-SY5Y cells (Figure 3C). Moreover, the induction of several proteins downstream of the Nrf2-ARE axis (HO-1, NQO1, and GST) was abolished in Nrf2-silenced SH-SY5Y cells: the expression levels of these proteins were not increased in cells transfected with Nrf2 siRNA and pre-treated with TQ (Figure 3D). Furthermore, the TQ-induced increase in cell viability was repressed in Nrf2-silenced SH-SY5Y cells, as determined by MTT analysis. Relative to cells transfected with NC siRNA, the silencing of Nrf2 in SH-SY5Y cells resulted in increased susceptibility to MPP + cytotoxicity (Figure 3E). These results illustrate that the cytoprotective effects of TQ on MPP + -treated SH-SY5Y cells were mediated through an Nrf2-dependent pathway.
Thymoquinone Provided Biological Protection Against Nigrostriatal Dopaminergic Degeneration in an Experimental Model of Parkinson's Disease
In order to investigate the therapeutic effects of TQ in PD, we examined the expression of TH and α-synuclein in the substantia nigra pars compacta (SNc) of the MPTP mouse model using IHC staining. Following MPTP treatment, we observed a significant depletion of TH + dopaminergic neurons in the SNc when compared with the control group. However, TQ treatment reduced the depletion of TH + dopaminergic neurons in the SNc (Figure 4A). Previous research has shown that high levels of α-synuclein represent an important pathological characteristic of PD (Rocha et al., 2018). Therefore, to confirm that TQ facilitated the repression of α-synuclein-induced neurodegeneration, we investigated the expression of α-synuclein by IHC staining. Analysis revealed an increase in α-synuclein immunoreactivity in mice administered MPTP when compared with the control group. However, TQ treatment remarkably reduced the expression of α-synuclein in the SNc when compared to MPTP alone (Figure 4B). Similarly, western blotting assays of SNc lysates demonstrated that the expression of TH protein was reduced in the MPTP group, although this trend was attenuated by TQ treatment (Figure 4C). Consistent with the IHC findings in the SNc, the expression of α-synuclein was significantly elevated in the SNc of mice following MPTP injection. However, TQ treatment reduced the levels of α-synuclein in the SNc (Figure 4C). We also investigated the neuroprotective effects of TQ using FJB staining, which identifies denatured neurons by emitting green fluorescence. This assay showed that exposure to MPTP was accompanied by neurodegeneration in the mouse model of PD and that these effects were alleviated by TQ treatment (Figure 4D). Therefore, our in vivo work indicated that TQ protects nigrostriatal dopaminergic neurons in the mouse model of PD against MPTP neurotoxicity.
Figure 3 (E) caption: Cell viability of SH-SY5Y cells transfected with Nrf2 siRNA or NC siRNA with or without TQ pretreatment (n = 6). Data are presented as the mean ± SEM; Student's t-test, two-way ANOVA, ###p < 0.001 relative to the control group; *p < 0.05, **p < 0.01, ***p < 0.001 relative to the corresponding NC siRNA group.
Thymoquinone Attenuated Oxidative Stress and Activated the Nrf2-ARE Pathway in MPTP-Treated Mice
Next, we investigated whether TQ provided bioprotective effects for dopaminergic neurons against MPTP neurotoxicity in vivo and whether this was mediated via its antioxidant properties. First, we investigated oxidative activity in the SNc tissue of mice with PD. We found that the levels of MDA, a crucial product of membrane lipid oxidation, were significantly elevated in the SNc following MPTP exposure but were suppressed by TQ treatment in the SNc ( Figure 5A). SOD and GSH-Px are antioxidant markers and have the ability to prevent damage being incurred by important cell components (Niedzielska et al., 2016). Mice treated with MPTP expressed significantly reduced activities of SOD and GSH-Px in the SNc; these effects were reversed by TQ treatment (Figures 5B,C). In contrast with the control group, MPTP treatment increased levels of 8-OHdG in the SNc of mice with PD. Following treatment with TQ, the levels of 8-OHdG were significantly reduced when compared with the MPTP group ( Figure 5D). Collectively, these results indicated that TQ mitigates the oxidative damage induced by MPTP.
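The pairwise group comparisons above (e.g., MDA levels in control vs. MPTP vs. MPTP + TQ mice) reduce to testing a difference in group means. As an illustrative sketch only — the authors used Student's t-tests and two-way ANOVA, and the values below are invented placeholders — a distribution-free permutation test for two groups can be written in a few lines:

```python
import random

def perm_test(x, y, n_iter=10000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    obs = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    k = len(x)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        d = abs(sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k))
        if d >= obs:
            hits += 1
    return (hits + 1) / (n_iter + 1)  # add-one correction avoids p = 0

# Invented MDA-like values (arbitrary units), for illustration only
control = [2.1, 1.9, 2.3, 2.0, 2.2]
mptp = [4.8, 5.2, 4.5, 5.0, 4.9]
p = perm_test(control, mptp)
```

With clearly separated groups like these, the permutation p-value is small; for indistinguishable groups it approaches 1.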
Next, we investigated whether the Nrf2-ARE cascade plays a role in the effects observed in our in vivo experiments. We found that the expression of Nrf2 in the SNc was significantly lower in the MPTP mice when compared with the control group (Figure 5E). However, TQ increased the level of Nrf2 in the SNc of TQ-treated mice in response to MPTP; these effects were not observed in mice treated with MPTP alone. Furthermore, three anti-oxidative genes targeted by Nrf2 (HO-1, NQO1, and GST) were significantly upregulated in the SNc following TQ treatment when compared with MPTP treatment (Figure 5E). Collectively, these data suggested that TQ activates the Nrf2-ARE signaling pathway in the mouse model of PD.

Figure legend: (E) Levels of Nrf2, HO-1, NQO1, and GST in the SNc of TQ- or saline-treated MPTP mice were quantified by western blotting (n = 3). Data are indicated as the mean ± SEM; two-way ANOVA, *p < 0.05, **p < 0.01, ***p < 0.001, relative to the corresponding saline group.
Thymoquinone Prevented MPTP-Induced Dopaminergic Degeneration and Rescued the Depletion of TH + Neurons in a Nrf2-dependent Manner
To further determine whether Nrf2-ARE signaling plays a critical role in the neuroprotective effects of TQ in PD, we injected siNrf2 into the tails of experimental mice. After the injection of siNrf2, we observed green fluorescence in sections of brain tissue ( Figure 6A) along with reduced levels of Nrf2 protein ( Figure 6B). The expression of Nrf2 protein increased in the SNc of NC-injected mice following TQ treatment; furthermore, this increase was alleviated by the injection of siNrf2 ( Figure 6C). Next, we determined if siNrf2 influenced the expression of genes downstream of the Nrf2-ARE pathway following TQ treatment in MPTP-treated mice. We investigated the expression of HO-1, NQO1, and GST. Following the downregulation of Nrf2 in the SNc, TQ failed to induce a significant elevation in NQO1, GST, and HO-1 expression in mice injected with siNrf2 ( Figure 6C). Collectively, these data showed that the silencing of Nrf2 in the brain via in vivo siRNA treatment repressed activation of the Nrf2-ARE signaling following the administration of TQ in MPTP-treated mice.
To investigate whether the administration of siNrf2 repressed TQ-induced antioxidation in the brain by modulating activation of the Nrf2-ARE signaling pathway, we determined the levels of two markers of oxidative stress (MDA and 8-OHdG) in MPTP-treated mice following the injection of siRNA. Our analysis identified reduced levels of MDA and 8-OHdG in the SNc of PD mice injected with NC following TQ administration. In contrast, the levels of SOD and GSH-Px showed a corresponding increase in mice injected with NC and treated with TQ. Treatment with siNrf2 reduced or even abolished the downregulation of MDA and 8-OHdG, and the upregulation of SOD and GSH-Px, following the administration of TQ (Figures 7A-D). Next, we investigated the neuroprotective effect of TQ treatment following the injection of siRNA. We observed reduced levels of TH expression and higher levels of α-synuclein in the SNc of PD mice injected with NC following the administration of TQ. However, TQ failed to alleviate MPTP-induced TH+ neuronal loss and the accumulation of α-synuclein in the SNc following the injection of siNrf2 (Figures 7E,F). Finally, we investigated the neuroprotective effects of TQ by FJB staining; this stain identifies denatured neurons by green fluorescence. We found that TQ treatment induced a significant reduction in the number of denatured neurons in PD mice injected with NC and that this effect was suppressed by the injection of siNrf2 (Figure 7G). Based on these results, it was evident that the anti-oxidative effects of TQ on the PD mouse model occur via the Nrf2 signaling cascade.
DISCUSSION
Globally, PD is one of the most common neurodegenerative diseases. Consequently, there is an urgent need to develop therapeutic drugs that can inhibit the neurodegenerative process associated with PD (Grayson, 2016). Presently, however, therapy is predominantly symptomatic (Beitz, 2014). Such treatment relies upon the substitution of dopamine with levodopa and is associated with a range of side effects. Oxidative stress, neuroinflammation, mitochondrial dysfunction, atypical protein aggregation, excitotoxicity, and variations in the autophagic-lysosomal cascade are all known to be essential factors in the development and progression of PD and could be considered as intervention targets for PD therapy (Tarakad and Jankovic, 2017). Antioxidant therapy has proved very worthwhile in a range of diseases caused by ROS, including cancer, diabetes, and infectious diseases (Pennathur and Heinecke, 2004; Glasauer and Chandel, 2014). However, clinical trials involving the use of antioxidants to treat neurological diseases have reported conflicting outcomes with regard to their efficacy (Carvalho et al., 2017). Oxidants may play a role in PD by activating death-associated cascades rather than by directly killing dopaminergic neurons. The Nrf2-ARE signaling cascade regulates the transcriptional expression of oxidative stress factors to re-establish redox homeostasis and is a flexible strategy for the treatment of neurodegenerative diseases (Buendia et al., 2016). Several studies have shown that Nrf2 deficiency increases the sensitivity of dopaminergic neurons to MPTP and 6-OHDA neurotoxicity, and that Nrf2-dependent antioxidants are associated with high transcriptional activity and confer protective effects in various models of PD (Chen et al., 2009; Li et al., 2018). Moreover, the activation of Nrf2 is known to confer neuroprotective effects against 6-OHDA- and MPP+-induced neurotoxicity (Moreira et al., 2017; Zhu et al., 2019). These findings imply that the up-regulation of the Nrf2 cascade could be exploited to design new drugs for the treatment of PD. TQ, a significant active component of Nigella sativa, is a strong antioxidant and neuroprotectant (Darakhshan et al., 2015). TQ has been reported to exert efficacious anti-oxidative and anti-inflammatory effects in a model of hippocampal neurodegeneration following chronic toluene treatment (Kanter, 2008).

Figure legend: Expression of Nrf2 in brain samples from MPTP mice injected with either Nrf2 siRNA or NC siRNA (n = 3). (C) Levels of Nrf2, HO-1, NQO1, and GST in the SNc of TQ- or saline-treated MPTP mice following the injection of NC or siNrf2 (n = 3). Data are indicated as the mean ± SEM; Student's t-test, two-way ANOVA, *p < 0.05, **p < 0.01, relative to the corresponding NC group.
After head injury, TQ was shown to facilitate the healing process in neural cells in a manner that was moderated by the reduction of MDA levels in the nuclei and mitochondrial membrane of neurons (Gülşen et al., 2016). Moreover, TQ has been shown to protect primary mesencephalic cells from MPP + -induced dopaminergic cell death (Radad et al., 2009). Another study demonstrated that TQ provides protection against MPTP-induced PD by virtue of its antioxidant and anti-inflammatory properties (Ardah et al., 2019). However, the mechanisms by which TQ can regulate cytoprotective effects against oxidative stress in vitro or in vivo have yet to be elucidated. In the present study, we investigated the efficacy of TQ as a treatment for PD using a cellular model and a mouse model of PD to identify the neuroprotective and antioxidative effects of TQ. We found that the protective effects of TQ were mediated by the Nrf-ARE signaling cascade, thus providing strong evidence for the use of natural products for PD therapy.
In the current study, we found that the pretreatment of MPP+-induced SH-SY5Y cells with TQ resulted in a significantly reduced rate of apoptosis compared with controls. Furthermore, treatment with TQ restrained the expression of the pro-apoptotic proteins Bax and caspase-3 in MPP+-induced SH-SY5Y cells but increased the expression levels of Bcl-2. This implies that TQ attenuates MPP+-induced apoptotic cell death. In addition, TQ reduced the formation of ROS by increasing the mitochondrial membrane potential. TQ also reduced the elevated levels of MDA and reversed the reduced activity of SOD and GSH-Px to levels that were within the normal range. These effects were also confirmed in the MPTP model of PD. Our results demonstrate that TQ increased the activity of anti-oxidative enzymes, including SOD and GSH-Px, reduced the levels of MDA and 8-OHdG, alleviated the depletion of dopaminergic neurons, and promoted the nuclear translocation of Nrf2. Collectively, these processes exerted significant neuroprotective effects in the mouse model of PD. Nrf2 modulates several critical genes that can be induced by TQ, including HO-1, NQO1, and GST. Previous studies have shown that the generation of ROS results in the increased expression of HO-1 in order to protect cells by amplifying antioxidant products. NQO1 has also been demonstrated to confer antioxidant properties that protect against ROS-mediated cell damage (Tufekci et al., 2011). GST encodes a detoxifying enzyme that suppresses the activation of ROS (Tufekci et al., 2011). These genes function to counter oxidative damage within brain tissues (Tufekci et al., 2011). Therefore, TQ plays a neuroprotective role by ensuring that Nrf2 is translocated into the nucleus to transactivate its target genes, thus reducing the production of ROS and helping to maintain the balance of oxidants and antioxidants.
We also investigated the role of Nrf2 in the protective effects of TQ against MPP+-induced cytotoxicity in SH-SY5Y cells via the transfection of Nrf2 siRNA. The transfection of Nrf2 siRNA blocked the TQ-induced expression of Nrf2-regulated genes and elevated the susceptibility of cells to MPP+-induced neurotoxicity. Next, we used injections of siNrf2 to silence the expression of Nrf2 protein in the mouse model of PD. Analysis revealed that the reduction of Nrf2 in the brain repressed TQ-mediated antioxidation and suppressed the TQ-induced alleviation of the depletion of dopaminergic neurons and neurodegeneration in the SNc of the mouse model of PD. These data implied that the Nrf2 signaling cascade in the brain mediated the neuroprotective effects of TQ in the mouse model of PD.
Previous research has demonstrated that TQ exerts strong anti-oxidant bioactivity and can abolish superoxide radicals, alleviate lipid peroxidation, and regenerate antioxidant enzymes (Darakhshan et al., 2015). In the present study, we found that TQ promoted Nrf2 nuclear translocation in neuronal cells and the activation of Nrf2 signaling was directly responsible for the neuroprotective effects of TQ. Further studies should now be conducted to further elucidate these mechanisms. Our study had limitations that should be considered. For example, our data were generated by the administration of TQ prior to a neuropathological injury in a mouse model. It is now important that we carry out further studies to investigate the effect of commencing treatment at the onset of injury. These studies are needed so that we can determine whether TQ is beneficial against ongoing neuropathology as this is more pathologically relevant than the scenario described herein.
CONCLUSION
The present study found that TQ exerts neuroprotective and antioxidative effects on PD, both in vivo and in vitro. Our data indicated that these effects were mediated by modulation of the Nrf2-ARE signaling cascade. The administration of TQ could provide a novel therapeutic breakthrough for PD therapy (Figure 8).
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The animal study was reviewed and approved by Hefei Institutes of Physical Science, Chinese Academy of Sciences Animal Ethics Committees (IACUC19001).
A Prospective Randomized Study of Prophylactic Irradiation of Tracts in Patients with Malignant Pleural Mesothelioma
Background: Procedure tract metastasis (PTM) may complicate pleural procedures in malignant pleural mesothelioma (MPM) patients and cause significant morbidity. Aim: To evaluate the effectiveness of prophylactic radiotherapy (RTH) in preventing PTM and reducing pain. Methods: Forty patients with MPM, who had a pleural invasive procedure within the preceding 15 days, were randomized in a 1:1 ratio to receive prophylactic RTH to the procedure site (21 Gy in three consecutive daily fractions using 9MeV) vs. no RTH. During a 12-month follow up period, patients were examined monthly for PTM, toxicities and pain at the procedure site. Results: Patients receiving RTH had lower incidence of PTM than the control group (2/20, 10% vs. 5/20, 25%); however, this difference was not statistically significant. The proportion of patients who experienced pain at the pleural procedure site was significantly less in the RTH group compared with the control group (2/20, 10% vs. 12/20, 60%; p=0.001). Pain scores were significantly less in the RTH group compared with the control group (mean pain score 1.6 vs. 2.8, respectively; p=0.014). Conclusion: Prophylactic RTH to the pleural procedure site in MPM was not significantly effective in preventing or delaying PTM. However, prophylactic RTH reduced significantly the rate and severity of pain at the procedure site. Future studies may be needed to assess the effect of prophylactic RTH timing and its technique on preventing PTM.
INTRODUCTION
Malignant pleural mesothelioma (MPM) is a relatively rare, aggressive tumor 1. The incidence of MPM is rising, reaching almost 2500-3500 cases per year in the United States. Similarly, the incidence is increasing in other parts of the world, especially in developing countries like Egypt, where asbestos exposure is high in certain areas and proper protective devices are lacking 2,3. Egypt has many factories using asbestos, such as the Siegwart factories in Cairo (Shobra El-Khaymah and Helwan districts), and a rising incidence of MPM 3,4.
Patients with MPM usually undergo pleural procedures during the course of their disease, such as pleural biopsy for tissue diagnosis or drainage of pleural effusion 5. Pleural effusion occurs in almost all patients with MPM (about 95%), and dyspnea is the common presenting symptom in many patients. This pleural effusion is usually recurrent and requires frequent pleural tapping or pleurodesis 6.
Mesothelioma cells have the ability to seed along the pleural procedure tract due to their ability to spread in a sheet-like fashion along the serosal surfaces. Interruption of the tumor sheets allows the malignant cells to spread along the tract created during the pleural procedure from the pleura to the skin, resulting in subcutaneous nodules 5. The procedure tract may be painful, and the subcutaneous nodule may be distressing for the patient.
There are few data about the incidence of and risk factors for procedure tract metastasis (PTM) and the timing of its development following pleural procedures 7.
To prevent PTM, prophylactic radiotherapy (RTH) to the sites of pleural procedures in MPM has been investigated in relatively few randomized clinical trials 5,8-12. Mesothelioma is radiosensitive, and RTH has an established role in symptom palliation, such as for localized pain. However, RTH is not used with curative intent due to unacceptable toxicities such as pneumonitis and myocarditis. It has been suggested that prophylactic irradiation of the procedure site may prevent PTM, especially with small tumors, and that it is more effective than irradiation of already developed metastases 13.
We conducted this study to evaluate the efficacy of prophylactic RTH in preventing or delaying PTM and improving pain at the site of pleural procedures in MPM patients.
METHODS
This was an open-label, randomized controlled trial conducted in the Clinical Oncology Department, Faculty of Medicine, Ain Shams University, Cairo, Egypt. The study was approved by the institutional ethics committee, and all participants gave informed consent.
Participants
We included patients with MPM who presented to our clinical oncology center from April 2013 till April 2015. Patients who met the following criteria were eligible for inclusion: age ≥ 18, histologically proven MPM, Eastern Cooperative Oncology Group (ECOG) performance status ≤ 2, inoperable disease or unfitness for surgery, a visible pleural procedure scar at the time of randomization, and a pleural procedure within two weeks before starting RTH. Patients were excluded in the following conditions: previous RTH to the pleural procedure site, thoracotomy, other primary malignancy, currently receiving chemotherapy, metastatic disease, and sarcomatoid pleural mesothelioma.
Intervention
The experimental group received prophylactic RTH to the site of pleural procedure, while the control group did not receive RTH. Prophylactic RTH was delivered within a maximum of two weeks of the procedure using a direct-field 9 MeV electron beam at a dose of 21 Gy in 3 consecutive daily fractions, with a 2 cm margin all around the procedure site if it was a needle site and a 3 cm margin if it was an intercostal tube site. In obese patients or those with a thick chest wall, a skin bolus of 1 cm thickness was used. The control group did not receive prophylactic RTH, but patients who developed PTM during the follow-up period received palliative RTH with the same protocol as the experimental group.
Outcomes
Patients were examined on a monthly basis for PTM, RTH toxicities, and pain persistence at the procedure site. All patients were followed up for one year from receiving RTH. We assessed acute and late skin toxicities according to the Common Terminology Criteria for Adverse Events (CTCAE) version 4.0 14. Pain at the site of the pleural procedure or PTM was assessed using the pain score of the National Initiative on Pain Control, which is a numeric rating scale ranging from 0 to 10, with higher scores indicating more severe pain 15.
Sample size
The sample size was calculated using the StatsDirect software (professional version) with a power of 80% and an alpha level of 5% to detect a difference in the PTM rate as reported by Bydder et al 16. The sample size needed for this study was 20 patients in the experimental group and 20 in the control group.
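For readers who want to reproduce this kind of calculation, the sketch below uses the standard normal-approximation formula for comparing two proportions. It is not the StatsDirect procedure, and the PTM proportions passed in are hypothetical placeholders chosen only for illustration:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided test
    comparing two proportions p1 and p2. Illustrative only."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical PTM proportions (40% vs. 5%) chosen only to show that
# roughly 20 patients per arm is plausible at 80% power and 5% alpha.
n = n_per_group(0.40, 0.05)
```

Raising the requested power (say, to 90%) increases the required number of patients per group.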
Randomization
Patients were randomly assigned in a 1:1 ratio to the two trial groups. The random sequence was generated based on the day of attendance of the patient. Patients attending on Saturday, Monday, and Wednesday were allocated to the experimental group, while those attending on the other days were allocated to the control group, until each group reached a sample size of 20.
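The allocation rule above can be expressed as a small function. Note that the text does not state what happens once an arm reaches 20 patients, so routing overflow patients to the open arm is an assumption of this sketch:

```python
def allocate(day, counts, cap=20):
    """Quasi-random allocation by day of attendance.
    Saturday/Monday/Wednesday -> RTH (experimental), other days -> control.
    Overflow handling (an arm already at `cap`) is an assumption, not
    stated in the paper: the patient goes to the open arm."""
    rth_days = {"Saturday", "Monday", "Wednesday"}
    arm = "RTH" if day in rth_days else "control"
    if counts[arm] >= cap:
        arm = "control" if arm == "RTH" else "RTH"
    counts[arm] += 1
    return arm

counts = {"RTH": 0, "control": 0}
first = allocate("Saturday", counts)
```

This style of day-of-attendance assignment is quasi-randomization rather than true randomization, which is a recognized limitation of such sequences.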
Statistical methods
Statistical analysis was conducted using the Statistical Package for the Social Sciences (SPSS 15.0.1 for Windows). Data normality was tested using the Kolmogorov-Smirnov test. Continuous variables were described as mean ± standard deviation in case of normal distribution and as median and interquartile range in case of non-normal distribution. Categorical data were presented as frequencies and proportions. Outcomes of the two groups were compared using Fisher's exact test. Pooled data from randomized controlled trials were analyzed using the Mantel-Haenszel method in a Rothman-Boice fixed-effect model meta-analysis. An alpha level below 0.05 was considered statistically significant. We followed the CONSORT statement guidelines during the preparation of this manuscript 17.
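Fisher's exact test on a 2×2 table can be computed from first principles via the hypergeometric distribution. This from-scratch sketch (not the SPSS routine) illustrates the comparison using the pain and PTM counts reported in the Results:

```python
from math import comb

def fisher_exact(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].
    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def prob(x):  # probability of a table with x in the top-left cell
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = prob(a)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Pain at the procedure site: 2/20 (RTH) vs. 12/20 (control)
p_pain = fisher_exact(2, 18, 12, 8)
# PTM: 2/20 (RTH) vs. 5/20 (control)
p_ptm = fisher_exact(2, 18, 5, 15)
```

On these counts the pain difference comes out highly significant while the PTM difference does not, in line with the p-values reported in the paper.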
RESULTS
Forty-eight patients were assessed for eligibility. Of them, 40 patients were recruited to the two groups (20 patients in each group). The CONSORT flow diagram of the study is shown in figure 1. The characteristics of the study population of both groups are presented in table 1. There was no statistically significant difference between the two groups in terms of age, gender, performance scores, or pathological type of the tumor.
The proportion of patients who developed PTM within the RTH field was lower in the experimental group compared to the control group (2/20 vs. 5/20; figure 2). However, this difference was not statistically significant (p=0.405). The development of PTM was not associated with the type of pleural procedure (p=0.698). The mean time till the development of PTM did not differ significantly between the two groups (RTH group: 7 months vs. control group: 6.3 months, p=0.864). The mean pain score of the RTH group was significantly lower than that of the control group (1.6 vs. 2.8, p=0.014). Moreover, the proportion of patients who complained of pain at the pleural procedure site was lower in the RTH group compared with the control group (2/20, 10% vs. 12/20, 60%; p=0.001).
In the RTH group, only two (10%) patients experienced grade one skin erythema.
DISCUSSION
This randomized controlled trial showed that prophylactic RTH to the site of pleural procedure might be beneficial for patients with MPM. In this study, the proportion of patients who developed PTM was lower in the RTH group than in the control group. However, this difference was not statistically significant. The pain score was significantly lower with prophylactic RTH. In terms of safety, no serious adverse events were reported, and RTH was well tolerated.
Other reports in the literature showed lower rates of PTM with prophylactic RTH. In the study conducted by Low et al, none of the 20 MPM patients who received local RTH developed PTM during a follow-up period ranging from 1 to 10 months 18. However, that study lacked a comparator group. Our findings are consistent with those of West et al, who found no PTM within the prophylactic RTH area in 37 MPM patients, except in two patients (5%) who developed invasion at the periphery of a previous RTH field 19.
In our study, the mean time till the development of PTM did not differ between the two groups (7 months in the RTH group vs. 6.3 months in the control group), which is similar to that of O'Rourke et al, who reported a median time of 2.4 and 6.4 months for the RTH and control groups, respectively, with no significant difference 20.
Our study showed that patients who received prophylactic RTH had significantly less pain than those in the control group. Moreover, the proportion of patients who complained of pain was significantly lower in the RTH group. This highlights the effectiveness of RTH in reducing pain. We do not have an explanation for the discrepancy between the non-significant PTM prevention and the significant pain reduction.
Three randomized controlled trials including relatively small sample sizes investigated the role of prophylactic RTH in reducing PTM 16,20,21. Our findings are consistent with those of Boutin et al 21 and Bydder et al 16 but not with those of O'Rourke et al 20. In the study of Boutin et al, forty patients were randomized to the RTH group (n=20) or the control group (n=20) 21. No patients (0%) in the RTH group developed PTM, but 8 (40%) patients in the control group developed PTM. Our study differs from that of Boutin et al in the types of pleural procedures included. In our study, only pleural biopsy and tube insertion were included because thoracoscopy was not performed in our center during the study period. Bydder et al 16 randomized 43 MPM patients to receive a 10-Gy single dose of prophylactic RTH vs. no RTH. The proportion of patients who developed PTM in the RTH group was lower than in the control group (7% vs. 10%, respectively). However, this difference was not significant. It should be noted that they used a different RTH regimen (10-Gy single fraction using 9 MeV).
In the third trial, done by O'Rourke et al, 61 patients were randomized to prophylactic RTH vs. no RTH 20. The proportion of PTM in the RTH group (7/31, 23%) was higher than that in the control group (3/30, 10%). These results contradict our findings and those of the other two randomized controlled trials. This may be explained by the fact that O'Rourke et al 20 delivered prophylactic RTH within 21 days of the pleural procedure, while it was delivered within 15 days in our trial and in the other two trials 16,21. This suggests that the timing of prophylactic RTH is a contributing factor to its efficacy.
The result of the pooled analysis of the abovementioned three randomized controlled trials, in addition to ours, is not in favor of using prophylactic RTH to prevent PTM (RR 0.59, 95% CI: 0.29-1.18, p=0.13).
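The Mantel-Haenszel pooled risk ratio behind this estimate can be approximated as below. The per-arm sizes for Bydder et al are not given in this text, so a near-even split with integer event counts (roughly 7% vs. 10%) is assumed; the result therefore only approximates, rather than reproduces, the reported RR of 0.59:

```python
def mh_risk_ratio(strata):
    """Mantel-Haenszel pooled risk ratio over 2x2 strata.
    Each stratum: (events_rth, n_rth, events_ctrl, n_ctrl)."""
    num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
    den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
    return num / den

# PTM counts from the trials discussed in the text. Bydder et al's arm
# sizes are assumed (21 vs. 22) for illustration only.
trials = [
    (2, 20, 5, 20),   # present study
    (0, 20, 8, 20),   # Boutin et al
    (2, 21, 2, 22),   # Bydder et al (assumed arms, ~7% vs. ~10%)
    (7, 31, 3, 30),   # O'Rourke et al
]
rr = mh_risk_ratio(trials)
```

Restricting the pooling to the three trials that delivered RTH within 15 days (dropping O'Rourke et al) yields a noticeably lower pooled risk ratio, mirroring the subgroup analysis described below.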
There was significant heterogeneity in these data, which was resolved by subgroup analysis into RTH within 15 days vs. RTH within 21 days (figure 3). Prophylactic RTH was significantly superior to no RTH in reducing PTM in the subgroup of studies where RTH was given within 15 days. This difference between the two subgroups (RTH within 15 days vs. RTH within 21 days) was statistically significant (p=0.01).
According to the recommendations of the European Society for Medical Oncology (ESMO) 2015 for the diagnosis, treatment, and follow-up of MPM, evidence about the efficacy of prophylactic RTH in preventing PTM is controversial, and it should not be routinely applied 22.
Based on the results of our randomized controlled trial and those of previous trials, we believe that prophylactic RTH to the pleural procedure site might be effective in preventing PTM. However, the current evidence is not sufficient to confirm its efficacy. Future studies should investigate the effect of RTH timing (within 15 days vs. 21 days) and RTH technique (10 Gy in a single fraction vs. 21 Gy in three fractions) on the efficacy of prophylactic RTH to prevent PTM. Additionally, the effect of prophylactic RTH on the pain and quality of life of MPM patients should be explored.
Conclusion
Data from our randomized controlled trial showed that prophylactic RTH to the pleural procedure site in MPM patients was not significantly effective in preventing or delaying PTM. However, our study showed that prophylactic RTH is effective in reducing pain at the procedure site.
Figure 1. The CONSORT flow diagram of study participants.
Figure 2: Proportion of patients who developed procedure tract metastasis in both groups.
Figure 3: Forest plot of the efficacy of prophylactic radiotherapy (RTH) in reducing procedure tract metastasis.
Table 1. Characteristics of patients (* Eastern Cooperative Oncology Group).
Social experience and pheromone receptor activity reprogram gene expression in sensory neurons
Abstract Social experience and pheromone signaling in olfactory neurons affect neuronal responses and male courtship behaviors in Drosophila. We previously showed that social experience and pheromone signaling modulate chromatin around behavioral switch gene fruitless, which encodes a transcription factor necessary and sufficient for male sexual behaviors. Fruitless drives social experience-dependent modulation of courtship behaviors and physiological sensory neuron responses to pheromone; however, the molecular mechanisms underlying this modulation of neural responses remain less clear. To identify the molecular mechanisms driving social experience-dependent changes in neuronal responses, we performed RNA-seq from antennal samples of mutants in pheromone receptors and fruitless, as well as grouped or isolated wild-type males. Genes affecting neuronal physiology and function, such as neurotransmitter receptors, ion channels, ion and membrane transporters, and odorant binding proteins are differentially regulated by social context and pheromone signaling. While we found that loss of pheromone detection only has small effects on differential promoter and exon usage within fruitless gene, many of the differentially regulated genes have Fruitless-binding sites or are bound by Fruitless in the nervous system. Recent studies showed that social experience and juvenile hormone signaling co-regulate fruitless chromatin to modify pheromone responses in olfactory neurons. Interestingly, genes involved in juvenile hormone metabolism are also misregulated in different social contexts and mutant backgrounds. Our results suggest that modulation of neuronal activity and behaviors in response to social experience and pheromone signaling likely arise due to large-scale changes in transcriptional programs for neuronal function downstream of behavioral switch gene function.
Introduction
Detection of the social environment through pheromone signaling is critical for animals to recalibrate sex-specific behaviors such as mating and aggression (Cushing and Kramer 2005;Curley et al. 2011;Dey et al. 2015;Sethi et al. 2019). It is thought that changes in social environment can modify the regulation of genes necessary for neuronal homeostasis, physiology, and transmission, ultimately affecting circuit function and behaviors (Cushing and Kramer 2005;Flavell and Greenberg 2008;West and Greenberg 2011). Previous studies on the effects of early life experience have identified changes in neuroanatomy, synaptic plasticity, neurotransmission, and gene expression. For example, maternal licking and grooming of pups increase DNA methylation around glucocorticoid receptor gene, leading to long-lasting effects on offspring stress responses and behaviors (Weaver et al. 2004;McGowan et al. 2009;Mifsud et al. 2011). However, transcriptional cascades driving sensory and social experience-dependent modulation of gene expression, circuit function, and behaviors remain unclear.
Identifying gene regulation cascades by which social signals influence neural and behavioral responses requires a model system with well-defined circuits and genetic regulators with roles in neurophysiology, circuit structure, and behavioral function. Circuitry for courtship behavior in Drosophila melanogaster is an excellent experimental system to address this question. In Drosophila, male-specific courtship behaviors are governed by a critical transcriptional regulator Fruitless M (Fru M), which is encoded by the male-specific alternative splicing of the fruitless (fru) gene from the P1 promoter (Dickson 2008). It is known that Fru M is both necessary and sufficient for male courtship, as loss of Fru M in males leads to a loss of male-female courtship (Ryner et al. 1996; Demir and Dickson 2005; Von Philipsborn et al. 2014). Fru M is expressed in approximately 2,000 interconnected neurons throughout the peripheral and central nervous system, and its expression is required for the development, function, and plasticity of the circuit which drives male-specific behaviors (Yamamoto and Kohatsu 2017). In particular, social cues such as pheromones can affect courtship behaviors in males (Kurtovic et al. 2007; van Naters and Carlson 2007; Dweck et al. 2015; Lin et al. 2016; Yan et al. 2020). Two types of these pheromones, the male-specific pheromone cis-vaccenyl acetate and non-sex-specific pheromones (such as methyl laurate and palmitoleic acid), activate Fru M-positive olfactory receptor neurons (ORNs) expressing Or67d and Or47b receptors, respectively (Kurtovic et al. 2007; Dweck et al. 2015; Lin et al. 2016). These two ORN classes act differently, with Or67d regulating male-male repulsive behaviors and aggression and Or47b driving age- and social experience-dependent male copulation advantage (Wang et al. 2011; Dweck et al. 2015; Lin et al. 2016; Sethi et al. 2019).
Previous studies have reported that different social contexts, as well as loss of Or47b or Or67d function, alter the regulation of fru transcription, particularly the enrichment of active chromatin marks around fru promoters (Hueston et al. 2016; Zhao et al. 2020). In addition, the expression of fru M isoforms in Or47b and Or67d ORNs affects physiological responses to pheromone ligands and courtship behaviors (Lin et al. 2016; Ng et al. 2019; Sethi et al. 2019; Zhang et al. 2020). It is likely that changes in social context and pheromone signaling, as well as subsequent changes in fru regulation, affect the expression of ion channels and neurotransmitter receptors regulating neurophysiology. Indeed, Fru M binding is detected upstream of many ion channels and genes controlling neural development and function in the central brain (Nojima et al. 2014; Vernes 2014). Even though these studies point to the regulation of neuronal and circuit function by Fru M , very little is known about how it affects the expression of these target genes, or how pheromone signaling and social experience affect transcriptional programs by modulating Fru M .
Here, we performed antennal RNA-seq to determine transcriptional changes in response to social isolation and mutants in pheromone receptors or Fru M . Our results showed small modifications to fru exon and promoter usage in pheromone receptor mutants. Larger changes were detected in fru M mutants, suggesting adaptive changes to the fru isoform pool in the absence of the male isoforms. We also found that transcriptional programs associated with neural activity and function were altered. Many of the Fru M target genes involved in regulating membrane potentials and synaptic transmission were misregulated in the same direction in fru M and pheromone receptor mutants. These results uncover a gene regulatory cascade from pheromone receptors to transcriptional programs that alter neuronal responses in different social contexts, potentially through changes in Fruitless function.
Fly genetics and genotypes
Flies were raised on standard fly food (containing yeast, cornmeal, agar, and molasses) at 25°C in a 12-hour light/12-hour dark cycle in cylindrical plastic vials (diameter, 24 mm and height, 94 mm).
The genotypes used for in vivo imaging validation are listed in Table 1.
RNA-seq
RNA-seq was performed as previously described. Male flies were aged for 7 days, and the third antennal segment was dissected (∼180 antennae per genotype). RNA was extracted from dissected tissue samples using a Qiagen RNeasy extraction kit, quantified using a Qubit RNA assay kit, and checked for quality using a High Sensitivity RNA ScreenTape on a TapeStation (Agilent). RNA integrity scores were typically 7.0 or greater. 1 μg of RNA was used to construct libraries for sequencing using a KAPA mRNA library prep kit with polyA RNA selection. Barcoded libraries were sequenced on a NovaSeq 6000 SP (50 bp reads) following the manufacturer's instructions (Illumina). After demultiplexing, sequence quality was assessed using FastQC (version 0.11.9). While there were issues with under-clustering of the samples and unbalanced pools, the data quality was typical for RNA extracted from fresh-frozen material. The unbalanced pools resulted in differences in sequencing depth between samples. The raw data from the antennal RNA-seq experiments in this study are publicly available in GEO (accession GSE179213).
Analysis of RNA-seq data
Once sequenced, reads were preprocessed with fastp to remove adaptors and trim/filter for quality. These were mapped to the dm6 reference genome using MapSplice2, with individual mapping rates exceeding 98% in all cases. The raw alignment was deduplicated and filtered for mapping quality and correct pairing; additional alignments were generated to confirm that results are robust to mapping ambiguity. Mapped reads were assigned to genes in the annotation using the featureCounts command from the Subread package (Liao et al. 2014). Differential expression was modeled with DESeq2 using the "apeglm" shrinkage estimator, and data were processed and visualized in R using the tidyverse framework, supplemented with the biomaRt, ComplexHeatmap, and UpSet packages. The Bioconductor package DEXSeq was used to test for differential exon usage under models corresponding to those used for differential gene expression (Anders et al. 2012). From the genome-wide test, the fruitless locus was examined in particular.
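As a conceptual illustration (not the DESeq2 model itself, which fits negative-binomial GLMs and applies apeglm shrinkage), the direction and magnitude comparison underlying a log 2 fold change can be sketched in Python with hypothetical counts:

```python
import math

def cpm(counts):
    """Counts-per-million normalization for one sample (list of raw counts)."""
    total = sum(counts)
    return [c * 1e6 / total for c in counts]

def log2_fold_change(treated, control, pseudocount=1.0):
    """Naive per-gene log2 fold change between mean CPM of two replicate groups.

    DESeq2 additionally models dispersion and shrinks estimates; this sketch
    only illustrates the direction/magnitude comparison.
    """
    def mean_cpm(samples):
        norm = [cpm(s) for s in samples]
        n_genes = len(norm[0])
        return [sum(s[i] for s in norm) / len(norm) for i in range(n_genes)]
    t, c = mean_cpm(treated), mean_cpm(control)
    return [math.log2((ti + pseudocount) / (ci + pseudocount))
            for ti, ci in zip(t, c)]

# two genes, two replicates per condition (hypothetical counts)
control = [[100, 900], [120, 880]]
mutant = [[400, 600], [380, 620]]
lfc = log2_fold_change(mutant, control)  # gene 0 up, gene 1 down
```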
Statistical analysis
Adjusted P-values were calculated directly from DESeq2 or DEXSeq (Supplementary Table 1). Other statistical analyses are described in the legends of the corresponding figures.
Specifically, to compare exon usage in Fig. 4, we also calculated P-values from post hoc t-tests on raw read counts, independently comparing group-housed male antennae to each experimental condition at individual exon segments (regions 1-22, see Table 2). Even though many exon-level differences were significant by this method, adjusted P-values from DEXSeq yielded fewer significantly altered exon levels.
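The post hoc comparison on raw read counts can be sketched as a Welch's t statistic computed per exon segment (counts below are hypothetical; the published analysis additionally relies on DEXSeq's adjusted P-values):

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se2 = va / len(a) + vb / len(b)
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite degrees of freedom
    df = se2 ** 2 / (va ** 2 / (len(a) ** 2 * (len(a) - 1))
                     + vb ** 2 / (len(b) ** 2 * (len(b) - 1)))
    return t, df

# hypothetical raw read counts for one exon segment,
# group-housed wild type vs an experimental condition
gh = [520, 498, 510]
mut = [300, 320, 290]
t, df = welch_t(gh, mut)
```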
Quantitative reverse transcription PCR (qRT-PCR)
The qRT-PCR protocol was modified from the previous protocol of the Volkan lab. For each genotype (the same as for RNA-seq), four biological replicates were prepared separately, with each replicate containing 100 antennae from 50 males (7 days old). Antennae were dissected on a Flypad and transferred into TRIzol (Invitrogen, 15596026) immediately. Total antennal RNA was extracted using the RNeasy Mini Kit (QIAGEN, 74104) and treated with DNase I (TURBO DNA-free Kit, Invitrogen, Thermo Fisher Scientific AM1907) to remove genomic DNA. cDNA was generated by reverse transcription of 80-150 ng total RNA using the SuperScript IV First-Strand Synthesis Kit (Invitrogen, 18091050) with poly d(T) as transcription primers. qPCR was performed using the FastStart Essential DNA Green Master kit (Roche, 06924204001) on a LightCycler 96 instrument (Roche, 05815916001). Primers used are listed in Table 3. Expression levels were calculated by the ΔCt method using fl(2)d as the reference gene. Calculations were performed in GraphPad Prism. One-way ANOVA was used for significance testing, followed by multiple comparisons (comparing other groups to group-housed wild types, w 1118 GH). *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001.
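The ΔCt calculation described above reduces to a one-line formula; the Ct values below are hypothetical:

```python
def relative_expression(ct_target, ct_reference):
    """Relative expression by the delta-Ct method: 2^-(Ct_target - Ct_ref).

    fl(2)d serves as the reference gene in the protocol above; the Ct values
    used here are made up for illustration only.
    """
    return 2 ** -(ct_target - ct_reference)

# hypothetical Ct values for one biological replicate
ct_target = 24.0   # target gene
ct_fl2d = 20.0     # reference gene fl(2)d
rel = relative_expression(ct_target, ct_fl2d)  # 2^-4 = 0.0625
```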
In vivo validation of gene expression
Fly heads were dissected in cold PBT (phosphate-buffered saline with Triton X-100) and fixed in 4% paraformaldehyde (PFA) on a nutator at room temperature for 1 hour. Heads were washed three times with fresh PBT, 10 min per wash, at room temperature. Antennae were then dissected from the heads and fixed in 4% PFA on a nutator at room temperature for 30 min, followed by three 10-min washes with fresh PBT at room temperature. Antennae were mounted using Fluoromount-G Slide Mounting Medium (SouthernBiotech). Images were taken on an Olympus FluoView FV1000 confocal microscope. Control and experimental groups were imaged with the same parameters. Native fluorescence was measured with ImageJ. Fluorescence intensity was defined as the fluorescence of the region of interest minus that of the background. Statistical tests were performed in GraphPad Prism. One-way ANOVA was used for significance testing, followed by multiple comparisons (comparing other groups to the group-housed control). *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001.
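The background subtraction used for fluorescence quantification can be sketched as follows (pixel values are hypothetical grey levels):

```python
def mean_intensity(pixels):
    """Mean grey level of a set of pixel values."""
    return sum(pixels) / len(pixels)

def corrected_fluorescence(roi_pixels, background_pixels):
    """Background-subtracted fluorescence, as defined in the text:
    mean intensity of the region of interest minus that of the background."""
    return mean_intensity(roi_pixels) - mean_intensity(background_pixels)

# hypothetical pixel values from an antennal image
roi = [120, 130, 125, 135]
background = [20, 22, 18, 20]
signal = corrected_fluorescence(roi, background)  # 127.5 - 20.0 = 107.5
```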
Neuronal transcriptional programs are modulated with social isolation and lack of pheromone receptors or Fru M function
To identify genes regulated in the peripheral olfactory system by social experience, pheromone signaling, and Fru M , we utilized RNA-seq from whole antennae of 7-day-old wild-type (w 1118 ) males that were either group-housed (w 1118 GH) or single-housed (w 1118 SH), as well as group-housed Or47b mutant males (Or47b 1 ), Or67d mutant males (Or67d GAL4 ), and fru M mutant males (fru LexA /fru 4-40 ) (Fig. 1a). As noted in Materials and Methods and discussed in the Discussion, these genetic backgrounds differ modestly, within the range of typical variation among Drosophila (Fig. 1b). Each sample had between 24 and 40 million mapped reads, and hierarchical clustering analysis based on Pearson's correlation between samples showed consistency among replicates within the same genotype (Fig. 1b). Principal component analysis (PCA) also showed the expected grouping of replicates belonging to the same condition across the first two principal components, which account for most of the overall variance (32% and 19%) (Fig. 1c). We also found that gene expression changes were more similar among Or67d, Or47b, and fru M mutants than between these mutants and grouped or isolated wild-type male antennae (Fig. 1b, c). As expected, expression levels of Or47b, Or67d, and the male-specific fru exon were significantly lower in all replicates of Or47b, Or67d, and fru M mutants, respectively, though no change was detectable for the fru gene locus as a whole (Fig. 1d, Fig. 4b, and Supplementary Fig. 2), validating genotype-specific changes in each condition. In addition, genes known to be absent in adult antennae, such as amos (Goulding et al. 2000; Zur Lage et al. 2003; Li et al. 2016), showed nearly no expression, whereas housekeeping genes, like Act5C, Gapdh2, RpII18, fl(2)d, and wkd, showed nearly identical expression across all samples (Fig. 1d). These results point to high RNA-seq data quality across sample groups and within biological replicates.
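The replicate-consistency check based on Pearson's correlation can be illustrated with a minimal computation (expression vectors here are hypothetical; the actual analysis used genome-wide profiles and hierarchical clustering):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical expression vectors (three genes) for two replicates
rep1 = [100.0, 50.0, 10.0]
rep2 = [110.0, 48.0, 12.0]
r = pearson(rep1, rep2)  # close to 1 for consistent replicates
```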
Or47b ORNs were previously shown to degenerate in 14-day-old Or47b mutant flies. To test whether the transcriptional changes might instead reflect a decrease in ORN numbers in our 7-day-old antennal samples, we counted Or47b and Or67d ORNs. The total numbers of Or47b and Or67d ORNs were comparable between controls and the Or mutants (Supplementary Fig. 1). These results suggest that the transcriptional changes are due mainly to loss of Or function rather than to changes in ORN numbers.
We then ran differential expression analysis to globally examine the transcriptional changes upon loss of social experience, pheromone sensing, or Fru M function. Compared to group-housed wild-type antennae, social isolation produced the fewest significantly altered genes, whereas group-housed fru M mutants produced the most (Table 1). Given that fru M mutants had a smaller sample size in the experiment, this observation needs to be treated with some caution; however, it seems unlikely that lower statistical power in the fru M samples would produce such an excess of differentially transcribed genes. Pairwise comparisons of group-housed wild types to isolated wild types and to Or47b, Or67d, and fru M mutants revealed that genes co-regulated by pheromone receptors and, acknowledging the smaller sample size, Fru M tended to change in the same direction in the corresponding mutants (Fig. 2a, d), suggesting shared downstream signaling pathways upon pheromone receptor activation and Fru M -dependent regulation. The numbers of genes with significant differential expression in the same direction shared by each condition compared to group-housed wild types are illustrated in a Venn diagram and UpSet plot (Fig. 2b, c and Supplementary Table 2), where genes with overlapping changes in social isolation and Or47b, Or67d, and fru M mutants are highlighted. Notably, only one gene, CG13659, an ecdysteroid kinase-like domain-encoding gene, was consistently changed across all experimental conditions compared to antennae from group-housed wild-type males (Fig. 2b).
(Fig. 2 legend fragment: hierarchically clustered heatmaps showing log 2 fold change compared to group-housed wild-type antennae across all experimental conditions (right) and average mRNA levels (RPKM, reads per kilobase of transcript, per million mapped reads) of replicates within each condition, ordered as in the log 2 fold change panel (left). Only 2,999 genes with at least one significant (adjusted P-value below 0.01) change between an experimental condition and group-housed wild types are shown.)
Hierarchical cluster analysis of differentially expressed genes compared to group-housed wild-type samples showed that the transcriptional changes in fru M and Or mutants were most comparable with one another and most dramatically different from the control (Fig. 2e). Single-housed wild types were most similar to group-housed wild types (Fig. 2e). Cluster analysis identified several genes with behavioral, neurophysiological, and developmental functions, such as Cytochrome p450 6a20 (Cyp6a20), serotonin receptor 2A (5-HT2A), Juvenile hormone esterase (Jhe), and Dpr-interacting protein alpha (DIP-alpha) (Fig. 2e) (Liu et al. 2008; Wang et al. 2008; Johnson et al. 2009; Carrillo et al. 2015). Among these, antennal expression of Cyp6a20, which is downregulated in Or47b, Or67d, and fru M mutants, was previously shown to mediate effects of social experience on male-male aggression (Fig. 2e). On the other hand, Cyp4p2, which is involved in hormone metabolism and insecticide detoxification (Seong et al. 2018; Seong et al. 2019; Scanlan et al. 2020), is misregulated only in Or47b mutants (Fig. 2e). In addition to the downregulated genes, we also found genes encoding ion channels and neurotransmitter receptors that were significantly upregulated (ppk25 and GluRIIA) (Fig. 2e). The heatmap of gene expression changes revealed gene clusters co-regulated by pheromone receptors and Fru M , in addition to gene clusters uniquely regulated by each OR and by Fru M ; this again highlights that the co-regulated genes tend to change in the same direction in pheromone receptor and fru M mutants.
Gene ontology terms for differentially expressed genes in response to lack of social and pheromone signaling highlight neuromodulators
Previous work has demonstrated that social experience, pheromone signaling, and Fru M activity can regulate the responsiveness of pheromone-sensing ORNs to modify neuronal function and sex-specific behaviors (Table 3). To ask which functional classes of genes are affected, we examined gene ontology (GO) terms for the differentially expressed genes. Many GO terms for molecular function and biological process were commonly affected across multiple experimental groups, suggesting converging downstream molecular events in response to social experience and pheromone sensing mediated by Fru M activity (Fig. 3). Strikingly, genes with altered expression tended to be localized to the cell membrane (Fig. 3, GO: cellular component), to function in ion transport across membranes (Fig. 3, GO: molecular function), and to be involved in detecting and responding to olfactory stimuli (Fig. 3, GO: biological process). This supports previous studies in providing a general mechanism for social experience, pheromone receptor signaling, and Fru M -dependent regulation of pheromone responsiveness of Or47b ORNs (Sethi et al. 2019; Zhang et al. 2020; Zhao et al. 2020). Furthermore, genes with oxidoreductase activity also had overlapping alterations across Or47b, Or67d, and fru M mutants, and many of these appeared to contribute to insect hormone metabolism (Fig. 3, GO: molecular function). Interestingly, previous studies reported that juvenile hormone signaling works together with social experience in olfactory receptor neurons to modulate chromatin around the fru locus (Sethi et al. 2019; Zhao et al. 2020). Our RNA-seq results add an additional layer of complexity to hormone-social experience interactions, as social experience and pheromone signaling may affect the levels of certain hormones by modifying hormone metabolism dynamics.
In summary, social isolation, disrupted pheromone receptor signaling, and lack of Fru M function in peripheral olfactory sensory neurons affect the expression of many genes with roles in diverse aspects of neurophysiology, including neuronal responsiveness, ion transmembrane transport, and beyond.
Loss of pheromone signaling alters fruitless splicing patterns and doublesex expression
The fruitless locus (containing multiple promoters, untranslated regions, and coding sequences; regions 1-22 are denoted in Fig. 4a) generates multiple alternatively spliced isoforms from RNAs transcribed from seven promoters (P1-P6 and PD) (Fig. 4a). The transcripts from fru P1 are alternatively spliced between males and females, where the male isoforms (fru M ) encode functional proteins while the female isoforms (fru F ) do not produce any proteins (Dickson 2008; Yamamoto and Koganezawa 2013) (Fig. 4a). The expression of fru M in males and the absence of functional fru F transcripts in females help define male- and female-specific neuronal pathways as well as the cell-specific expression patterns of genes regulated by Fru M . Promoters fru P2 through fru P6 produce common isoforms in both males and females that also affect sex-specific activity in courtship circuits of both sexes (Fig. 4a). Fru M itself has multiple splicing isoforms that vary at the 3′ end of the mRNA (fru MA , fru MB , and fru MC ), which encode Fru M transcription factor proteins with variable zinc finger DNA-binding domains (Neville et al. 2014; Vernes 2014). These regulate different aspects of the circuit controlling courtship behaviors, with Fru MC and Fru MB having the highest behavioral overlap and Fru MA having little to no effect on courtship. We previously showed that social experience and signaling from Or47b and Or67d pheromone receptors alter open chromatin marks around the fru P1 promoter in the male antennae (Zhao et al. 2020). Interestingly, examination of total transcript levels for the entire fru gene locus showed little to no difference across experimental conditions (Fig. 1d). These small changes in total transcript levels, despite dramatic changes in open chromatin marks in wild-type SH and mutants in Or47b, Or67d, and fru M , prompted us to look at other aspects of gene regulation.
It is known that changes in chromatin regulate many aspects of transcription, such as transcriptional initiation, elongation, and alternative splicing (Hall and Georgel 2011; Naftelberg et al. 2015). The effects of chromatin on splicing are thought to arise largely because chromatin state alters the speed of RNA Polymerase II (RNAPII), which can lead to splicing mistakes like intron retention or exon skipping (Hall and Georgel 2011).
Given the functional differences among the fru M isoforms, we predicted that chromatin changes caused by social experience and pheromone receptor signaling could alter fru splicing. To explore this, we mapped reads from all experimental conditions to the fru genomic locus and investigated exon usage levels using DEXSeq (Anders et al. 2012). In general, transcript reads from the fru locus appear noisier in experimental conditions compared to group-housed wild-type male antennae, with variations in the expression of coding and non-coding sequences (Fig. 4b-e). In Or47b mutants, there is a small decrease in fru P1 promoter (region 1) and male-specific exon (region 2) levels (Fig. 4c, see Methods, statistical analysis). Or67d mutants show a small decrease in fru P1 promoter (region 1) and male-specific exon (region 2) levels (Fig. 4d, see Methods, statistical analysis). The largest change in male-specific exon (region 2) levels is seen in the fru LexA /fru 4-40 allele (Fig. 4b), which has a LexA transgene inserted into the first codon of the fru M open reading frame within the male-specific exon (region 2) and a 70-kb deletion from promoter P1 to P3 (Mellert et al. 2010). Surprisingly, fru LexA /fru 4-40 mutants showed a disproportionate increase of several 3′-end exons (regions 18, 20, and 22) (Fig. 4b).
This suggests adaptive changes in the fru isoform pool in the absence of the fru male isoforms. The fru P1 promoter (region 1) and the male-specific exon (region 2) are unaltered in socially isolated antennae, yet there is a small increase in the female-specific exon (region 3) (Fig. 4e, see Methods, statistical analysis).
In addition to the first three exons, a non-coding sequence (region 18, C5RA) (Fig. 4a), which is present only in exon C5 of the fru-RA transcript, slightly increases in Or67d and Or47b mutants, as shown by exon usage quantification (Fig. 4c, d) and read coverage of the region 18 locus (Fig. 4f). This transcript encodes a Fru protein that lacks zinc finger domains but retains the BTB/PDZ protein-protein interaction domain (Fig. 4a). It is possible that this isoform interferes with the transcriptional functions of Fru M proteins by binding and titrating out their interaction partners, such as other transcription factors, chromatin modulators, and the basal transcriptional machinery (Ito et al. 2012; Chowdhury et al. 2017; Zhang et al. 2018; Sato et al. 2019). Both fru-RA and fru-RL transcripts use the P4 promoter. Even though we see small but significant differences in the RA-specific 3′UTR in Or47b and Or67d mutants, the effect on the RL transcript is even smaller than on the RA transcript. In addition, our qRT-PCR analysis of fru-RA exon levels was inconsistent and only sometimes reproduced the RNA-seq results relative to control genes. This may reflect differences between the DEXSeq and qRT-PCR analyses: DEXSeq measures usage across the whole UTR, whereas qPCR detects only a 100-150 bp amplicon, and a difference accumulated along the whole UTR might not be detectable by qPCR. Given that our RNA-seq is from whole antennal samples, these differences might be larger and more salient at the level of individual ORNs, and future experiments examining transcriptional profiles of single-ORN populations, or detailed in situ hybridization experiments analyzing expression of each fru splice isoform in antennal tissue, will help determine the extent and cell-type specificity of these alterations. These results suggest that social and pheromonal cues have modest effects on fru exon and promoter usage at the antennal RNA level.
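The notion of relative exon usage that DEXSeq formalizes can be illustrated with a toy computation (this is not the DEXSeq GLM; the segment counts below are hypothetical):

```python
def exon_usage(exon_counts):
    """Fraction of a locus's reads falling in each exon segment."""
    total = sum(exon_counts)
    return [c / total for c in exon_counts]

# hypothetical read counts for four fru exon segments in two conditions
wild_type = [200, 100, 50, 650]
mutant = [180, 20, 55, 745]
usage_wt = exon_usage(wild_type)   # segment 2: 100/1000 = 0.10
usage_mut = exon_usage(mutant)     # segment 2 drops to 20/1000 = 0.02
```

A drop in a segment's usage fraction, even when the locus total is unchanged, is the kind of signal DEXSeq tests for.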
Another sex-determination transcription factor known to regulate sex-specific behaviors is doublesex (dsx) (Villella and Hall 1996; Waterbury et al. 1999; Billeter et al. 2006; Kimura et al. 2008; Rideout et al. 2010; Robinett et al. 2010; Dauwalder 2011; Pan et al. 2011; Pan and Baker 2014). dsx expression in the antenna is restricted to non-neuronal cells (Robinett et al. 2010). We found that the expression of dsx in the antenna is significantly increased in Or and fru M mutants, albeit much more pronouncedly in Or67d and fru M mutants (Supplementary Fig. 2b). Social isolation did not alter the expression of dsx in antennae (Supplementary Fig. 2b, d). These results suggest that the expression of dsx in antennae is repressed by Or47b, Or67d, and fru M functions.
Collectively, our results suggest that the expression of two critical transcription factors, Fru and Dsx, which regulate sex-specific behaviors, is modulated by pheromone signaling.
Bimodal regulation of genes regulating neurophysiology and neurotransmission by Fru M and pheromone receptor signaling
Previous studies have shown that pheromone receptor signaling and social experience-dependent regulation of chromatin and RNAPII enrichment around the fru P1 promoter can ultimately scale and fine-tune behavioral responses to the social environment (Sethi et al. 2019; Zhao et al. 2020). Additionally, previous reports on the genome-wide binding profiles of the three Fru M isoforms in the central brain revealed isoform-specific differences in target genes that regulate neuronal development and function (Billeter et al. 2006; Neville et al. 2014). Fru M motifs are enriched among regulatory elements that are open in the female but closed in the male, suggesting that Fru M functions as a possible transcriptional repressor (Brovkina et al. 2021). Functional differences among Fru M isoforms also influence ORN responses to their pheromone ligands (Zhang et al. 2020). Thus, chromatin-based modulation of fru levels and splicing with social experience and pheromone signaling can provide a quick way to modulate neuronal physiology and synaptic communication by modifying gene expression programs. Yet the effect of social experience and pheromone receptor signaling on gene expression programs, and the mode of gene regulation by Fru M (as a transcriptional activator, repressor, or both), remain unclear (Dalton et al. 2013; Neville et al. 2014; Vernes 2014).
As discussed previously, gene ontology analysis of the differentially expressed genes implies that many genes involved in regulating neural activity are regulated by social context, pheromone receptor signaling, and Fru M function. To investigate this further, we specifically focused on genes associated with ion channel activity and/or neurotransmitter regulation (Fig. 5a, b and Fig. 6a, b). We clustered these genes based on their log 2 fold change in transcript levels compared to group-housed wild types in each experimental condition, while also showing their corresponding expression levels in the antennae (RPKM, reads per kilobase of transcript, per million mapped reads) (Fig. 5a, b and Fig. 6a, b). We also used single-cell RNA-seq data to provide additional evidence of the ORN-specific expression patterns of the genes that show differential expression in different social and mutant conditions. We found that many ion channel and/or neurotransmitter receptor-encoding genes were up- or downregulated in response to social isolation and loss of Or47b, Or67d, or Fru M function (Fig. 5a, b and Fig. 6a, b). Within the ion channels, two subclasses stood out: the Degenerin/Epithelial Sodium Channel (DEG/ENaC) proteins known as pickpockets (Ppks), and the inwardly rectifying potassium channels (Irks). Additional genes include those encoding calcium channels, for example, Piezo, TrpA1, and cacophony (cac) (Fig. 5a, b).
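RPKM, as defined above, can be computed directly from a gene's read count, its transcript length, and the library size (the numbers below are hypothetical):

```python
def rpkm(count, gene_length_bp, total_mapped_reads):
    """Reads per kilobase of transcript, per million mapped reads."""
    return count / (gene_length_bp / 1000) / (total_mapped_reads / 1e6)

# hypothetical: 500 reads on a 2 kb transcript in a 25-million-read library
value = rpkm(500, 2000, 25_000_000)  # 500 / 2 / 25 = 10.0
```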
Ppk family
We specifically focused on two ion channel families: the pickpocket family of sodium channels and the potassium channels. Recent reports pointed to the function of DEG/ENaC channels known as the pickpocket family of sodium channels, which act in Or47b and Or67d ORNs to regulate responses to their ligands (Zhang et al. 2020). Fru M -binding motifs have been identified around many ppk family members, such as ppk, ppk5, ppk6, ppk15, ppk19, ppk23, ppk25, and ppk30 (Dalton et al. 2013; Neville et al. 2014; Vernes 2014). Both ppk23 and ppk25 have been identified as necessary for modulating responses of Or47b ORNs through Fru MB and Fru MC activity, respectively, with Fru MB having an antagonistic effect on physiology in Or67d ORNs (Ng et al. 2019; Zhang et al. 2020). In group-housed wild-type antennae, ppks show generally low expression based on our transcriptome analysis as well as recent single-ORN RNA-seq data (McLaughlin et al. 2021), with ppk5 displaying the highest levels (Fig. 5c). Many ppk genes are differentially regulated in fru M mutants, in agreement with existing Fru M -binding sites at their promoters. For example, ppk6 and ppk25 are upregulated in fru M mutants, whereas ppk5, 7, 13, 14, 15, and 19 are downregulated. The bimodal changes in ppk transcripts in fru M mutants suggest that Fru M can act as both a repressor and an activator of ppk gene regulation. ppk13, 14, 15, 19, and 25 also show correlated changes in Or47b and/or Or67d mutants. ppk6 is strikingly upregulated in both fru M and Or67d mutants, whereas ppk7 is downregulated in both Or47b and fru M mutants (Fig. 5c'). Of note is the significant increase in ppk25 expression, especially in Or67d mutants, which we also confirmed by quantitative RT-PCR (Fig. 5 and Supplementary Fig. 3c-e). ppk25 is expressed in Or47b and Ir84a ORNs, but not Or67d ORNs, and has been shown to act downstream of Or47b and Ir84a activity, altering their neuronal responses (Lin et al. 2005; Starostina et al. 2012; Ng et al. 2019) (Fig. 5c).
The shared and mutant-specific patterns of ppk gene misregulation in fru M , Or47b, and Or67d mutants suggest that loss of pheromone receptor function and Fru M activity alters the expression of ppk genes in the antennae, contributing to changes in physiological responses.
Irk gene family
The Irk gene family encodes three inwardly rectifying potassium channels (Irk1-3), with binding motifs for Fru MA identified upstream of Irk2 and binding of both Fru MA and Fru MC found around Irk3 (Dalton et al. 2013; Neville et al. 2014; Vernes 2014). The three Irk genes are expressed at varying levels in the antennae, with Irk1 having the lowest expression and Irk2 the highest (Fig. 5d). We found that Irk1 is upregulated in Or47b mutants, whereas Irk2 trends towards upregulation in response to social isolation (Fig. 5d').
These results suggest that changes in the transcript levels of Fru M -regulated sodium and potassium channels with social isolation and in pheromone receptor mutants may contribute to changes in neuronal responses and behaviors.
Regulators of neurotransmission
To ask if social experience, pheromone signaling, and Fru M function regulate genes involved in neurotransmission, we next examined the expression of neurotransmitter receptors, transporters, and enzymes for neurotransmitter metabolism. ORNs in the antennae, as well as their projection neuron targets in the antennal lobes, are mostly cholinergic (Wilson 2013). In the antennal lobe, local interneurons, which include serotonergic, GABAergic, and glutamatergic interneurons, have been shown to provide cross talk between synaptic partners in the antennal lobe glomeruli (Chou et al. 2010; Wilson 2013). These neurons form connections with both presynaptic ORNs and their postsynaptic partner projection neurons to modulate neuronal responses across glomeruli (Wang et al. 2003; Olsen et al. 2007; Wilson 2013). These connections are required for fine-tuning of signaling at synapses as a means of rapid modulation of neuronal function (Wong et al. 2002; Wang et al. 2003; Olsen et al. 2007; Dacks et al. 2009; Johnson et al. 2009; Sudhakaran et al. 2012; Sizemore and Dacks 2016; Mohamed et al. 2019; Zhang et al. 2019; Suzuki et al. 2020). We found high expression of choline acetyltransferase (ChAT), which catalyzes acetylcholine biosynthesis, and VAChT, which packages acetylcholine into synaptic vesicles, coinciding with the reported cholinergic identity of ORNs. Moreover, we also found relatively high expression of several genes encoding receptors for various neurotransmitters, such as acetylcholine, serotonin (5-HT), GABA, and glutamate (Fig. 6c-f'). Many of these genes, such as nAChRalpha4/5, 5-HT2A, 5-HT7, GABA-B-R2, and GluRIIA, have previously been found to regulate courtship behavior in flies through signaling in the antennal lobe (Johnson et al. 2011; Clowney et al. 2015; Suzuki et al. 2020). Interestingly, GABA-B-R2 was shown to be specifically involved in presynaptic gain control of Or47b ORNs (Root et al. 2008).
Additionally, single-cell RNA-seq data show that some neurotransmitter genes, like GluRIIB and 5-HT2B, are broadly expressed, while others are specific to a subset of ORN classes (McLaughlin et al. 2021) (Supplementary Fig. 4). Overall, many of the genes encoding neurotransmitter receptors show expression changes in the different experimental conditions (Fig. 6b).
Focusing on genes related to specific neurotransmitters, we did not observe significant changes in response to social isolation, except for a few genes, like dmGlut, which was upregulated compared to group-housed wild types (Fig. 6d, d'). We again found that loss of Fru M function led to bimodal effects on gene expression (Fig. 6c-f'). Indeed, many of these genes have known Fru M binding at their promoters, including the receptors nAChRalpha1/3/4/5, GluRIIA, GluClalpha, 5-HT1A, 5-HT1B, 5-HT2A, and 5-HT7, and transporters/regulators such as VAChT, ChAT, and Gat (Dalton et al. 2013; Neville et al. 2014; Vernes 2014). Some of these genes display correlated changes between pheromone receptor mutants and fru M mutants, like GluRIIA, dmGlut, and 5-HT2A, suggesting that the effects of pheromone signaling on neurotransmission can act via their influence on fru regulation (Fig. 6d-e'). The changes in 5-HT2A were also validated by qRT-PCR (Supplementary Fig. 4c). In the antenna, 5-HT2A-GAL4 and dmGlut-GAL4 expression is observed in a subset of ORNs (Fig. 6g-h'). Interestingly, Or47b and Or67d ORNs do not express the 5-HT2A reporter (Fig. 6g). In agreement with the decrease in 5-HT2A transcript levels in the RNA-seq and RT-PCR experiments, 5-HT2A reporter expression was significantly decreased in Or47b mutant antennae (Fig. 6g-g'). On the other hand, dmGlut expression in the antennae was upregulated in all conditions compared to group-housed male antennae, generally in agreement with the qRT-PCR validation (Fig. 6d, d' and Supplementary Fig. 4d). We also used dmGlut-GAL4 to visualize dmGlut expression in vivo and detected signal in a subset of non-neuronal cells in the antennae, though we observed a statistically significant increase of the dmGlut reporter only in fru M mutants (Fig. 6h, h').
Evident changes are also observed in some genes not known to be Fru M targets; for example, GluRIB shows downregulation only in fru M mutants, and 5-HT2B shows upregulation in Or47b and fru M mutants (Fig. 6d-e'). These may reflect effects of pheromone receptor signaling independent of Fru M function, or indirect effects of Fru M activity. To summarize, social experience and pheromone receptor signaling drive systems-level changes in the expression of genes involved in neurotransmission and neurophysiology that can modulate ORN responses. These effects on gene expression can occur either in a Fru M -dependent manner or independently of Fru M, in response to other gene regulatory pathways activated by pheromone receptor signaling.
Odorant binding proteins
Analysis of GO terms for molecular function for "odorant binding" highlighted genes encoding odorant binding proteins (Obps) among those significantly altered compared to group-housed wild-type male antennae (Fig. 7).

[Fig. 6 caption fragment: 5-HT2A>GFP. Unpaired t-test (g') and one-way ANOVA (h') were used for significance tests, followed by multiple comparisons where necessary (other groups compared to the group-housed control). *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001; not significant if no * is labeled.]

Previous studies using in situ hybridization and transcriptional reporters have shown that Obps are generally produced in the non-neuronal support cells of the antennal sensilla and are secreted into the local hemolymph (Larter et al. 2016). However, our analysis of previously published single-cell RNA-seq data from ORNs revealed that some, but not all, Obps (i.e. Obp19a, Obp28a, Obp56a, Obp59a, Obp69a, Obp83a, Obp83b, and lush) are abundantly expressed in ORNs at different levels (Supplementary Fig. 5a). Odorants that enter the sensilla through pores are thought to interact with Obps in the hemolymph, which aid odor binding to receptors on the cilia of ORNs (Larter et al. 2016). Mutants in Obp genes are associated with alterations in the spontaneous or evoked neuronal response dynamics of resident ORNs (Kim et al. 1998; Kim and Smith 2001; Xu et al. 2005; Laughlin et al. 2008; Larter et al. 2016; Scheuermann and Smith 2019). Analysis of Obp gene expression in the mutant male antennae showed that many Obp transcripts normally expressed in trichoid sensilla were increased in the antennae of Or47b, Or67d, and fru M mutants (e.g. Obp83a, Obp83b, lush, and Obp69a) (McKenna et al. 1994; Pikielny et al. 1994; Xu et al. 2005; Laughlin et al. 2008; Larter et al. 2016; Scheuermann and Smith 2019) (Fig. 7). qRT-PCR from antennae generally corroborates the RNA-seq results (Supplementary Fig. 5b, c). Among the Obps that are differentially expressed in mutants, Obp69a is particularly interesting, as it was previously shown to modulate social responsiveness in Drosophila (Bentzur et al. 2018). In this context, cVA exposure in males as well as activation of Or67d neurons decreases Obp69a levels, which in turn alters aggressive behaviors driven by Or67d neurons. In addition, lush, Obp83a, and Obp83b, which are also expressed in trichoid sensilla, were all shown to regulate odor-evoked response kinetics and spontaneous activity of trichoid ORNs (Kim et al. 1998; Xu et al. 2005; Laughlin et al. 2008; Scheuermann and Smith 2019).
In addition to Obps expressed in the trichoid sensilla, many other Obps also show misregulation, particularly in Or67d and Or47b mutants. For example, in both mutants, Obp99d is significantly upregulated; in contrast, Obp99a and Obp8a show downregulation (Fig. 7). Even though it is not known which sensilla these Obps are normally expressed in, given the responses, it is likely that they are produced by the non-neuronal cells in trichoid sensilla where Or47b and Or67d ORNs are housed. There are also some Obps that show misregulation only in specific mutants. For example, Obp83cd, Obp83ef, and Obp56c are normally not expressed in the antennae, yet Obp83cd and Obp83ef show significant upregulation in Or67d mutants, whereas Obp56c is upregulated in Or47b mutants (Fig. 7). Obp84a is the only Obp to be upregulated in isolated male antennae and downregulated in Or47b mutant antennae (Fig. 7). These results suggest the presence of regulatory interactions between olfactory receptor signaling and neural activity that likely drive activity-dependent homeostasis in Obp levels. Given the role of most Obps in regulating neuronal physiology, it is possible that the transcriptional changes in Obp genes observed in social isolation as well as in pheromone receptor mutants occur as a homeostatic mechanism to compensate for altered neuronal activity and ORN function.
Pheromone receptor signaling regulates genes involved in hormone metabolism
Hormone signaling is responsible for regulating behavioral and brain states in both vertebrates and invertebrates. For example, in vertebrates, many social behaviors such as aggression, mating, and parenting are under the control of hormones such as estrogen, testosterone, oxytocin, and vasopressin. In social insects, such as ants, caste-specific behaviors are determined by hormone states, where queen- and worker-like behaviors are associated with ecdysone and juvenile hormone signaling, respectively (Glastad et al. 2020; Gospocic et al. 2021). In Drosophila, juvenile hormone signaling modulates behavioral and motivational states during courtship (Lin et al. 2016; Lee et al. 2017; Zhang et al. 2021). Recent studies have also shown that age-related cues such as juvenile hormone (JH) signaling act together with social experience to control Or47b neuronal responses to pheromones and courtship behaviors in a Fru M -dependent manner (Lin et al. 2016; Sethi et al. 2019; Zhang et al. 2020). JH signaling, concurrent with social experience, modifies chromatin around the fru P1 promoter and ultimately fru M levels in Or47b ORNs (Zhao et al. 2020). These studies also demonstrated that JH receptor enrichment at the fru P1 promoter increases in socially isolated flies as well as in flies with disrupted Or47b signaling (Zhao et al. 2020). As mentioned above, gene ontology analysis of differentially expressed genes in this study also highlights genes involved in hormone metabolism (Fig. 3). Thus, we specifically interrogated the genes regulating hormone levels in pheromone receptor and fru M mutants (Fig. 8a-c').
Many of the enzymes involved in juvenile hormone biosynthesis and metabolism, such as the juvenile hormone epoxide hydrolases (Jheh1, 2, 3), Jhe, and juvenile hormone acid methyltransferase (jhamt), are expressed at varying levels in the antennae (Fig. 8c). These genes are also reported to have Fru MA and Fru MC binding in their upstream regulatory elements (Dalton et al. 2013; Neville et al. 2014; Vernes 2014). The two most enriched genes, Jheh1 and Jheh2, show mild upregulation in fru M mutants but no significant changes in the absence of social cues or pheromone receptor signaling (Fig. 8c, c'). On the other hand, both Jhe and Jheh3 appear to be upregulated in social isolation while downregulated in Or47b mutants (Fig. 8c' and Supplementary Fig. 6b). Throughout the antenna, Jheh3-GAL4 expression is observed in many ORNs (Fig. 8d, d'). In agreement with the transcriptional increase in socially isolated male antennae, we found that Jheh3 reporter expression was significantly increased in isolated male antennae. On the other hand, in both Or47b and fru M mutant antennae, Jheh3 reporter expression was significantly decreased, in agreement with transcript levels (Fig. 8d, d'). As observed in the RNA-seq, there was no change in Jheh3 reporter expression in Or67d mutants compared to grouped male antennae (Fig. 8d, d'). Jhe is of particular interest, as Jhe activity is known to be necessary for robust male-specific courtship behaviors and mating success, in addition to affecting the abundance of sex-specific pheromones such as 11-cis-vaccenyl acetate in males (Liu et al. 2008; Ellis and Carney 2010). Furthermore, seminal work on Jhe and Jheh3 has shown that these enzymes work together to catabolize JH in D. melanogaster (Khlebodarova et al. 1996). These results suggest that social experience and pheromone receptor signaling regulate the expression of JH biosynthetic enzymes.
Such changes can modulate juvenile hormone activity by rapidly catabolizing JH in the periphery and affecting downstream target genes, such as fruitless.
Discussion
Sensory experience influences many behaviors by modifying neuronal and circuit function (Cushing and Kramer 2005; Curley et al. 2011; Dey et al. 2015; Sethi et al. 2019), yet the molecular mechanisms remain largely unknown. Here, we took advantage of the well-characterized system of sex-specific behaviors governed by Fru M, which acts as a gene regulatory switch for male-specific circuit development, function, and behavior in D. melanogaster (Yamamoto 2007; Dickson 2008; Yamamoto and Kohatsu 2017). While our sample sizes were modest for fru mutants, our results show that social experience and pheromone signaling alter gene expression programs, including modest effects on Fru M splice/promoter usage, ultimately modulating circuit function and behavioral responses (Fig. 9).
As genetic background significantly influences transcriptional profiles, one of the limitations of our transcriptome analysis is that the backgrounds of the mutants and wild type are different. Thus, it is not possible to fully distinguish the effects of background on transcription from the effects of the mutations used. To further evaluate the scale of potential genetic background influence, we first called genetic variants based on the RNA-seq data of this study and calculated the genetic distance among these genotypes together with variation data extracted from the genomes of 18 lines randomly selected from the Drosophila melanogaster Genetic Reference Panel (DGRP). As shown in Supplementary Fig. 7a, while our experimental genotypes are not identical, they are more closely related to each other than the randomly selected DGRP wild-type strains, which are a snapshot of typical genetic variation observed among wild-type isofemale lines. This reflects, in part, a shared genetic ancestry among the laboratory stocks used in this study that resulted from some overlap in the stocks used to make our experimental lines. Second, transcript levels for multiple housekeeping genes we analyzed are similar across wild-type and mutant samples (Fig. 1d). We also verified the consistency of these housekeeping genes in additional antennal RNA-seq samples (Supplementary Fig. 7b and Supplementary Table 5).

[Fig. 7 caption fragment: Obp genes show significant changes. Hierarchically clustered heatmaps showing log2 fold change compared to group-housed wild-type antennae across all experimental conditions (b) and average mRNA levels (RPKM) of replicates within each condition, ordered in the same way as the log2 fold changes (a). Genes with adjusted P-value above 0.01 were filtered out in each experimental condition; adjusted P-values were computed directly by DESeq2. *P.adjust < 0.05; **P.adjust < 0.01; ***P.adjust < 0.001; ****P.adjust < 0.0001. Fru M -binding information is listed in Supplementary Table 4.]
Lastly, we fully controlled for genetic background in the in vivo confirmation experiments for differentially expressed genes identified by RNA-seq analysis. Using antennal expression patterns of transcriptional reporters for a limited number of relevant genes, we were able to confirm the patterns observed in the RNA-seq datasets of this study. Collectively, even though genetic background influences may still exist, the gene sets showing differential expression between Or/fru mutants and wild types are largely due to the effects of loss of pheromone receptor or Fruitless function.
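The genotype-relatedness check described above can be illustrated with a toy calculation. This is a sketch under assumptions, not the study's pipeline: the genotype vectors and line names below are invented, and the actual analysis used variants called from RNA-seq together with DGRP genome data.

```python
# Toy sketch: pairwise genetic distance between genotypes, illustrating the
# kind of relatedness comparison described in the text. Each genotype is a
# 0/1 vector over shared variant sites (1 = non-reference allele); distance
# is the fraction of mismatching sites (Hamming fraction). All values and
# names here are invented for illustration.

def hamming_fraction(a, b):
    """Fraction of variant sites at which two genotype vectors disagree."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

genotypes = {
    "wild_type": [0, 0, 1, 0, 1, 0],
    "Or47b_mut": [0, 0, 1, 1, 1, 0],   # shares ancestry with the wild type
    "dgrp_line": [1, 1, 0, 1, 0, 1],   # an unrelated reference-panel line
}

# All unordered pairs (names sorted so each pair appears once):
dist = {
    (a, b): hamming_fraction(genotypes[a], genotypes[b])
    for a in genotypes for b in genotypes if a < b
}
# Lab strains end up closer to each other than to the DGRP line:
# dist[("Or47b_mut", "wild_type")] is 1/6; dist[("dgrp_line", "wild_type")] is 1.0
```

Any sensible distance (e.g. identity-by-state on genome-wide calls) follows the same logic: closely related laboratory stocks mismatch at a small fraction of sites relative to independent wild-derived lines.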
Previous studies in Drosophila demonstrated that social experience can modulate Fru M -dependent sex-specific behaviors such as courtship and aggression (Curley et al. 2011; Dey et al. 2015; Sethi et al. 2019). For example, social isolation decreases the sensitivity of Or47b neurons to their pheromone ligands in a Fru M -dependent manner, which leads to a decrease in male competitive courtship advantage (Sethi et al. 2019). Other studies have also shown that monosexual group housing can decrease aspects of courtship behaviors such as courtship song and circling (Dankert et al. 2009). In addition to courtship, aggression behaviors, which are under the control of Or67d and Or65a neurons and Fru M function, also change with social experience (Dankert et al. 2009; Liu et al. 2011). For example, social isolation significantly increases male-male aggression (Dankert et al. 2009). These reports highlight the importance of social experience and pheromone signaling in the execution of sex-specific behaviors.
What are the molecular mechanisms by which Fru M function is altered by social experience? We previously reported that social experience and pheromone receptor signaling alter chromatin states around the fru P1 promoter (Zhao et al. 2020) to modify fru regulation (Hueston et al. 2016; Sethi et al. 2019; Zhao et al. 2020). Surprisingly, as reported in this study as well as in Zhao et al. (2020), chromatin alterations at the fru P1 promoter in isolated and pheromone receptor mutant male antennae are not accompanied by major changes in transcription, except for a significant decrease in antennal reporter expression driven by fru P1-GAL4 in Or47b mutants (Hueston et al. 2016). Transcriptional regulation of fru is complex, yielding 15 annotated alternatively spliced isoforms from 7 promoters, giving rise to different 3′ sequences which encode the variable zinc finger DNA-binding domains of the Fru protein (Lee et al. 2000; Meier et al. 2013; Neville et al. 2014; Von Philipsborn et al. 2014). Different Fru proteins regulate unique yet overlapping sets of target genes, which have binding sites for single or multiple Fru M isoforms (Dalton et al. 2013; Neville et al. 2014; Vernes 2014). Many of these target genes regulate neural development and function. Therefore, changes in fru splicing patterns can affect the expression of thousands of genes simultaneously, strongly modulating neuronal responses and circuit outputs in a short period of time. Even though we do detect slight shifts at the level of exon/promoter usage in our transcriptome data, RNA-seq differences from bulk antennal tissues are not dramatic across social conditions and mutants, except for the fru M mutant male antenna. While it is possible that previous chromatin results were noisy, changes in chromatin without associated changes in transcription are a commonly seen phenomenon called "molecular priming" and have been shown in other systems, including in fru-positive circuits in the brain (Koike et al. 2012; Jaric et al. 2019; Brovkina et al. 2021). Remarkably, Fru M is expressed in ∼2,000 interconnected neurons highlighting a circuit for courtship behaviors from sensation to action (Sato and Yamamoto 2020). This expression pattern allows neural activity-dependent influences on fru chromatin and transcription to propagate throughout the whole circuit. In summary, these features make the circuit switch gene fru M an efficient molecular hub onto which many internal and external states act to modulate circuit activity and behavioral outputs by tweaking the levels of transcripts and splice isoforms, leading to a cascade of changes in transcriptional programs.

[Fig. 9 caption: Fruitless-dependent transcriptional cascade that reprograms neural responses and behaviors with social experience, pheromone receptor function, and hormone signaling. Social context and pheromone detection modify chromatin and transcriptional/splicing programs for the fruitless gene, altering its function. This reprograms the expression of Fruitless target neuromodulatory genes (i.e. ppk25), altering neural physiology and pheromone responses (Ng et al. 2019; Sethi et al. 2019; Zhang et al. 2020). Ultimately, these changes result in altered neuronal activity and behavioral modulation (Sethi et al. 2019). Juvenile hormone signaling was also shown to work together with social experience to modulate both ORN physiology and courtship behaviors (Lin et al. 2016; Zhang et al. 2020). At the molecular level, social/pheromonal cues work together with juvenile hormone receptors to modulate transcription of fruitless (Zhao et al. 2020). Social context, pheromone receptor, and Fru M function also alter the expression of genes involved in juvenile hormone metabolism.]
Each pheromone-sensing neuron relays different information about the social environment, which is integrated and processed to output a specific behavior. Likely due to differences in neuronal identity and function, different pheromone receptors have different effects on fru chromatin and splice isoforms (Zhao et al. 2020) (Fig. 4). Such sensory stimuli-dependent changes in Fru proteins can alter the expression of downstream genes affecting neuronal activity and function to have rapid, temporary, or lasting effects on neuronal activity and behavioral outputs. These changes are essential for organisms to form short- or long-term adaptations to the environment. However, how these different cell types generate such differences in behavioral repertoire via changes in gene expression in the periphery has remained largely unknown.
Many of the genes that show differential expression in response to social isolation and disruption of pheromone receptor or Fru M function encode neuromodulators that affect membrane potential, such as ion channels, membrane ion transporters, proteins involved in neurotransmission, and odorant binding proteins (Fig. 3; Fig. 5; and Fig. 6). Among all conditions, social isolation shows the fewest differentially expressed genes compared to group-housed controls, with a small overlap with pheromone receptor and fru M mutants. This might be because social isolation disrupts only the evoked activity of pheromone-sensing olfactory neurons, whereas pheromone receptor mutations disrupt both spontaneous and evoked activity. Loss of Fru M alters the expression of many neuromodulatory genes with known Fru M -binding sites in a bimodal way, suggesting that Fru M can act as both an activator and a repressor of gene expression. Some of these differentially expressed genes are also altered in pheromone receptor mutants, generally in the same direction (Fig. 2d, e). There are also unique overlaps between Or47b and fru M mutants, between Or67d and fru M mutants, and between Or47b and Or67d mutants (Fig. 2b, e). Many of these differentially expressed genes are known to harbor binding sites for different Fru M isoforms. These observations suggest that some of the differentially expressed genes in Or47b and Or67d mutants are due to Fru M -dependent changes, whereas others might be Fru M -independent, caused by OR signaling and/or ORN activity.
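The overlap comparisons among these differentially expressed (DE) gene sets amount to simple set intersections. As a hedged sketch (the gene lists below are invented placeholders for illustration, not the study's actual DE lists):

```python
# Sketch of the DE-set overlap logic discussed above. The gene names are
# placeholders chosen for illustration, not the study's actual results.
de = {
    "isolation": {"dmGlut", "Jhe"},
    "Or47b":     {"dmGlut", "ppk25", "lush", "Obp69a"},
    "Or67d":     {"ppk25", "lush", "Obp69a", "Obp83cd"},
    "fruM":      {"ppk25", "GluRIB", "lush"},
}

def overlap(a, b):
    """Genes differentially expressed in both condition a and condition b."""
    return de[a] & de[b]

# Pairwise overlaps (e.g. Or47b vs Or67d) plus the genes shared by every
# condition; a small all-way intersection mirrors the limited overlap of
# social isolation with the mutant conditions described in the text.
shared_all = set.intersection(*de.values())
```

The same intersections underlie Venn-style summaries such as Fig. 2b, d, e.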
One functionally relevant gene among those showing differential regulation in pheromone receptor and fru M mutants is the Fru M target gene ppk25, which was previously shown to modulate ORN responses in Or47b and Or67d neurons (Ng et al. 2019; Zhang et al. 2020). ppk25 belongs to a family of sodium channels that serve a variety of functions, from regulation of neural activity to detection of sensory cues. PPK protein complexes are generally composed of multiple subunits encoded by different ppk genes. Many ppk genes contain binding sites for Fru M isoforms in their promoter regions (Dalton et al. 2013; Neville et al. 2014; Vernes 2014). In addition, a recent study implicated isoform-specific Fru M -dependent regulation of ppk25 and ppk23 in the modulation of Or47b and Or67d responses (Ng et al. 2019; Zhang et al. 2020). According to the genetic analysis in that work, Fru MB and Fru MC positively regulate the expression of ppk25 and ppk23, respectively. There are apparent discrepancies between this interpretation and transcriptome data from our study, as well as others (Li et al. 2020; McLaughlin et al. 2021). While our transcriptome analysis agrees with a regulatory role for Fru M in ppk25 gene regulation, the regulatory mode is repressive; that is, ppk25 expression is upregulated in Or47b, Or67d, and fru mutants. This type of repressive role for Fru M in transcription is also consistent with previous studies demonstrating Fru M interactions with transcriptionally repressive histone-modifying enzymes such as HDAC1 (Ito et al. 2012; Ito et al. 2013). In addition, we are not able to detect any transcripts for ppk23 in the antennae, and the expression of ppk23 does not change in Or47b, Or67d, and fru M mutants. Instead, we noticed that other ppk genes, such as ppk6, 7, 13, 14, 15, and 19, are altered in different mutant conditions.
Fru M seems to have a bidirectional role in regulating ppk gene expression, where it activates the expression of a subset of ppk genes (ppk7, 13, 14, 15) while repressing the expression of others (ppk6 and ppk25). One way to reconcile these differences is that multiprotein PPK complexes composed of combinations of different PPK subunits, and the stoichiometric levels of each ppk transcript in a given neuron, can determine channel function. For example, misexpression of ppk23, which normally is not expressed in the antennal ORNs, can interfere with PPK channel function by disrupting the existing functional complexes in a given neuron, or by forming new PPK complexes, thus affecting physiological properties. Another possibility is that the transcriptional changes in the fru Lex /fru 4-40 mutant are a consequence of eliminating all fru M transcripts, thus masking the individual effects of each fru M isoform, such as fru MA, fru MB, or fru MC. Finally, it is also possible that the slight upregulation of ppk25 in Or47b and fru M mutants, as well as the large changes in Or67d mutants, may be due to global fru M changes in the whole antennae, or to retrograde neuromodulatory signaling from the antennal lobe.
Antennal sensilla contain cell types other than ORNs, such as glia-like cells and support cells of the sensillum, as well as epithelial cells. Since our transcription data are from the whole antennae, one possibility we cannot exclude is that differences in antennal gene expression in different genetic and social conditions are readouts from non-neuronal cells or other ORNs. Even though we anticipate the immediate effects of Or67d and Or47b mutations to occur in the ORNs expressing these two receptors, signals from ORNs can lead to secondary changes in gene expression in non-neuronal cells within the sensillum (Su et al. 2012). This also brings to light a general issue with bulk tissue, where large cell-type-specific changes may be masked by cell-nonautonomous changes in gene expression in other cell types, as well as by retrograde feedback signaling within olfactory circuits. Regardless, our data show that many of the differentially expressed genes encode regulators of neuronal function and physiology. This increases the likelihood that the transcriptional changes in response to social and pheromonal cues happen mostly in the neurons that respond to social cues, such as Or47b and Or67d ORNs. Future single-cell chromatin and transcription profiles from Fru M -positive neurons in the antenna and brain will provide deeper insights into neuron-specific changes in gene regulation from the peripheral to the central nervous system that modulate circuit function in response to social cues.
Part of the transcriptional effects can also be exerted downstream of changes in dsx levels seen in pheromone receptor and fru mutants. The upregulation of dsx levels in these mutants suggests that some of the social experience- and neural activity-dependent transcriptional changes might also arise from increased Dsx. Dsx expression is restricted to non-neuronal cells in the antenna (Robinett et al. 2010). Similarly, genes affecting neural activity such as Obps and some neurotransmitter receptors, which function to alter both spontaneous and evoked activity of ORNs, are also expressed in non-neuronal cells in addition to the ORNs in the antennae (McKenna et al. 1994; Kim et al. 1998; Kim and Smith 2001; Larter et al. 2016). The social experience- and pheromone signaling-dependent misregulation of these genes points to an adaptive homeostatic mechanism within local sensilla that can contribute to the modulation of neuronal activity.
Lastly, in addition to the transcriptional changes occurring in neural activity programs, genes regulating juvenile hormone metabolism are also modified by social context and in pheromone receptor and fruitless mutants. Social experience works together with juvenile hormone signaling to modulate responses of pheromone-sensing neurons in a Fru M -dependent manner (Sethi et al. 2019). These effects contribute to the modulation of males' competitive copulation advantage at different population densities and ages, as well as regulating overall courtship. At the molecular level, social/pheromonal cues work together with juvenile hormone receptors to modulate chromatin around the fruitless P1 promoter and its transcription (Zhao et al. 2020). Juvenile hormone acts as a repressor of fru expression, and social experience converts it to an activator. In the same study, we showed that social isolation and disruption of Or47b signaling increase the accumulation of the juvenile hormone receptor at the fru P1 promoter and juvenile hormone response elements. This might be due to changing levels of juvenile hormone, since our results show that the expression of genes involved in juvenile hormone metabolism is altered in social isolation and in pheromone receptor and fru mutants. The findings in our study, together with results from previous studies, suggest the presence of interconnected gene regulatory networks among social/pheromone signaling, hormone signaling, and Fru M function in neural and behavioral modulation (Fig. 9).
Social isolation is known to affect a wide range of brain functions and behaviors, such as aggression, attention, depression, and anxiety. Overall, this study highlights the shared transcriptional changes in master behavioral regulators and their target neuromodulatory genes, providing a molecular mechanism that alters neural responses with social experience and pheromone sensing.
Data availability
All relevant data are within the paper and its supporting information files. The raw sequencing data are accessible in GEO (# GSE179213). Code for the analysis is deposited on GitHub (https://github.com/csoeder/VolkanLab_BehaviorGenetics/tree/master/scripts).
Contamination Assessment of Mangrove Ecosystems in the Red Sea Coast by Polycyclic Aromatic Hydrocarbons
Mangroves are known as a naturally based solution for climate mitigation and adaptation. Mangroves are at a potential risk of degradation by contaminants such as polycyclic aromatic hydrocarbons (PAHs). In this study, sixteen priority PAHs were analyzed and characterized in forty samples of mangrove seawater and mangrove sediments collected from two coastal areas (i.e., the Sharam and Alkhor areas) along the Red Sea Coast of Rabigh city in August 2013. We found that the average concentration of total PAH in mangrove sediments in the Sharam area (22.09 ng/kg) was higher than that in the Alkhor area (6.51 ng/kg). However, the average concentration of the total PAH in the mangrove seawater in the Alkhor area (9.19 ng/L) was double that in the Sharam area (4.33 ng/L). Phenanthrene and pyrene were the major components in both the mangrove seawater and sediment in all the investigated areas. We observed that the abundance of PAHs with 2–3 aromatic rings was dominant in sediment samples collected from both study areas. This abundance was also observed in seawater from the Sharam area. However, seawater samples from the Alkhor area had abundant PAHs with four aromatic rings. The majority of PAHs in sediment samples of both study areas originated from petrogenic sources, whereas the majority of PAHs in seawater samples originated from pyrogenic sources.
Introduction
Globally, approximately 50% of mangroves have been lost over recent decades. The main human activities driving mangrove loss are reclamation, farming, aquaculture, deforestation, and urban development [1][2][3][4], with an estimated 62% of global losses between 2000 and 2016 due to land-use change [5]. In coastal areas, mangrove ecosystems, in addition to being rich breeding and nursery grounds for various aquatic organisms, act as a buffer between sea waves and the land and help stabilize the coastline and prevent coastal erosion [6]. Typically, mangrove sites are rich in debris and organic carbon and sheltered from strong wind and sea waves; therefore, they are a suitable environment for the deposition and buildup of contaminants with a slow rate of degradation under anoxic conditions [7,8]. Contaminants released from anthropogenic activities, particularly polycyclic aromatic hydrocarbons (PAHs), pose a threat of polluting mangrove waters and sediments [9,10].
PAHs are aromatic species with at least two fused benzene rings. They are prevalent in the environment due to their numerous sources. Because of their toxicity, carcinogenicity and mutagenicity, the United States Environmental Protection Agency (USEPA) has identified sixteen PAHs as priority pollutants. Typically, anthropogenic PAHs in marine environments are caused by industrial discharge, urban runoff, wastewater, engine oil, oil spills, ship and boat activities, and atmospheric deposition of industrial and traffic 2 of 13 emissions [11,12]. PAH toxicity and risk assessment in aquatic sediment have been reported in many studies [13][14][15][16][17]. When PAHs are released to the environment as petroleum or petroleum products, the source is called petrogenic, whereas if PAHs are released as a result of inefficient or incomplete combustion, the source is called pyrogenic [18,19]. The identification of PAH sources is important for assessing sediment contamination. PAHs from petrogenic sources are more toxic than PAHs from pyrogenic sources [20][21][22].
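One widely used screening heuristic for this petrogenic/pyrogenic distinction (a general rule of thumb, not a method this paper states it used) compares low-molecular-weight (2–3 ring) to high-molecular-weight (4–6 ring) PAHs: an LMW/HMW ratio above 1 points to a petrogenic source, below 1 to a pyrogenic one. A minimal sketch, with illustrative concentrations rather than measurements from this study:

```python
# Hedged sketch: classify a PAH profile via the LMW/HMW ratio heuristic.
# Concentrations are illustrative placeholders, not data from this study.

RING_COUNT = {  # aromatic rings per PAH (subset of the 16 USEPA priority PAHs)
    "naphthalene": 2, "acenaphthylene": 3, "acenaphthene": 3,
    "fluorene": 3, "phenanthrene": 3, "anthracene": 3,
    "fluoranthene": 4, "pyrene": 4, "chrysene": 4,
    "benzo[a]anthracene": 4, "benzo[a]pyrene": 5, "benzo[ghi]perylene": 6,
}

def classify_source(concentrations):
    """Return (lmw_hmw_ratio, label) for a dict of PAH name -> concentration."""
    lmw = sum(c for pah, c in concentrations.items() if RING_COUNT[pah] <= 3)
    hmw = sum(c for pah, c in concentrations.items() if RING_COUNT[pah] >= 4)
    ratio = lmw / hmw if hmw else float("inf")
    return ratio, ("petrogenic" if ratio > 1 else "pyrogenic")

# A phenanthrene-dominated profile (2-3 rings abundant) leans petrogenic
# under this heuristic:
ratio, label = classify_source({"phenanthrene": 5.0, "pyrene": 2.0, "fluoranthene": 1.0})
```

In practice such ratios are combined with isomer-pair diagnostics and treated as indicative rather than conclusive.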
Rabigh, a coastal city in Saudi Arabia located on the Red Sea, has experienced high industrial growth over the past three decades, with several heavy and light industries within the city boundaries, including oil refineries, petrochemical companies, cement plants, and power and desalination plants. Furthermore, King Abdullah Economic City and King Abdullah Port are approximately 25 km from the study area. The aim of this article is to investigate the level of PAH contamination in mangrove ecosystems (i.e., sediments and seawater) in two selected study areas on the eastern coast of the Red Sea.
Study Area
Two areas of mangroves located along the Red Sea Coast close to the city of Rabigh, Saudi Arabia, were selected for the study, as shown in Figure 1. These two coastal areas (i.e., the Sharam area and Alkhor area) are rich in Avicennia marina and Rhizophora mucronata mangrove trees. The Sharam area regularly experiences light petrochemical and oil shipping activities of approximately 20-25 vessels a month. The mangrove in the Sharam area is located approximately 2 km north of a petrochemical company. However, the Alkhor area experiences various potential sources of contamination, including wastewater discharge, boat activities, waste oil spilled during oil changes and disposal, and to a lesser extent, fuel spilled during refueling operations, particularly in the small marina and the station of the border guards near the intersection of the open sea and the water body of Alkhor.
Meteorology Measurements
The meteorological variables (air temperature, relative humidity, and wind speed and direction) were measured by an automated weather station with a data logger installed at a site chosen to represent both study areas. The site lies approximately midway between the two areas, and the station operated from 1 January to 31 December 2013.
Sampling
Twenty sampling sites were selected for the collection of surficial mangrove sediments (at a depth of 0-15 cm) and surface seawater samples in August 2013 from the Sharam and Alkhor areas (Figure 1). At each site, samples were collected from three points, pooled, and kept at 5 °C for laboratory analysis. Seawater samples were collected in 500 mL plastic bottles. Sediment samples were collected using a stainless-steel auger and kept in zip-lock polyethylene bags. The collected sediment samples were air-dried at ambient temperature, and the detritus and coarse materials they contained were discarded. The sediment was then sieved through a 0.075 mm mesh sieve, hand milled, and homogenized in a ceramic mortar.
PAH Analysis
PAHs were extracted from sediment samples using a Thermo Scientific Dionex ASE 350 accelerated solvent extractor. Sediment samples of 5 g were mixed with diatomaceous earth (Sigma-Aldrich, St. Louis, MO, United States) and used to fill 34 mL extraction cells. Extraction was performed with a dichloromethane/acetone mixture (1:1, v/v) at 100 °C and 1500 psi, using three 5 min static extraction cycles and a flush volume of 60%. Extracts were evaporated to dryness under a steady nitrogen stream in a centrifugal evaporator (Genevac EZ-2 Solvent Evaporator, Ipswich, United Kingdom) and then reconstituted in approximately 2 mL of dichloromethane. The concentrated extract was filtered through a 0.2 µm PTFE filter (Chromafil Xtra MV-20/25, Laval, QC, Canada) with a 10 mL syringe and transferred to a GC vial for GC/MS analysis. Each water sample was filtered to remove suspended matter, and extraction was conducted following USEPA Method 550.1 of Bashe and Baker (1990) [23]. Using a C18 Empore Solid Phase Extraction disk (Neuss, Germany), 100 mL of each water sample was processed in a vacuum flask with sidearm for approximately 20 min. The disk was then dried and eluted twice with 30 mL of dichloromethane. The extract was concentrated to approximately 2 mL under a steady nitrogen stream in the centrifugal evaporator, filtered through a 0.2 µm PTFE filter with a syringe, and transferred to a GC vial for GC/MS analysis. The GC/MS analysis was conducted with a JEOL JMS-GCmate system integrated with an HP6890 gas chromatograph.
An Agilent J&W DB-EUPAH high-efficiency GC column (20 m × 0.18 mm, 0.14 µm film thickness) was used under the following conditions: oven temperature: 60 °C (1 min) → 260 °C (10 min) → 320 °C (4 min); injector temperature: 280 °C; transfer line: 280 °C; ion source: 280 °C; analyzer: 150 °C; electron impact energy: 70 eV. A 1 µL volume of each sample was injected in splitless mode with a purge time of 1 min. Identification and quantification of the 16 PAH compounds were based on matching retention times and mass spectra with a mixed-PAH standard (Dr. Ehrenstorfer GmbH L 20950009AL, PAH-Mix 9, Augsburg, Germany). The 16 identified compounds were naphthalene (here abbreviated as Naph), acenaphthylene, acenaphthene, fluorene (Flu), phenanthrene (Phen), anthracene (Anth), fluoranthene (Flt), pyrene (Pyr), benz[a]anthracene (BaA), chrysene (Chr), benzo[b]fluoranthene (BbF), benzo[k]fluoranthene (BkF), benzo[a]pyrene, indeno[1,2,3-cd]pyrene (IcdP), dibenz[a,h]anthracene, and benzo[g,h,i]perylene (BghiP).
Degree of Similarity/Dissimilarity
The degree of divergence between two datasets is determined using the coefficient of divergence (CD).

To determine whether the measured PAHs at the investigated sites shared the same or different contaminating sources, the degree of discrepancy of the PAH contamination among the different sites within both investigated areas was calculated using the following CD [24]:

CD_jk = [ (1/p) Σ_{i=1}^{p} ( (x_ij − x_ik) / (x_ij + x_ik) )² ]^{1/2}

where x_ij is the measured concentration of PAH contaminant i at site j, j and k are the two sites being compared, and p is the number of PAH contaminants. As the calculated CD approaches one, the measurements from the two sites are considered different, whereas as CD approaches zero, they are considered similar. CD values lower than 0.27 between two sites can be attributed to similar sources [24].
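As a minimal illustration of the CD calculation, the following sketch in Python uses hypothetical concentration vectors (not the study's actual data):

```python
import math

def coefficient_of_divergence(site_j, site_k):
    """Coefficient of divergence between two sites given parallel lists of
    PAH concentrations. CD near 0 means similar profiles; CD near 1 means
    different profiles. Pairs where both values are zero are skipped."""
    assert len(site_j) == len(site_k)
    terms = [((xj - xk) / (xj + xk)) ** 2
             for xj, xk in zip(site_j, site_k) if (xj + xk) > 0]
    return math.sqrt(sum(terms) / len(terms))

# Hypothetical profiles for two sites (ng/kg); values are illustrative only.
cd = coefficient_of_divergence([7.5, 1.1, 0.4], [7.4, 1.0, 0.5])
print(cd < 0.27)  # True: below the 0.27 threshold, i.e. similar sources
```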
Quality Control Procedures
An external reference standard and procedural blanks were used as quality control methods. Dr. Ehrenstorfer GmbH L 20950009AL, PAH-Mix 9 was used as an external standard containing 16 PAH compounds for calibration and spiking the matrix. The matrix spike solutions were prepared from the mixed stock standards by volumetric dilution. Two procedural blanks were used for every 10-sample batch and processed together with the samples through the whole sample preparation and instrumental analysis. Limits of detection (LODs) were estimated as three times the standard deviation of the signal of the blanks. Bias in the sample matrix was estimated by adding target analytes at known concentrations to sample aliquots.
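The LOD rule described above (three times the standard deviation of the blank signal) is simple to compute; the following sketch uses hypothetical blank responses:

```python
import statistics

def limit_of_detection(blank_signals):
    """LOD estimated as three times the sample standard deviation of the
    procedural blank signals, as described in the text."""
    return 3 * statistics.stdev(blank_signals)

# Hypothetical blank responses for one analyte (instrument units):
print(limit_of_detection([0.10, 0.12, 0.09, 0.11, 0.08]))
```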
PAH Diagnostic Ratios
There are several methods used to identify sources of PAHs, including molecular diagnostic ratios (MDRs), principal component analysis (PCA) [25,26], the chemical mass balance (CMB) model [27], positive matrix factorization (PMF), and stable carbon isotopic ratio analysis [28]. In this study, MDRs were used to infer the possible sources of PAHs. Therefore, isomer pairs of PAHs, namely the MW 202 ratio Flt/(Flt + Pyr), the MW 228 ratio BaA/(BaA + Chr), the MW 276 ratio IcdP/(IcdP + BghiP), and the ratio of low to high molecular weight PAHs (L/H MW PAHs), were computed. Table 1 shows the diagnostic ratios utilized in this study along with value ranges reported for different sources.

Meteorological Conditions

Figure A1 shows the distribution of wind speed and direction in the study areas, and Table A1 shows the variation in air temperature, relative humidity, and wind speed observed throughout the year. The hourly air temperature varied from 15 to 46 °C with a mean of 28.9 °C, and the relative humidity varied from 2 to 93% with a mean of 52%. The hourly wind speed varied from approximately 0.0 to 7.2 m/s with a mean of 2.1 m/s. The predominant wind directions over the study areas were north-northwesterly (25.1%), followed by northwesterly (22.6%) and westerly (11.5%), with wind speeds predominantly in the 1.37-3.06 m/s category (Table A2).
Concentrations and Composition of PAHs
The concentrations of the 16 PAHs in the surface seawater and sediments of the study areas are summarized in Table 2. Generally, all 16 PAHs were detected at low concentrations throughout this study. Among the 16 individual PAHs, Naph (a 2-ring PAH) had the highest mean concentration in sediment samples from the Sharam and Alkhor areas, at 7.49 ng/kg and 0.02 ng/kg, respectively. For seawater samples, Phen (a 3-ring PAH) recorded the highest mean concentration (1.06 ng/L) in the Sharam area, and Pyr (a 4-ring PAH) recorded the highest mean concentration (3.25 ng/L) in the Alkhor area. The mean concentration of the 16 PAHs in the mangrove sediments of the Alkhor area (6.51 ng/kg) is lower than that in the Sharam area (22.09 ng/kg), whereas in the mangrove seawater it is higher in the Alkhor area (9.19 ng/L) than in the Sharam area (4.33 ng/L). The frequency distribution of the concentrations of the different PAH compounds in all sediment and seawater samples of the two investigated areas is illustrated in Figure 2, which shows that Phen and Pyr were the most dominant compounds in both sediment and seawater samples. Among the 16 analyzed PAHs, Flt, Phen, and Pyr had the highest detection frequency (100%) in sediment samples, whereas Flu, Phen, and Pyr had the highest frequency of occurrence in seawater samples. Anth, BbF, and BkF were detected in seawater samples with a detection frequency of 30% or below and were never detected in sediment samples. This difference between the sediments and the overlying seawater column may point to older PAH contamination in the Sharam area and more recent contamination in the Alkhor area.
The composition pattern of PAHs detected in the samples by number of rings is shown in Figure 3. The sediment of all 10 sites as well as the seawater of four sites (3, 4, 5, and 7) in the Sharam area featured a higher abundance of 2,3-ring PAHs. Similarly, in the Alkhor area, the sediment of all sites except site 6 as well as the seawater of two sites (5 and 6) featured a higher abundance of 2,3-ring PAHs. LMW PAHs were thus generally predominant over HMW PAHs at the sediment sites. This is notable because sediments from river and marine environments commonly show a higher abundance of HMW PAHs than LMW PAHs (e.g., [30]), a pattern usually attributed to the preferential degradation of LMW PAHs during transport and burial [29].
On the basis of molecular weight, low-molecular-weight (LMW) PAHs (MW < 228) and high-molecular-weight (HMW) PAHs (MW > 228) in the sediment samples of both study areas were compared: 11.00 versus 11.09 ng/kg in the Sharam area and 3.26 versus 3.25 ng/kg in the Alkhor area, respectively. LMW PAH compounds composed of fewer than four aromatic rings were dominant in the sediment samples collected from both study areas (70% in the Sharam area and 66% in the Alkhor area). In Sharam area seawater, fewer-than-4-ring PAHs (44%) and more-than-4-ring PAHs (40%) were detected at comparable abundances, and the LMW and HMW PAH concentrations were relatively close (1.59 and 2.73 ng/L, respectively). Seawater samples from the Alkhor area showed similarly close abundances of 4-ring PAHs (40%) and fewer-than-4-ring PAHs (38%) but were dominated by HMW PAHs (7.98 ng/L), as shown in Table 2 and Figure 3. The observed difference between the HMW PAH abundance in the seawater and sediment samples of both investigated areas may indicate significant HMW PAH modification by water-column processes during sedimentation.
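The LMW/HMW bookkeeping above can be sketched as follows; the ring counts are standard for these compounds, while the example concentrations are hypothetical:

```python
# Aromatic ring counts for the abbreviations used in the text.
RINGS = {"Naph": 2, "Flu": 3, "Phen": 3, "Anth": 3, "Flt": 4, "Pyr": 4,
         "BaA": 4, "Chr": 4, "BbF": 5, "BkF": 5, "IcdP": 6, "BghiP": 6}

def ring_split(conc, cutoff=4):
    """Sum concentrations of PAHs with fewer than `cutoff` rings (LMW)
    and with `cutoff` or more rings (HMW)."""
    lmw = sum(v for k, v in conc.items() if RINGS[k] < cutoff)
    hmw = sum(v for k, v in conc.items() if RINGS[k] >= cutoff)
    return lmw, hmw

# Hypothetical sediment sample (ng/kg):
lmw, hmw = ring_split({"Naph": 7.0, "Phen": 2.0, "Pyr": 2.0, "Chr": 1.0})
print(lmw, hmw)  # 9.0 3.0
```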
Dissimilarity between the PAH Contamination in the Two Areas
The dissimilarity matrices among the sampled sites of both investigated areas are shown in Figure 4 for sediment and Figure 5 for seawater samples. Degrees of dissimilarity ranging from 0.315 to 0.860 for sediment and from 0.293 to 0.692 for seawater were observed among all sites (Figures 4 and 5). Therefore, the two areas do not share common sources of PAHs and are not impacted by long-range pollution transport; rather, they are impacted by site-specific PAH contamination sources. Given the prevalent wind direction (NNW, as shown in Table A2), the Sharam area is susceptible to emissions released from the desalination plant located to its northwest. The Alkhor area, on the other hand, is not impacted by this source, since winds from the south and south-southwest account for less than 1.5% of observations (Table A2). Differences among the PAH-contaminated sediment samples of the Sharam area were more pronounced than those of the Alkhor area (Figure 4), whereas the pattern in seawater was reversed (Figure 5). This suggests that the site-specific contamination sources have a relatively greater distinct impact on Sharam sediment than on Alkhor sediment, and a relatively greater distinct impact on Alkhor seawater than on Sharam seawater. Overall, the two study areas feature low PAH levels compared with other studies in the literature (Table A3).
Potential Sources of PAHs Pollution
The possible PAH sources inferred using selected MDRs, namely L/H MW PAHs, Flt/(Flt + Pyr), IcdP/(IcdP + BghiP), and BaA/(BaA + Chr), are illustrated in Figure 6. Figure 6a shows that the majority of PAHs in the sediment samples of both study areas originated from petrogenic sources, with minor amounts from pyrogenic sources; in contrast, the majority of PAHs in the seawater samples originated from pyrogenic sources, with a small petrogenic contribution. The bivariate plot of IcdP/(IcdP + BghiP) versus Flt/(Flt + Pyr) (Figure 6b) indicates that the PAHs in the sediment samples of both study areas originate from liquid fossil fuel combustion, with a minor fraction from unburned petroleum, while the seawater samples mainly reflect mixed sources of liquid fossil fuel, biomass, and coal combustion. Furthermore, the cross plot of BaA/(BaA + Chr) versus Flt/(Flt + Pyr) (Figure 6c) indicates mixed petroleum and petroleum combustion sources for the sediment samples of both study areas, and mixed liquid fossil fuel, biomass, and coal combustion sources for the seawater samples. Taken together, these plots show that sediment PAHs in both investigated areas originated from petroleum and petroleum combustion, in particular liquid fuel combustion, whereas seawater PAHs originated mainly from coal and biomass combustion.
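The diagnostic ratios applied above are straightforward to compute from per-compound concentrations. The sketch below uses a hypothetical sample; interpretation thresholds come from the literature compiled in Table 1 (for example, Flt/(Flt + Pyr) below 0.4 is commonly read as petrogenic):

```python
def mdr_ratios(c):
    """Molecular diagnostic ratios used in the text, computed from a dict
    of concentrations keyed by the paper's abbreviations."""
    return {
        "Flt/(Flt+Pyr)": c["Flt"] / (c["Flt"] + c["Pyr"]),
        "BaA/(BaA+Chr)": c["BaA"] / (c["BaA"] + c["Chr"]),
        "IcdP/(IcdP+BghiP)": c["IcdP"] / (c["IcdP"] + c["BghiP"]),
    }

# Hypothetical concentrations (ng/kg):
ratios = mdr_ratios({"Flt": 1.0, "Pyr": 3.0, "BaA": 1.0, "Chr": 3.0,
                     "IcdP": 1.0, "BghiP": 3.0})
print(ratios["Flt/(Flt+Pyr)"])  # 0.25
```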
Conclusions
The findings of this work represent vital reference information for future development and risk assessment studies in the Red Sea coastal area. This study offers important information on the contamination levels and potential origins of 16 priority PAHs in mangrove seawater and sediments in two investigated areas along the Red Sea coast. The present baseline measurements of PAH contaminants in the two coastal areas will serve as a useful tool for future assessment of the coastal sea ecosystem. The study revealed that the average total PAH concentration in mangrove sediments in the Sharam area is much higher than that in the Alkhor area, while the average total PAH concentration in mangrove seawater in the Alkhor area is roughly double that in the Sharam area. Additionally, the PAHs in the sediment and seawater samples of both study areas were attributed mainly to petrogenic and pyrogenic sources, respectively. Mangrove exposure to PAHs due to increased motorboat usage and the discharge of wastewater from nearby industrial sites and service locations into coastal waters is of high concern and should be monitored and controlled. Periodic measurement of PAH concentrations at intervals of 8-10 years is recommended to maintain a healthy aquatic ecosystem.
Figure A1. Distribution of wind speed and direction over the study areas.

Table A3 (excerpt). Sediment PAH levels reported elsewhere: Sardinia, Italy, 0.07-1.21 µg/g (16 PAHs) [35]; Kitimat, Canada, <1-10,000 µg/g (16 PAHs; Simpson et al., 1996) [36]; Baltic Sea, 9-29 ng/g (15 PAHs, sandy areas) and 800-1900 ng/g (15 PAHs, sediment) (Witt, 1995) [37].
Recent Lake Area Changes in Central Asia
Using Moderate Resolution Imaging Spectroradiometer (MODIS) 500 m spatial resolution global water product data, the Least Squares Method (LSM) was applied to analyze changes in the area of 14 lakes in Central Asia from 2001 to 2016. Interannual changes in lake area, along with seasonal change trends and influencing factors, were studied for the months of April, July and September. The results showed that the total lake area was largest in April and smallest in September, with change rates of −684.9 km²/a, −870.6 km²/a and −827.5 km²/a for April, July and September, respectively. The change rates of the total area of alpine lakes during the same three months were 31.1 km²/a, 30.6 km²/a and 29.8 km²/a, respectively, while for lakes situated on plains the change rates were −716.1 km²/a, −900.5 km²/a and −858 km²/a, respectively. Overall, plains lakes showed a declining trend and alpine lakes an expanding trend, the latter likely due to the warmer and wetter climate. Furthermore, there was a high correlation (r = 0.92) between the area change rate of the alpine lakes and the lake basin supply coefficient, but a low correlation (r = 0.43) between the area change rate of the alpine lakes and the glacier area/lake area ratio. This indicates that lake recharge from precipitation may be greater than recharge from glacier meltwater. The shrinkage of the plains lakes in the study region was attributable to climate change and human activities.
Lakes have a strong influence on both human beings and the ecological environment, providing water for local residents, supporting fishery production, and playing an important role in agricultural irrigation [1][2][3] . Lakes can also provide the necessary water conditions for vegetation in arid areas where precipitation is scarce and the ecosystem is fragile. As a geographical element of arid regions, lake expansion or shrinkage is essential to agricultural development and to the health of plant and animal ecosystems both inside and outside the lake; land desertification and salinization in arid areas are severely exacerbated by lake area shrinkage 2,4 . Furthermore, the size of a lake exerts a regulating effect on the climate of the surrounding area 5 by increasing or decreasing local air humidity and thus affecting precipitation in the lake basin. Therefore, studying variations in lake area and analyzing the driving factors behind those changes is critical both for the socioeconomic development of Central Asian countries and for their ecological environmental protection.
Central Asia is an inland region of Eurasia (Fig. 1) and includes Kazakhstan, Tajikistan, Kyrgyzstan, Turkmenistan, Uzbekistan, and China's Xinjiang Province 6 . The region is arid and semi-arid, and its climate is mostly controlled by westerly air circulation. Annual precipitation in Central Asian countries varies greatly from region to region: on the windward (western) side of the Tianshan Mountains, precipitation can top 2,000 mm annually, while on the desert side, precipitation is typically less than 100 mm 7 . Since the 1970s, temperatures across Central Asia have shown a marked rising trend of 0.368-0.428 °C per decade 6,8 , which is higher than the current global average warming level. Overall, the vast region generally suffers from scarce water resources and a fragile ecological environment 9,10 . The largest water bodies of Central Asia are the Aral Sea, Issyk-kul Lake, and Balkhash Lake (Table 1).
In most cases, lake area change is the result of the combined action of climate change and human activities 2,3,[11][12][13][14][15][16] . Climate change (e.g., changes in temperature and precipitation) directly impacts the water cycle in a lake basin, while human activities (e.g., agricultural irrigation) can alter the water cycle of a lake system. In the context of global warming, the warming rate in Central Asia is higher than the global average 8 , which may promote evaporation from the lake surface. The warming also speeds up the melting of glaciers and snow and brings the melting period forward.
In the plains region of Central Asia, changes in precipitation can cause changes in river runoff, which in turn can impact lake inflow and lake recharge and contribute to lake area expansion or shrinkage. Human activities such as irrigated agriculture consume water resources mostly in the form of evaporation and loss, which then directly affects lake area changes. Central Asia is a typical arid and semi-arid region in the inland temperate zone. Due to low levels of precipitation in this region, snow cover and glacier meltwater are important sources for lake water recharge. However, both snow and glacier cover are currently experiencing a shrinking trend due to changes of climatic factors.
Jing et al. 17 extracted the areas of 12 lakes with a combined water index from the MOD09A1 dataset for different seasons (April, July and September) over 2005-2015, but that record is only about 10 years long. Klein et al. 2 used AVHRR and MODIS sensors to derive the extents of inland water bodies across Central Asia from 1986 to 2012 for the months of April, July and September. Tan et al. 3 used MODIS NDVI data to extract the areas of 24 lakes along the Silk Road (including some lakes of Central Asia) and analyze their spatial-temporal characteristics, but only with annual mean lake areas, without seasonal changes. Li et al. 18 used MODIS NDWI datasets to extract 9 lakes and analyze their seasonal and interannual changes from 2001 to 2016, but the number of lakes was small. This paper analyzes the seasonal and interannual changes of 14 closed lakes using MODIS product datasets.
Some studies have been conducted on changes in lake area, but most of the research was carried out within limited time frames 4,19 . Images (e.g., Landsat remote sensing images) have often been used to obtain the lake area at a given moment 16,20 . However, the lake area captured at a single point in time reflects only a particular period and is insufficient to characterize interannual and annual variations, because the lake area can fluctuate in the short term 3 . For example, when one observation represents the lake area for a whole year, the record lacks high temporal resolution (i.e., observing the lake several times a year) and ignores the seasonal changes affecting the lake. Annual changes such as inundation and drought can cause a water body's area to fluctuate within a short time, with significant impacts on the surrounding ecological environment. Currently, the data on lakes in arid regions do not fully reflect the variations caused by annual changes in water surface area 3 , so research with high temporal resolution is urgently required 21 . This paper studies the changes in lake water area in April, July and September in Central Asia and provides decision-making suggestions for water resource management and ecological environment maintenance for the impacted lakes.
Because open lakes are highly regulated by reservoirs, this paper mainly considers closed lakes. Changes in lake water area in Central Asia are driven mainly by the larger lakes, so the main typical lakes (larger than 200 km²) were chosen as research objects, comprising 7 alpine lakes and 7 plains lakes. This selection covers all the great closed lakes in Central Asia except the Caspian Sea.
Results
Temporal variation of lake area. From 2001 to 2016, the total area of the 14 lakes under study (Fig. 2a) was largest in April, followed by July and September. The change rate of the total lake area was −684.9 km²/a (P < 0.01, R² = 0.63) in April, −870.6 km²/a (P < 0.05, R² = 0.85) in July, and −827.5 km²/a (P < 0.01, R² = 0.80) in September. The lake area decreased fastest in July, followed by September, with the lowest rate of change in April.
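A least-squares trend of the kind reported here can be sketched as follows, using a synthetic area series rather than the study's data:

```python
import numpy as np

def area_trend(years, areas_km2):
    """Least-squares linear trend of lake area.
    Returns the slope (km²/a) and the coefficient of determination R²."""
    years = np.asarray(years, dtype=float)
    areas = np.asarray(areas_km2, dtype=float)
    slope, intercept = np.polyfit(years, areas, 1)
    fitted = slope * years + intercept
    ss_res = float(np.sum((areas - fitted) ** 2))
    ss_tot = float(np.sum((areas - areas.mean()) ** 2))
    return float(slope), 1.0 - ss_res / ss_tot

# Synthetic shrinking lake, losing 850 km² per year:
years = list(range(2001, 2017))
areas = [20000.0 - 850.0 * (y - 2001) for y in years]
slope, r2 = area_trend(years, areas)
print(round(slope, 1))  # -850.0
```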
The change in the total area of alpine lakes for April, July and September (Fig. 2b) was largest in April, with little difference between July and September. From 2001 to 2016, the change rate of the total alpine lake area was 31.1 km²/a (P < 0.01, R² = 0.84) in April, 30.6 km²/a (P < 0.05, R² = 0.94) in July, and 29.8 km²/a (P < 0.01, R² = 0.87) in September. The lake area increased fastest in April, followed by July, with the lowest rate of change in September.
The change in the total area of lakes located in the Central Asian plains for April, July and September (Fig. 2c) was largest in April and smallest in September. From 2001 to 2016, the change rates of the total area of these lakes in April, July and September were −716.1 km²/a, −900.5 km²/a, and −858 km²/a, with significance levels of P < 0.05 and R² values of 0.65, 0.86 and 0.81, respectively. The lake area decreased fastest in July, followed by September, with the lowest rate of change in April.

Figure 3 shows the average lake area for April, July and September, taken as the lake area for the entire year. From 2001 to 2016, the alpine lake areas were either stable or expanding (Fig. 3, Table 2). For example, the area of Issyk-kul Lake was stable and did not pass the significance test at the P < 0.05 level, whereas the other lakes passed the significance test at the P < 0.01 level. The annual change rates of Sai li-mu Lake and Karakul Lake were 0.21 km²/a and 0.81 km²/a, respectively. The change rates of Alakol Lake, Ayakkum Lake, Aqikkol Lake and Arkatag Lake were larger, ranging from 2.94 km²/a to 13.03 km²/a. According to Table 2, the seasonal variation rates of Aqikkol Lake, Arkatag Lake, Karakul Lake and Ayakkum Lake were 1.21, 1.27, 1.16 and 1.14, respectively, which are relatively large, whereas those of Issyk-kul Lake, Sai li-mu Lake and Alakol Lake were 1.00, 1.07 and 1.03, respectively, indicating fairly small seasonal variation.
During the period under study, the areas of the plains lakes varied notably (Table 2 and Fig. 4). For example, the South Aral Sea, Ebi Lake and Tengiz Lake decreased, with the South Aral Sea and Ebi Lake passing the significance tests of P < 0.001 and P < 0.05 and showing reduction rates of −846.47 km²/a and −7.30 km²/a, respectively. Conversely, the North Aral Sea, Sarygamysh Lake, Ulungu Lake and Balkhash Lake all exhibited an upward trend. Of these water bodies, the North Aral Sea and Sarygamysh Lake passed the significance test of P < 0.001, with increasing rates of 25.74 km²/a and 11.32 km²/a, respectively. According to Table 2, the seasonal variation rates of the South Aral Sea, Tengiz Lake and Ebi Lake were 1.99, 1.71 and 1.45, which are relatively large, whereas those of Balkhash Lake, Sarygamysh Lake, Ulungu Lake and the North Aral Sea were 1.03, 1.03, 1.08 and 1.13, respectively, indicating little seasonal variation. Generally speaking, the seasonal variation rates of lakes on the Central Asian plains were larger than those of alpine lakes in the same region.
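The text does not state the formula for the seasonal variation rate explicitly; assuming it is the ratio of the largest to the smallest of the three monthly areas, a minimal sketch:

```python
def seasonal_variation_ratio(april, july, september):
    """Assumed definition: largest of the three monthly areas divided by
    the smallest (the paper does not give the formula explicitly)."""
    months = (april, july, september)
    return max(months) / min(months)

# A hypothetical lake with areas of 100, 60 and 50 km² in April, July
# and September would have a ratio of 2.0:
print(seasonal_variation_ratio(100.0, 60.0, 50.0))  # 2.0
```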
The present study used Aqikkol Lake and Alakol Lake (Fig. 5a,b), along with Tengiz Lake and the North and South Aral Seas (Fig. 5c,d), as examples of the seasonal variation of lakes. For April, July and September from 2001 to 2016, the seasonal variation of the alpine lakes Aqikkol Lake and Alakol Lake and of the North Aral Sea was not significant, whereas that of Tengiz Lake and the South Aral Sea was quite significant. Specifically, the seasonal variation ratios were as follows: South Aral Sea, 1.99; Tengiz Lake, 1.71; North Aral Sea, 1.13; Aqikkol Lake, 1.21; and Alakol Lake, 1.03. These ratios indicate that the seasonal variation map of lake dynamics is consistent with the seasonal variation ratios of the lakes (Table 2).
Analysis of factors influencing lake area change.
With the exception of Karakul Lake, the seven alpine lake basins in the study area experienced an average warming rate of 0.053 °C/a (Fig. 6a, Table 3). Precipitation also charted a general upward trend (again with the exception of Karakul Lake), with an average increase rate of 1.15 mm/a. The temperature rise in the lake basins not only accelerated the melting of snow and glaciers but also lengthened the melting period, thus providing more water for the lakes. Additionally, the increase in rainfall supplied water directly onto the lake surface as well as indirectly through runoff, which also contributed to the increase in lake area.
However, as indicated above, precipitation and temperature in the Karakul Lake basin showed a downward trend that did not pass the significance test. As precipitation was mainly concentrated in spring and summer 22 , the expansion of the lake may have been due to winter and spring precipitation that melted in spring to form runoff recharging the lake. The Karakul Lake basin is arid, with strong evaporation, and summer runoff was mainly formed from glacier melt water. Thus, the lake area in spring was larger than in summer, while the lake overall was expanding.
In the alpine region of Central Asia, the agricultural land area in the Issyk-kul basin showed a marked decrease, while that of the Alakol Lake basin noticeably expanded (Table 3). However, the significance test of P < 0.001 indicated that changes in the area of Alakol Lake were mainly influenced by climate. By analyzing the relationship between the lake area change rate and the recharge coefficient, this study found a positive correlation (correlation coefficient r = 0.92), indicating that the relationship between lake area change and precipitation was significant 12,23 (Fig. 7, Table 3). Furthermore, the correlation with the ratio of glacier area to lake area was weaker (r = 0.43), indicating that the influence of precipitation recharge was more pronounced than that of glacier recharge (Fig. 7, Table 3).
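The correlation coefficients quoted above (r = 0.92, r = 0.43) are plain Pearson correlations; a self-contained sketch with hypothetical per-lake values (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-lake values: area change rate vs. recharge coefficient
change_rate = [1.2, 0.8, 2.5, 0.3, 1.9]
recharge_coeff = [10.0, 8.5, 14.0, 6.0, 12.5]
r = pearson_r(change_rate, recharge_coeff)
```

Values of r near 1 indicate that lakes with larger recharge coefficients also expanded faster, which is the pattern the study reports.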
The North Aral Sea and South Aral Sea, along with the Sarygamysh Lake, Balkhash Lake, Ebi Lake and Ulungu Lake basins, all experienced a warming trend (Fig. 6, Table 3), with an average warming rate of 0.018 °C/a. During the same period, temperatures in the Tengiz Lake basin showed a downward trend of −0.019 °C/a. Meanwhile, precipitation in the South Aral Sea, Balkhash Lake, Ebi Lake and Ulungu Lake basins charted an increasing trend with an average rate of 1.38 mm/a, whereas the North Aral Sea and Sarygamysh Lake basins showed downward trends of −0.243 mm/a and −0.036 mm/a, respectively, with none of these trends passing the significance test. Agricultural land area in the South Aral Sea, North Aral Sea, Tengiz Lake and Balkhash Lake basins also exhibited downward trends of −150.1 km 2 /a, −60.37 km 2 /a, −46.65 km 2 /a and −18.31 km 2 /a, respectively, again with none passing the significance test. In contrast, agricultural land in the Sarygamysh Lake, Ebi Lake and Ulungu Lake basins showed a clear upward trend, with rising rates of 19.55 km 2 /a, 194 km 2 /a and 11.23 km 2 /a, respectively, which passed the significance test of P < 0.05. Under the dual climate conditions of rising temperature and rising precipitation, the plains lakes nevertheless demonstrated a downward trend in area, indicating that they were affected by both climate change and human activities. Human activities affect the utilization of water resources in a lake basin differently in different seasons, and data on the amount of water consumed in each season are difficult to obtain. Therefore, cropland area was used to indirectly reflect agricultural water consumption and to analyze the causes of interannual changes in lake area. Table 3. Variation trends of temperature, precipitation and cropland in 14 lake basins.
a Lake supply coefficients are based on the literature 43 . b Glacier area/lake area is based on the literature 43 . c Lake type: "A" means alpine lake and "P" means plains lake. d * means significance level P < 0.05.
Discussion
In this paper, 14 Central Asian lakes with a combined area of more than 200 km 2 were studied during the months of April, July and September to determine the seasonal variations in area. The freezing of the lakes in winter greatly affected their extraction accuracy 21 , so the winter season was not included in the study data. By comparing and analyzing the total area of lakes in April, July and September (representing changes in lake area during spring, summer and autumn), the study found a clear downward trend in the total area of the lakes, with alpine lakes showing an upward trend and plains lakes a downward trend.
It was concluded that alpine lake areas were generally increasing, while plains lake areas were generally decreasing. This conclusion is consistent with Tan et al. 3 on changes in lake area in Central Asia along the Silk Road, where the plains lakes, such as the Aral Sea, Ebi Lake and Sarygamysh Lake, tended to shrink. The seasonal variation trends were similar to the lake areas obtained by Jing et al. 17 for lakes such as Ebi Lake, Ulungu Lake and Ayakkum Lake. Bai et al. 24 used Landsat images to study changes in the area of 9 inland lakes in Central Asia from 1975 to 2007 and found that the area of lakes in plains regions decreased significantly, while alpine lakes were relatively stable. These results are consistent with the conclusions of this paper.
A dam between the North and South Aral seas was built in 2005 in the Berg Strait, completely controlling the water resources of the North Aral Sea. As a result, the recharge of the Syr Darya River into the North Aral Sea remained stable, meaning that the evaporation was in balance with lake precipitation and the runoff of the Syr Darya River into the lake 25 . However, the surface area remained stable only at certain times of the year. Snow melt water is an important water recharge source 26 , so the runoff in spring is higher than in summer and autumn. This finding was consistent with changes in lake area studied in this paper (Fig. 4).
The South Aral Sea experienced shrinking, followed by an increase. Springtime flooding of the Amu Darya River occurred frequently after early 2012, causing the surface of the South Aral Sea to rise 2 . At the same time, precipitation in the South Aral Sea region showed an increasing trend (Table 4), with the rainy season lasting from October to April 27 , so that the South Aral Sea area was larger in spring than in summer or autumn due to frequent spring floods.
It is worth noting that the surface area of Sarygamysh Lake did not decrease but instead showed an upward trend. The main reason for this seeming anomaly is that water from farmland irrigation in the lower reaches of the Amu Darya did not return to the river; rather, Sarygamysh Lake was recharged by part of the water of the Amu Darya River 28 . Analysis of the relationship between lake area change and temperature and precipitation showed no obvious relationship among these factors, but there was a significant positive correlation between agricultural area expansion and lake area expansion (Table 4). The main reason was that the inflow of the external water source (the Amu Darya River) into Sarygamysh Lake caused the lake area to expand. Thus, the changes in the lake's surface area were primarily the result of human activities.
The main recharge source of Tengiz Lake was inflow from snow melt in spring 29 . Thus, the area of Tengiz Lake in April was larger than in July and September, and it changed greatly. During the study's time frame (2001–2016), the lake area decreased from 2001 to 2012 and then enlarged from 2013 to 2015. The reasons are likely climatic, given the reduction in cropland area (−46.65 km 2 /a) in the Tengiz basin (Table 3). Precipitation in the Tengiz Lake basin from 2001 to 2016 increased slightly (2.813 mm/a), and the decrease in temperature (−0.019 °C/a) may have reduced evaporation (Table 3); thus, the lake area change followed the balance of precipitation minus evaporation. The winter snow water equivalent in the Tengiz basin decreased from 2001 to 2012 and then began to increase 2,30 . Some small lakes in northern Kazakhstan changed with a similar trend due to increasing precipitation from 2013 to 2016 31 , which is consistent with our study.
Despite increasing trends for temperature and precipitation in the Ebi Lake basin, the cultivated land area increased significantly and the lake area decreased significantly. These changes were clearly influenced by human activities, consistent with previous findings 32 . The development of irrigated agriculture in the upper reaches of the basin consumed river water, directly reducing the inflow to lakes and shrinking the area of lakes in the lower reaches. Plains lakes are mainly recharged by rivers, and lake surface area varies greatly with the amount of river runoff. However, due to the lack of runoff data for the lake basins, this study could not quantitatively analyze the impact of runoff on the lakes.
Precipitation and temperature in the Ayakkum Lake, Aqikkol Lake, Arkatag Lake and Sai-limu Lake basins also showed an upward trend 33 . According to the findings of a recent study, Alakol and Issyk-kul lakes were either stable or expanding in area due to rising temperatures in the nearby mountain region 34 . For Karakul Lake, situated on the Pamir Plateau, precipitation and temperature were slightly decreasing, thus inhibiting evaporation. Moreover, because Karakul Lake is surrounded by mountains, it is difficult for moist air carried by the westerly circulation to enter the basin 35 , which left the lake area in a stable state.
Lake ice was counted as part of the lake area for April in this paper, so the lake areas reported here were larger than those in the existing literature owing to the different reflectivities of ice and water 2,17 . There may also be some uncertainty for large water bodies, as the daily spatial resolution of MODIS is 500 m, which is relatively low.
Conclusion
The present work studied area changes occurring from 2001 to 2016 in 14 typical lakes in Central Asia during the months of April, July and September. Using daily 500 m resolution water product data, the interannual and seasonal variation characteristics of lakes were analyzed. Overall, the total area of the 14 lakes under study showed a significant decreasing trend. Specifically, the change rates for lake area in April, July and September were −684.9 km 2 /a, −870.6 km 2 /a and −827.5 km 2 /a, respectively. The total area of lakes situated in plains regions showed a significant decreasing trend during the months of April, July and September, with change rates of −716.1 km 2 /a, −900.5 km 2 /a and −858 km 2 /a, respectively. However, the total area of lakes situated in alpine regions showed a significant increasing trend, with change rates of 31.1 km 2 /a, 29.8 km 2 /a and 30.6 km 2 /a for the same three months, respectively.
The study findings also showed that the area change rate of alpine lakes was less than that of plains lakes. The seasonal variation rates of lakes in the plains region of Central Asia ranged from 1.03 to 1.99, with the rates for the South Aral Sea, Tengiz Lake and Ebi Lake being 1.99, 1.71 and 1.45, respectively. The seasonal variation of alpine lakes was smaller than that of plains lakes, ranging from 1 to 1.27. The seasonal variation rates of Issyk-kul Lake, Sai-limu Lake and Alakol Lake (in the Tianshan Mountains) were slightly lower, ranging from 1 to 1.07, while the rates for Ayakkum Lake, Aqikkol Lake, Arkatag Lake and Karakul Lake (in the Kunlun Mountains and Pamir Plateau) were between 1.14 and 1.27.
Analysis of the factors influencing alpine lake area changes points to the warm and humid climate as the likely main cause of the expansion. Seasonal variations in lake area differed according to the recharge source and the proportions of its components. For instance, alpine lake area changes were highly positively correlated with the lake basin recharge coefficient (r = 0.92), whereas they showed only a slight correlation with the ratio of glacier area to lake area (r = 0.43) (Fig. 7). The recharge of lakes by precipitation may therefore be greater than that by glaciers.
For the plains lakes, the shrinkage of surface area was primarily the result of climate change and human activities. Even though the area of agricultural land in the South Aral Sea basin declined, the lake area still decreased because a portion of the runoff from the Amu Darya River recharged Sarygamysh Lake, increasing that lake's area instead. The North Aral Sea situation differed substantially from that of the South Aral Sea: the truncation of surface water sources caused by the Berg Strait dam resulted in the basin essentially achieving water balance, with only a slight increase in area. In the same region, Ebi Lake was directly affected by the human consumption of agricultural irrigation water and the consequent decrease in inflow to the lake. In contrast, Balkhash and Ulungu lakes saw an increase in their areas due to the warm and humid climate surrounding them. Finally, the Tengiz Lake basin underwent a slight cooling and humidifying change in climate, which may be related to the increase in lake area after 2013. Through the analysis of the causes of lake changes, this paper provides suggestions for water resources management in lake basins.
Methods
Annual change rate of lake area. The Least Squares Method (LSM) 3 was used to calculate the annual change rate of lake area. Trend changes of temperature and precipitation. The Mann-Kendall test was used to calculate the climate change rate and significance level 36 . A result of P < 0.05 indicates that the trend passed the significance test.
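Both procedures can be sketched as follows, assuming the standard ordinary-least-squares slope for the annual change rate and the Mann-Kendall S statistic without tie correction (the authors' exact formulas are not reproduced in the text):

```python
import math

def lsm_rate(years, values):
    """Annual change rate (slope) by ordinary least squares."""
    n = len(years)
    mx, my = sum(years) / n, sum(values) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, values))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

def mann_kendall(values):
    """Mann-Kendall trend test (no tie correction); returns (S, p-value)."""
    n = len(values)
    s = sum(
        (values[j] > values[i]) - (values[j] < values[i])
        for i in range(n) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return s, p

# Hypothetical shrinking-lake series, 800 km^2 lost per year
years = list(range(2001, 2017))
areas = [30000 - 800 * (y - 2001) for y in years]
rate = lsm_rate(years, areas)
s, p = mann_kendall(areas)
```

For this strictly decreasing series the slope is −800 km²/a and the Mann-Kendall p-value is far below 0.001, i.e. the downward trend passes the significance test.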
|
v3-fos-license
|
2022-11-18T14:25:07.642Z
|
2013-02-27T00:00:00.000
|
253588822
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00009-013-0264-2.pdf",
"pdf_hash": "2205e4a2bb7c03c8bdcf2069217e11a92e5201ae",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1167",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "2205e4a2bb7c03c8bdcf2069217e11a92e5201ae",
"year": 2013
}
|
pes2o/s2orc
|
On the Nilpotency Class of a Generalized 3-Abelian Group
A group G is called 3-abelian if the map x → x 3 is an endomorphism of G, and it is called generalized 3-abelian if there exist elements c 1 , c 2 , c 3 ∈ G such that the map φ : x → x c1 x c2 x c3 is an endomorphism of G. Abdollahi, Daoud and Endimioni have proved that a generalized 3-abelian group G is nilpotent of class at most 10. Here, we improve the bound to 3 and we show that the exponent of its derived subgroup is finite and divides 9. We also prove that G is 3-Levi, 9-central, 9-abelian and 3-nilpotent of class at most 2.
Introduction and Results
Let n ≥ 2 be an integer. A group G is called n-abelian whenever (xy) n = x n y n for all x, y ∈ G, or equivalently, the map x → x n is an endomorphism of G. Levi [7] proved that a group G is 3-abelian if and only if it is 2-Engel and the exponent of its derived subgroup [G, G] divides 3. Trotter [10] proved that a 3-abelian group G is abelian whenever the map x → x 3 is an automorphism of G. A group G is called generalized n-abelian whenever there exist elements c 1 , . . . , c n ∈ G such that the map x → x c1 · · · x cn is an endomorphism of G. The class of generalized n-abelian groups is closed under the formation of images and finite direct products. Obviously, every n-abelian group is generalized n-abelian, and it is easy to see that every generalized 2-abelian group is abelian. It is clear that by conjugating we may assume one of c 1 , . . . , c n to be the trivial element.
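To make the n-abelian condition concrete, a brute-force check (illustrative only, not part of the paper) that the symmetric group S3 satisfies the condition trivially for n = 1 but fails to be 3-abelian:

```python
from itertools import permutations

def compose(p, q):
    """Compose permutations of {0,...,k-1} given as tuples: apply q, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def power(p, n):
    """n-th power of a permutation (n >= 0)."""
    r = tuple(range(len(p)))
    for _ in range(n):
        r = compose(r, p)
    return r

def is_n_abelian(group, n):
    """Check (xy)^n == x^n y^n for all x, y in the group."""
    return all(
        power(compose(x, y), n) == compose(power(x, n), power(y, n))
        for x in group for y in group
    )

s3 = list(permutations(range(3)))
```

Every group is trivially 1-abelian, but taking x and y to be two distinct transpositions in S3 gives a 3-cycle xy with (xy)^3 = 1 while x^3 y^3 = xy ≠ 1, so S3 is not 3-abelian.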
Abdollahi, Daoud and Endimioni [1, Theorem 3.1] have proved that a generalized 3-abelian group G is nilpotent of class at most 10, and abelian. Theorem 1.2. Let G be a generalized 3-abelian group admitting an endomorphism of the form ii) The subgroup Im φ is abelian. In particular, if φ is injective or surjective, G is abelian.
Let m ≠ 0 be an integer. Baer [2] introduced the m-center of a group G as the set Z(G, m) of all a ∈ G with (ax) m = a m x m and (xa) m = x m a m for all x ∈ G. The set Z(G, m) is a characteristic subgroup of G for any non-zero integer m. L.-C. Kappe and M. L. Newell [5] proved that the two defining conditions are equivalent. Thus only one of the m-commutativity conditions suffices to define the m-center Z(G, m). If m is a positive integer, the upper m-central series Z i (G, m) is defined inductively by Z 1 (G, m) = Z(G, m) and Z i+1 (G, m)/Z i (G, m) = Z(G/Z i (G, m), m). We then get an ascending series.
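Baer's m-center can be computed by brute force in small groups; a sketch under the two-sided definition (elements a with (ax)^m = a^m x^m and (xa)^m = x^m a^m for all x), which is assumed here. For S3 with m = 2 the m-center coincides with the ordinary center, the trivial subgroup:

```python
from itertools import permutations

def compose(p, q):
    """Compose permutations of {0,...,k-1} given as tuples: apply q, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def power(p, m):
    """m-th power of a permutation (m >= 0)."""
    r = tuple(range(len(p)))
    for _ in range(m):
        r = compose(r, p)
    return r

def m_center(group, m):
    """Brute-force m-center: all a with (ax)^m = a^m x^m and
    (xa)^m = x^m a^m for every x in the group."""
    return {
        a for a in group
        if all(
            power(compose(a, x), m) == compose(power(a, m), power(x, m))
            and power(compose(x, a), m) == compose(power(x, m), power(a, m))
            for x in group
        )
    }

s3 = list(permutations(range(3)))
z2 = m_center(s3, 2)
```

For m = 2 the condition (ax)^2 = a^2 x^2 reduces to xa = ax, so Z(G, 2) is exactly the center, which for S3 is the identity alone.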
Proofs
Notations used in this paper are standard. For a group G and elements x 1 , x 2 , . . . , x n , x, y ∈ G, the commutators [x 1 , x 2 , . . . , x n ] and [x, n y] are defined inductively by the rules [x, y] = x −1 y −1 xy, [x 1 , x 2 , . . . , x n ] = [[x 1 , . . . , x n−1 ], x n ] and [x, n y] = [[x, n−1 y], y] with [x, 0 y] = x. For a given integer i ≥ 1, we denote by [G, G], ζ i (G) and γ 3 (G), respectively, the derived subgroup, the ith center and the third term of the lower central series of G. We write H ≤ G if H is a subgroup of G. To prove Theorems 1.1, 1.2 and 1.3 we need the following lemmas. Lemma 2.1. Let G be a generalized n-abelian group admitting an endomorphism of the form ψ : x → x c1 · · · x cn where c 1 , . . . , c n ∈ G. Then, G is n-abelian whenever c 1 , . . . , c n ∈ ζ 2 (G).
whence (xy) n = x n y n .
Lemma 2.2.
Let G be a metabelian group, x, y ∈ G and u, v ∈ [G, G]. Then, for any integer n ≥ 1, the following assertions hold.
Proof. (a) is easy to prove as [G, G] is abelian. For the proofs of (b) and (c) see [4].
Proof of Theorem 1.1. i) Let G be a generalized 3-abelian group with the given endomorphism φ defined by x φ = x a x x b for all x ∈ G, where a, b ∈ G are fixed. To prove that G is nilpotent of class at most 3, it is enough to show that every 4-generated subgroup ⟨g 1 , g 2 , g 3 , g 4 ⟩ of G is nilpotent of class at most 3. Now H = ⟨g 1 , g 2 , g 3 , g 4 , a, b⟩ is clearly invariant under the action of φ and is thus a generalized 3-abelian group as well. We can thus replace G by H, and it suffices to show that H is nilpotent of class at most 3. By [1, Théorème 3.1], H is nilpotent. Now one can use the nq package of Werner Nickel [8], implemented in GAP [9] and MAGMA [3], to find the nilpotency class
of H. The package nq has the capability of computing the largest nilpotent quotient (if it exists) of a finitely generated group with finitely many relations and finitely many identical relations. For example, suppose we want to construct the largest nilpotent quotient of a group G with a presentation on generators x 1 , . . . , x n subject to r 1 , . . . , r m and w(x 1 , . . . , x n , y 1 , . . . , y k ) = 1, where r 1 , . . . , r m are relations on x 1 , . . . , x n and w(x 1 , . . . , x n , y 1 , . . . , y k ) = 1 is an identical relation in the group ⟨x 1 , . . . , x n ⟩. One may apply the following code to use the package nq in GAP:
LoadPackage("nq"); # nq package of Werner Nickel
Note that we need to construct the free group of rank n + k because, as well as the n generators for G, we also have an identical relation with k free variables. Note that the function NilpotentQuotient(L) attempts to compute the largest nilpotent quotient of L and will terminate only if L has a largest nilpotent quotient. Our identical relation is (xy) φ = x φ y φ for all x, y ∈ G, which can be written as follows: ii) Let x, y ∈ G. From part (i) and Theorem 1.1, we know that G is nilpotent of class at most 3 and γ 3 (G) 3 = {1}. Therefore, using furthermore the facts from part (i) and Theorem 1.
|
v3-fos-license
|
2016-06-18T00:44:08.071Z
|
2016-04-29T00:00:00.000
|
14424406
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2016.00573/pdf",
"pdf_hash": "30f0acae247e0122226379a6da755d36f00f99b5",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1168",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "30f0acae247e0122226379a6da755d36f00f99b5",
"year": 2016
}
|
pes2o/s2orc
|
Comparative Proteomic Analysis of Soybean Leaves and Roots by iTRAQ Provides Insights into Response Mechanisms to Short-Term Salt Stress
Salinity severely threatens land use capability and crop yields worldwide. Understanding the mechanisms that protect soybeans from salt stress will help in the development of salt-stress tolerant leguminous plants. Here we initially analyzed the changes in malondialdehyde levels, the activities of superoxide dismutase and peroxidase, chlorophyll content, and Na+/K+ ratios in leaves and roots from soybean seedlings treated with 200 mM NaCl at different time points. We found that treatment with 200 mM NaCl for 12 h was optimal for undertaking a proteomic analysis of soybean seedlings. An iTRAQ-based proteomic approach was used to investigate the proteomes of soybean leaves and roots under salt treatment. These data are available via ProteomeXchange with the identifier PXD002851. In total, 278 and 440 proteins with significantly altered abundances were identified in leaves and roots of soybean, respectively. Of these, a total of 50 proteins were identified in both tissues. These differentially expressed proteins (DEPs) were from 13 biological processes. Moreover, protein-protein interaction analysis revealed that proteins involved in metabolism, carbohydrate and energy metabolism, protein synthesis and redox homeostasis could be assigned to four high salt stress response networks. Furthermore, semi-quantitative RT-PCR analysis revealed that some of the proteins, such as a 14-3-3, MMK2, PP1, and TRX-h, were also regulated by salt stress at the level of transcription. These results indicated that effective regulation of protein expression related to signaling, membrane and transport, stress defense and metabolism all played important roles in the short-term salt response of soybean seedlings.
INTRODUCTION
Salinity is one of the most widespread agricultural problems in arid and semi-arid regions and significantly reduces plant growth and productivity. It is reported that 20% of irrigated land, which yields one-third of the world's food, is threatened by salt stress (Ma et al., 2012). High salinity, predominantly in the form of NaCl, affects plant growth mainly in three ways: osmotic stress, ion toxicity and secondary stresses such as oxidative stress (Turkan and Demiral, 2009). The salt signal is primarily perceived through roots, which rapidly respond to maintain root functionality and transmit signals to other organs for appropriate response and adaptation in the entire plant (Zhao et al., 2013). A combinatorial approach of accelerated gene discovery through genomics, proteomics, and advances in plant biotechnology techniques will provide insights into the molecular and biochemical basis of plant stress tolerance, which ultimately lead to crop improvement for sustainable agriculture (Eldakak et al., 2013).
Soybean (Glycine max) is one of the most economically important crops due to the high oil and protein content of its seeds. Salt stress affects soybean growth throughout development, from seed germination to flowering; however, the early vegetative growth stages are reported to be more prone to abiotic stresses (Hossain et al., 2013). Recently, with advances in transcriptome mapping, some salt-responsive genes and molecular regulatory pathways have been identified in soybean seedlings (Fan et al., 2013; Qi et al., 2014). However, genomic studies only highlight mRNA levels, which are not necessarily translated into proteins, and transcriptome data may therefore not correlate with results from proteomic analysis due to post-transcriptional and post-translational modifications (Hossain et al., 2013). Thus, quantitative analysis of gene expression at the protein level is essential for determining plant responses to salt stress. The proteome of soybean subjected to salinity has been analyzed using roots and hypocotyls of young seedlings (Aghaei et al., 2009) and other tissues (Sobhanian et al., 2010), indicating that photosynthesis, protein biosynthesis and ATP biosynthesis decreased while defense proteins increased in soybean in response to salt stress (Sobhanian et al., 2011). Further comparative proteomic approaches have been employed to explore proteome expression patterns in germinating soybeans under salt stress, and the results suggested that enhanced energy metabolism and accelerated protein processing in the endoplasmic reticulum were important strategies of germinating soybeans in response to NaCl stress (Yin Y. Q. et al., 2014). In addition, a proteomic approach has also been applied to seedlings of different salt-tolerant genotypes of soybean under salt stress (Ma et al., 2012, 2014), which identified several proteins as potential candidates for augmenting salt tolerance in soybean.
However, determining the mechanisms involved in salt tolerance remains a challenging task, because plant responses to salinity can be very diverse depending on the severity and duration of the stress, leading to various changes at the proteome level (Hossain et al., 2013). The initial phases of the stress response usually reveal more profound differences in the composition of the proteome than later phases, once a novel homeostasis between plant and environment has been established (Vitamvas et al., 2015). The majority of studies that investigated the salt response in soybean have focused on relatively late responses to salinity treatment; in contrast, the early response of plants to short-term salt stress has been overlooked (Pi et al., 2016). Therefore, the main objectives of this study were to investigate proteome expression patterns and to identify differentially expressed proteins under short-term salt stress in soybean seedlings. In the present study, an iTRAQ-based proteomic technique was used to assess proteome changes and identify proteins that were differentially expressed in soybean leaves and roots in response to 12 h of 200 mM NaCl treatment. Our approach was sensitive enough to identify 278 and 440 proteins with significantly altered abundance in leaves and roots of soybean, respectively. Proteins with markedly altered expression patterns were classified into 13 functional groups. Candidate proteins that may play important roles in salt stress responses were analyzed at the transcript level via semi-quantitative RT-PCR. This study advanced our understanding of salt-responsive mechanisms in soybean plants.
Plant Materials and Salt Treatment
Seeds of soybean (Glycine max cv. Dongnong 50) were germinated on filter paper soaked in distilled water in Petri dishes at 25 °C. After 2 days, uniform germinated seedlings were transferred to plastic containers filled with vermiculite and irrigated with 1/4 Hoagland nutrient solution (Hoagland, 1944) in a growth chamber under normal conditions (25/20 °C day/night temperature, relative humidity of 60-80%, and a 16 h light period per day at an intensity of 160 µmol photons m −2 s −1 ). When the plants reached the trefoil stage, they were transferred to liquid medium containing 1/4 Hoagland nutrient solution. For stress treatment, half of the soybean plants were shifted to 1/4 Hoagland solution containing 200 mM NaCl for 0, 1, 3, 6, 12, 24, or 48 h. The rest of the seedlings, grown in liquid 1/4 Hoagland solution with no NaCl added, were used as controls. Plant roots and the second developed trifoliate leaves were analyzed at the proteomic, physiological and transcript levels. Three independent sets of control and NaCl-treated samples were collected, and each replicate represented a pooled sample of three individual plants.
Measurement of Superoxide Dismutase Activity, Peroxidase Activity, Malondialdehyde, and Chlorophyll
Leaf and root samples (0.4 g) were ground in liquid nitrogen and homogenized in 10 volumes of ice-cold 50 mM sodium phosphate buffer (pH 7.8). After centrifugation at 15,000 ×g at 4 °C for 20 min, the resulting supernatants were collected and used for protein content assays and enzyme activity measurements. Protein content was determined according to Bradford (Bradford, 1976) with bovine serum albumin as the standard. Superoxide dismutase (SOD) activity was determined by monitoring its ability to inhibit the photochemical reduction of nitroblue tetrazolium (NBT) at 560 nm (Beauchamp and Fridovich, 1971). The activity of peroxidase (POD) was determined using the guaiacol oxidation method. Malondialdehyde (MDA) content was measured by the thiobarbituric acid (TBA) reaction according to the method of Hodges et al. (1999). MDA contents were calculated from UV absorbance at 450, 532, and 600 nm. Leaf chlorophyll was extracted in 80% acetone and measured with a UV-visible spectrophotometer at 645 and 663 nm. Chlorophyll a, chlorophyll b and total chlorophyll contents were calculated according to the formula previously described (Arnon, 1949).
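Arnon's (1949) equations for 80% acetone extracts are the standard choice at these wavelengths; a sketch assuming those coefficients (the authors' exact formula is not reproduced in the text, and the absorbance values below are hypothetical):

```python
def chlorophyll_arnon(a645, a663):
    """Chlorophyll content (mg/L of 80% acetone extract), Arnon (1949).

    a645, a663: absorbances at 645 nm and 663 nm.
    """
    chl_a = 12.7 * a663 - 2.69 * a645
    chl_b = 22.9 * a645 - 4.68 * a663
    total = 20.2 * a645 + 8.02 * a663
    return chl_a, chl_b, total

# Hypothetical absorbance readings from a leaf extract
chl_a, chl_b, total = chlorophyll_arnon(0.25, 0.50)
```

Concentrations in mg/L of extract are then converted to content per gram of fresh weight using the extract volume and sample mass.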
Measurement of Na + /K + Contents
Dried roots and leaves of soybean seedlings were used for analysis of Na + and K + contents. The samples were ground to a powder using a pestle and mortar. A portion of each powdered sample was digested with concentrated HNO 3 at 110 °C for 2 h. Na + and K + contents in the digested samples were measured using an atomic absorption spectrophotometer as described previously (Wang and Zhao, 1995).
Protein Extraction and Quantification
Total protein was prepared from three biological replicates of control and NaCl-treated soybean tissues using a phenol extraction method with the following modifications. Briefly, 1 g of each sample was ground into a fine powder in liquid nitrogen in a chilled mortar. After adding 2.5 mL of Tris (pH 8.8)-buffered phenol and 2.5 mL of extraction buffer (0.1 M Tris-HCl pH 8.8, 10 mM EDTA, 0.4% β-mercaptoethanol, 0.9 M sucrose), the samples were homogenized for 15 min, transferred to 50 mL tubes and agitated for 30 min at 4 °C, followed by centrifugation at 10,000 ×g for 30 min at 4 °C. The phenol phase was transferred to new tubes, and the remaining aqueous phase was back-extracted with 4 mL extraction buffer and 4 mL phenol. The two extractions were combined and precipitated by adding 5 volumes of 0.1 M ammonium acetate in 100% methanol and incubating at −20 °C overnight. The precipitate was collected by centrifugation at 20,000 ×g for 20 min at 4 °C, washed twice with 0.1 M ammonium acetate in methanol and with ice-cold 80% acetone, and once with cold 70% ethanol. The resulting pellets were dissolved in lysis buffer (7 M urea, 2 M thiourea, 4% CHAPS, 40 mM DTT). Protein concentrations were determined using the Bradford assay (Bio-Rad) with BSA as the standard.
Protein Digestion, iTRAQ Labeling and Strong Cation Exchange Fractionation
A total of 100 µg of protein from each sample was precipitated with acetone overnight. After protein precipitation, the pellet of each replicate was dissolved in 1% SDS, 100 mM triethylammonium bicarbonate, pH 8.5. The samples were reduced, alkylated, and digested with trypsin at 20:1 (w/w) at 37 °C for 12 h, then labeled using the iTRAQ Reagents 8plex kit according to the manufacturer's instructions (AB Sciex Inc., USA). The untreated leaf and root sample replicates were labeled with iTRAQ tags 113 and 117, and the salt-treated samples with tags 115 and 119, respectively. Three sets of iTRAQ samples were used for the three biological replicates.
After labeling, the samples were combined and lyophilized. The peptide mixture was dissolved in strong cation exchange (SCX) solvent A (25% (v/v) acetonitrile, 10 mM ammonium formate, and 0.1% formic acid, pH 2.8), and then fractionated on an Agilent 1260 HPLC system with a polysulfethyl A column (2.1 × 100 mm, 5 µm, 300 Å). Peptides were eluted at a flow rate of 200 µL min−1 with a linear gradient of 0−20% solvent B (25% (v/v) acetonitrile, 500 mM ammonium formate, pH 6.8) over 50 min, followed by ramping up to 100% solvent B in 5 min and holding for 10 min. The absorbance at 280 nm was monitored and a total of 12 fractions were collected.
Reverse Phase Nanoflow HPLC and Tandem Mass Spectrometry
Each SCX fraction was lyophilized and dissolved in solvent A (3% v/v acetonitrile, 0.1% v/v acetic acid), and loaded onto a C18 PepMap nanoflow column (75 µm internal diameter, 3 µm, 100 Å). Peptides from iTRAQ samples were separated using a 90 min linear gradient ranging from 97% solvent A/3% solvent B (96.9% v/v acetonitrile, 0.1% v/v acetic acid) to 40% solvent A/60% solvent B. MS/MS analysis was carried out on an LTQ Orbitrap Elite mass spectrometer (Thermo Scientific, Bremen, Germany) in positive mode (Parker et al., 2015). Briefly, full MS survey scans were acquired over a mass range of 400−1800 m/z with resolution R = 60,000 at m/z 400. HCD fragmentation was used for MS/MS, and the 10 most intense signals in the survey scan were fragmented. A resolution of 7500 at 400 m/z was used, with an isolation width of 1 m/z and a signal threshold of 30,000.
Data Analysis and Interpretation
The raw MS/MS data files acquired from the Orbitrap were processed by thorough database searching, considering biological modifications and amino acid substitutions, against the Uniprot Soybean database with 71,042 entries (downloaded on May 16, 2013), using Proteome Discoverer 1.4 (Thermo Scientific Inc., Bremen, Germany) with the SEQUEST algorithm. The following parameters were used for searching: lowest and highest charge: +2 and +5, respectively; minimum and maximum precursor mass: 300 and 6000 Da, respectively; minimum S/N ratio: 3; enzyme: trypsin; maximum missed cleavages: 1; FDR ≤ 0.01; mass tolerance: 10 ppm for precursor ions and 0.5 Da for fragment ions; dynamic modifications: phosphorylation (+79.966 Da (S,T,Y)), carbamidomethyl (+57.021 Da (C)), oxidation (+15.995 Da (M)). The N-terminal modification was set for iTRAQ 8plex (+304.205 Da). The Proteome Discoverer results files (.msf) were uploaded to ProteoIQ 2.6 (NuSep) software for further filtering. Peptide probability was applied to filter peptide assignments obtained from MS/MS database searching results using a predictable false identification error rate. Protein probability was used to filter proteins under the null hypothesis that the database matching is random, taking into account the peptide probability for all peptides apportioned to that protein (Koh et al., 2015). Proteins detected with at least three spectral counts, FDR ≤ 5%, and 95% probability, and listed as top-scoring proteins, were considered high-confidence matches and are presented in the results. To be identified as significantly differentially expressed, a protein had to be quantified with at least three peptides in each experimental replicate, with a p-value smaller than 0.05 and a fold change greater than 1.3 or less than 0.7. The MS proteomics data have been deposited in the ProteomeXchange Consortium via the PRIDE partner repository (Vizcaino et al., 2013) with the data set identifier PXD002851.
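The confidence thresholds described above (at least three spectral counts, FDR ≤ 5%, and 95% protein probability, restricted to top-scoring proteins) amount to a simple predicate over the search output. A minimal sketch follows, with field names that are assumptions for illustration rather than ProteoIQ's actual export format:

```python
def is_high_confidence(protein):
    """Apply the high-confidence criteria from the text. The dict keys
    (spectral_counts, fdr, probability, top_scoring) are hypothetical
    field names, not ProteoIQ's real export schema."""
    return (protein["spectral_counts"] >= 3
            and protein["fdr"] <= 0.05
            and protein["probability"] >= 0.95
            and protein["top_scoring"])
```

Filtering a list of search results is then a one-liner: `[p for p in proteins if is_high_confidence(p)]`.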
The functional annotation of the identified proteins was determined by Blast2GO (Bioinformatics Department, CIPF, Valencia, Spain), and proteins were then grouped on the basis of their biological functions from Gene Ontology (GO) terms combined with information from the literature (Zhang et al., 2012; Zhao et al., 2013). The protein-protein interaction network was constructed using the String program (http://string-db.org).
RNA Extraction and Semi-Quantitative RT-PCR
Total RNA was extracted separately from salt-treated and control soybean leaves and roots using the Ultrapure RNA Kit (Beijing Comwin Biotech Co., Ltd., China) with DNase I treatment, and cDNA was reverse transcribed from 1 µg of total RNA using a First Strand cDNA Synthesis Kit (Invitrogen). Gene-specific primers (GSPs) used for RT-PCR were designed with the Primer 5 software according to soybean cDNA sequences (Table S1). The soybean actin 11 gene was used as the endogenous control for normalization. The annealing temperatures and numbers of amplification cycles of the 6 genes in the PCR assay are shown in Table S1. PCR was run with 2×Es Taq MasterMix (Beijing Comwin Biotech Co., Ltd., China) using a program consisting of denaturation at 95 °C for 5 min, followed by cycles of denaturation at 95 °C for 30 s, annealing at 50−62 °C for 30 s, and extension at 72 °C for 60 s. mRNA expression levels were analyzed from PCR products at defined cycle numbers (26, 30, and 33) by 1.5% agarose gel electrophoresis.
RESULTS

Physiological Changes of Soybean Plants under Salt Treatment
The exposure of soybean seedlings to 200 mM NaCl resulted in various morphological and physiological changes in the leaves and roots over time. Salinity stress caused clear growth retardation. Treatment with 200 mM NaCl for 1 h did not induce any obvious phenotypic differences in the seedling leaves, but treatment for 3 h induced the older leaf margins to roll inward (Figure 1), and this phenotype was quite obvious after 12 h of treatment. After 12 h, the leaves of salt-treated seedlings started to wilt, and chlorotic spots became visible (Figure 1). After 48 h of treatment, the curled leaves displayed severe chlorosis and dried up.
To evaluate the effects of stress on soybean seedlings during the 48 h of NaCl treatment and to determine the sampling time point for subsequent proteomic analysis, physiological experiments were conducted. The concentration of MDA is one of the major indicators of stress-triggered oxidative damage and reactive oxygen species (ROS) accumulation. To monitor the effects of salt treatment on the plasma membrane system in soybean plants, MDA contents in the leaves and roots were measured. As shown in Figures 2A,B, MDA levels increased over the time course of the experiment in both tissues, indicating that injury to the plasma membrane system accumulated over the length of the NaCl treatment. The antioxidant capacity of plant tissue is generally accepted to correlate with plant tolerance to salt stress and is usually represented by the general radical scavenging capacities of superoxide dismutase (SOD) and peroxidases (POD). In the present study, the activities of SOD (Figures 2C,D) and POD (Figures 2E,F) exhibited similar dynamic patterns in both soybean leaves and roots: both increased during the first 12 h of treatment, peaked at 12 h, and declined thereafter. These results indicated that the ROS scavenging capacity of soybean seedlings was highest after 12 h of salt treatment; thus, we speculated that a defensive mechanism against salt stress may have developed in the seedlings by this time point. Consistent with the phenotype of soybean seedlings under salt treatment, total chlorophyll content in the leaves was significantly decreased after 12 h of salt stress (Figure 2G), similar to a previous study in rice (Xu et al., 2015). Furthermore, salt stress significantly affected the concentrations of Na+ and K+ in soybean leaves and roots (Figure 2H).
The Na+/K+ ratios in soybean leaves and roots increased dramatically under salt stress, and the ratios in roots were significantly higher than those in leaves.
iTRAQ Analysis and Identification of Differentially Expressed Proteins
Given the physiological characteristics of salt-treated soybean seedlings, 12 h of 200 mM NaCl treatment was deemed optimal for exploring early responses to salt stress in soybean. Consequently, changes in the leaf and root proteomes of soybean seedlings subjected to 200 mM NaCl for 12 h were analyzed using iTRAQ-LC/MS-MS. Data from three biological replicates were analyzed, and proteins were detected by querying the data against a soybean protein database. In total, 142,714 spectra could be matched to the database, yielding 48,714 peptides that were assembled into 6610 non-redundant protein groups (Table S2). Differentially expressed proteins (DEPs) were selected based on two criteria: (i) the mean ratio of reporter ion intensities originating from salt-treated protein samples (115 and 119) relative to control protein samples (113 and 117) was more than 1.3 or less than 0.7; and (ii) a p-value smaller than 0.05 (Table S3). Based on these criteria, 278 DEPs were identified in soybean leaves, of which 237 (85.3%) increased and 41 (14.7%) decreased in abundance under salt stress conditions (Figure 3, Table S4); at the same time, 440 DEPs were identified in soybean roots, of which 354 (80.5%) increased and 86 (19.5%) decreased in abundance after salt stress treatment (Figure 3, Table S5).

FIGURE 3 | Venn diagram of the distribution of differentially expressed proteins responsive to salt stress in soybean leaves and roots. The number above or below the horizontal line in each portion indicates the number of up-regulated or down-regulated proteins. The overlapping regions indicate the number of common proteins. Among the 50 common DEPs, 45 were up-regulated and 5 were down-regulated in leaves; 46 were up-regulated and 4 were down-regulated in roots.
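The two DEP criteria (mean salt/control reporter-ion ratio above 1.3 or below 0.7, and p < 0.05) can be expressed as a small helper. This is a sketch assuming the per-protein mean ratio and p-value have already been computed from the iTRAQ quantification:

```python
def classify_dep(mean_ratio, p_value):
    """Classify a protein under the DEP criteria in the text:
    returns 'up' or 'down' for a differentially expressed protein,
    or None if the protein does not meet both criteria."""
    if p_value >= 0.05:          # criterion (ii): significance
        return None
    if mean_ratio > 1.3:         # criterion (i): increased abundance
        return "up"
    if mean_ratio < 0.7:         # criterion (i): decreased abundance
        return "down"
    return None                  # significant p but ratio within band
```

Applying this over all quantified proteins and counting the "up" and "down" labels per tissue reproduces the kind of tallies reported above (e.g. 237 up and 41 down in leaves).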
Overall, only 50 DEPs were detected in both tissues, of which 9 proteins showed opposite expression patterns in leaves and roots under salt stress (Figure 3, Tables S4, S5). These results indicated tissue-specific responses to salt stress at the protein level in soybean leaves and roots. Furthermore, the number of differentially expressed proteins in soybean leaves was clearly lower than that in soybean roots, an observation also made in other plant species under salt stress. This finding might be attributed to the short duration of the salt treatment and to the fact that the root is the primary site of salinity perception.
Functional Classification of Salt-Responsive Proteins
On the basis of BLAST alignment, Gene Ontology, and information from the literature (Zhang et al., 2012; Zhao et al., 2013), all identified DEPs in leaves and roots were classified into 13 functional categories: photosynthesis and carbohydrate metabolism, metabolism, stress and defense, transcription related, protein synthesis, protein folding and transporting, protein degradation, signaling, membrane and transport, cell structure, cell division/differentiation and fate, miscellaneous, and unknown. The distributions of proteins with different functions in the proteomes of soybean leaves and roots are illustrated in Figure 4. Our results indicated that the proteins expressed in the early salt response were involved in nearly every aspect of plant growth and metabolism.
The most represented DEPs in soybean leaves were associated with photosynthesis and carbohydrate metabolism (24.1%), metabolism (14.4%), and protein synthesis (13.3%); the first two categories were the same for root DEPs, while the third most represented root category was transcription related (10.2%). In soybean leaves there were only 8 DEPs related to stress and defense, whereas in roots there were 37 (Figure 4), most of which showed increased levels under salt treatment. It has been reported that the early response of soybean to salt stress initially involves the promotion of primary signal perception and transduction, which could be more important than the later responses to salt stress (Pi et al., 2016). In our study, a total of 17 signaling-related DEPs in leaves and 27 in roots were identified, including 4 overlapping DEPs present in both tissues (Figure 4, Table 1). Moreover, DEPs related to membrane and transport have been reported to be involved in the early events of salt signal transduction (Luo et al., 2015); in soybean seedlings, 16 such proteins in leaves and 31 in roots were identified as differentially expressed under salt stress (Figure 4, Table 2). Furthermore, the numbers of DEPs belonging to the categories of transcription related, metabolism, and protein degradation differed considerably between the leaves and the roots. These findings implied different responses of these biological pathways to short-term salt stress in the two tissues. Detailed information on the functional classification of all DEPs in soybean leaves and roots can be found in Tables S4, S5, respectively.
Protein-Protein Interaction among DEPs
To predict the relationships among the identified DEPs in soybean leaves and roots, a protein-protein interaction (PPI) network was generated using the web-tool STRING 9.1. A total of 104 differentially abundant proteins, represented by 72 unique soybean proteins, were shown in the PPI network (Figure 5, Table S6) based on published literature and other experimental evidence. Four functional modules forming tightly-connected clusters were evident in the network (Figure 5). Nodes in different colors belong to the four main groups, and stronger associations are represented by thicker lines. In Module 1 (blue nodes), six protein synthesis related proteins, five amino acid metabolism related proteins, two ATP synthases, a TCP, and a MAPK appeared closely linked, implying that amino acid metabolism, protein synthesis, and energy supply were active and cooperated closely in soybean seedlings under salt stress. Module 2 (red nodes) included multiple enzymes involved in the TCA cycle, glycolysis, fatty acid biosynthesis, and nitrogen metabolism; these linked proteins indicated that a synergistic system for carbon and nitrogen metabolism may play important roles in the salt response. Moreover, PPC1 and PPC16, two key enzymes in organic acid metabolism, were linked with Module 3 (yellow nodes), which included ten proteins functioning in photosynthesis, carbohydrate, and energy metabolism. Furthermore, proteins involved in protein folding and transport, ROS scavenging, and some signaling components were assigned to Module 4 (green nodes), indicating that proteins in this network play important roles in redox homeostasis, stress response, signal transduction, and protein metabolism.
Regulation of Some Salt-Responsive Proteins at the mRNA Level
To further investigate the correspondence between protein and mRNA expression patterns, transcript levels of five randomly selected proteins showing significant changes under salt stress were examined by semi-quantitative RT-PCR (Figure 6). After salt treatment, the changes in the mRNA levels of four genes (a 14-3-3, mmk2, pp1, and trx-h) correlated with the changes at the protein level indicated by the iTRAQ analysis. The mRNA of annexin showed a down-regulated trend in soybean leaves after 12 h of treatment, although annexin showed a higher protein expression level (Table 2). In contrast, the annexin gene in soybean roots was continuously up-regulated during the first 12 h of salt treatment and then down-regulated at 24 h, whereas the protein showed no differential expression according to our proteomic analysis. The poor agreement between annexin mRNA and protein expression levels probably results from post-transcriptional regulation of the protein and modulation of its binding to various ligands under salt stress (Vedeler et al., 2012).
DISCUSSION
To cope with salt stress, soybean plants have evolved complex salt-responsive signaling and metabolic processes at the cellular, organ, and whole-plant levels. In our study, morphological and physiological changes representing the plant's early response to salt stress were observed in soybean seedlings (Luo et al., 2015). The present study comprised a comparative analysis of early salt stress responses in the leaves and roots of soybean seedlings using a quantitative proteomic approach. Among the 6610 identified proteins, a total of 278 proteins in leaves and 440 proteins in roots responded to NaCl stress. The functions of these salt-responsive proteins and their main pathways are discussed below.
Signal Transduction-Associated Proteins
Understanding salt-responsive signaling pathways is currently a hot topic in plant salt stress research. Using an iTRAQ approach, a total of 37 signaling-related proteins were identified in the present study, the majority of which exhibited tissue-specific expression (Table 1). Based on the roles of these proteins in signal perception and transduction pathways, we classified them into several groups, including Ca2+ sensors, 14-3-3s, kinases and phosphatases, probable receptors, and small GTP-binding proteins (Table 1).
Ca2+ plays a vital role as a second messenger in plant cells in response to environmental stimuli. Plants have evolved a diversity of proteins that function as Ca2+ sensors, binding Ca2+ through the evolutionarily conserved EF-hand motif to regulate target proteins and thereby coordinate various signaling pathways (DeFalco et al., 2010). In our study, a calcium sensor, calcium sensing chloroplast-like proteins, and three calcium ion binding proteins were up-regulated more than threefold under salt stress (Table 1). The EF-hand containing protein has a single EF-hand motif, which serves as a molecular device to recognize specific Ca2+ signals: Ca2+ binding changes its conformation, enabling interaction with downstream proteins (Ma et al., 2014). A previous report showed that an EF-hand calcium binding protein decreased in abundance at 72 h of stress but was significantly up-regulated at 144 h in soybean roots (Ma et al., 2014); in our study, this protein was down-regulated (0.4-fold) in soybean leaves under salt stress. Consistent with a previous study in rice leaves under osmotic stress (Zang and Komatsu, 2007), the abundance of calreticulin, a calcium-binding chaperone that plays a pivotal role in regulating calcium homeostasis and protein folding in the endoplasmic reticulum, was decreased in soybean leaves under salt stress. These findings indicated that calcium-mediated signaling is an important strategy of soybean seedlings in coping with salt stress. 14-3-3 proteins have been reported to regulate the activities of many proteins involved in signal transduction and to play important roles in stress responses in higher plants (Roberts et al., 2002).
In the current study, four 14-3-3 proteins in leaves and three in roots were detected, with two proteins common to both tissues; most showed dramatically up-regulated expression at the translational level (Table 1). Furthermore, semi-quantitative RT-PCR showed that 14-3-3 transcript levels increased significantly after salt treatment in both tissues, reaching their highest levels at either 12 or 24 h after salt stress (Figure 6). Consistent with our results, a 14-3-3 protein was also up-regulated in Brachypodium distachyon leaves under salt stress (Lv et al., 2014), and the transcription of 14-3-3 genes in cotton showed an increasing pattern under salt stress (Sun et al., 2011). In addition, several analyses have shown that 14-3-3 proteins can be phosphorylated and interact with many proteins in various functional groups to play roles in signaling pathways under salt stress (Lv et al., 2014; Zhou et al., 2014).
In the present study, three protein kinases and two protein phosphatases were identified, all showing enhanced expression levels under salt stress (Table 1). Specifically, mitogen-activated protein kinase homolog mmk2-like was up-regulated more than 15-fold under salt stress. The MAPK family is reported to play various roles in intra- and extracellular signaling in plants by transferring information from sensors to responders, acting as points of convergence in abiotic stress signaling (Sinha et al., 2011). The serine/threonine protein phosphatase PP1 also showed an enhanced expression level under salt stress in soybean roots. Although there is no direct evidence for a significant role of PP1 in plant salt tolerance, the PP1 regulatory protein RICE SALT SENSITIVE 1 (RSS1) was recently identified through a combined approach of genetic screening for salt tolerance in rice and yeast two-hybrid screening, and loss of RSS1 results in short-root and dwarf phenotypes under high salt (Ogawa et al., 2011). Similar to the expression patterns at the translational level, up-regulation of MMK2 and PP1 was also validated by semi-quantitative RT-PCR analysis, indicating that these proteins were regulated at the transcriptional level as well (Figure 6).
In addition, several GTP-binding proteins and GTPase-activating proteins, known to be involved in controlling the transmission of extracellular signals to intracellular pathways, exhibited increased expression levels under salt stress, suggesting that G-protein-coupled receptors were dynamically regulated to cope with salinity in soybean leaves. These results indicated that signal perception and transduction were highly enhanced at early stages of the plant stress response, thereby improving the activities of stress-responsive pathways in the leaves and roots of soybean plants exposed to salt stress.
Membrane and Transport-Related Proteins
Membrane proteins fulfill critical functions in the transport of ions and organic molecules and play important roles in ion homeostasis. Under saline conditions, the Na+/K+ ratio and Na+ concentration increase in soybean roots and leaves, causing hyperosmotic stress, cellular ionic toxicity, and oxidative stress (Ma et al., 2014). ABC transporters are known to transport stress-related secondary metabolites, such as alkaloids, terpenoids, polyphenols, and quinones, to protect plants against salt stress (Yazaki, 2006). The present proteome analysis indicated that the ABC transporter F family member 3-like was up-regulated in soybean leaves under salt treatment (Table 2). An increased expression level of an ABC transporter was also found in cotton seedlings, suggesting that it may play an important role in salt stress responses. However, three other ABC transporter F family members were decreased in soybean roots under salt stress (Table 2). These expression differences imply that different gene family members probably have diverse functions in different tissues to cope with various stresses. The importin subunit alpha-1-like proteins showed increased expression levels in both tissues under salt stress (Table 2). Importin α is well known as an adaptor that functions with importin β in the nuclear import of proteins containing specific nuclear localization signals (NLSs), and this process is reported to be regulated by phosphorylation (Hachet et al., 2004). Employing a genetic screen in Arabidopsis, an importin β-domain/karyopherin protein was identified as being involved in nucleocytoplasmic trafficking under cold, osmotic stress, and ABA treatments (Chinnusamy et al., 2007). However, the detailed function of this protein in the plant salt response is unclear and deserves further study.
A total of four V-type proton ATPases were up-regulated in soybean roots (Table 2). H+-ATPase plays an essential role in the maintenance of ion homeostasis in plant cells and has been identified as an important salt stress marker protein in several proteomic studies (Kerkeb et al., 2001; Jiang et al., 2007; Li et al., 2015; Luo et al., 2015). Thus, increased activities of these enzymes may be an effective strategy for osmotic adjustment, reducing the Na+ concentration in the cytosol of plants under salt stress.
Finally, we identified 15 vesicle trafficking-related proteins that exhibited differential expression patterns in soybean seedlings under salt stress (Table 2). Aquaporin (AQP) proteins transport water and other small neutral solutes or gases across biological membranes, which is crucial for plant survival under drought or salt stress (Sade et al., 2010). AQPs comprise two subfamilies, the plasma membrane intrinsic proteins (PIPs) and the tonoplast intrinsic proteins (TIPs), which are most abundant in the plasma membrane and vacuolar membrane, respectively. Many genes encoding PIPs have been identified from different plant species, and overexpression of these genes has been reported to enhance plant salt tolerance (Sade et al., 2010; Hu et al., 2012; Liu et al., 2013; Xu et al., 2014; Sreedharan et al., 2015). Here, two isoforms of PIP showed different expression patterns in soybean leaves and roots in response to salt stress (Table 2), which may be attributed to tissue- and time-specific expression patterns of the different PIP isoforms under salt stress. In addition, an annexin-like protein was up-regulated by salt stimulus in soybean leaves in this study. Annexin functions as a Ca2+-permeable channel in the plasma membrane, forming a ROS-stimulated passive Ca2+ transport pathway (Laohavisit et al., 2010), and has been reported, on the basis of proteomics data, to be induced by salinity in a variety of plant species (Lee et al., 2004; Manaa et al., 2013), indicating the significance of this protein in plant salt stress tolerance. In summary, the identified membrane and transport related proteins were consistent with the physiological processes of maintaining ion homeostasis and membrane stability, which are crucial for plant growth during salt stress.
Stress-Related Proteins
Salt stress causes the overproduction of reactive oxygen species (ROS), which oxidize proteins, lipids, carbohydrates, and DNA and irreversibly damage plant cells (Gill and Tuteja, 2010). The antioxidant properties of plant cells are usually represented by the general radical scavenging capacities of peroxidases (POD), ascorbate peroxidase (APX), glutathione S-transferase (GST), and superoxide dismutase (SOD). In the present study, increases in POD, APX, GST, and SOD abundance in soybean roots were observed (Table S5), similar to other salt-responsive species (Jiang et al., 2007; Peng et al., 2009; Du et al., 2010). These results were confirmed by the activities of the ROS-scavenging enzymes SOD and POD in soybean leaves and roots under 200 mM NaCl treatment for the initial 12 h (Figures 2C-F). Thus, the expression changes of these proteins under salt stress implied that the antioxidative defense system in soybean seedlings was provoked by salt treatment.
In addition to the redox related proteins, plants have developed cross-tolerance mechanisms to cope with different stresses (Zhang et al., 2012). In our iTRAQ data, some biotic stress-related proteins were induced under salt stress conditions, such as disease resistance protein rpp13 and pathogenesis-related protein class 10 (PR10), which mediate tolerance to heavy metals (Wang Z. Q. et al., 2014) and pathogen attack (Coumans et al., 2009). Interestingly, several major latex proteins (MLPs) were up-regulated in soybean roots but down-regulated in soybean leaves under salt stress (Tables S2, S3), in agreement with the proteomic findings in soybean root tips under flooding. MLPs are found only in plants and are associated with fruit and flower development and with pathogen defense responses. So far, only one study has reported that transcription of the mlp gene in cotton is rapidly induced by NaCl and that overexpression of this gene enhances salt tolerance in Arabidopsis (Chen and Dai, 2010). However, the specific biological function of MLP and whether its up-regulation correlates with enhanced salt tolerance in soybean plants are unknown; to the best of our knowledge, it may represent a novel salt-stress-responsive protein in soybean plants. Furthermore, some drought stress-related proteins, e.g., dehydrin and the desiccation protectant protein lea14 homolog, also responded to salt stress in our study (Tables S4, S5). These proteins provide novel insights into the cross-tolerance mechanisms of soybean seedlings in response to biotic and abiotic stresses.
Metabolisms
A large number of DEPs were found to be involved in nitrogen and amino acid metabolism. Glutamine synthetase, the key enzyme in plant NH4+ metabolism, has been reported to play an important role in enhancing rice tolerance to salt and chilling stresses (Hoshida et al., 2000); we also found it up-regulated in soybean leaves under salt stress. Cysteine synthase is responsible for the final step in cysteine biosynthesis, the key limiting step in producing glutathione (GSH), which is involved in resistance to adverse stresses. Liu et al. used comparative proteomic methods to show that cysteine synthase was induced in a salt-tolerant rice cultivar but down-regulated in salt-sensitive rice leaves. In this study, four cysteine synthases were found to be up-regulated in soybean roots (Table S5), and they appeared tightly linked with other proteins in the soybean PPI network (Figure 5). Aspartate aminotransferase (AAT) catalyzes the conversion of α-ketoglutarate and aspartate to glutamate and oxaloacetate (Hodges, 2002). Proteomic analysis has shown that salt also induces AAT in rice roots (Nam et al., 2012), consistent with the results of this study. Many other salt-responsive proteins related to amino acid metabolism were identified only in soybean roots, such as aminotransferase-like protein, glutamate decarboxylase, alanine aminotransferase, serine hydroxymethyltransferase, phosphoserine aminotransferase, and asparagine synthetase (Table S5). These results indicated that amino acid and nitrogen metabolism were enhanced in soybean seedling leaves and roots under salt stress. Notably, 14 lipoxygenases showed enhanced levels in both soybean leaves and roots under salt stress, suggesting that lipid metabolism changes under salt stress and may play important roles in soybean seedling growth.
Salt adaptation of plants requires complex rearrangements of metabolism, with interactions between several metabolic pathways. One of the most striking observations in our study was the increase of several cytoplasmic enzymes engaged in secondary metabolism in roots under short-term salt stress. Dihydroflavonol reductase (DFR), a key enzyme in anthocyanin and proanthocyanidin biosynthesis, has been reported to be up-regulated by salt stress in soybean seedlings (Ma et al., 2012), consistent with the results of our study (Tables S4, S5). Caffeic acid 3-O-methyltransferase, a protein involved in lignin biosynthesis, was up-regulated more than 14-fold in soybean leaves under salt stress (Table S5). The accumulation of this enzyme under salt stress could be related to increased lignification of the cell wall, a modification that helps avoid water loss induced by osmotic stress (Simova-Stoilova et al., 2015) and a process reported to be up-regulated by drought stress in soybeans (Alam et al., 2010). Interestingly, several enzymes related to flavonoid metabolism were identified in our study. Recent evidence has suggested that salinity stress strengthens the accumulation of flavonoids, which could play vital roles downstream in soybean tolerance to salt stress. Isoflavone reductase catalyzes the reduction of 2′-hydroxyformononetin to vestitone, the penultimate step in the synthesis of medicarpin in the general flavonoid biosynthesis pathway. In the present study, isoflavone reductase homologs were induced in soybean roots under salt stress (Table S5), which was inconsistent with previous results (Sobhanian et al., 2010). Moreover, increased expression levels of three chalcone isomerases were identified in soybean roots (Table S5), supporting the significant correlation of chalcone metabolic enzymes with soybean's tolerance to salinity (Pi et al., 2016).
These results suggested that the biosynthesis of these secondary metabolites is associated with the salt stress response in soybean seedlings.
CONCLUSIONS
In the present study, morphological and physiological changes were determined in soybean leaves and roots treated with 200 mM NaCl for up to 48 h, and the results supported a treatment time of 12 h for the proteomics survey. An iTRAQ-based proteomic technique was employed to compare protein abundance between untreated soybean leaves and roots and those treated with 200 mM NaCl for 12 h. In total, 278 and 440 differentially changed proteins were identified in the leaves and roots, respectively, and classified into 13 categories. As a result, we gained new information about proteins in soybean seedlings and their roles in the salt stress response. First, stress signal transduction and membrane proteins in soybean were activated at the early stages of salt stress treatment. Second, proteins leading to ROS scavenging and cross-tolerance to other biotic and abiotic stresses were up-regulated. Third, rearrangements of several metabolic pathways led to salt adaptation of soybean seedlings. Protein-protein interaction analysis indicated that protein metabolism, energy supply and photosynthesis collectively functioned to re-establish cellular homeostasis under salt stress. Furthermore, semi-quantitative RT-PCR results suggested that the expression of some proteins (e.g., annexin) could be regulated post-transcriptionally. These results may contribute to the existing knowledge on the complexity of soybean protein changes that occur in response to salt stress. Further gene function studies are needed to clarify the molecular mechanisms underlying salt stress responses in soybean roots and leaves.
AUTHOR CONTRIBUTIONS
WJ: method optimization, data analysis, drafting the manuscript. RC: data analysis and semi-qRT-PCR analysis. SL and RL: physiological analysis, protein isolation and data analysis. ZQ: manuscript preparation. YL and XZ: seedlings treatment and physiological analysis. SC: overall design of the experiments, and manuscript preparation. JL: overall design of the project and experiments, and manuscript preparation.
Effect of Chicken Manure Application on Cassava Biomass and Root Yields in Two Agro-Ecologies of Zambia
Fertilizer application is known to increase crop yields and mitigate net soil nutrient mining due to continuous removal. However, smallholder farmers rarely apply adequate fertilizers because of high cost, limited availability and lack of awareness. An experiment was conducted to evaluate the effect of chicken manure on cassava root and biomass yield at Kabangwe and Mansa, two locations representing agroecological zones II and III, respectively, in Zambia. With the aim of exploring alternative soil fertility management for smallholder farmers, the effects of sole chicken manure and mineral fertilizer were evaluated on cassava. The treatments were four levels of chicken manure (0, 1.4, 2.8, 4.2 ton/ha) and a single level of mineral NPK applied at the recommended rate of 100N-22P-83K kg/ha. The design was a Randomized Complete Block Design (RCBD) with three replications, using the improved cassava variety “Mweru” during the 2015/2016 growing season. The results showed significant (p < 0.05) treatment effects on cassava root yields and yield components (fresh and dry root, leaf, stem, and total biomass) at both sites. The highest mean fresh root yield (27.66 ton/ha), dry root yield (9.55 ton/ha), total fresh biomass (53.68 ton/ha) and dry biomass (16.12 ton/ha) were achieved with the application of 4.2 ton/ha of chicken manure. This treatment showed 71% and 81% fresh root yield advantages over the control at Mansa and Kabangwe, respectively. While the marginal rate of return (MRR) was negative for the mineral fertilizer, it was positive for all the chicken manure treatments, with the maximum (315%) achieved from the application of 4.2 ton/ha. The study concludes that application of chicken manure significantly increases the yield and biomass production of cassava and is economically efficient.
Introduction
Cassava is the third most important tropical food crop after rice and maize, contributing directly to feeding the growing population under very challenging environmental conditions [1]. In Zambia, the crop is one of the main food security crops, dominating the smallholder farming systems [2,3]. The importance of the crop emanates from its adaptation to a wide range of agroecologies, its ability to grow on poor soils, its tolerance of drought, pests and diseases, and its high dry matter production per given area [4]. In recent years, cassava production has increased in sub-Saharan Africa (SSA) in response to the need to feed a rapidly growing population under increasingly degraded environmental conditions [5]. However, Vanlauwe, et al. [6] emphasized that the production increase was a result of area expansion rather than an increase in yield per unit area. Available information also indicates that input use in cassava fields is very limited or absent among smallholder farms in SSA [7][8][9][10].
Soil fertility is dynamic, changing through processes of accumulation or depletion that are governed by the interplay between physical, chemical, biological and anthropogenic processes [11]. Without the application of significant quantities of ameliorants such as manure or mineral fertilizer to replenish the soil, anthropogenic processes in SSA have led to the removal of huge amounts of soil nutrients [12]. As a result, deteriorating soil fertility is considered a fundamental biophysical factor responsible for declining per-capita food production in SSA [13]. Cassava suffers the most under these conditions because the majority of farmers in the region assume cassava production does not require external nutrient input [14,15]. For example, the average Zambian farmer does not apply manure or mineral fertilizer to his/her cassava field. As a result, cassava yield in 2014 was 44.5% and 92.8% lower than the African and global average cassava yields, respectively [16].
So far, application of mineral fertilizers is the main soil fertility management strategy used by many, but if not handled properly it can cause environmental problems such as soil acidification and eutrophication [17]. It is also unaffordable for most African farmers [18]. On the other hand, intensive farming produces significant amounts of manure [19], and poultry manure is among the most readily available resources for poor farm households. In addition to supplying plant nutrients and organic carbon, the use of organic manure improves soil physical properties and enhances water holding capacity [20]. Poultry manure is the best quality animal manure in terms of nutrient content and availability [20,21]. However, no substantive guidelines have been developed for its application rates on cassava.
In Zambia, maize receives much of the priority attention in terms of external nutrient application because it is the major food staple and the preferred dietary source of carbohydrates in the country. Despite its food security importance, cassava does not receive soil fertility inputs or the attention it deserves. The importance of cassava has become more prominent in recent years due to the growing population and the drought susceptibility of maize in the face of climate change [22]. In response to the striking relationship between drought occurrence and the high vulnerability of maize to adverse weather conditions, Zambia has begun introducing new and improved cassava varieties. Additionally, the country is focusing on the traditional cassava growing areas, i.e., the Copperbelt, Luapula, Northern and Muchinga provinces, and on new, non-traditional growing areas such as the Central, Eastern and Southern provinces of the country [23]. To date, no soil fertility management regime has been developed for cassava production in Zambia. Given the financial constraints and availability problems with mineral fertilizer, the promotion of organic inputs is the most feasible option. However, it requires derivation of optimum rates of manure application to attain acceptable levels of cassava yields for resource-poor households. Against this background, and with the objective of deriving optimum rates of chicken manure application, a study was conducted to evaluate the effect of chicken manure compared to the recommended mineral fertilizer (NPK) application on cassava yield and biomass in two agroecologies of Zambia as an option for integrated soil fertility management.
Description of the Study Sites
The experiment was conducted at two sites: the Zambian Agricultural Research Institute (ZARI) Mansa Station in the Mansa district of Luapula Province, and the Kabangwe Station of the International Institute of Tropical Agriculture (IITA) Southern Africa Research and Administrative Hub (SARAH), located on the outskirts of Lusaka in the Chibombo district of Central Province, Zambia (Figure 1). A brief summary of the characteristics of these two sites is given in Table 1. Zambia is divided into three major agroecologies based on rainfall. Zone II receives an annual rainfall of between 800 and 1000 mm per annum, while zone III is a high rainfall area receiving more than 1000 mm per annum [24]. However, the 2015/2016 cropping season was an El Niño season in Zambia, which brought minimal rainfall to the central and southern parts of the country while the northern region received an appreciable amount of rain. As a result, from 23 November 2015 to 22 November 2016, the Kabangwe site recorded 422.9 mm of rain, while Mansa recorded 1245.6 mm.
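The rainfall thresholds above can be expressed as a simple classification rule. A minimal Python sketch; the function name and the "other" label for sites below 800 mm are illustrative, since the text defines only zones II and III:

```python
def agro_zone(annual_rainfall_mm: float) -> str:
    """Classify a site into Zambia's rainfall-based agroecological zones.

    Zone II: 800-1000 mm per annum; Zone III: > 1000 mm per annum
    (thresholds as described in the text; sites below 800 mm fall
    outside these two zones and are labelled 'other' here).
    """
    if annual_rainfall_mm > 1000:
        return "III"
    if annual_rainfall_mm >= 800:
        return "II"
    return "other"

# The 2015/2016 El Nino season recordings from the text:
print(agro_zone(1245.6))  # Mansa's recorded rainfall -> "III"
print(agro_zone(422.9))   # Kabangwe's El Nino-season rainfall, far below Zone II's normal range
```

Note that Kabangwe normally represents zone II; its 2015/2016 figure falls outside the zone definition only because of the El Niño season.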
Experimental Design
Four levels of chicken manure (0, 1.4, 2.8, 4.2 ton/ha) were evaluated against 100N-22P-83K kg/ha, a fertilizer rate generally recommended by Howeler, et al. [8] for cassava and tested by Fermont, et al. [9] in East Africa; this helps address the need for a blanket fertilizer recommendation for cassava in Zambia. The experiment was set up as a Randomized Complete Block Design (RCBD) with three replications at both sites. The plots were 5 m × 5 m, giving a total plant population of 25 per plot. Plots were set 1.5 m apart, with ditches running parallel and perpendicular to the plots to prevent runoff crossing during heavy rain. Fresh chicken manure was collected from a single egg-layer farm, properly dried and mixed to obtain a homogeneous mixture before application. The mixed manure was sampled to determine its macronutrient (N, P, K) and micronutrient contents. Land was ploughed using tractor-mounted disc plows and then harrowed. Mature cuttings (25-30 cm in length) of the improved cassava variety "Mweru", collected from ZARI in Mansa, were planted at the standard 1 m × 1 m inter- and intra-row spacing. Extra cuttings were planted at the edge of the plots and later transplanted to substitute cuttings that did not sprout or were affected by termites within 1 month after planting (MAP). The mineral fertilizer was band applied in the form of urea, triple superphosphate and potassium sulfate. To enhance nutrient release from the chicken manure and to reduce nutrient loss from the mineral fertilizer, manure was applied and properly incorporated into the soil during planting, while the conventional practice of a split application of N and K was followed for the mineral fertilizer, applied twice at one and three MAP. All of the P, however, was applied at once, at one MAP. The trial plots were kept weed free by hand weeding as needed.
Agronomic Data Collection
Harvesting was conducted at 12 MAP, and plant growth parameters such as plant height, canopy diameter, stem girth, Leaf Area Index (LAI, the leaf area per unit ground area [28]) and chlorophyll index were recorded. Plant height was measured from the base of the first branch to the newly emerging leaf of the tallest plant using a measuring tape. Similarly, the average of two measurements (made perpendicular and parallel to the ridge) was recorded for the canopy diameter of each plant. Stem girth was measured on the largest stem using digital Vernier calipers. LAI was measured indirectly under the canopy using a SunScan canopy analysis system (Delta-T Devices, Cambridge, UK). Four readings of leaf chlorophyll (two from either side of the midrib) were taken from the central lobe of the first fully expanded leaf using a chlorophyll meter (SPAD 502, Konica Minolta, Tokyo, Japan). To avoid the effect of direct sunlight [29], chlorophyll readings were taken in the shadow of the reader. For all plant growth parameters, plot readings were taken from five plants following an 'X' pattern in the plot, and the average of the five readings was recorded. The harvest was conducted on the 9 plants in the 3 m × 3 m net plot area. After uprooting, the plants were separated into root, leaf and stem, and fresh weight was recorded in the field with a digital balance. A 500 g sample from the root and stem and a 300 g sample from the leaf were then taken and oven dried at 70 °C to constant weight to determine the dry weight of the biomass [30].
Soil Sampling and Analysis
Surface soil samples (0-20 cm) were collected using an Edelman auger every 5 m crossing the fields in an X-like pattern, bulked together and properly homogenized to obtain one composite sample for each of the experimental sites. Samples were air-dried, ground, and passed through a 2-mm sieve to obtain the fine earth fraction (<2 mm separates). Soil samples were analyzed at the IITA soil laboratory in Cameroon. Particle size distribution (sand, silt, clay) was determined by the hydrometer method as outlined by Bouyoucos [31] and Day [32]. Soil pH (H2O) was determined in a 1:2.5 (w/v) soil-to-water suspension using a pH meter as outlined by McLean [33]. Organic carbon (Org. C) was determined by chromic acid digestion and spectrophotometric analysis as described by Heanes [34]. Total nitrogen (TN) was determined from a wet acid digest [35] and analyzed colorimetrically [36]. Exchangeable cations (Ca, Mg, K and Na), available micronutrients (Cu, Zn, Mn, Fe) and available phosphorus (AvP) were extracted using the Mehlich-3 procedure [37], and the contents in the extracts were determined by flame photometry and atomic absorption spectrophotometry (AAS). CEC was extracted using the ammonium acetate method [38] and determined colorimetrically.
Data Analysis
The agronomic data were subjected to analysis of variance using a generalized linear model (GLM) in R statistical software version 3.3.2 [39]. The total variability was partitioned using the following model:

T_ij = µ + β_i + γ_j + ε_ij

where T_ij is the total observation, µ is the overall mean, β_i is the effect of the ith replication, γ_j is the effect of the jth treatment, and ε_ij is the variation due to random error.
The significance of treatment effects was tested using the agricolae package of R [40]. Means were compared using the lsmeans package of R [41], with the least significant difference (LSD) set at a 5% level of significance. A single degree of freedom orthogonal contrast of the control against the manure and fertilizer treatments was performed to evaluate treatment effects on crop performance.
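The RCBD model and the F test it implies can be sketched numerically. This is a minimal illustration with made-up yield values, not the study's data; the LSD formula follows the standard RCBD form:

```python
import numpy as np

# RCBD ANOVA sketch for the model T_ij = mu + beta_i + gamma_j + eps_ij.
# Yields are hypothetical (ton/ha): rows = 3 replications (blocks),
# columns = 5 treatments (control, 1.4, 2.8, 4.2 ton/ha manure, NPK).
y = np.array([
    [15.1, 19.8, 22.4, 27.0, 21.9],
    [14.2, 20.5, 23.1, 28.3, 22.6],
    [16.0, 18.9, 21.7, 27.7, 23.0],
])
b, t = y.shape                                          # blocks and treatments
grand = y.mean()

ss_total = ((y - grand) ** 2).sum()
ss_block = t * ((y.mean(axis=1) - grand) ** 2).sum()    # replication effect (beta_i)
ss_trt = b * ((y.mean(axis=0) - grand) ** 2).sum()      # treatment effect (gamma_j)
ss_err = ss_total - ss_block - ss_trt                   # residual (eps_ij)

df_err = (b - 1) * (t - 1)
ms_trt, ms_err = ss_trt / (t - 1), ss_err / df_err
f_trt = ms_trt / ms_err                                 # F statistic for treatments

# LSD at the 5% level: t(0.975, df_err) * sqrt(2 * MS_err / b);
# 2.306 is the two-sided 5% critical t value for 8 error df.
lsd = 2.306 * np.sqrt(2 * ms_err / b)
print(f"F(treatment) = {f_trt:.1f} on {t - 1} and {df_err} df; LSD(5%) = {lsd:.2f}")
```

Any pair of treatment means differing by more than the LSD would be declared significantly different at the 5% level.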
Economic Analysis
To evaluate the economic feasibility of the different fertilizer treatments, a cost and benefit analysis was performed based on the partial budget techniques detailed in [42]. The prevailing market prices for inputs at planting time and for outputs (cassava roots in particular) at harvest time were used. The mean cassava root yield for each treatment was averaged over the two sites. All costs and benefits were calculated on a hectare (ha) basis using the Zambian Kwacha (ZMW) as a common denominator. The partial budget concepts used in the economic analysis were the following:
• Mean cassava root yield: the average root yield (ton/ha) of each treatment for the two locations minus 10% of the yield (to estimate what can be expected on a farmer's field);
• Field price of cassava: the farm gate price (from the local market at harvest) of cassava root per kg minus the cost of harvesting and packing;
• Gross field benefit (GFB) per ha: the product of the field price of cassava and the mean cassava root yield of each treatment;
• Field price of manure: the retail price of chicken manure per kg plus its transportation to the field;
• Field price of NPK fertilizer: the retail price of fertilizer per kilogram plus its transportation to the field;
• Field cost of manure per ha: the product of the quantity of manure applied per ha in each treatment and the field price of manure;
• Field cost of NPK fertilizer: the product of the quantity of fertilizer required per ha by each treatment and the field price of the fertilizers;
• Fertilizer application cost: the product of the labor hours used to apply both organic and mineral fertilizers and the per-day wage rate;
• Total variable cost (TVC): the sum of all costs;
• Net benefit (NB) per ha for each treatment: GFB minus TVC.
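The partial-budget bookkeeping and the dominance/MRR screening described in this section can be sketched as follows. All prices, yields and costs below are hypothetical placeholders (in ZMW), not the study's figures:

```python
# Partial budget and marginal rate of return (MRR) analysis, following the
# concepts listed above. Values are illustrative only.

def net_benefit(mean_yield_t_ha, price_per_kg, tvc):
    """NB = gross field benefit (with the 10% farmer-field discount) minus TVC."""
    gfb = mean_yield_t_ha * 1000 * 0.90 * price_per_kg
    return gfb - tvc

def mrr_analysis(treatments):
    """treatments: list of (name, tvc, nb). Drops dominated treatments
    (those whose NB does not exceed that of a cheaper treatment), then
    returns the kept list and pairwise % MRRs in order of increasing TVC."""
    ranked = sorted(treatments, key=lambda tr: tr[1])       # ascending TVC
    kept = []
    for name, tvc, nb in ranked:
        if kept and nb <= kept[-1][2]:                      # dominated: costs more, earns no more
            continue
        kept.append((name, tvc, nb))
    mrrs = [(a[0], b[0], 100 * (b[2] - a[2]) / (b[1] - a[1]))
            for a, b in zip(kept, kept[1:])]
    return kept, mrrs

price = 0.80                                                # ZMW per kg fresh root (placeholder)
trts = [("control", 0,    net_benefit(15.0, price, 0)),
        ("O1 1.4t", 700,  net_benefit(20.0, price, 700)),
        ("O2 2.8t", 1400, net_benefit(22.5, price, 1400)),
        ("O3 4.2t", 2100, net_benefit(27.5, price, 2100)),
        ("NPK",     3000, net_benefit(22.0, price, 3000))]
kept, mrrs = mrr_analysis(trts)
for lo, hi, r in mrrs:
    print(f"{lo} -> {hi}: MRR = {r:.0f}%")                  # 100% = 1 Kwacha back per Kwacha spent
```

With these placeholder numbers the NPK treatment is dropped as dominated (highest TVC but a lower NB than a cheaper treatment), mirroring the outcome reported later for the mineral fertilizer.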
Once the TVC and NB were calculated, potentially promising treatments were selected for further marginal rate of return (MRR) analysis using the dominance analysis procedure explained in [42]. After discarding the dominated treatments (any treatment with an NB less than or equal to that of a treatment with a lower or equal TVC, when the treatments are listed in order of increasing TVC), the remaining treatments were ranked again from the lowest to highest TVC for % MRR analysis between pairs of treatments. The MRR between two successive treatments (say 1 and 2) was calculated as follows:

MRR (%) = (NB_2 − NB_1) / (TVC_2 − TVC_1) × 100

Therefore, a 100% MRR indicates that for every one Kwacha invested in fertilizer application, there is one Kwacha return on investment.

Table 2 presents the results of the soil analysis for the two research stations. In terms of physical properties, sandy texture dominates the soil particle size distribution (54-77%), with low levels of silt (7-20%) and clay (16-26%) at the Mansa and Kabangwe stations, respectively. The sandy loam texture class at Mansa indicates alluvial and transported parent materials of the study site soils. The soil pH was 4.9 at Mansa and 5.28 at Kabangwe, while the pH of the manure was 7.48. Based on the rating of Hazelton and Murphy [43], the soil reaction is rated as very strongly acidic for Mansa and strongly acidic for Kabangwe, while the manure was slightly alkaline. In terms of nutrient supply capacity, the soils are very low in soil Org. C (1.0-1.2%) and TN (0.05-0.06%), and extremely low in CEC (3-4 cmol(+)/kg) and basic cations (Ca2+, Mg2+, Na+ and K+). The soil at Kabangwe is particularly deficient in AvP (3.78 mg/kg), whereas that of Mansa has excess levels of AvP (20.51 mg/kg). This may indicate natural variation and/or differences in previous land use that resulted in residual phosphate accumulation at Mansa and depletion at Kabangwe.
The low levels of nutrients and CEC are consistent with the low clay content and therefore the limited surface area for nutrient and cation retention resulting from the sandy nature of the soils [44,45]. The implication is that application of organic fertilizers and leaf litter is essential to retain nutrients and enhance the soil organic matter content. In this light, the chicken manure can contribute favorable qualities, including high contents of Org. C (26%) and organic matter (45%), high levels of nutrients including TN (3.6%) and CEC (26 cmol(+)/kg), cations such as Ca (9 cmol(+)/kg) and K (2 cmol(+)/kg), and most micronutrients (Fe, Mn, Zn, Cu). The results therefore show the potential of chicken manure as a soil amendment to address the low levels of nutrients and organic matter in these soils. However, its low level of AvP (1.3%) and high level of Na (8 cmol(+)/kg) are sources of concern, suggesting the need to combine chicken manure with phosphate fertilizer and to avoid salinity increases from application of large quantities of manure. Table 3 presents the plant growth parameters for the different rates of chicken manure and fertilizer application. The results show that chicken manure application significantly (p < 0.05) affected plant height, canopy diameter and leaf area index (LAI) at Mansa, but not stem girth or chlorophyll index. At Kabangwe, plant height, canopy diameter, stem girth, leaf area index and chlorophyll index were all significantly (p < 0.001) improved by chicken manure application. Plant growth parameters increased at both sites following the application of manure and the NPK fertilizer. Mean plant height, canopy diameter and stem girth were higher at Mansa (207.6 cm, 140.33 cm and 28.21 mm, respectively) than at Kabangwe (205.33 cm, 123.9 cm and 25.81 mm, respectively). On the other hand, LAI and the chlorophyll index (3.57 and 44.43, respectively) were higher at the Kabangwe site than at Mansa (Table 4).
At Mansa, the 1.4 ton/ha chicken manure rate resulted in higher mean values for all plant growth parameters than the control, but the differences were not statistically significant. At Kabangwe, on the other hand, plant growth parameters were significantly higher (p < 0.05) for the 1.4 ton/ha manure rate than for the control. The 4.2 ton/ha chicken manure (O3) and the recommended 100N-22P-83K kg/ha (RF) treatments produced greater effects than the other treatments. However, these two treatments were not significantly different from each other for all traits except LAI, which was significantly (p < 0.05) higher at Mansa than at Kabangwe, with values of 3.57 and 3.37, respectively. Generally, LAI, the leaf area per unit ground area, increases slowly during the first 1 to 2 months of the growth period, then increases rapidly before declining after 6 months [47]. It has been reported that clones that give high yields are able to retain a large number of leaves and have a large leaf area and a large green stem area, and that cassava genotypes with relatively high LAI and long Leaf Area Duration (LAD) have high root yields [28]. The LAI of cassava ranges from 1 to 7, while the optimum LAI in the tropics ranges from 3 to 4. The current results from the chicken manure treatments fall within the range of high yielding clones. The 4.2 ton/ha manure treatment resulted in the highest values for all growth parameters at both locations, but the mean difference between O2 and RF was not statistically significant except for LAI at Mansa. The treatments vs. control mean group comparison showed a significant difference at least at the p < 0.05 level at both sites, except for the chlorophyll index at Mansa station, which was non-significant. A recent study revealed that the canopy characteristics of cassava are affected by different fertilization regimes [48].
In addition, organic manure and NPK fertilizer application have been reported to significantly increase cassava height, stem girth, number of leaves and internode length [49], and our present results are in line with these findings.
Fresh and Dry Cassava Root and Biomass Yield
Fresh and dry cassava root weights were significantly (p < 0.05) influenced by the application of fertilizers at both the Mansa and Kabangwe sites (Table 3). All the manure rates and the mineral fertilizer led to significantly higher fresh and dry cassava root yields than the control at both sites, except the 1.4 ton/ha manure rate, which gave a higher but statistically non-significant root yield at Mansa. While 4.2 ton/ha outperformed the 1.4 ton/ha manure treatment in terms of fresh root yield, no significant difference was observed between the 1.4 and 2.8 ton/ha or between the 2.8 and 4.2 ton/ha manure treatments at the Mansa site. At Kabangwe, the 4.2 ton/ha treatment resulted in significantly higher fresh root yield than the other fertilizer treatments, but no significant difference was observed between the other manure levels and the mineral fertilizer treatment (Table 5). With the exception of the 1.4 ton/ha manure treatment, all the treatments resulted in significantly higher dry root yields, with no significant difference among themselves, at the Mansa site. At Kabangwe, the 4.2 ton/ha manure application significantly increased dry cassava root yield compared to the other treatments. While the 2.8 ton/ha treatment significantly increased dry root yield compared to the control, no significant difference was observed between the other treatments at Kabangwe (Table 6). The highest fresh root yields of 26.59 ton/ha at Mansa and 27.66 ton/ha at Kabangwe, and the highest dry root weights of 8.99 ton/ha at Mansa and 9.55 ton/ha at Kabangwe, were obtained at the rate of 4.2 ton/ha chicken manure (Tables 5 and 6). This is because chicken manure is the best-quality animal manure and can supply both macro- and micronutrients to the plant, in contrast to the NPK fertilizer treatment [20,21,46]. Organic amendments also buffer acidity problems and increase P availability in acid soils [50].
As a result, soils treated with manure have this advantage and can better support crop production. The organic amendments and the fertilizer treatment also consistently and significantly increased the fresh and dry leaf, stem and total biomass yields of cassava at both sites compared to the control. The only exception was the mean fresh stem yield at the 1.4 ton/ha manure rate, which was higher but statistically non-significant at Mansa. Mean fresh leaf, stem and total biomasses were significantly higher for the 4.2 ton/ha chicken manure treatment at both sites, although there was no significant difference between the mean fresh leaf yields of the two highest manure levels at Kabangwe (Tables 3 and 5). Except for the dry stem mass at Mansa and the dry leaf mass at Kabangwe, a similar trend was observed for the dry biomass yields.
The application of dry chicken manure at rates of 1.4, 2.8 and 4.2 ton/ha and of the NPK fertilizer resulted in 32%, 49%, 80% and 45% increases, respectively, in fresh cassava root yield compared with the control at Kabangwe. The respective increases at Mansa were 30%, 46%, 71% and 49%. However, the biomass yield advantage over the control was much higher for fresh cassava stem (145% at Mansa and 103% at Kabangwe), and was 100% for fresh total biomass at Mansa, all at the rate of 4.2 ton/ha manure. Similarly, the dry cassava root yield advantages were 37%, 49%, 74% and 54%, respectively, at Mansa, and 27%, 41%, 76% and 33%, respectively, at Kabangwe. The highest biomass gains in response to O3 were 112% for dry stem biomass at Mansa, followed by 101% for dry stem biomass at Kabangwe and 95% for dry leaf biomass at Mansa. Even though the yield advantage over the control was higher at Mansa, the cassava response to the applied treatments was more pronounced at Kabangwe. This is evidenced by the 4.2 ton/ha chicken manure treatment, which was significantly higher than the other treatments at Kabangwe but not at Mansa. This could be explained by other factors that can limit soil productivity, such as soil pH. Mansa is situated in the rain belt of Zambia, closer to the Congo basin, where high rainfall has resulted in higher acidity than at Kabangwe. This was corroborated by the initial soil analysis and the P availability in the soil (Table 2). AvP was high enough in Mansa soils, but may be fixed (rendered unavailable) because of the extremely acidic nature of the soil. As explained by Vanlauwe, et al. [6], soils do not always respond to applied nutrients due to constraints imposed by other factors such as soil depth, acidity or alkalinity. Climatic factors and soil conditions can also interact, leading to a limited response to input use. According to Howeler [46], cassava grows best on well drained soils with an appreciable clay content.
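The percentage yield advantages quoted above follow the usual relative-difference formula. A minimal sketch with placeholder yields (the control yields behind the reported percentages are not restated here):

```python
def yield_advantage_pct(treatment_yield, control_yield):
    """Percent yield advantage of a treatment over the control."""
    return 100 * (treatment_yield - control_yield) / control_yield

# Placeholder example: an 18.0 ton/ha treatment against a 10.0 ton/ha control.
print(yield_advantage_pct(18.0, 10.0))  # -> 80.0
```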
The clay content of our study soil was higher at Kabangwe, while the rainfall was higher at Mansa.
The response of cassava genotypes to external inputs is variable, ranging from a high response to no response at all. Our results have shown that "Mweru" responds favorably to fertility inputs. The results also showed that application of chicken manure and NPK fertilizer resulted in better cassava yield and growth performance. In line with this, different researchers have reported that chicken manure application, either in combination with mineral fertilizer or as a sole application, increases cassava yield [51,52]. They also found that, in addition to increasing cassava yield, chicken manure application improved the physico-chemical properties of the soil. Beyond cassava, chicken manure application also significantly improved plant height, LAI, root and shoot weight, and grain yield of sorghum in Nigeria [53]. In Zambia, where farmers do not use inputs for cassava production, cassava yield has been limited to 4.6 ton/ha [16]; thus, the use of external inputs should be recommended to increase yield and sustain land productivity. The use of 1.4, 2.8 and 4.2 ton/ha chicken manure and the 100N-22P-83K fertilizer resulted in 3-, 4-, 5- and 4-fold increases in fresh cassava root yield, respectively, compared to the 4.6 ton/ha country average. Our results confirmed that the use of fertilizer (organic or mineral) increases cassava yield. The yield gap between farmers' and researchers' fields is extremely high [18], confirming that fertilizer use by Zambian farmers is either minimal or nonexistent.
The reason why farmers use little or no external input on their farms could be the fertilizer price and its affordability for resource-poor farmers [54,55]. Because farmers can obtain roots for their subsistence from cassava planted on very marginal fields [56][57][58], a low-to-no input farming system is preferred over an input-intensive farming system for cassava. The slash and burn system still practiced by some Zambian farmers, especially in the northern part of the country [59], could also explain the observed low fertilizer use in Zambia. However, as evidenced in West and East African countries [56,58,60], a low input production system cannot support the high crop yields required to meet the needs of a growing population and can also cause environmental degradation through deforestation and carbon dioxide emissions. Therefore, these conditions necessitate the use of fertilizer, from either organic or mineral sources, to sustain cassava farming in the long run and meet the growing cassava demand in the country. Table 7 shows the partial budget analysis of the different treatments used in the experiment. The results revealed that the mineral fertilizer treatment was excluded from the marginal rate of return comparison because it was dominated (a treatment with a lower benefit relative to the resources invested in it). The 1.4 ton/ha chicken manure treatment, however, had a 138.5% MRR, or a return of 1 Kwacha 39 Ngwee for every Kwacha invested. A further increase of chicken manure from 1.4 to 2.8 ton/ha (treatment O2) resulted in a 138% MRR, or an additional return of 1 Kwacha 38 Ngwee per Kwacha invested. The 4.2 ton/ha chicken manure treatment had the highest MRR (315%) of all the treatments. Researchers have long questioned how resource-poor farmers can adopt new technology, especially in developing countries [61].
For example, there is an argument that most African farmers cannot afford fertilizer and thus do not apply the amounts required to reach the economic return on fertilizer investments [62]. A cassava adoption study conducted in southern Zambia revealed that farmers were reluctant to adopt cassava cultivation because of its economic return on investment compared to the already existing maize system [23]. Therefore, economic analysis of the profitability of investment in fertilizer application is important in order to make better recommendations for input-based cassava farming. Partial budget analysis of the profitability of the different treatments showed that all the different levels of chicken manure were economically profitable, and the highest MRR (315%) was attained at the rate of 4.2 ton/ha of chicken manure. On the other hand, despite the yield increase resulting from NPK fertilizer use at both sites, there was no economic benefit. This is due to two factors: the high fertilizer price and the low farm-gate price of fresh cassava roots, both of which reduce the MRR. A study on the cassava sector in Zambia revealed that the major constraints to the sector were disaggregated producers and poor transportation and marketing infrastructure, which result in high costs and less competitive prices [63]. Even today, the cassava price appears to be low, as shown by the low MRR of NPK compared to manure application. If national inflation of 15% is considered, reducing benefits and inflating costs by the same amount does not switch the ranking or the effect of the treatments. The O 3 treatment remains the most profitable investment, with a 206.97% MRR, when the benefit is reduced by 15% and the price of inputs, such as fertilizer and manure, is increased by the same amount.
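The dominance and MRR logic of the partial budget analysis above can be sketched in code. The figures below are hypothetical and only illustrate the method (CIMMYT-style marginal analysis of successive non-dominated treatments); they are not the study's actual budget data.

```python
def marginal_rate_of_return(treatments):
    """Partial budget analysis: rank treatments by total variable cost,
    drop dominated treatments (higher cost but no higher net benefit than
    a cheaper one), and compute the marginal rate of return (MRR, %)
    between each pair of successive non-dominated treatments.

    `treatments` is a list of (name, total_variable_cost, net_benefit).
    Returns a list of (name, mrr_percent) for the non-dominated steps.
    """
    ranked = sorted(treatments, key=lambda t: t[1])
    nondominated = []
    for name, cost, benefit in ranked:
        if nondominated and benefit <= nondominated[-1][2]:
            continue  # dominated: costs more but earns no more
        nondominated.append((name, cost, benefit))
    result = []
    for prev, curr in zip(nondominated, nondominated[1:]):
        d_cost = curr[1] - prev[1]
        d_benefit = curr[2] - prev[2]
        result.append((curr[0], round(100.0 * d_benefit / d_cost, 1)))
    return result

# Hypothetical figures (Kwacha/ha), for illustration only:
treatments = [
    ("control", 0, 1000),
    ("manure 1.4 t/ha", 400, 1600),
    ("manure 2.8 t/ha", 800, 2400),
    ("NPK", 1200, 2100),  # dominated: costlier than manure 2.8 t/ha, lower benefit
]
print(marginal_rate_of_return(treatments))
# → [('manure 1.4 t/ha', 150.0), ('manure 2.8 t/ha', 200.0)]
```

An MRR of 150% means each extra Kwacha invested in that step returns the Kwacha plus 1 Kwacha 50 Ngwe, which is how the per-Kwacha returns quoted above are read.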
Conclusions
Our results confirmed that the use of both organic and mineral fertilizer increases cassava yield, but higher yields were observed in response to chicken manure than to the NPK fertilizer treatment. The 4.2 ton/ha chicken manure treatment resulted in the highest cassava root and biomass yield at both locations. This means that where actual and potential cassava yields differ, the use of either organic or mineral soil amendments can improve cassava productivity in the short run and contribute to sustainable intensification in the long run. The economic analysis indicated that the MRR was positive for all chicken manure applications and increased with the rate of application. However, under current conditions, whether because of the low price of cassava roots or the high cost of fertilizer, the use of an organic source (chicken manure) is economically more viable than the use of mineral fertilizer in Zambia.
Gallium Mesoporphyrin IX-Mediated Photodestruction: A Pharmacological Trojan Horse Strategy To Eliminate Multidrug-Resistant Staphylococcus aureus
One of the factors determining efficient antimicrobial photodynamic inactivation (aPDI) is the accumulation of a light-activated compound, namely, a photosensitizer (PS). Targeted PS recognition is an approach based on the interaction between a membrane receptor on the bacterial surface and the PS, whereby the compound is efficiently accumulated by the same mechanism as the natural ligand. In this study, we showed that gallium mesoporphyrin IX (Ga3+MPIX) provided dual functionality: iron metabolism disruption and PS properties in aPDI. Ga3+MPIX induced efficient (>5log10 reduction in CFU/mL) bacterial photodestruction with excitation in the area of Q band absorption, with relatively low eukaryotic cytotoxicity and phototoxicity. Ga3+MPIX is recognized by the same systems as haem, namely the iron-regulated surface determinant (Isd) system. However, the mutant impaired in the ATPase of the haem detoxification efflux pump displayed the phenotype most sensitive to Ga3+MPIX-mediated aPDI. This indicates that changes within the metalloporphyrin structure (vinyl vs ethyl groups) did not significantly alter the recognition of the compound but influenced its biophysical properties.
■ INTRODUCTION
In 1928, Alexander Fleming discovered penicillin, which revolutionized medicine and improved the quality of human life. Currently, after almost 100 years, one of the main challenges for both academia and the pharmaceutical industry is antimicrobial resistance (AMR). According to the O'Neill report, AMR infections could cause 10 million deaths per year by 2050. 1 Antimicrobial photodynamic inactivation (aPDI), with photodynamic treatment primarily used to photokill cancer cells, 2−4 is now considered an alternative method for eradication of both Gram-positive and Gram-negative bacteria with different drug response profiles. 5−7 The aPDI approach is based on three components: oxygen, light, and a dye known as a photosensitizer (PS), which is activated by the light. In an oxygen-rich environment, reactive oxygen species (ROS) might be generated through either energy (type II mechanism) or electron (type I mechanism) transfer from an irradiated PS. ROS generated in aPDI are cytotoxic because of their multitarget action on proteins, lipids, or nucleic acids. The ideal PS should exhibit low dark toxicity and high phototoxicity, which usually correlates with a high quantum yield of ROS photogeneration and application safety toward eukaryotic cells. Photodynamic inactivation eradicates microbial species efficiently regardless of their drug resistance profile. 8−10 Moreover, recent studies by Woźniak et al. revealed a synergy between photodynamic therapy and clinically used antimicrobials. 11,12 aPDI also has an impact on the production of virulence factors, rendering pathogens less virulent. 13,14 Despite our recent studies of aPDI tolerance and an increased stress response upon consecutive cycles of sublethal treatments, 15,16 resistance to photodestruction has not yet been observed. aPDI is efficient in both in vitro 17 and in vivo studies. 18,19 The efficiency of aPDI might be dependent on PS uptake.
20,21 PSs can accumulate in different manners, depending on the wall structure of bacterial cells, environmental factors, and the type of mechanism involved, for instance, active transport. 22 The concept of targeted PS recognition is based on PS uptake via membrane receptors, which recognize PSs as they would a similarly structured natural ligand. Proposed compounds for targeted PSs are metals conjugated with a protoporphyrin (metalloporphyrins, MPs). 23 The gallium protoporphyrin IX and gallium mesoporphyrin IX conjugates, formed with the metal in oxidation state III, mimic the haem structure (Fe 3+ protoporphyrin IX) and thus possibly bind to elements of the haem acquisition machinery. 24,25 Gallium compounds are active in disturbing iron metabolism by accumulating intracellularly via the Trojan Horse strategy (Figure 1B). 23,26 Previous studies showed that Ga 3+ PPIX displayed light-independent antimicrobial activity against both Gram-positive and Gram-negative bacteria by blocking iron metabolism. 26−31 Gallium MPs also demonstrated antibiofilm activity. 32,33 Moreover, Ga 3+ PPIX exhibited antimicrobial photodynamic action against Staphylococcus aureus. 34,35 S. aureus is a Gram-positive member of the ESKAPE pathogens that can effectively "escape" antibacterial drug action. 36 During infection, the pool of iron available to pathogens such as S. aureus is limited. To overcome the low iron availability, bacteria assimilate haem in either free form or bound in complexes with hemoglobin or haptoglobin in vivo (Figure 1A). 37 The mechanisms of the iron-regulated surface determinant (Isd) and haem transport system (Hts) pathways for acquiring iron ions from haem have been reviewed in detail previously. 37,38 Briefly, IsdH and IsdB are the primary receptors for haptoglobin−hemoglobin complexes or hemoglobin alone, respectively. 39 Both contain conserved NEAT (near-iron transporter) domains that recognize and extract haem from the complex.
40 IsdA protein binds haem from the environment or receives the compound from the IsdH or IsdB membrane receptors. Then, haem is transferred through the cell wall to the IsdC component and to IsdE, a membrane lipoprotein. IsdE acquires the haem and delivers it to IsdF, an ATP-binding cassette (ABC) permease. Using energy from ATP hydrolysis by IsdD, haem is passed through the membrane to the cytoplasm, where IsdG and IsdI, haem oxygenases, release iron from the haem structure. 37,39,41,42 The second well-known iron assimilation machinery is the membrane-localized ABC transporter HtsABC. Hts works in a similar manner to the IsdDEF complex. HtsA is a membrane-associated lipoprotein, while HtsB and HtsC are two ABC transporters. However, their role has been described mostly in the transport of staphyloferrin A. 38,41 Paradoxically, haem itself may induce toxicity at higher concentrations. 43 The two-component haem-regulated transporter (HrtAB) detects and pumps an overdose of the compound out of the cell. HrtAB is an ABC-type transporter, where HrtA acts as an ATPase and HrtB is a permease serving as a membrane transport channel. 37,39,41 Deletion of the genes encoding the HrtAB transporter revealed an impairment of bacterial growth under high concentrations of haem. 44 Efflux pump gene expression is regulated by the haem sensor system (HssRS), which is required for the adaptive response to haem. 45 Based on the existing knowledge concerning haem transport in S. aureus, we investigated whether Ga 3+ mesoporphyrin IX (Ga 3+ MPIX) could accumulate and act as a PS against methicillin-resistant Staphylococcus aureus. To answer these questions, we evaluated the antimicrobial effect of Ga 3+ MPIX against several staphylococcal strains, including clinical isolates with the multidrug resistance (MDR) phenotype and haem acquisition mutants. Our hypothesis assumed that Ga 3+ MPIX can be efficiently accumulated or retained in S.
aureus because of the presence or absence of specific haem transporters. In addition, the intracellular activity of Ga 3+ MPIX can act in two ways: independent of light (blocking haem metabolism) or dependent on light (photodynamic action).
Figure 1. (A) In S. aureus, Fe 3+ protoporphyrin IX (known as haem) is recognized by the Isd and Hts protein machineries. Haem complexed with hemoglobin or haptoglobin−hemoglobin is recognized and released by the IsdB and IsdH cell wall-anchored receptors. Free haem in the environment is bound by IsdA and then transferred through the cell wall to the membrane by IsdCDEF. Another uptake mechanism, HtsABC, transfers haem directly from the cell wall to the cytoplasm. Haem oxygenases (IsdG and IsdI) recognize and cleave the porphyrin structure. At higher haem levels, the HrtAB detoxification machinery is upregulated and acts as an efflux pump. (B) Because of the structural similarity between Fe 3+ protoporphyrin IX and gallium MPs, bacteria do not distinguish the compounds and can take up the gallium conjugates. Gallium MPs may thus act as a "Trojan Horse" and disrupt bacterial cells by blocking iron metabolism. In addition, because of the porphyrin ring structure, these compounds generate ROS, including singlet oxygen, upon light exposure. (Created with BioRender)
■ MATERIALS AND METHODS
Chemicals. Ga 3+ mesoporphyrin IX chloride (Ga 3+ MPIX) (Figure 2B) and Ga 3+ protoporphyrin IX chloride (Ga 3+ PPIX) (Figure 2B) were purchased from Frontier Scientific, USA; stock solutions were prepared according to manufacturer recommendations and kept in the dark at 4°C. Ga 3+ MPIX was dissolved in 0.1 M NaOH to a 1 mM concentration, whereas a 1 mM stock of Ga 3+ PPIX was prepared in a 50:50 (v:v) mixture of 0.1 M NaOH:DMSO. Protoporphyrin IX (PPIX) was purchased from Sigma−Aldrich, USA (Figure S2A); a 1 mM solution was prepared in dimethyl sulfoxide (DMSO) and stored in the dark at room temperature.
Protoporphyrin diarginate (PPIXArg 2 , Figure S2B), delivered by the Institute of Optoelectronics, Military University of Technology, Poland, was dissolved in Milli-Q water and stored at −20°C in darkness until use. 10 Haem (Sigma−Aldrich, USA) was dissolved in 0.1 M NaOH solution and kept in the dark at 4°C.
Photoinactivation Experiments. Microbial cultures were grown overnight in medium in the presence or absence of added iron. Then, cultures were adjusted to an optical density of 0.5 McFarland units (McF) (approx 10 7 CFU/mL) and transferred to a 96-well plate either alone or combined with the PS. The aPDI samples treated with Ga 3+ MPIX were incubated at 37°C with shaking in the dark for 10 min and illuminated with different green light doses up to 31.8 J/cm 2 . The number of colony-forming units (CFU/mL) was determined by serial dilution of 10 μL aliquots and plating bacterial cells on TSA plates. The control consisted of untreated bacteria. TSA plates were incubated at 37°C for 16−20 h, and then CFU/mL were counted. Lethal and sublethal aPDI conditions were defined as in our previous studies. 15,18 For competition testing, Ga 3+ MPIX was mixed with different concentrations of haem at a volume ratio of 1:1 (v/v) and then incubated and irradiated as in the photoinactivation experiments. Molar ratios of the studied molecules were as follows (Ga 3+ MPIX:haem, μM:μM): 1:0, 1:1, 1:10, and 0:10. Each experiment was performed in three independent biological replicates.
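As a rough sketch of how viability is derived from such plate counts, the snippet below converts colony counts from serial dilutions into CFU/mL and a log 10 reduction versus the untreated control; the colony counts shown are hypothetical, not data from this study.

```python
import math

def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.01):
    """CFU/mL from a plate count: colonies counted on a plate that
    received `plated_volume_ml` (10 uL aliquot here) of a 10^-n dilution."""
    return colonies * dilution_factor / plated_volume_ml

def log10_reduction(control_cfu, treated_cfu, detection_limit=100.0):
    """log10-unit reduction in viability versus the untreated control.
    Counts below the detection limit are clamped to the limit."""
    treated = max(treated_cfu, detection_limit)
    return math.log10(control_cfu / treated)

# Hypothetical counts: 10 uL of the 10^-4 dilution of the untreated
# control gave 120 colonies; 10 uL of the 10^-1 dilution after aPDI gave 35.
control = cfu_per_ml(120, 10**4)   # 1.2e8 CFU/mL
treated = cfu_per_ml(35, 10**1)    # 3.5e4 CFU/mL
print(round(log10_reduction(control, treated), 2))  # → 3.54
```

By the criteria used in this paper, a >3 log 10 reduction such as this one would count as a lethal dose, while 0.5−2 log 10 would be sublethal.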
Growth Curve Analysis. Overnight culture was diluted 1:20 (v/v) in TSB or TSB-Chelex medium. A chosen PS (Ga 3+ MPIX, Ga 3+ PPIX, PPIX, or PPIXArg 2 ) was added to 450 μL aliquots of the bacterial culture to a final concentration of 10 μM. The control group of bacterial cells was not treated with any PS. Prepared samples were loaded into 48-well plates and then placed in an EnVision Multilabel Plate Reader (PerkinElmer, USA), where the optical density (λ = 600 nm) was measured every 30 min for 16 h during incubation at 37°C with shaking (150 rpm).
Time-Resolved Detection of Singlet Oxygen Phosphorescence. A solution of the PSs in D 2 O-based phosphate buffer containing a small amount of DMSO (pD adjusted to 7.8) in a 1 cm fluorescence cuvette (QA-1000; Hellma, Mullheim, Germany) was excited for 15 s with laser pulses at 532 nm, generated by an integrated nanosecond DSS Nd:YAG laser system equipped with a narrow-bandwidth optical parameter oscillator (NT242-1k-SH/SFG; Ekspla, Vilnius, Lithuania), operating at 1 kHz repetition rate. The near-infrared luminescence was measured perpendicularly to the excitation beam using a system described elsewhere. 46 At the excitation wavelength, the absorption of Ga 3+ MPIX was 0.196, while that of Ga 3+ PPIX was 0.235. The measurements were typically carried out in air-saturated solutions. To confirm the singlet oxygen nature of the detected phosphorescence, measurements were compared at 1215, 1270, and 1355 nm by employing additional dichroic narrow-band filters (NBP; NDC Infrared Engineering Ltd., Bates Road, Maldon, Essex, UK) and in the presence and absence of 5 mM sodium azide, a known quencher of singlet oxygen. Quantum yields of singlet oxygen photogeneration by the PSs were determined by comparative measurements of the initial intensities of 1270 nm phosphorescence induced by photoexcitation of rose bengal and the PSs with 532 nm laser pulses of increasing energies, using neutral density filters. The absorption of the rose bengal solution, used as a standard of singlet oxygen photogeneration, was adjusted to match that of the examined PSs.
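The apparent lifetime of such time-resolved phosphorescence traces (about 40 μs here, as reported in the Results) can be estimated with a mono-exponential fit. The sketch below uses a simple log-linear least-squares fit on synthetic, noise-free data; it deliberately ignores the rise component that the sensitizer triplet decay contributes to real singlet oxygen signals.

```python
import math

def fit_lifetime(times, signal):
    """Mono-exponential fit I(t) = I0 * exp(-t/tau) via linear least
    squares on ln(I); returns tau in the same units as `times`.
    Simplified: assumes a pure decay with no rise component and no noise
    floor, so real traces would need baseline handling first."""
    ys = [math.log(s) for s in signal]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(x * x for x in times)
    sxy = sum(x * y for x, y in zip(times, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -1.0 / slope

# Synthetic decay with tau = 40 us, matching the apparent lifetime
# reported for the 1270 nm phosphorescence in this work:
tau_true = 40.0
ts = [5.0 * i for i in range(40)]                 # 0-195 us
trace = [math.exp(-t / tau_true) for t in ts]
print(round(fit_lifetime(ts, trace), 1))          # → 40.0
```

On noiseless data the fit recovers the lifetime exactly; with experimental noise a weighted or nonlinear fit would be the safer choice.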
Electron Paramagnetic Resonance (EPR) Spin Trapping Measurements. EPR spin trapping was carried out using 100 mM 5,5-dimethyl-1-pyrroline N-oxide (DMPO; Dojindo, Kumamoto, Japan) as a spin trap. Samples containing DMPO and about 0.1 mM of the PSs in 70% DMSO/water, with pH adjusted to neutral, were placed in 0.3 mm-thick quartz EPR flat cells and irradiated in situ in a resonant cavity with green light (516−586 nm, 45 mW cm −2 ) derived from a 300 W high-pressure compact arc xenon illuminator (Cermax, PE300CE-13FM/Module300W; PerkinElmer Opto-electronics GmbH, Wiesbaden, Germany) equipped with a water filter, a heat reflecting mirror, a cutoff filter blocking light below 390 nm, and a green additive dichroic filter 585FD62-25 (Andover Corporation, Salem, NC, USA). The EPR measurements were carried out employing a Bruker-EMX AA spectrometer (Bruker BioSpin, Germany), using the following apparatus settings: 10.6 mW microwave power, 0.05 mT modulation amplitude, 332.4 mT center field, 8 mT scan field, and 84 s scan time. Simulations of EPR spectra were performed with the EasySpin toolbox for Matlab. 47
MTT Survival Assay. HaCaT cells (CLS 300493) were seeded at a density of 1 × 10 4 cells per well in 96-well plates 24 h before the experiment. Cells were divided into two plates for light and dark treatment. Cells were grown in a standard humidified incubator at 37°C in a 5% CO 2 atmosphere in Dulbecco's modified Eagle's medium (DMEM). Ga 3+ MPIX was added to a final concentration of 0−100 μM and then incubated for 10 min at 37°C in the dark. HaCaT cells were washed twice with PBS and covered with fresh PS-free DMEM. Next, the cells were illuminated with 522 nm light (dose: 31.8 J/cm 2 ). Twenty-four hours post-treatment, MTT reagent [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] was added to the cells, and the assay was conducted.
17 The results are presented as a fraction of untreated cells and calculated as the mean of three independent biological experiments with the standard deviation of the mean. The data were analyzed using two-way analysis of variance (ANOVA) and Tukey's multiple comparisons test in GraphPad software. A p value <0.05 indicated a significant difference.
Analysis of Real-Time Cell Growth Dynamics. HaCaT cells (CLS 300493) were seeded the day before treatment in seven technical replicates for each condition at a density of 1 × 10 4 per well on E-plate PET plates (ACEA Biosciences Inc., USA). Cells were grown in a standard humidified incubator at 37°C and in a 5% CO 2 atmosphere in DMEM in the xCELLigence real-time cell analysis (RTCA) device (ACEA Biosciences Inc., USA). 17 When cells were estimated to be in the exponential phase of growth (cell index (CI) = ∼2), the experiment was conducted. The PS was added to the cells at a concentration of 0, 1, or 10 μM and left for a 10 min dark incubation at 37°C. Then, the cells were washed twice with PBS, and the medium was changed to PS-free medium. Afterward, light-treated cells were exposed to 522 nm light (light dose: 31.8 J/cm 2 ). In the case of dark-treated cells, plates were incubated in the dark at room temperature for the time corresponding to the irradiation. Then, the plates were returned to the xCELLigence device, and the cell index was measured every 10 min and recorded automatically until the cells reached the plateau phase under each condition.
PS Accumulation. Microbial overnight S. aureus cultures were adjusted to an optical density of 0.5 McF. Ga 3+ MPIX was added to 800 μL bacterial aliquots to final concentrations in the range of 1−10 μM. In the competition assay, bacterial cells were coincubated with a mixture of Ga 3+ MPIX and haem. The number of accumulated molecules per cell was calculated as
molecules per cell = ([GaMPIX] × N A )/(M w × CFU)
where [GaMPIX] is the concentration [g/mL] of molecules obtained from a calibration curve based on known concentrations of the compound, M w is the molecular weight of GaMPIX (669.85 g/mol), N A is Avogadro's number (6.023 × 10 23 ), and CFU is the number of colony-forming units, obtained using serial dilutions, counted for 1 mL of the analyzed samples.
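A minimal sketch of this per-cell accumulation calculation, using the constants given in the text; the recovered concentration and cell count below are hypothetical values chosen for illustration.

```python
AVOGADRO = 6.023e23   # molecules/mol, value used in the text
MW_GAMPIX = 669.85    # g/mol, molecular weight of Ga3+MPIX

def molecules_per_cell(conc_g_per_ml, cfu_per_ml):
    """Intracellular Ga3+MPIX molecules per cell:
    [GaMPIX] * N_A / (M_w * CFU), with both quantities per 1 mL.
    g/mL divided by g/mol gives mol/mL; times N_A gives molecules/mL;
    divided by CFU/mL gives molecules per cell."""
    return conc_g_per_ml * AVOGADRO / (MW_GAMPIX * cfu_per_ml)

# Hypothetical example: 6e-9 g/mL of Ga3+MPIX recovered from a sample
# containing 1e9 CFU/mL:
print(f"{molecules_per_cell(6e-9, 1e9):.2e}")  # → 5.39e+03
```

The dimensional bookkeeping in the docstring is the whole content of the formula; only the calibration-curve concentration and the plate count are experimental inputs.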
Confocal Microscopy Imaging. S. aureus Newman and its isogenic ΔHrtA, ΔHtsA, and ΔIsdD mutants were grown overnight for 16−20 h in either iron-rich or iron-poor medium. Then, microbial cultures were diluted to an optical density of 0.5 McF units. Cells were incubated with 10 μM Ga 3+ MPIX for 2 h at 37°C with shaking. Control cells were not treated with the tested compounds. Bacterial samples were washed once in PBS buffer. Afterward, cells were imaged using a Leica SP8X confocal laser scanning microscope with a 100× oil immersion lens, with excitation at 405 nm and fluorescence emission at 551−701 nm (Leica, Germany).
Statistical Analysis. Statistical analysis was performed using GraphPad Prism 9 (GraphPad Software, Inc., CA, USA). Quantitative variables were characterized by the arithmetic mean and the standard deviation of the mean. Data were analyzed using two-way ANOVA and Tukey's multiple comparison test. A p value of <0.05 indicated a significant difference.
■ RESULTS
Gallium MPs Delayed Staphylococcal Growth Light-Independently. Previous studies on several MPs have revealed the broad spectrum of gallium ion toxicity exerted by blocking iron metabolism. 23 We assumed that the presence of ethyl groups in the macrocycle structure of Ga 3+ MPIX (instead of the vinyl groups in Ga 3+ PPIX) would not affect its toxicity. The growth of the S. aureus 25923 reference strain was compared after exposing cells to gallium MPs (Ga 3+ PPIX, Ga 3+ MPIX) or non-MPs (PPIX and PPIXArg 2 ) during continuous cultivation in iron-rich medium (Figure 3, Table S2). A slower specific growth rate (μ max ) in the exponential phase was observed after exposure of the S. aureus 25923 strain to Ga 3+ MPIX (μ max = 0.15) or Ga 3+ PPIX (μ max = 0.126) compared to untreated cells (μ max = 0.354). Exposure to non-MPs such as PPIX (μ max = 0.282) or water-soluble PPIXArg 2 (μ max = 0.282) did not influence S. aureus 25923 growth. These observations confirm that despite the difference in structure (ethyl groups vs vinyl groups), Ga 3+ MPIX still induces dark toxicity against S. aureus, which is related to the presence of gallium ions in the compound.
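Specific growth rates like the μ max values quoted here are typically obtained by fitting the exponential phase of an OD 600 curve, where μ is the slope of ln(OD) versus time. A crude sketch on synthetic data (not the study's readings), taking the steepest slope between consecutive readings as μ max:

```python
import math

def specific_growth_rate(times_h, od600):
    """Maximum specific growth rate (mu_max, h^-1) as the largest slope
    of ln(OD600) between consecutive readings. A crude stand-in for
    properly fitting the exponential phase; noisy data would need a
    windowed regression instead of pairwise slopes."""
    rates = []
    for (t0, od0), (t1, od1) in zip(zip(times_h, od600),
                                    zip(times_h[1:], od600[1:])):
        rates.append((math.log(od1) - math.log(od0)) / (t1 - t0))
    return max(rates)

# Synthetic exponential growth at mu = 0.35 h^-1, near the untreated
# rate reported above, sampled every 30 min as in the plate reader run:
ts = [0.5 * i for i in range(8)]
ods = [0.05 * math.exp(0.35 * t) for t in ts]
print(round(specific_growth_rate(ts, ods), 3))  # → 0.35
```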
Green-Light Irradiation of Ga 3+ MPIX Generates ROS. To check the mechanism underlying the photodynamic potential of Ga 3+ MPIX, direct measurements of the PS ability to photogenerate ROS were performed.
Excitation of the PSs by 532 nm laser pulses induced phosphorescence that was strongly dependent on the observation wavelength (Figure 4A). Thus, intense phosphorescence was only observed at 1270 nm, which coincides with the emission maximum of singlet oxygen in water. Although D 2 O phosphate buffer was used, the apparent lifetime of the observed phosphorescence was about 40 μs, which is shorter than that reported in pure D 2 O. This shortening could be attributed to the small amounts of DMSO and H 2 O that were used to prepare stock solutions of the PSs. Consistent with the singlet oxygen assignment, the observed phosphorescence was significantly quenched by the addition of 5 mM azide (Figure 4B). The quencher reduced both the intensity and the lifetime of the phosphorescence, most likely because of the quencher's interaction with the triplet excited state of the PS and quenching of singlet oxygen. The final test of the singlet oxygen nature of the 1270 nm phosphorescence was the effect of exchanging air for argon in the examined samples (Figure 4C). It is evident that saturating the PS solutions with argon completely abolished the singlet oxygen phosphorescence. A weak long-lasting phosphorescence detected in argon-saturated samples could be attributed to emissive relaxation of the porphyrin triplet excited states.
Quantum yields of singlet oxygen photogeneration of the examined PSs, determined employing rose bengal as a standard of singlet oxygen photogeneration with a yield of 0.75, 49 were very similar for both dyes: 0.69 for Ga 3+ MPIX and 0.67 for Ga 3+ PPIX, indicating that in aqueous media these porphyrin derivatives are efficient photogenerators of singlet oxygen (Figure 4D,E).
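With absorbances matched at the excitation wavelength, this comparative method reduces to scaling the ratio of initial phosphorescence-versus-pulse-energy slopes by the reference yield. The slopes below are hypothetical numbers chosen only to reproduce the reported yields.

```python
def singlet_oxygen_qy(slope_ps, slope_ref, qy_ref=0.75):
    """Comparative quantum yield of singlet oxygen photogeneration:
    the ratio of the initial 1270 nm phosphorescence-intensity-vs-pulse-
    energy slopes (sample over reference), scaled by the reference yield
    (rose bengal, 0.75). Assumes matched absorbance at the excitation
    wavelength, as described in the Methods."""
    return qy_ref * slope_ps / slope_ref

# Hypothetical slopes (arbitrary units per pulse energy):
print(round(singlet_oxygen_qy(0.92, 1.0), 2))   # → 0.69 (Ga3+MPIX-like)
print(round(singlet_oxygen_qy(0.893, 1.0), 2))  # → 0.67 (Ga3+PPIX-like)
```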
Using EPR spin trapping, we were able to detect, after irradiation of the PSs with green light (516−586 nm) in a DMSO/H 2 O mixture, a spin adduct with spectral parameters consistent with those of DMPO-OOH (A N = 1.327 ± 0.008 mT; A Hα = 1.058 ± 0.006 mT; A Hβ = 0.131 ± 0.004 mT 50 ), indicating the photogeneration of superoxide anions (Figure 4F). While both PSs photogenerated a superoxide anion under the conditions used, Ga 3+ PPIX was a slightly more efficient generator of the oxygen radical. However, it must be stressed that the yield of generation of superoxide anions by the examined PSs is rather low and cannot be compared with their ability to photogenerate singlet oxygen.
Figure 3. Staphylococcal growth under exposure to porphyrin compounds. Overnight cultures of the S. aureus 25923 reference strain in TSB medium were diluted 1:20 (v:v) and exposed to 10 μM Ga 3+ MPIX, Ga 3+ PPIX, PPIX, or PPIXArg 2 . The growth under each condition was monitored by measuring the optical density at 600 nm (OD 600 ) on an EnVision plate reader. The experiment was conducted in three independent biological repetitions. Significance at the respective p-values is marked with asterisks (***p < 0.001) with respect to untreated S. aureus 25923 cells.
Molecular Pharmaceutics
pubs.acs.org/molecularpharmaceutics Article
The production of ROS in vitro has also been confirmed by the use of ROS detection probes (HPF and SOSG) after irradiation with two light doses, 12.72 and 31.8 J/cm 2 , in the presence of Ga 3+ MPIX at two concentrations (Supplementary Figure S3A,B). We could observe quite a good correlation with the lower concentration of the compound used (1 μM). In both tested ROS types (HPF for radical detection and SOSG for singlet oxygen detection), we could observe that at a higher light dose (31.8 J/cm 2 ) the signals for both probes were higher compared to the lower dose (12.72 J/cm 2 ). However, at a higher (10 μM) Ga 3+ MPIX concentration, this relationship is completely lost, which indicates that this is the maximum signal that can be obtained under our experimental conditions. Both ROS are generated during aPDI with Ga 3+ MPIX, which confirms the photodynamic properties of this compound.
In addition, using ROS quenchers, we examined the predominant types of ROS produced during aPDI in vivo, which are likely responsible for the observed death of bacterial cells. We used quenchers of free radicals predominantly formed by type I photochemistry (mannitol, superoxide dismutase), of singlet oxygen generated by type II photochemistry (NaN 3 ), and of ROS formed by mixed type I/II photochemistry (tryptophan, Trp). We observed cell protection after the use of the type II quencher NaN 3 and of Trp, indicating that singlet oxygen was mainly responsible for cell death (Figure S3C). Interestingly, the enzyme catalase (CAT) also caused a statistically significant protection of bacterial cells against Ga 3+ MPIX, which indicates a potential role of H 2 O 2 in the Ga 3+ MPIX-mediated cell death process. On the contrary, mannitol and superoxide dismutase (SOD) did not provide significant protection. This is in agreement with the small amounts of photogenerated superoxide anions detected by EPR spin trapping. In summary, singlet oxygen appears to be the major ROS produced during Ga 3+ MPIX-mediated aPDI in vitro and in vivo, and the amount of singlet oxygen produced is comparable to that of Ga 3+ PPIX.
Table 2 (footnote). Phototreatment conditions: 10 min preincubation with 10 μM Ga 3+ PPIX or Ga 3+ MPIX at 37°C with shaking, without washing; green LED light, 31.8 J/cm 2 ; log 10 CFU/mL reduction was assessed with respect to nontreated cells; initial number of cells ∼10 7 CFU/mL. Light (+), light-dependent; Light (−), light-independent; light-only, bacterial cells irradiated without any PS applied. Significance at the respective p-values marked with asterisks: *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001 with respect to the "Light-only" treatment.
Phototreatment of S. aureus with Ga 3+ MPIX Reduced Bacterial Viability despite Divergent MDR Profiles. Based on its absorbance spectrum, Ga 3+ MPIX might be excited by green LED light because of peaks called Q-bands, which are near the emission spectrum of the light source used (Figure 2A). However, the aPDI efficiency might differ among strains of S. aureus. 9 Several staphylococcal strains with divergent MDR profiles and different origins were taken for further investigations with 10 μM Ga 3+ MPIX or Ga 3+ PPIX illuminated with green light. The results are presented in Table 2 as the means of viability reduction for a discriminating light dose of 31.8 J/cm 2 . A reduction of more than 3 log 10 units in the number of CFUs (99.9%) was considered a bacterial eradication/lethal dose, whereas sublethal doses were defined as a 0.5−2 log 10 reduction in CFU/mL. 14 Light-only treatment and light-independent, 50 min exposure to 10 μM of each compound did not influence bacterial viability. Ga 3+ MPIX revealed a higher efficiency in bacterial reduction upon green illumination than Ga 3+ PPIX. For Ga 3+ PPIX-mediated aPDI, only sublethal conditions were obtained, except for the 5 N strain, where even a sublethal reduction was not reached. Ga 3+ MPIX-mediated aPDI resulted in bacterial eradication for the ATCC 25923, clinical isolate 5 N, and 4046/13 strains.
In the case of MDR strain 1814/06, aPDI with the Ga 3+ MPIX compound reduced bacterial viability, achieving lethal doses. The response to Ga 3+ MPIX-mediated aPDI is thus strain-dependent but independent of the MDR profile. Interestingly, the addition of a single wash step to the aPDI protocol influenced the effectiveness of aPDI, albeit in different ways (Supplementary Table S3). We observed that the inclusion of a single wash step in the photoinactivation protocol resulted in a better performance of Ga 3+ PPIX-mediated aPDI against bacteria. With Ga 3+ MPIX, the results were more varied: some strains were less efficiently photoinactivated, while for others the efficiency remained unchanged or increased. Nevertheless, the efficacy of Ga 3+ MPIX was still better than that of Ga 3+ PPIX. Because of the higher efficiency of Ga 3+ MPIX-mediated phototreatment under green light illumination, we chose this compound for further analysis.
Phototreatment of S. aureus with Ga 3+ MPIX Effectively Reduced Bacterial Viability in an Fe-Dependent Manner. Limited availability of iron in the culture medium induces higher expression of certain iron/haem receptors. 37 To check the hypothesis that the observed efficiency of Ga 3+ MPIX-mediated aPDI might be due to similar recognition of Ga 3+ MPIX molecules by haem receptors, the survival of the S. aureus 25923 reference strain was examined upon green LED light irradiation with Ga 3+ MPIX after cultivation in the absence (−Fe) or presence (+Fe) of iron in the medium (Figure 5). In iron-rich medium, we observed a maximum reduction in the number of bacteria of 4.6 log 10 units in CFU/mL for 1 μM at 31.8 J/cm 2 . In contrast, in iron-depleted medium, the maximum reduction reached the limit of detection, corresponding to a 5.3 log 10 unit reduction in bacterial viability. Iron deficiency resulted in a higher efficiency of aPDI, with a 2.86 log 10 difference in bacterial viability at a sublethal dose of 12.72 J/cm 2 between the two cultivation conditions. The efficiency of Ga 3+ MPIX-mediated aPDI is therefore dependent on iron availability in the culture medium.
After phototreatment of the S. aureus 25923 strain with Ga 3+ MPIX (1 μM, 25.4 J/cm 2 ), surviving bacteria formed a small-colony variant (SCV) phenotype, which significantly differed from the original morphology. SCVs are classified as an atypical morphology with a lack of pigmentation, a smaller size, and a slower growth rate than the original cells. In iron-rich medium, only ∼10% of the total pool of surviving bacteria formed SCVs with the same pigmentation as the original cells before treatment (Figure S4A,B). However, under iron-poor conditions, SCV cells constituted nearly 60% of the total number of surviving bacteria. Moreover, constant, 20-h exposure to Ga 3+ MPIX during culturing (without light) induced the SCV morphology in 100% of the surviving bacteria (Figure S4C). Continuous iron starvation and exposure to Ga 3+ MPIX promoted the SCV phenotype, which indicated an effect on iron metabolism. Interestingly, cells after long exposure to Ga 3+ MPIX were efficiently eradicated by green light irradiation (Figure S4D−F), indicating that SCVs are sensitive to Ga 3+ MPIX-mediated aPDI.
Haem Has a Protective Effect on aPDI and the Accumulation of Ga 3+ MPIX. Porphyrins with central metals in the oxidation state (III) might mimic structural haem and have an affinity to haem receptors. 24 Ga 3+ MPIX might also be recognized by haem transporters and accumulate in a similar manner to haem. To determine whether the presence of haem influences the effectiveness of aPDI against S. aureus, we incubated bacterial cells with a mixture of haem and Ga 3+ MPIX and then irradiated them with lower (19.08 J/cm 2 ) and higher (31.8 J/cm 2 ) doses of light (Figure 6). By incubating cells with an equal concentration or excess of haem (1× or 10×), we observed a protective effect; that is, far fewer bacterial cells were photoinactivated than when there was no haem in the reaction mixture, with a decrease of 1.25 log 10 units in the reduction of CFU/mL for the 1× and 1.7 log 10 for the 10× haem concentration. This effect was especially pronounced at the lower light dose (decrease in CFU reduction of 2.73 log 10 and 3 log 10 for the 1- or 10-fold haem concentration). We did not observe a difference between the 1-fold and 10-fold excess of haem. The observed protective effect of haem may be related to more efficient accumulation of haem in bacterial cells and competition of haem molecules with Ga 3+ MPIX for binding sites in/on cells. Next, we examined whether Ga 3+ MPIX accumulation in S. aureus is dependent on iron availability in the culture medium (Figure 7). In the absence of iron in the medium (−Fe), the intracellular accumulation of Ga 3+ MPIX at 10 μM was 2.1 times higher than the accumulation at the same compound concentration in the presence of iron (+Fe). The accumulation of Ga 3+ MPIX was dose-dependent (data not shown). Iron starvation of S. aureus thus promotes higher accumulation of the compound. Based on these results, we checked whether the protective effect of haem in aPDI treatment would be reflected as lower intracellular accumulation of the PS. S. aureus was incubated with a mixture of Ga 3+ MPIX and haem at a protective concentration of 10 μM in the absence of iron. The addition of haem resulted in a decrease in Ga 3+ MPIX uptake by 22% (1.2 × 10 6 molecules per cell) with respect to accumulation of the PS alone in the absence of iron. The accumulation of Ga 3+ MPIX is therefore also dependent on the presence of haem in the culture medium. The addition of this ligand for haem recognition receptors decreased the uptake of Ga 3+ MPIX by S. aureus cells, although statistical significance was not reached.

Figure 5. Photoinactivation of S. aureus 25923 with Ga 3+ MPIX in the presence and absence of iron. Overnight cultures of the S. aureus 25923 strain were diluted to 0.5 MacF in medium with either the presence (+Fe) or absence (−Fe) of iron, exposed to 0 or 1 μM Ga 3+ MPIX for 10 min at 37°C, and then irradiated with different green LED light doses ranging from 0 to 31.8 J/cm 2 . Colony-forming units (CFU/mL) were estimated from serial dilutions of 10 μL aliquots of irradiated samples plated on TSA agar. Plots present the reduction in log 10 units of CFU/mL. The detection limit was 100 CFU/mL. Each experiment was performed in three biological replicates. Each value is the mean of three separate experiments with bars as ± SD of the mean. Significance at the respective p-values is marked with asterisks [*p < 0.05; **p < 0.01; ***p < 0.001] with respect to 1 μM (+Fe) cells.

Molecular Pharmaceutics
pubs.acs.org/molecularpharmaceutics Article
These results together with haem protection from Ga 3+ MPIX-mediated phototoxicity confirm that Ga 3+ MPIX is recognized in the same manner as haem.
Under the tested conditions, Ga 3+ PPIX also accumulated in S. aureus cells in an iron-dependent manner. In the absence of iron in the medium (−Fe), the intracellular accumulation of Ga 3+ PPIX at 10 μM was 1.11 times higher than the accumulation at the same compound concentration in the presence of iron (+Fe). The addition of haem reduced Ga 3+ PPIX uptake by 40% (2.3 × 10 6 molecules per cell) with respect to the accumulation of the PS in the absence of iron. It is worth noting that under standard conditions, that is, in the presence of iron, bacterial cells accumulated more Ga 3+ PPIX than Ga 3+ MPIX, which may explain the greater toxicity of Ga 3+ PPIX in light-independent survival tests (Figure 3).

Impairment in the HrtA Detoxification Efflux Pump Promotes Dark Toxicity of Ga 3+ MPIX. As the presence of haem influenced the level of Ga 3+ MPIX accumulation and aPDI efficiency, we hypothesized that the haem acquisition machinery might also be involved in PS recognition. To understand the molecular mechanism responsible for the uptake and detoxification of gallium conjugates, we analyzed the growth of S. aureus Newman (WT) and its isogenic mutants deprived of genes engaged in haem uptake (ΔIsdD and ΔHtsA) or detoxification (ΔHrtA). The growth curves of each strain were analyzed after constant exposure to gallium MPs, that is, Ga 3+ MPIX or Ga 3+ PPIX (Figure 8), in an iron-rich environment. We compared several growth parameters: the maximum specific growth rate (μ max ) and duplication time (T d ) for the exponential phase and, for the stationary phase, the time to reach it and the maximum density (A max ), in each mutant after treatment with gallium compounds (Supplementary Table S4). Both Ga 3+ PPIX and Ga 3+ MPIX reduced the μ max of each strain studied in a similar manner; that is, the highest inhibition was observed for ΔHrtA and ΔIsdD.
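The exponential-phase parameters compared here are linked by standard growth kinetics: the duplication time follows from the maximum specific growth rate as T d = ln 2 / μ max. A sketch of that relation (the optical-density values are illustrative, not data from this study):

```python
import math

def mu_max(od_start, od_end, t_start, t_end):
    """Maximum specific growth rate from two optical-density readings
    taken within the exponential phase: mu = ln(OD2/OD1) / (t2 - t1)."""
    return math.log(od_end / od_start) / (t_end - t_start)

def duplication_time(mu):
    """Duplication (doubling) time T_d = ln(2) / mu, in the same time
    units used to compute mu."""
    return math.log(2) / mu

# Illustrative: OD rising from 0.1 to 0.4 over 60 min gives T_d of ~30 min.
mu = mu_max(0.1, 0.4, 0, 60)
print(duplication_time(mu))
```

A treatment that lowers μ max therefore lengthens T d proportionally, which is why the two parameters move together in Table S4.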
Each treatment was compared to the control, that is, untreated cells of each mutant (taken as 100%). Despite the gene deletions, untreated mutants achieved a growth rate similar to that of untreated WT. Under exposure to Ga 3+ PPIX, the growth of each strain at the end of the exponential phase (after 270 min of analysis, taken as the cutoff point at which exponential growth was inhibited) was estimated to be 72−77% of the growth of the respective untreated controls, which indicated the higher toxicity of this compound. However, the main difference between the mutants' growth was observed under Ga 3+ MPIX exposure.

In our previous studies on aPDI of the Newman WT strain and its isogenic mutants, ΔHrtA was the most susceptible to PPIX-mediated aPDI. 48 Here, we were interested in whether differences among haem transport mutants could also be observed in the sensitivity to Ga 3+ MPIX-based aPDI. Therefore, we performed aPDI against S. aureus Newman and its isogenic mutants (10 μM Ga 3+ MPIX, 19.8−38.16 J/cm 2 ) in the presence (Figure 9A) and absence of iron (Figure 9B). We increased the dose of green light to 38.16 J/cm 2 to observe more pronounced differences between phenotypes. In the presence of iron, the maximal bacterial reduction in CFU/mL was observed as follows: 3.78 − ΔHrtA, 3.15 − ΔIsdD, 2.5 − ΔHtsA, and 3.15 log 10 units for WT. In the absence of iron, the maximal reduction in CFU/mL was estimated to be 3.5 − ΔHrtA, 2.75 − ΔIsdD, 1.44 − ΔHtsA, and 1.5 log 10 units for WT. Interestingly, the absence of Fe 3+ in the medium did not significantly increase the efficiency of aPDI. Under both cultivation conditions, the ΔHrtA mutant presented the most PDI-sensitive phenotype, whereas the ΔHtsA mutant was the most resistant to aPDI treatment among all phenotypes. Taking these results together, impairment of the HrtA ATPase in the HrtAB detoxification system confers higher sensitivity to Ga 3+ MPIX-based aPDI.
To understand the mechanism of the superior efficiency of aPDI in the ΔHrtA mutant, the accumulation of Ga 3+ MPIX was investigated in each phenotype. Briefly, bacterial cells were cultivated to the stationary phase of growth in media with different iron content, diluted, and then incubated in the dark with the PS for 2 h at 37°C with shaking. Bacterial lysates were then prepared and measured as described in the Experimental Section. In the presence of iron (+Fe), we did not observe significant differences in accumulation between the studied phenotypes (Figure 10). Iron starvation (−Fe) increased PS uptake in comparison to iron-containing media for each phenotype, except for ΔIsdD, in which the accumulation remained at the same level. Ga 3+ MPIX accumulation in ΔHrtA was 2-fold higher than that in the WT strain in the absence of iron. Additionally, the uptake of the PS was decreased by approximately 50% for ΔHtsA and 90% for ΔIsdD compared to the WT strain.
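The uptake comparisons above (a ~2-fold increase for ΔHrtA, ~50% and ~90% decreases for ΔHtsA and ΔIsdD versus WT) are simple relative changes in PS molecules per cell. A sketch with hypothetical counts (not the measured values):

```python
def uptake_change_percent(mutant_molecules, wt_molecules):
    """Relative change in photosensitizer uptake of a mutant versus the
    wild type, in percent; negative values mean lower uptake."""
    return 100.0 * (mutant_molecules - wt_molecules) / wt_molecules

# Hypothetical molecules-per-cell counts:
print(uptake_change_percent(2.0e6, 4.0e6))  # -50.0 (half the WT uptake)
print(uptake_change_percent(8.0e6, 4.0e6))  # 100.0 (2-fold higher than WT)
```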
The use of fluorescence microscopy did not give unequivocal results; a stronger fluorescence signal was observed for ΔHrtA and WT under both (+Fe) and (−Fe) conditions (Figures S5 and S6). In contrast, ΔIsdD and ΔHtsA showed a stronger fluorescence signal under (−Fe) than under (+Fe) conditions (Figure S7). This indicates that the presence of Fe 3+ influences Ga 3+ MPIX uptake rather than its removal from the cell.

Ga 3+ MPIX Does Not Promote Extensive and Prolonged Cytotoxicity or Phototoxicity against Human Keratinocytes. PS safety toward eukaryotic cells is a crucial factor for optimization and further application of photoinactivation protocols. We examined the phototoxicity and cytotoxicity of Ga 3+ MPIX against human keratinocytes. Various concentrations of Ga 3+ MPIX were used for treatment under both light and dark conditions with two wash steps (Figure 11) or one wash step (Figure S8). The highest dose of green light (31.8 J/cm 2 ) was selected, which corresponds to the bactericidal effect toward several S. aureus strains. Additionally, we increased the concentration of the PS up to 100 μM to ensure its high excess. Based on the MTT assay results in Figure 11A, the viability of cells was affected neither by the presence of Ga 3+ MPIX alone (94.73 and 93.7% survival at 1 and 10 μM) nor by its presence under green light irradiation (92.87 and 86.9% survival at 1 or 10 μM). Cell survival of ∼80% is considered to represent acceptable, modest toxicity to eukaryotic cells. 17 Increasing the compound concentration to 100 μM under green light significantly increased phototoxicity toward HaCaT cells (36.57% cell survival) in comparison to cells exposed only to light without the PS. In the dark, 100 μM Ga 3+ MPIX had no significant impact on cell viability (estimated 92.55% survival). Ga 3+ MPIX was thus relatively safe for HaCaT cells under 31.8 J/cm 2 green light irradiation at concentrations up to 10 μM.
However, cytotoxicity and phototoxicity were more pronounced when the cells were washed once rather than twice (Supplementary Figure S8A). After a single wash, 100 μM Ga 3+ MPIX (31.8 J/cm 2 ) almost completely eliminated viable HaCaT cells, while approximately 50% of the cells survived treatment with 10 μM Ga 3+ MPIX (31.8 J/cm 2 ). Light-independent treatment decreased viability to approximately 80% (100 μM), whereas after the twice-washing procedure this effect was negligible.
However, the MTT assay has some methodological limitations, such as measuring viability only at a specific time point; additionally, the cell proliferation rate and morphology are not taken into consideration. RTCA on E-plates is a method based on electrographic detection of cell number, morphology, adhesion, and proliferation rate under experimental conditions. Based on real-time cell growth dynamic curves (Figure 11B), we observed a slower proliferation rate of HaCaT cells after treatment with 1 or 10 μM Ga 3+ MPIX under either dark or illumination conditions. Untreated cells resumed growth and reached the plateau phase at approximately 60 h. Ga 3+ MPIX dark-treated cells reached the plateau phase at approximately 85 h at 1 μM and 100 h at 10 μM. After photodynamic treatment (1 μM Ga 3+ MPIX, 31.8 J/cm 2 ), HaCaT cells reached the plateau phase at the same time as dark-treated cells at the same concentration of Ga 3+ MPIX. A greater difference between light-exposed and dark-kept cells was observed at 10 μM Ga 3+ MPIX, where the cell recovery phase was reduced and the plateau phase was detected only after 120 h. The cell proliferation rate and recovery were lowered in a concentration-dependent manner. The illuminated PS inhibited HaCaT growth dynamics more strongly than the PS in the dark. During aPDI treatment, a fraction of the cells was damaged, but in most aPDI-treated cells the damage was repaired, and the cells continued to grow and divide. The growth dynamics of HaCaT cells after aPDI with one wash step instead of two (Figure S8B) were similar, showing the highest phototoxicity for 10 μM Ga 3+ MPIX. Thus, we conclude that Ga 3+ MPIX, alone or under photodynamic treatment, does not promote extensive and prolonged cytotoxicity or phototoxicity against human keratinocytes.
■ DISCUSSION
Targeted PS recognition is a novel trend in antimicrobial aPDI that developed from its origins in cancer cell phototreatment. 51 Based on the Trojan Horse strategy of action, gallium MPs could be potent antimicrobial agents: recognized by bacterial cells in the same manner as the natural ligand haem, gallium compounds might interrupt haem/iron metabolism. 23 Many studies have confirmed the activity of gallium MPs toward several ESKAPE pathogens, including antimicrobial and antibiofilm action. 25,26,28,29 Ga 3+ MPIX gave the same MIC value for S. aureus (1.6 μg/mL) as Ga 3+ PPIX. 23 Interestingly, porphyrins without metal ions (i.e., PPIX and MPIX) were not efficient inhibitors of bacterial growth. 23 The rationale for choosing Ga 3+ MPIX for our research stems from studies on metalloporphyrins which showed effective induction of HrtAB, a molecular haem transport system. Previous studies on metalloporphyrin toxicity and the molecular mechanism underlying this process have shown that the HssRS-regulated haem detoxification system is activated quite broadly by metalloporphyrins, while the HrtAB efflux pump responds only to certain metalloporphyrins, in particular Ga 3+ PPIX or Mn 3+ PPIX. 52 These observations prompted us to investigate the cellular accumulation of Ga 3+ MPIX and its efficacy in aPDI upon excitation in a spectral region not commonly used in research, that is, green light. In particular, we were interested in the functional consequences of changing the vinyl groups (protoporphyrin IX) to ethyl groups (mesoporphyrin IX) in the porphyrin macrocycle, since in the literature to date modifications of the side chains have received far less attention than changes inside the porphyrin ring (i.e., the central metal 34,53 ). We hypothesized that despite this difference, Ga 3+ MPIX might be recognized by haem receptors and, consequently, released gallium ions would induce dark toxicity similar to Ga 3+ PPIX. The dependence of the accumulation of Ga 3+ MPIX in bacterial cells on iron and the protective role of haem in aPDI indicate competition for binding with haem receptors and show that specific haem transport systems into or out of the cell may play a role in the photoinactivation process. This is because bacteria do not distinguish some MPs from their natural ligand, haem, and use them in their natural metabolic processes, leading to inhibition of cell growth and death. Previously published data showed that only some MPs can do this, for example, Ga 3+ PPIX 34 and Mn 3+ PPIX. 52 Here, we showed that Ga 3+ MPIX can also behave in a similar manner. Furthermore, the molecular structure of the MPs to be used as substrates for targeted delivery to bacterial cells is of primary importance. Previously published data indicated that the ionizable (carboxyl) groups of the porphyrin ring are important for interactions with haem uptake systems; replacing them with esters that cannot be ionized resulted in the loss of selective uptake by haem uptake systems in favor of nonspecific uptake. 34 In our experiments, the transition from the more hydrophobic vinyl groups (in Ga 3+ PPIX) to the less hydrophobic ethyl groups (in Ga 3+ MPIX) in the porphyrin macrocycle resulted in several differences in behavior, including solubility and absorption as well as accumulation and photoinactivation. The two molecules Ga 3+ MPIX and Ga 3+ PPIX are similar in terms of their general structure (the only difference being vinyl vs ethyl groups) and production of singlet oxygen (in aqueous solution), yet they differ in light-dependent (significantly) and light-independent (slightly) activity. The observed difference between the activity of both compounds without light is most likely due to the more effective accumulation of Ga 3+ PPIX than Ga 3+ MPIX (Figure 7), which results in a greater reduction in the growth rate of bacterial cells (Figures 3 and 8).
The chemical modification of the porphyrin macrocycle seems to alter the potency of these compounds to regulate haem metabolism in vivo. The two molecules may react differently with their molecular targets in bacterial cells, which results in the observed differences in the light-independent process. The light-dependent action of the two compounds, however, is more related to their biophysical properties. First, the differences in absorption spectra, although slight, are nevertheless noticeable. For Ga 3+ PPIX, the absorption maxima in the Q band region are λ max = 541 nm and λ max = 580 nm, while the analogous Ga 3+ MPIX maxima are λ max = 532 and λ max = 570 nm, shifted toward shorter wavelengths. As a result, they better match the emission spectrum of the LED lights we used. The second and more important element explaining the different effectiveness of both compounds is their solubility in aqueous solutions. As is well known, porphyrin compounds do not readily dissolve in aqueous solvents. In our experiments, Ga 3+ MPIX dissolved much better in aqueous solution (0.1 M NaOH titrated to PBS) than Ga 3+ PPIX. Ga 3+ PPIX in 0.1 M NaOH titrated to PBS generated a double peak, most likely reflecting the presence of oligomeric forms in the solution (Figure S1). The addition of 50% DMSO shifted the equilibrium between monomeric and oligomeric forms toward the form giving a higher absorption signal (Figure S1), most likely the monomer. 54 Thus, the difference in the structure of both compounds (ethyl vs vinyl groups) has a significant impact on their solubility in aqueous solution. This feature is extremely important from a clinical point of view. From the available literature data, it appears that the difference between the activities of protoporphyrin and mesoporphyrin derivatives was previously not as significant (estimated as at most 1 log 10 ) 53,55 as that observed in our experimental setup.
It is worth noting, however, that the compounds tested elsewhere were dissolved in solutions with the addition of DMSO, which strongly affects the solubility of protoporphyrin derivatives. Because of the potential clinical use of aPDI, photosensitizing compounds should be dissolved in aqueous solutions, avoiding organic solvents. Ga 3+ MPIX meets this requirement, whereas Ga 3+ PPIX does not.
The recent study of Morales-de-Echegaray et al. revealed the dual functionality of Ga 3+ PPIX. Beyond gallium toxicity, these compounds might also act as PSs in aPDI upon blue light irradiation (405 nm, 140 mW/cm 2 ), with a maximal staphylococcal reduction of >6 log 10 units of bacterial viability. 34 The photodestruction was characterized as rapid (after 10 s of irradiation), and the authors suggested that high-affinity surface hemin receptors such as the Isd system might play a role in the process. 34 Moreover, the Skaar group recently showed that an anti-Isd monoclonal antibody together with aPDI proved effective against drug-resistant S. aureus in a murine model of soft tissue infection. 56 Based on the literature, the lack of iron upregulates the gene expression of haem receptors of the Isd system on the bacterial surface. 23 This may explain the higher aPDI efficiency, with Ga 3+ MPIX recognized by Isd or Hts similarly to haem. We confirmed this by studying aPDI in an Fe-dependent manner (Figure 5), and we observed the protective effect of haem on the process of aPDI (Figure 6) and on the accumulation of Ga 3+ MPIX (Figure 7). In our study, the impairment of haem surface receptors, such as IsdD or HtsA, was manifested by a reduction in Ga 3+ MPIX accumulation as measured by two fluorescence methods. Based on these results, we hypothesized that despite the ethyl instead of vinyl groups in the side chains of the porphyrin structure, Ga 3+ MPIX is recognized by haem uptake receptors (mainly Isd) and is a competitor of haem. Impairment in the HrtA component of the efflux pump potentiated the effect of aPDI, and this effect was the most visible among all mutants tested, although many factors could influence its efficacy. 57 We previously reported that the increased aPDI efficacy in the ΔHrtA mutant can also be observed because of physical changes in the membrane composition rather than the lack of the functional protein. 48 The lipid content of the bacterial membrane might also contribute to the observed result of Ga 3+ MPIX-mediated aPDI. 57 However, the substrate of the HrtAB efflux pump and the molecular mechanism of detoxification of gallium MPs are currently unknown, and further studies in this area should be encouraged.
Most studies on the antimicrobial activity of Ga 3+ PPIX were conducted in a light-independent manner. 23,27−29 Light-dependent action was demonstrated only for blue light with excitation in the Soret band (∼405 nm). 34,35 In this study, we propose the excitation of Ga 3+ MPIX within one of the Q-bands using green light. Green light ensures deeper tissue penetration than blue light while preserving sufficient energy to activate the compound. Moreover, the green LED lamp (λ max = 522 nm) exhibits a low level of light toxicity toward bacterial cells themselves, as demonstrated in our current study. In light-only treatments, there was no pronounced excitation of endogenous porphyrins, so the aPDI effect was related only to exogenously applied PSs. In the case of Ga 3+ MPIX-mediated aPDI, we observed a maximal reduction in bacterial viability in the range of 3−6 log 10 (2−5 log 10 after a wash). This indicates that Ga 3+ MPIX has good efficiency against S. aureus compared to Ga 3+ PPIX excited with shallow-penetrating blue light. 53,55 Because of the use of green light, it is potentially possible to photoinactivate bacteria residing in deeper layers of the skin than blue light can reach. Verifying such an approach, however, would require additional research on more complex in vivo models. Iron starvation alters bacterial metabolism through changes in the expression of several staphylococcal genes involved in iron acquisition, glycolysis, and virulence via a Fur-mediated mechanism. These changes are related to distinct colony phenotypes known as SCVs. 58 Based on previous research, MPs such as Ga 3+ PPIX induced this phenotype by inhibiting respiration or inducing oxidative stress, making the variants indistinguishable from genetic SCVs. 52 The SCV phenotype appears to be responsible for chronic and recurrent infections and is also highly resistant to antibiotics. 59 We observed the SCV phenotype during 16−20 h of light-independent, constant cultivation of bacteria with Ga 3+ MPIX (Figure S4A−C). At the same time, it is worth noting that exposure to Ga 3+ MPIX sensitized the SCVs to light and, as a result, enabled eradication of the microbial cells upon green light irradiation (Figure S4D−F).
Red light is usually employed in photodynamic applications of porphyrins because of its depth of tissue penetration (dermis layers). The use of green light to treat superficial skin lesions seems particularly attractive: because green light does not penetrate as deeply into the skin as red light, it causes much less pain during irradiation in patients. 60 It penetrates only the epidermis without irritating the nerve fibers. Ga 3+ MPIX can be efficiently activated by green light without causing extensive and prolonged phototoxicity against HaCaT cells, although under our experimental conditions the observed phototoxicity appears higher than that reported by others for Ga 3+ PPIX, where only minor phototoxicity was observed after blue light activation. 34 Ga 3+ MPIX under photodynamic treatment does not promote extensive phototoxicity against human keratinocytes; however, the cells exhibit a slower proliferation rate than untreated cells. Cells with moderate or no photodamage resume growth and divide, indicating that there is room here for a "therapeutic window." The observed growth delay was not prolonged. These in vitro experiments support the safety of Ga 3+ MPIX-mediated aPDI for further studies on ex vivo models (e.g., porcine skin) or in vivo models (e.g., mouse models).
Research on photosensitizing compounds using natural bacterial cell transport systems is an extremely interesting path in the development of targeted PDI. The Trojan Horse strategy based on haem analogues, proposed years ago, 23 shows that discrete changes in the structure of PS molecules can significantly affect its properties and enable further development of this strategy against S. aureus infections.
■ CONCLUSIONS
In conclusion, Ga 3+ MPIX acts in two ways: independent of light (by blocking iron metabolism) or dependent on light (photodynamic action). This two-way mechanism of action provides very good protection against the selection of S. aureus mutants resistant to photodestruction. This study demonstrated that green light excitation of Ga 3+ MPIX in the Q band absorption area resulted in eradication of bacteria (reduction >5 log 10 CFU/mL) while maintaining relative safety for the eukaryotic cells tested. We have demonstrated that Ga 3+ MPIX-mediated aPDI exhibits Fe-dependent efficiency and that haem has a protective effect, indicating the importance of specific haem transport systems in the aPDI system under study. We have shown that Ga 3+ MPIX, with ethyl groups in the porphyrin macrocycle instead of the vinyl groups present in Ga 3+ PPIX, can be recognized by the haem uptake machinery, preferably by Isd. The mutant impaired in the HrtA efflux pump turned out to be the most sensitive to aPDI with Ga 3+ MPIX. This study showed that despite the structural changes around the porphyrin ring, Ga 3+ MPIX was able to sustain its dual functionality. In addition, these changes can improve other properties of the compound, such as a higher efficiency of photodynamic action.

■ AUTHOR CONTRIBUTIONS
the confocal microscopy images, M.K. performed screening of light-dependent and -independent action on several S. aureus strains, G.S. and T.S. performed and wrote the section on direct detection of ROS generation and critically reviewed the manuscript, and J.N. was involved in the coordination, conception, and design of the study and wrote the manuscript.
Trapped in a Breathless Condition-A Case Report and Discussion of a Malignant Pleural Effusion and Trapped Lung
ClinMed International Library. Citation: Lal A, Desandre PL, Quest TE (2015) Trapped in a Breathless Condition - A Case Report and Discussion of a Malignant Pleural Effusion and Trapped Lung. Clin Med Rev Case Rep 2:073. Received: September 28, 2015; Accepted: November 28, 2015; Published: December 02, 2015. Copyright: © 2015 Lal A, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Introduction
Malignant pleural effusions (MPE) are a common complication of underlying malignancies, frequently requiring management by specialty practitioners. Although the most common cause of malignant pleural effusions is bronchogenic carcinoma, with a frequency of 7-23%, MPE is also seen with other malignancies [1]. Unfortunately, individuals suffering from MPE have a poor prognosis, in the realm of 3-18 months depending upon functional status and primary malignancy [2,3]. Therefore, with this limited time, relieving symptoms associated with MPE such as dyspnea, chest pain and cough to enhance quality of life is a major goal. This case represents the challenges associated with the treatment of malignant pleural effusions, and how to approach an "untreatable" effusion. We will present a case report followed by a discussion of the identification of MPE, the use of manometry, trapped lung and treatment options, with a focus on fibrinolytics and indwelling pleural catheters.
Case Report
A 26-year-old male with no significant past medical history presented in March of 2015 with hematuria and was found to have bilateral renal masses: large mixed cystic/solid lesions on computerized tomography. He was diagnosed with clear-cell renal cell carcinoma with metastasis to the lymph nodes and pulmonary nodules suspicious for metastasis. Shortly after diagnosis, he completed 3 cycles of chemotherapy (sunitinib) before developing a pleural effusion requiring thoracentesis for symptom relief as an outpatient procedure. With progression of disease, his chemotherapy was switched to temsirolimus; however, his condition continued to deteriorate. After multiple emergency room visits, he ultimately required hospitalization for pain management and treatment of his pleural effusion, at which time he had a Karnofsky Performance Status of 70% (he was able to care for himself but was unable to do any active work).
The day after his admission, he underwent his second thoracentesis, yielding 1000 ml of straw colored exudative pleural fluid. The pleural rind (thickening) can also be seen via direct visualization with air contrast computed tomography or video-assisted thoracoscopy [9]. In the aforementioned case, a diagnosis of trapped lung was made after the second thoracentesis, which was complicated by severe chest pain resulting in an aborted procedure. In cases of trapped lung, patients are less likely to benefit from pleurodesis, a procedure in which adhesions are formed between the visceral and parietal pleura to prevent the reaccumulation of fluid within the pleural space [10]. Similar to malignant pleural effusions, trapped lung will result in shortness of breath. Dyspnea due to trapped lung can be due to many physiological and pathological responses (Table 2).
Treatment options
There are a variety of treatment options for malignant pleural effusions, including repeated thoracentesis, pleurodesis, pleuroperitoneal shunt, long-term thoracostomy tube, implantable pleural catheter, and video-assisted thoracoscopic surgery (VATS)/decortication. Symptoms and performance status of the patient, tumor type, and degree of lung re-expansion after removal of fluid are some factors to take into consideration prior to choosing a treatment option.
Repeated thoracentesis
Repeated thoracentesis was the initial treatment plan for the gentleman discussed above, as it provides transient symptom relief. Although he was hospitalized during his thoracentesis, this can be offered as an outpatient procedure. Perhaps if manometry had been used in the first thoracentesis and the malignant pleural effusion diagnosed early, the patient could have benefitted from pleurodesis prior to the development of complications.
Repeated thoracentesis provides immediate symptom relief but rapid re-accumulation can occur, requiring multiple visits.Practicioners should be cautious when removing more than 1.5 L during any single drainage.It is offered to patients with a short expected survival with a poor prognosis as it is the least invasive of the procedures available.Complications include infection, pneumothorax, bleeding and trapped lung [2,12].
Pleurodesis
Pleurodesis, as defined above, can be achieved with chemically or or with radiographic evidence of a pleural tumor, however is most commonly diagnosed following a diagnostic thoracentesis, definitively with cytology.(Table 1) [4] There are a few criteria included to classify an effusion as malignant; exudative (with a fluid protein > 3 g/dl, pleural fluid-serum protein ratio > 0.5, lactate dehydrogenase level > 200 IU and pleural fluid-serum LDH ratio > 0.6), although a minority of malignant effusions can be transudative.Cytology obtained from pleural fluid can also assist in diagnosing an effusion as malignant.Malignant Pleural effusions occur as a consequence of inflammation, vascular leakage and enhanced angiogenesis.This complication can arise in lung adenocarcinoma, malignant pleural mesothelioma, lymphoma, breast, colon, gastric and ovarian adenocarcinoma [5,6].
Trapped lung
Another complication of metastatic disease involving the pleura is a phenomenon known as trapped lung.This occurs when a dense layer of malignant tissue encases the visceral pleura resulting in incomplete lung re-expansion after pleural fluid drainage.As the peel restricts expansion of the lung parenchyma, a high negative pleural pressure develops with in the pleural space.This will result in increased pleural fluid formation and a chronic pleural effusion.
Trapped lung can be diagnosed via manometry during thoracentesis; pleural space elastance (change in pleural pressure/ amount of pleural fluid removed) more than 14.5 cm h20/L.Pleural pressure is the result of inward and outward forces changing during inspiration and expiration.During inspiration, there is expansion of the lungs due to increase in the negative pleural pressure of the thoracic cavity.Any pathology affecting lung expansion will result in abnormalities of pleural pressure during respiration.Currently, manometry is not routinely used during thoracentesis; perhaps, as it is time consuming, requires additional training, and can lead to inappropriate decisions if not coupled with clinical presentations.However, it can help guide management and identify pathophysiology of pleural effusion.As pleural elastance can change throughout the procedure, especially with large-volume thoracentesis, it can be beneficial to calculate it during thoracentesis for identification of unexpandable lung.It can also be used as a predictor of successful pleurodesis, in measuring the absolute closing pressure and overall elastance.The higher the elastance, the probability of the pleural layers being pulled apart is increased which can interfere with pleurodesis.A high index of suspicion should be maintained in the diagnosis and management of trapped lung in the setting of malignant pleural effusions so to prevent repeated thoracentesis; which, will result in more problems and complications such as chest pain, formation of loculations and recurrent effusions [8].
In addition to pleural pressure changes, measured by manometry, mechanically.Chemical pleurodesis is preferred over mechanical as it is better tolerated and minimally invasive.Chemical Pleurodesis is achieved using sclerosing agents, dissolved in 50 ml of normal saline introduced via small bore intercostal catheter.These agents include but are not limited tobleomycin (60,000 units), talc (5 gm of asbestos free, sterilized large particle talc), doxycycline (500 mg), iodine and quinacrine.The ideal agent should have a high molecular weight and chemical polarity, rapid systemic clearance, well tolerated, with a steep dose-response curve.The choice of which sclerosing agent to use can vary, but can be determined by efficacy, success rate, accessibility, ease of administration, safety and cost.It is recommended to use lidocaine 3 mg/kg intrapleurally prior to administering sclerosing agents into pleural space as the procedure can be quite painful.
It can be performed via a surgical approach with thoracoscopy or video-assisted thoracic surgery, via medical thoracoscopy, or via a small-bore chest tube at the bedside. The introduction of these agents into the pleural space results in inflammation and fibrosis, thus obliterating the pleural space [4].
Unfortunately, given that the case above was complicated by trapped lung, pleurodesis was not the management of choice, as it would not allow for re-expansion. Other contraindications include airway obstruction secondary to endobronchial tumors, multiple pleural loculations, and extensive intrapleural tumors. Complications include infection, empyema, fever, pain, hypotension, acute respiratory distress syndrome, and acute pneumonitis [4,9,13,14].
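The quantitative cutoffs cited earlier in this discussion — the exudative-fluid thresholds and the pleural elastance criterion for unexpandable lung — can be sketched as small helper functions. This is an illustrative sketch only, not a clinical tool: the function names are ours, the values are those quoted in the text, and whether one or all of the exudative criteria must be met is not fully specified there (this sketch follows the common "any criterion" convention).

```python
def is_exudative(fluid_protein, serum_protein, fluid_ldh, serum_ldh):
    """Classify pleural fluid as exudative using the thresholds
    quoted in the text: fluid protein > 3 g/dl, fluid/serum protein
    ratio > 0.5, fluid LDH > 200 IU, fluid/serum LDH ratio > 0.6.
    Any single criterion met -> exudative (assumed convention)."""
    return (fluid_protein > 3.0
            or fluid_protein / serum_protein > 0.5
            or fluid_ldh > 200
            or fluid_ldh / serum_ldh > 0.6)


def pleural_elastance(pressure_change_cmh2o, volume_removed_l):
    """Elastance = change in pleural pressure / fluid removed (cm H2O/L)."""
    return pressure_change_cmh2o / volume_removed_l


def suggests_unexpandable_lung(elastance_cmh2o_per_l):
    """The text cites elastance > 14.5 cm H2O/L as suggestive of trapped lung."""
    return elastance_cmh2o_per_l > 14.5


# Example: a 29 cm H2O pressure drop over 2 L removed gives an
# elastance of 14.5 cm H2O/L, right at the quoted cutoff.
print(pleural_elastance(29.0, 2.0))  # 14.5
```

A drop of 20 cm H2O after removing only 1 L (elastance 20 cm H2O/L) would exceed the cutoff and raise suspicion of an unexpandable lung.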
Pleuroperitoneal shunt
A pleuroperitoneal shunt can transport up to 1.5 L of pleural fluid into the abdominal cavity with each compression. It was first proposed in 1982 by Weese and Schouten. It can be placed under local anesthesia and has the advantage of early hospital discharge [15]. With advances in the field, however, pleuroperitoneal shunts have fallen out of favor and are now rarely used.
Implantable pleural catheter
Long-term thoracostomy drainage has fallen out of favor with the advent of the implantable pleural catheter (IPC), manufactured and trademarked as PleurX, which has been approved by the Food and Drug Administration since the late 1990s [16]. This is a small-bore, 66 cm long, 15 F silicone rubber catheter with fenestrations along the distal 24 cm, placed within the pleural space. This would have been the treatment option of choice for the above-mentioned case, if not for the development of dense loculations found on ultrasound prior to the procedure. This reinforces that the timely diagnosis and management of malignant pleural effusions are essential to the successful palliation of symptoms such as dyspnea. Implantable pleural catheters can be advantageous for malignant pleural effusions, as well as in select cases for symptomatic treatment of trapped lung, in that they can aid partial lung expansion and therefore improve symptoms.
Among the many advantages of implantable pleural catheters (Table 3), the ability to intermittently drain the pleural fluid to alleviate symptoms and the avoidance of hospitalization are perhaps the most meaningful for the patient [10,17-19].
Video Assisted Thoracoscopic Surgery (VATS)
VATS with decortication is an inpatient procedure that can be offered to a select group of patients. Given the invasiveness of the procedure, which requires general anesthesia and single-lung ventilation along with post-operative chest tubes, patients must have a relatively good prognosis to be offered this treatment option. Although it is the management of choice in a symptomatic patient with trapped lung, the setting of malignant pleural effusion and the associated poor prognosis were a relative contraindication to a surgical approach in the aforementioned case.
Fibrinolytics
In this case, given the development of loculations due to recurrent thoracentesis resulting in fibrin strands and multiseptation, there were discussions of intrapleural fibrinolytic therapy. Loculations make drainage, and therefore palliation of the symptoms associated with effusions, difficult to achieve. Fibrinolytics such as tissue plasminogen activator (tPA), urokinase, or streptokinase can be introduced into the pleural space to break down fibrin adhesions and promote drainage. Although not a feasible option for our patient, given his goals of care and the multiple dense loculations he developed, lytics can be given along with chest tubes or tunneled pleural catheters. Given the risk of allergic reaction, streptokinase is not favored. Usually, 10 mg of tPA is mixed with 50 ml of normal saline, although the dose can range from 2 to 25 mg. This is instilled into the chest tube, which is clamped for 2 hours and then unclamped so the fluid drains into a drainage unit. This process is repeated twice a day for 3 days, followed by repeat imaging. If repeat imaging shows resolution of the loculations, pleurodesis may be an option [2,4].

Some cases will arise in which a patient is not a candidate for any of the aforementioned interventions. Due to the development of trapped lung and multiple dense loculations, along with a poor prognosis and progression of disease, this patient was no longer a candidate for a PleurX indwelling catheter, a chest tube with lytics, or pleurodesis. In those situations, what is the best management option? All of these interventions are, in essence, palliative. Patients should be informed of their prognosis and options early in the disease process. Advance directives and goals of care should be discussed by all involved, but are commonly addressed by the interdisciplinary palliative care team. Services offered by palliative care in malignant pleural effusion can include emotional and spiritual support with chaplaincy, along with symptom management such as relief of dyspnea and/or chest pain. Opioids can be used to minimize symptoms of dyspnea by decreasing the work of breathing and thereby decreasing the anxiety associated with breathlessness. Palliative care can form a bridge to hospice care for these patients, focusing on the quality of the life left to live [20].
In hindsight, it is easy to say that the patient in this case should have been offered pleurodesis after the first thoracentesis, or had an indwelling catheter placed after the second. Regardless of the palliative treatment of the underlying effusion, patients with malignant pleural effusions are known to have a poor prognosis. We must focus on providing treatments that enhance patients' quality of life when such time is limited. We learn through experience; this case underscores the crucial timing of managing complications such as malignant pleural effusion and trapped lung, and the importance of providing symptomatic relief consistent with improving quality of life.
|
Interaction of technology entrepreneurship and higher education as a factor in the innovative development of the region
The purpose of the study is to substantiate the strengthening role of scientific and educational institutions as centers for the formation of technological, managerial, and innovative competencies within an innovation ecosystem, understood as the environment in which technological entrepreneurship functions. In the course of the work, it was revealed that the motivation of the entrepreneurial sector to fund scientific research at educational and scientific institutions should come to the fore. It has been substantiated that close interaction among business, educational, and government structures is the key to increasing investment and innovation activity in the region. The theoretical significance of the article lies in substantiating the need to strengthen the role of educational institutions as such centers of competence in the formation of a regional innovation ecosystem.
Introduction
The end of the 20th century was characterized by the emergence of a new stage in the development of the leading countries of the world, a feature of which was the transition to the fifth technological order. Its distinctive features are the active use of information and communication technologies, bio- and nanotechnologies, genetic engineering, and renewable energy sources, which is naturally reflected in the priority areas of science and technology development. Today the world economy is on the verge of the sixth technological order, the outlines of which are beginning to form in the developed countries of the world, primarily the USA, Japan, and China; the development and application of science-intensive, so-called "smart" technologies is becoming the system-forming feature of the new technological order [1].
In turn, the transition to a knowledge-based economy aggravates a number of problems of the world economy, among which it is necessary first of all to highlight the sustainable development of economic systems, the development and implementation of social innovations, and the adoption of renewable energy sources. In this regard, demand is growing for information- and knowledge-rich technologies and goods to meet expanding material and social needs. All this determines the modern vector of development of human civilization for the foreseeable future.
Materials and methods
The main content of the study is an analysis of the activities of Kalmyk State University, which received the status of a flagship regional university; its example shows what role educational structures can play in the formation of the region's innovation ecosystem as an environment for the functioning of technological entrepreneurship. It is substantiated that it is the state that must create conditions in which entrepreneurship becomes not only the customer for qualified personnel with innovative, managerial, and technological competencies, but also the customer for scientific research carried out by educational institutions.
Results and Discussion
Structural change in the determinants of the external and internal environment of economic systems as a result of scientific and technological progress leads to the dominance of the technosphere, understood as the set of artificial systems created by man [2]. The result of this development is the emergence of a qualitatively new type of business founder, with fundamentally new technological, managerial, and innovative competencies. It is economic entities with such competencies that become representatives of technological entrepreneurship [3].
One of the reasons for the rather low level of innovative activity of economic entities in the Russian economy is the limited spread of technological entrepreneurship. At the same time, according to some researchers' estimates, most technology startups go bankrupt within their first five years, and in 80% of cases the reasons for failure are problems of marketing and management, as well as asset capitalization. Proceeding from the fact that institutions of science and education are the centers for the formation of technological, managerial, and innovative competencies, it becomes necessary to clarify their role in the development of technological entrepreneurship.
There is generally no connection in the public mind between successful technological companies and higher education institutions. However, universities are traditionally centers for generating innovation: it is there that new knowledge emerges and is transferred, and it is in universities that the greatest concentration of young people is observed, generating demand for new technological, managerial, and innovative competencies [4].
It is fair to note that research carried out in universities can be divided into the following types. First, there is fundamental research, deep in nature, which is the source of fundamentally new theories and directions in science and technology and determines the directions of scientific and technological progress. This type of research is extremely costly, requires unique equipment and highly qualified personnel, and does not generate profit. As a rule, only highly developed countries, which can afford to finance basic science through substantial budgetary expenditures, can sustain it. Large-scale fundamental research typically gives rise to a number of narrower applied research areas with broader opportunities for practical application. Thus applied science is formed, whose results are easier to commercialize and are designed to create new products or services, or to give products and services new properties required by the consumer. Naturally, applied development is much shorter and cheaper than the corresponding fundamental research [5]. It is therefore fundamental developments that form the basis for applied science, whose results are reflected in various types of technological entrepreneurship.

SHS Web of Conferences 89, 07004 (2020) Conf-Corp 2020 https://doi.org/10.1051/shsconf/20208907004

Regional economic systems, as a consequence of the development of technological entrepreneurship on their territory, receive a list of technologies, products, and services characterized by a high level of competitiveness in the market. This accelerates the rate of economic growth in the region, changes the structure of the economy, and increases the investment attractiveness of the territory.
The experience of developed countries confirms that it is the level of development of technological entrepreneurship that is one of the most important indicators of the level of regional development [6]. Consequently, it is necessary to strengthen the role of universities in the development of technological entrepreneurship, and the state should create not only favorable conditions for innovative activities of universities, but also conditions for the commercialization of these innovations [7].
In modern conditions, the creation of a system of flagship universities in Russia presupposes positioning them as the cores of regional innovation ecosystems [8]. For example, in the Republic of Kalmykia, the territory's only institution of education and science is becoming one of the most important drivers of the region's socio-economic development. At the same time, regional universities in these conditions face not only the task of providing high-quality educational services and research activities, but also of independently attracting investors to finance these activities. As a consequence, a regional university must become not only a university for students but also an entrepreneurial university in order to be able to finance its research activities. In this regard, developing technological entrepreneurship on the basis of the university opens wide opportunities for educational and scientific institutions to attract resources for scientific research and to interact more actively with the regional community.
Today, having received the status of a flagship regional university, Kalmyk State University plays a major role in the formation of the regional innovation ecosystem. The higher educational institution of the Republic of Kalmykia is the central research site of the territory and is focused on shaping priority areas for the region's socio-economic development.
Kalmyk University maintains strong ties with the Innovation Support Fund, within which KalmSU acts as the main platform for the "U.M.N.I.K." and "START" competitions and the annual forum "Innovative Kalmykia", which attract the best innovative projects in the south of the country. Innovative activity at KalmSU is also represented by a network of small innovative enterprises, a clear example of the development of technological entrepreneurship on the basis of an educational institution.
It is innovation and entrepreneurial activity that is becoming a promising direction for the university's development in modern conditions. The tasks of this direction include the formation of the region's innovation ecosystem, stimulation of the development of technological entrepreneurship, support of innovative activities, the search for and development of talented youth as the basis of the human potential of technological entrepreneurship, and the formation of the technological, managerial, and innovative competencies of future entrepreneurs.
The studies carried out indicate that the share of modern Russian universities acting as partners in technological entrepreneurship is extremely small [9]. At the same time, as international experience shows, in highly developed countries the level of interaction between technological entrepreneurship and higher educational institutions is much higher. The following mutually beneficial reasons for this high level of cooperation between technology business and universities can be distinguished in world practice: 1) universities are the main supplier of qualified personnel to the labor market, while a peculiarity of the new federal state educational standards is that employers' demands determine the profile of training areas; 2) universities are the basic platform for creating technology startups. Thus, mutually beneficial cooperation between business and educational structures can bring significant social, cultural, and financial results for both parties. Businesses benefit from the supply of a skilled workforce and from applied university research. Educational and scientific institutions can act as producers of knowledge commercialized through technological entrepreneurship, which helps build up resources for further research funding. It is precisely such close, successful interaction between business and higher educational institutions that accelerates the socio-economic development of a territory; world experience testifies that building regional interaction between education and business is becoming an important success factor in modern conditions.
However, it is trust between potential partners that is the main problem in developing cooperation between universities and business, as is clearly seen in current relations between educational and business structures. Information asymmetry regarding research conducted within higher educational institutions leads to a mismatch between supply and demand in the market for technological innovations, which results in low levels of commercialization of scientific and technical developments and of patenting [10].
The insufficient level of cooperation between business and educational institutions, reflected in the underdevelopment of technological entrepreneurship, can be identified as a factor inhibiting Russia's technological development. The reluctance of business structures to invest in university research slows research growth and increases demand for public investment, while the effect of state funding for scientific research is usually lower than that of commercial funding. There are many examples worldwide of startups that emerged on the basis of an educational institution and were transformed over time into successful large businesses. It is high-quality cooperation between business and educational structures that, in modern conditions, becomes the key to the socio-economic development of a territory. At the same time, the enterprises actively involved in cooperation with educational institutions are the first to gain access to technologies for producing unique intellectual products and services.
Conclusions
The effective promotion of innovation and the achievement of technological leadership can only be built on close cooperation between educational and business structures. The state should play a key role in this process, but funding scientific activities from the state budget alone is not enough. The state must create conditions in which entrepreneurship becomes not only the customer for qualified personnel with innovative, managerial, and technological competencies, but also the customer for scientific research carried out by educational institutions. Therefore, the motivation of business structures to invest in the scientific research of educational institutions comes to the fore. Here the state can use such instruments of incentive as state guarantees and orders, as well as measures to raise the social status of university research, with the aim of the innovative development of territories. As for the regions, regional innovation policy should come to the fore, its purpose being the formation of an effective innovation ecosystem that promotes the spread and successful functioning of technological entrepreneurship. In particular, Kalmyk State University can today become the platform on which the scientific and entrepreneurial potentials of the region interact, so as to produce unique intellectual goods and services and contribute to an increase in the general level of well-being in the region. In this regard, the main goal of regional policy to stimulate innovation is to create favorable conditions for the development of technological entrepreneurship on the basis of the university and through private investment.
The positive consequences of such a policy will be increased competitiveness of local goods and services, more jobs created, greater investment attractiveness of the territory, and, as a consequence, the solution of many socio-economic problems of regional development. Fundamental is the creation of the necessary institutional environment for innovative activity, which implies an appropriate regulatory and legal framework, the development of targeted programs, a system of grant support, and much more. The main tasks of regional target programs should include stimulating interaction between the subjects of supply and demand for research activities, disseminating best practices for creating innovative products, and developing a general strategy for innovative processes in the region. At the same time, when resources for regional and federal funding of scientific research are lacking, the state must create conditions under which entrepreneurial structures benefit from participating in the innovation processes taking place in universities. It is precisely the close interaction of business, educational, and government structures that will be the key to increasing investment and innovation activity in the region. Another priority area for developing technological entrepreneurship at the regional level is the creation and development of a technological platform consolidating the efforts of entrepreneurial, educational, and government structures. This direction can be implemented within a cluster approach by creating and developing a technological cluster, whose activities will determine the vectors of development of the priority sectors and complexes of the region's economy.
Kalmyk University, in the status of a flagship regional university, can become a key participant in the technological cluster of the Republic of Kalmykia, solving the problems of reproducing the region's intellectual potential and using it effectively in innovation. The creation of a technological cluster can result in the formation of an effective regional innovation ecosystem, increased investment attractiveness of the territory, reduced educational migration, increased regional and local budget revenues, and much more.
Effects of Type of Agreement Violation and Utterance Position on the Auditory Processing of Subject-Verb Agreement: An ERP Study
Previous ERP studies have often reported two ERP components—LAN and P600—in response to subject-verb (S-V) agreement violations (e.g., the boys *runs). However, the latency, amplitude and scalp distribution of these components have been shown to vary depending on various experiment-related factors. One factor that has not received attention is the extent to which the relative perceptual salience related to either the utterance position (verbal inflection in utterance-medial vs. utterance-final contexts) or the type of agreement violation (errors of omission vs. errors of commission) may influence the auditory processing of S-V agreement. The lack of reports on these effects in ERP studies may be due to the fact that most studies have used the visual modality, which does not reveal acoustic information. To address this gap, we used ERPs to measure the brain activity of Australian English-speaking adults while they listened to sentences in which the S-V agreement differed by type of agreement violation and utterance position. We observed early negative and positive clusters (AN/P600 effects) for the overall grammaticality effect. Further analysis revealed that the mean amplitude and distribution of the P600 effect was only significant in contexts where the S-V agreement violation occurred utterance-finally, regardless of type of agreement violation. The mean amplitude and distribution of the negativity did not differ significantly across types of agreement violation and utterance position. These findings suggest that the increased perceptual salience of the violation in utterance final position (due to phrase-final lengthening) influenced how S-V agreement violations were processed during sentence comprehension. Implications for the functional interpretation of language-related ERPs and experimental design are discussed.
INTRODUCTION
Most native-speaking adults are able to instantaneously recognize whether a sentence is grammatical or not during sentence comprehension. This is an amazing feat given that the processes underlying sentence comprehension are by no means simple (e.g., Nichols, 1986; Nicol et al., 1997; Pearlmutter et al., 1999; Rayner and Clifton, 2009; Wagers et al., 2009). For example, when presented with sentences such as "The boy often cooks on the stove" or "The boys often cook on the stove," English speakers must keep track of the grammatical information (i.e., number) of the subject noun phrase in order to determine which verb form qualifies as a suitable continuation of the sentence. Thus, in the first sentence, the verb will take the 3rd person singular -s (3SG) inflection, whereas in the second sentence, the verb remains uninflected. Failure to use the appropriate verb form results in ungrammatical sentences, as in "The boy often *cook on the stove" and "The boys often *cooks on the stove." This phenomenon of establishing grammatical relations between the subject and the verb is known as subject-verb agreement (S-V agreement).
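The agreement computation described above can be illustrated with a toy checker. This is a deliberately simplified sketch of our own (the function names are illustrative): it handles only regular present-tense verbs and ignores irregular forms (e.g., "be", "have") and spelling changes such as "cry" → "cries".

```python
def inflect(verb_stem, number):
    """Toy regular-verb rule: a 3rd person singular subject takes
    the -s inflection; a plural subject takes the bare stem."""
    return verb_stem + "s" if number == "singular" else verb_stem


def agrees(number, verb_form, verb_stem):
    """True when the verb form matches the subject's number."""
    return verb_form == inflect(verb_stem, number)


# "The boy often cooks ..." agrees; "*The boys often cooks ..." does not.
print(agrees("singular", "cooks", "cook"))  # True
print(agrees("plural", "cooks", "cook"))    # False
```

A comprehender performing this check incrementally must hold the subject's number in memory until the verb arrives, which is part of what makes real-time agreement processing non-trivial.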
Knowledge of the S-V agreement rule is thus considered to facilitate successful sentence comprehension. However, recent studies suggest that there are a number of other factors, such as type of morphological feature or syntactic complexity of the morpheme, that interact with the processing of grammatical information during on-line sentence comprehension (for a review, see Molinaro et al., 2011). One factor that has received relatively little attention in agreement processing studies is relative perceptual salience due to (i) the prosodic context of the target word (utterance-medial vs. utterance-final) and (ii) the overtness of the violation (errors of omission vs. commission). The present study therefore examined how effects of perceptual salience due to utterance position and type of agreement violation may modulate the neural responses to S-V agreement violations during on-line speech comprehension. The findings contribute to our understanding of the types of information that influence on-line sentence comprehension, and have implications for study design.
While it is not yet known how perceptual salience may impact on the neural responses to S-V agreement, there is abundant evidence that the position of the target verb in the utterance modulates young children's production of grammatical morphemes (e.g., Song et al., 2009; Theodore et al., 2011, 2012). For example, Song et al. (2009) observed that children typically produce 3rd person singular morphemes more reliably when the verb occurs utterance finally compared to utterance medially. This is thought to be due to the fact that syllables (and morphemes) occurring utterance finally are longer in duration than those that occur utterance medially (Wightman et al., 1992; Hsieh et al., 1999; Christophe et al., 2003, 2004; Oller, 2005; Wagner and Watson, 2010). As a result, these longer utterance-final morphemes might also be perceived better than the utterance-medial ones.
To test this hypothesis, Sundara et al. (2011) investigated 2-year-olds' perceptual sensitivity to grammatical (inflected) vs. ungrammatical (uninflected) 3rd person singular verbs in utterance-final versus utterance-medial position in an auditory visual-fixation task (e.g., Now he cries vs. * Now he cry; He cries now vs. * He cry now). As expected, infants showed a difference in looking times to the grammatical vs. ungrammatical sentences when the verb and morpheme occurred utterance finally, but not utterance medially. They interpreted these findings to suggest that the increased duration of the −s morpheme at the end of the utterance provides extra acoustic cues for listeners, enhancing infants' ability to detect its presence, and ungrammatical absence. That is, infants were more sensitive to the missing morpheme utterance finally compared to utterance medially due to the greater perceptual salience of the morpheme in durationally longer utterance-final position. However, Sundara et al. (2011) did not explore whether children would be equally sensitive to grammatical violations involving errors of commission (Now they cry vs. * Now they cries; They cry now vs. * They cries now).
Given that both errors of omission and commission result in S-V agreement violations, we would expect listeners to be equally sensitive to the grammatical violation. However, there are a number of reasons to assume that listeners might be more sensitive to errors of commission compared to errors of omission. One of the assumptions is that listeners often perceive speech sounds that they expect to hear even when they are physically absent from the stimuli, that is, phoneme restoration (Warren, 1970). This may make omission errors more difficult to detect than commission errors in which an unexpected morpheme is inserted into the speech. Another related assumption is that with auditory presentation, the perception and identification of the morpheme may be dependent on its physical characteristics, which may in turn affect the detection of agreement violations. Thus, the mere presence of the superfluous −s morpheme in the errors of commission makes the violation more overt compared to errors of omission. Listeners might therefore be more sensitive to the overt error. However, to our knowledge, there is no empirical evidence showing how effects of auditory perceptual salience due to utterance position or type of agreement violation influence neural responses to S-V agreement processing during on-line speech comprehension.
One of the tools ideally suited for exploring the different kinds of information that modulate on-line sentence comprehension is event-related potentials (ERPs). ERPs are characteristic patterns of voltage change extracted from brain electrical activity recorded at the scalp by time-locking the electroencephalogram (EEG) to the presentation of the stimuli (Luck, 2014). The excellent temporal resolution of ERPs allows exploration of the nature and timing of the processes that underlie the online computation of grammatical agreement. Researchers can determine if the processes are qualitatively or quantitatively different by comparing the ERP waveforms in terms of their polarity, amplitude, latency, and scalp distribution. Evidence from ERP studies demonstrates that native-speaking adults are exquisitely sensitive to S-V agreement violations (Molinaro et al., 2011). Two ERP components have been widely associated with the processing of S-V agreement violations in native-speaking adults: the left anterior negativity (LAN) and the P600.
Based on the evidence from previous studies that correlated morphosyntactic processing to the LAN and the P600, Friederici (2002) proposed a neuro-cognitive model of auditory sentence comprehension. This model is influenced by the syntax-first models of sentence comprehension which assume that syntactic information is processed autonomously, prior to any other (non-syntactic) information (e.g., Frazier and Fodor, 1978). The syntax-first models do not accommodate the view that syntactic and other types of information interact at each stage of language processing as assumed by the interactive processing models (e.g., Trueswell et al., 1993, 1994; Garrod, 2007, 2013). However, Friederici (2002) argues that both autonomous processing and interactive processing hold in principle, but describe different processing phases during language comprehension (i.e., early versus late). Thus, according to Friederici's model, the early stages of sentence comprehension entail syntactic categorisation, morphosyntactic segmentation and thematic role assignment; these processes are correlated with the ELAN, LAN, and N400 effects, respectively (see also Friederici, 2011). On the other hand, the late stage of syntactic re-analysis entails the integration of other information relevant for the interpretation of the sentence; this process is correlated with the P600 effect. This model thus assigns a modular-specific functional interpretation to the LAN and P600 components. Furthermore, this model implies that the LAN should be a more reliable and stable component than the P600.
However, Friederici's model of sentence comprehension is not explicit on whether or how the nature of incoming syntactic and other types of information may modulate the LAN and P600 effects. As a result, the model has been challenged by studies which have observed these ERP effects to vary in their presence, latency, amplitude, and distribution as a function of the characteristics of the morphosyntactic elements in question. For example, some studies investigating agreement processing, in languages other than English, have reported an N400 effect instead of the typical LAN effect (e.g., Wicha et al., 2004). Others did not observe the LAN (e.g., Osterhout et al., 1994; Hagoort and Brown, 2000; Kaan et al., 2000; Kos et al., 2010). On the other hand, while the P600 effect is often reported for agreement violations, some studies have not reported it (e.g., O'Rourke and Van Petten, 2011). This variable realization of the LAN and P600 effects has resulted in some scholars questioning the modular functional interpretation of these ERP components (for discussion, see Kaan and Swaab, 2002; Bornkessel-Schlesewsky et al., 2015; Tanner, 2015). However, despite the ongoing debate about the functional significance of the LAN and P600 effects (see also Kolk and Chwilla, 2007; Kuperberg, 2007), there is generally a strong correlation between grammatical violations and the presence of the LAN and/or P600 in native-speaking adults (Molinaro et al., 2011). In the following paragraphs, we take a closer look at previous ERP studies that have investigated S-V agreement processing involving inflectional violations, as summarized in Table 1.
A consistent finding across all 10 studies in Table 1 is that S-V agreement violations elicited P600 effects, albeit with varying latencies, amplitudes and scalp distributions. However, only half of these studies have also reported a left anterior negativity (LAN) or anterior negativity (AN) preceding the P600. The variability of the LAN effects is often explained in terms of morphological feature differences, while that of the P600 is explained in terms of whether the task was passive or active (e.g., Kolk and Chwilla, 2007) or whether the violation was syntactically simple or complex (e.g., Kutas and Hillyard, 1983; O'Rourke and Van Petten, 2011). The studies in Table 1 also show that the reported P600 effects vary with a number of experiment-related factors, including modality of presentation, position of the violation, and the type of agreement violation used. For example, studies which used the visual modality reported a LAN with an onset latency around 300 ms, and a P600 around 500 ms. In contrast, studies that used the auditory modality reported ERP effects with earlier onset latencies. For example, Shen et al. (2013) reported the LAN with an onset around 140 ms, while Hasting and Kotz (2008) reported the LAN with an onset around 100 ms and a P600 with an onset latency around 300 ms.
These gradient effects on the latency of the negativity are generally assumed to reflect the ease of detecting the violation, whereas those of the P600 reflect the speed of the revision or reanalysis of the violation (Friederici, 1998). Thus, the different latencies observed between the visual and auditory modalities have been interpreted to suggest that modality of presentation impacts on the processing of S-V agreement violations (e.g., Hasting and Kotz, 2008). However, Hasting and Kotz have further noted that the time-locking point used in S-V agreement studies also matters, suggesting that time-locking at the onset of the morpho-syntactic violation instead of word onset may contribute to latency differences.
Besides the latency differences occurring between different modes of presentation, the scalp distribution and the size of the P600 component reported in previous studies also differ as a function of syntactic complexity. For example, the difference between the longer P600 effects (500-900 ms) with a centro-posterior distribution reported by Kos et al. (2010) and the shorter P600 effects (500-700 ms) with a posterior distribution reported by De Vincenzi et al. (2003) is interpreted to be a function of violation complexity. The differences observed in the scalp distribution and sizes of the components are assumed to reflect the degree to which the brain is engaged in syntactic reanalysis (e.g., Osterhout et al., 2004). The degree of brain involvement during sentence processing has been shown to be influenced by the level of syntactic integration difficulty (e.g., Kaan et al., 2000) or complexity of the syntactic structure involved (e.g., Coulson et al., 1998; Nevins et al., 2007; O'Rourke and Van Petten, 2011). These findings show that ERPs are ideal for identifying factors that modulate the processing of S-V agreement violations during sentence comprehension. However, they also indicate that different methodological aspects of the experiment influence the realization and interpretation of the ERP components. The question of whether different types of agreement violation and utterance position influence the processing of S-V agreement violations is therefore important, given that these factors have been variably used in previous studies. However, the variability of the LAN and P600 effects has never been considered in light of the type of agreement violation (errors of omission vs. errors of commission) and utterance position (medial vs. final). For example, Osterhout and Mobley (1995) looked at errors of commission, i.e., superfluous addition of the 3SG (e.g., the officials hope/*hopes ...), occurring sentence medially, in a visual modality paradigm.
They reported a left-anterior negativity (LAN) with an onset around 300 ms followed by a centro-posterior P600 with an onset around 500 ms. Similar biphasic LAN/P600 effects were observed in other studies that used the visual paradigm and sentence-medial position, although these studies collapsed errors of omission and commission together in the analysis (e.g., Coulson et al., 1998). In contrast, Shen et al. (2013) looked at errors of omission, i.e., omission of the 3SG (e.g., Larry pushes/*push his ...), occurring utterance-medially in an auditory modality paradigm. They reported a bilateral anterior negativity (AN) with an onset around 150 ms followed by a posterior P600 with an onset around 700 ms.
Similarly, early LAN effects were observed by Hasting and Kotz (2008), who investigated agreement violation processing in German, using the auditory modality. However, the P600 effects observed in their study had an early onset latency around 300 ms. Importantly, Hasting and Kotz's study differed from Shen et al. (2013) in that it looked at S-V agreement violations involving substitution errors that occurred in utterance-final position. So while it seems that modality of presentation modulated the ERP latencies in these studies, these effects are confounded with effects of errors of omission vs. commission. Moreover, we do not know whether utterance-final S-V agreement violations in English would result in effects similar to those reported in Hasting and Kotz (2008), given that none of the previous ERP studies have investigated utterance-position effects on the processing of S-V agreement violations during on-line auditory sentence comprehension.
The foregoing discussion has thus motivated the purpose of the present study in two ways. The first is that, to date, most ERP studies of S-V agreement have presented stimuli in the visual modality, with participants viewing sentences presented one word at a time. While this allows precise time-locking to the onset of individual words and is relatively straightforward to implement, it is clearly very different to the typical reading experience. Visual presentation also limits research to participants who are fluent readers. It is thus unsuitable for studies of grammatical development in young children and other special populations, such as second language learners. Moreover, insights gained from studies in the visual modality may not readily translate to auditory presentation.
The second, which is linked to the first, is that the few ERP studies of S-V agreement that have been carried out in the auditory modality have used a range of stimulus manipulations and different languages, and perhaps as a result, have produced inconsistent results. The first, conducted by Hasting and Kotz (2008), investigated substitution errors occurring utterance finally, in German. They noted an early LAN with an onset around 100 ms and an early but long-lasting positive component with an onset latency around 300 ms. Subsequently, Shen et al. (2013) looked at errors of omission (e.g., Larry pushes/*push his ...) occurring sentence medially in English sentences. They reported an early bilateral anterior negativity (AN) with an onset around 150 ms followed by a posterior P600 with an onset around 700 ms. It is possible that the different results may be due to the different experimental designs used in these studies, e.g., stimulus manipulation and utterance position. However, no ERP study has systematically explored these factors in the same study, to establish whether or how they may impact the neural responses to S-V agreement during on-line speech comprehension.
The aim of the present study, therefore, was to use ERPs to systematically explore the effects of type of agreement violation and utterance position on listeners' neural responses to S-V agreement violation in English. To achieve this, we recorded listeners' ERP responses to grammatical and ungrammatical sentences in which the S-V agreement violations differed according to the utterance position (medial vs. final) in which they occurred. Furthermore, the type of agreement violation differed depending on whether the 3SG −s was omitted (errors of omission) or superfluously added (errors of commission) as shown in Table 2.
The manipulation of the S-V agreement violations by type of agreement violation and utterance position resulted in a balanced design, as described by Steinhauer and Drury (2012). However, given that our study used speech stimuli, we made an important decision on how we paired the grammatical and ungrammatical sentences for analysis. Instead of comparing grammatical and ungrammatical sentences that differed only in the target verb (also known as target verb manipulation), as shown in Table 2, we compared sentences that had the same target verb but differed in context (also known as context manipulation), as shown in Table 3. Thus, ungrammatical verb-forms without an −S (errors of omission) were compared with grammatical verb-forms without −S, whereas the ungrammatical verb-forms with a superfluous −S (errors of commission) were compared with grammatical verb-forms with −S. The context manipulation comparisons thus avoided possible confounding effects of the acoustic presence/absence of the −S sound.
Based on previous findings from ERP studies on agreement processing, we predicted that S-V agreement violations would elicit a biphasic LAN/P600 effect. However, if effects of perceptual salience modulated listeners' sensitivity to the violations, we expected that listeners might be more sensitive to ungrammatical verb-forms with −S (errors of commission) than to ungrammatical verb-forms without −S (errors of omission) due to greater perceptual salience of the overt violation. We also hypothesized that S-V agreement errors occurring utterance-finally would elicit larger LAN/P600 effects compared to errors that occurred utterance-medially.
Ethics Statement
The Ethics committee for Human Research at Macquarie University approved the experimental methods used in this study. Written informed consent was obtained from all participants before the experiment began.
Participants
Twenty monolingual Australian-English speaking adults (age range: 18-25 years; mean: 22; 11 female, 9 male) participated in this study. Participants were recruited from the university student population. All completed a questionnaire on their developmental and linguistic history before participating in the study, and all were right-handed, with no clinical history of hearing or learning disorders. They received either course credits for participation or $20 if they did not require the course credits. Eight additional participants were excluded from the final analysis due to excessive ERP artifacts (e.g., as a result of sweating, or too much movement).
Stimuli
The auditory stimuli included 50 CVC target verbs that could be used intransitively in both sentence-medial and sentence-final positions (e.g., The boy often cooks on the stove vs. The boy often cooks). This ensured that all verbs could be used in both utterance-medial and utterance-final conditions, respectively. The sub-categorisation status of the verbs was verified by five native speakers of English. Only those verbs with high-medium frequency were selected to ensure familiarity and to facilitate processing. The criterion for lexical frequency was that the verbs had between 1 and 3 counts on the SUBLEX Log10 CD (Hofmann et al., 2007). In addition, only those verbs that ended with the voiceless coda stops /p/, /t/, /k/ were selected to make sure that the inflected −s morpheme was always realized in the same allophonic condition (e.g., as /s/). This facilitated subsequent splicing of the materials and ensured that all similar items had the same morpheme length (see below). As the stimuli were later paired with a picture to provide a visual context while listening to the sentence, the verbs also had to be highly imageable.
The verbs were inserted into carrier sentences that were composed of monosyllabic words, thereby controlling for utterance length and processing load. The carrier sentences had a singular vs. plural subject to enable manipulation of type of agreement violation (verb-form without −S/errors of omission vs. verb-form with −S/errors of commission). The verbs appeared in the middle vs. end of the carrier sentence to create the utterance-medial vs. utterance-final conditions, respectively (as shown above in Table 2). In the utterance-medial position, the verb was always followed by a preposition with a vowel onset to avoid masking of the morpheme in the preceding verb. All sentence stimuli were accompanied by cartoon pictures that were designed by a professional cartoonist (see example in Figure 1). The drawings had a constant level of visual complexity to avoid distracting details. The purpose of the pictures was to sustain participants' attention, and keep their eyes focused on the computer display to minimize head movement (muscle movements introduce artifacts to the ERP data).
This study employed a 2 × 2 × 2 design by crossing type of agreement and utterance position with grammaticality. Each verb therefore appeared in a total of eight conditions, resulting in 50 test items per condition and a total of 400 test items. In addition to the test items, there were 44 catch trials. All catch trials were grammatical and had the same structure as that of the target carrier sentences, but the verbs were not fully controlled for CVC structure (e.g., eat). These catch trials were used as a probe task in order to maintain participants' attention during the experiment (see Task and procedure for further details).
Auditory Stimulus Preparation
All grammatical sentences were spoken by a female native speaker of Australian English who was trained in how to produce the sentences. To control for naturalness and intonational constancy, the sentences were read in response to a question and the accompanying picture. For example, all medial sentences were responses to a question like "What do the boys often do on the stove?" (Answer: The boys often cook on the stove). For the final conditions, the question was "What do the boys often do?" (Answer: The boys often cook). Medial and final conditions were separated into two lists and all sentences within the same list were recorded together. The sentences were recorded using Audacity (Audacity Team) in a sound-attenuated booth with a Behringer C2 microphone and a USBPre-2 amplifier. The recordings were digitized at a sampling rate of 44 kHz (16-bit, mono). Following the recording, the sentences were normalized using Audition C6 (Adobe Systems) and then extracted into individual sentences using Praat (Boersma and Weenink, 2012).
TABLE 4 | Cross-splicing used to create the ungrammatical stimuli (| marks the splice point at verb onset).

Source → Result
The boys often |cook on the stove → The boys often *cooks on the stove
The boy often |cooks on the stove → The boy often *cook on the stove
The boys often |cook → The boys often *cooks
The boy often |cooks → The boy often *cook

Instead of recording ungrammatical sentences, we created the ungrammatical stimuli by cross-splicing the grammatical productions from the onset of the verb, as shown in Table 4. All sound files were spliced at the zero-crossing from the beginning of the verb using Audition C6 (Adobe Systems). This procedure was meant to minimize the possibility of listeners using any early acoustic cues to distinguish between the grammatical and the ungrammatical condition. Previous studies using the auditory EEG paradigm have observed that recording ungrammatical structures, even with a trained speaker, introduces subtle but systematic slowing in production as well as intonation modifications (Hasting and Kotz, 2008; Royle et al., 2013). Therefore, the splicing procedure was used to avoid possible acoustic differences between grammatical and ungrammatical sentences before the point of violation. All stimuli were later rated for naturalness by a highly trained phonetician.
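The zero-crossing splicing step can be sketched in Python. This is a hedged illustration, not the authors' procedure (the study used Adobe Audition by hand); the function names and the search window are our own, and real stimuli would be loaded from the recorded WAV files rather than synthesized.

```python
import numpy as np

def nearest_zero_crossing(signal, idx, search=200):
    """Return the sample index of the zero crossing closest to `idx`,
    searching within +/- `search` samples. A zero crossing lies between
    two consecutive samples whose signs differ."""
    lo = max(idx - search, 1)
    hi = min(idx + search, len(signal) - 1)
    signs = np.sign(signal[lo:hi])
    crossings = np.where(np.diff(signs) != 0)[0] + lo
    if crossings.size == 0:
        return idx  # no crossing nearby; fall back to the requested index
    return int(crossings[np.argmin(np.abs(crossings - idx))])

def cross_splice(context, donor, context_cut, donor_cut):
    """Join the pre-verb context of one recording to the verb (and the
    rest of the utterance) of another, snapping both cut points to the
    nearest zero crossing to avoid audible clicks at the splice."""
    c = nearest_zero_crossing(context, context_cut)
    d = nearest_zero_crossing(donor, donor_cut)
    return np.concatenate([context[:c], donor[d:]])
```

Splicing at zero crossings keeps the waveform continuous at the join, which is why the same convention is used in audio editors for click-free cuts.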
After splicing the stimuli, we used Audition C6 (Adobe Systems) to examine the waveforms and insert triggers into the individual sound files. We systematically used the end of closure for the coda stops, instead of the end of burst release, as the time-locking point for all four conditions. This is because the burst release of some coda stops such as /t/ is not always clearly identifiable when followed by frication (i.e., the /s/ 3SG morpheme). By time-locking to the end of closure, we made sure that the time-locking points for grammatical and ungrammatical sentences were identical in all conditions. The spectrograms in Figure 2 illustrate the time-locking points for grammatical and ungrammatical conditions that had inflected and uninflected verb-forms. Having the same time-locking point ensured that the grammatical and ungrammatical conditions were comparable in terms of where and when the ERP violation effects appeared in both medial and final contexts.
Recall that one of the aims of this study was to explore the effects of perceptual salience on sensitivity to S-V agreement violations. Critical to this effect is the prediction that the 3SG −s will be longer utterance finally due to phrase-final lengthening. To ensure that this was the case, we used Praat to conduct acoustic measures of frication duration across all 50 tokens of 3SG −s. As expected, the −s in utterance-final position was twice as long as the morpheme utterance medially, with a mean duration of 238 ms (SD 28 ms) compared to 114 ms (SD 22 ms). A paired t-test comparing the duration of the −s in medial and final position confirmed that this difference was statistically significant, t(49) = −5.989, p < 0.001.
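For illustration, the duration comparison can be reproduced with a paired t-test. The durations below are simulated from the reported means and SDs; they are not the study's actual measurements, so the resulting t value will differ from the reported t(49) = −5.989.

```python
import numpy as np

def paired_t(x, y):
    """Paired t statistic (and df) for the mean of the differences x - y."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Simulated frication durations (ms) for 50 verb tokens, drawn to match
# the reported summary statistics (final: 238 +/- 28; medial: 114 +/- 22).
rng = np.random.default_rng(0)
medial = rng.normal(114, 22, 50)
final = rng.normal(238, 28, 50)

t_stat, df = paired_t(medial, final)
print(f"t({df}) = {t_stat:.3f}")  # strongly negative: medial < final
```

The negative sign simply reflects the order of the comparison (medial minus final).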
Task and Procedure
Participants were fitted with an electrode cap (Easycap, Brainworks, GmbH) while seated in a comfortable plush chair at a distance of one meter from a CRT computer screen, in a dimly lit sound-attenuated and electromagnetically shielded room. EEG signals were recorded continuously as participants listened to sentences. They were instructed to listen attentively to all sentences and to immediately press a given response button when they heard the words "cut/cuts" or "eat/eats" in the sentence. These verbs were used as catch trials while the button-press task prevented participants from performing explicit grammaticality judgments. This probe task was therefore used to distract participants from concentrating on the grammaticality of the sentences without hindering the natural comprehension process (Dragoy et al., 2012).
The sentences and their matching pictures were presented using Presentation (Neurobehavioral Systems) which also recorded responses (hits, misses and false alarms) for the probe task. These behavioral responses helped us to determine if participants were attending to the task. The sentences were presented via two audio speakers, at an intensity of 75 dB SPL, while the matching images appeared on the screen. The speakers were positioned on the left and right of the computer screen.
The sentences were grouped into medial and final lists in which each list had two 10-min blocks. Each block had 111 sentences with accompanying pictures. The lists were presented separately to avoid mixing the medial and final conditions, as they were of different word lengths. By blocking the presentation, we also controlled for the possibility that the transitivity of the medial condition (verb + prepositional phrase) would influence participants' interpretation of final sentences, as they might then have expected a prepositional phrase in this condition as well. This was particularly important given that one of the aims of this study was to explore utterance position effects; we therefore had to minimize any possible confounds. To control for presentation list effects, the order of the blocks was counterbalanced among the participants so that half of the participants heard the medial-final order and the other half the final-medial order.
Within each block, the order of sentence/picture presentation was pseudo-randomized with the constraint that the same verb did not occur consecutively. Two catch trials were presented at the beginning of the first block of each list and the presentation was pseudo-randomized with the constraint that they occur after five to eight consecutive target items within the block. A picture of an eye appeared on the screen ∼1000 ms after the end of each sentence to control for eye blinks and remained on the screen for 1000 ms. Participants were asked to avoid blinking during the presentation of the sentences but to blink when the picture of an eye appeared on the screen. They were also asked to sit still during the presentation of the sentences to avoid movement artifacts during the EEG recording. The sentences had an inter-stimulus interval of 3 s. A short break was taken at the end of each block. The duration of the break was determined by the participant. Altogether, the experiment lasted about 60 min.
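The pseudo-randomization constraint (the same verb never occurring on consecutive trials) can be sketched with a simple rejection-sampling shuffle. The function and the trial representation here are hypothetical illustrations, not the Presentation script used in the study.

```python
import random

def constrained_shuffle(items, key=lambda x: x, max_tries=10_000):
    """Shuffle `items` so that no two consecutive items share the same
    key (here: the same verb). Uses rejection sampling: reshuffle until
    a valid ordering is found, or give up after `max_tries` attempts."""
    items = list(items)
    for _ in range(max_tries):
        random.shuffle(items)
        if all(key(a) != key(b) for a, b in zip(items, items[1:])):
            return items
    raise RuntimeError("no valid ordering found within max_tries")

# Hypothetical trial list: (verb, condition index) pairs.
trials = [(v, i) for v in ("cook", "bake", "hop") for i in range(3)]
order = constrained_shuffle(trials, key=lambda t: t[0])
```

Rejection sampling is adequate when valid orderings are plentiful, as here; with many repeats of few verbs a constructive algorithm would be needed instead.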
EEG Data Processing
The digitized data were processed off-line in Matlab (Version R2013b; MathWorks, Massachusetts, USA) using the Fieldtrip toolbox (Oostenveld et al., 2010; Version 2014). The data were epoched into trials of 1000 ms including a 100 ms pre-stimulus interval and then filtered with a Butterworth bandpass of 0.05-20 Hz for Independent Component Analysis (ICA). Extreme trials with amplitudes larger than ±300 µV were removed before entering all trials into the ICA. The purpose of the ICA was to identify any components resembling eye blinks, horizontal eye movements, noisy channels and other focal artifacts. The identified components were then mathematically removed from the data and signals were back projected to the original unfiltered data. After ICA, each channel was re-referenced to the mean mastoids and baseline corrected using the 100 ms pre-stimulus interval. Trials with artifacts that exceeded 100 µV, with trends greater than 75 µV, or with abnormal distributions or improbable data exceeding five SDs, were also rejected. This procedure removed a total of 172 trials (0.46% of all trials) from the eight experimental conditions: 21 medial-singular grammatical, 24 medial-singular ungrammatical (omission), 23 final-singular grammatical, 19 final-singular ungrammatical (omission), 21 medial-plural grammatical, 22 medial-plural ungrammatical (commission), 24 final-plural grammatical, and 18 final-plural ungrammatical (commission). There was no reliable difference between the numbers of rejected trials across conditions. The remaining trials in each of these conditions were averaged for each participant and grand averages were then computed for each of the conditions.
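The amplitude- and trend-based rejection step can be sketched in Python (the actual pipeline used the Fieldtrip toolbox in Matlab). Only the 100 µV and 75 µV thresholds come from the text; the array layout and the least-squares trend estimate are our own simplifications.

```python
import numpy as np

def reject_artifacts(epochs, peak_uv=100.0, trend_uv=75.0):
    """Keep epochs whose peak absolute amplitude stays below `peak_uv`
    and whose fitted linear drift over the epoch stays below `trend_uv`
    on every channel. `epochs` has shape (trials, channels, samples)."""
    n_trials, n_chan, n_samp = epochs.shape
    peak_ok = np.abs(epochs).max(axis=(1, 2)) < peak_uv
    # Least-squares slope per (trial, channel) using a centered time axis;
    # with centered x, slope = sum(x * y) / sum(x**2).
    x = np.arange(n_samp) - (n_samp - 1) / 2
    slope = (epochs * x).sum(axis=2) / (x ** 2).sum()
    drift = np.abs(slope) * (n_samp - 1)  # total drift across the epoch (µV)
    trend_ok = drift.max(axis=1) < trend_uv
    keep = peak_ok & trend_ok
    return epochs[keep], keep
```

A boolean `keep` mask is returned alongside the clean epochs so that rejected-trial counts can be tallied per condition, as reported in the text.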
EEG Data Analysis
An important decision in conducting data analysis was how to pair ungrammatical sentences with corresponding grammatical sentences. For example, the ungrammatical sentence "The boys often cooks on the stove" could be paired with "The boys often cook on the stove," keeping the context consistent but changing the inflection on the verb. However, in auditory studies, this entails that grammaticality effects are confounded with differences in the acoustic content following the verb stem, in terms of both the presence/absence of the −s and the timing of the subsequent word. This in turn means that "grammaticality" effects on ERPs may arise even when participants are insensitive to the grammatical violation. We therefore chose instead to manipulate the context whilst keeping the verb inflection constant by comparing the grammatical vs. ungrammatical verbs across the singular and plural conditions (e.g., The boy often cooks on the stove vs. The boys often cooks on the stove). This removes any potential acoustic confound following the verb. Although the context manipulation could itself present as an acoustic confound affecting the pre-stimulus baseline, this should be minimized by the intervening adverb (see Steinhauer and Drury, 2012, for discussion on effects of context/target manipulation on syntactic violation processing).
Another important decision was how to objectively select an appropriate time window for our auditory ERP data so that we could make direct comparisons across conditions. As discussed in the Introduction, different ERP latencies have been reported for studies using the visual and auditory modalities (see the literature review in Table 1). Thus, instead of relying on a priori time windows associated with the (L)AN or P600, we used non-parametric cluster-based permutation tests (Maris and Oostenveld, 2007) to identify time windows where significant effects of grammaticality were present in the grand averaged data collapsed across type (omission vs. commission) and position (utterance-medial vs. utterance-final). As described by Maris and Oostenveld (2007), the cluster-based permutation test first identifies sampling points with t-statistics exceeding a critical threshold (p < 0.05, two-tailed). Clusters are then formed by connecting significant sampling points on the basis of spatial and temporal adjacency. This is done separately for sampling points with positive and negative t-values. The maximum cluster-level test statistics (the sum of all individual t-values within a cluster) are then computed to generate permutation distributions, one for positive clusters and one for negative clusters, based on 1000 random partitions. The significance of a cluster is determined by whether it falls in the highest or lowest 2.5th percentile of the corresponding distribution. To foreshadow our results, we identified two significant clusters, corresponding to the AN and P600.
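The cluster-based permutation procedure of Maris and Oostenveld (2007) can be sketched for a simplified single-channel case as follows. The fixed t-threshold (roughly the two-tailed critical t for ~20 subjects), the sign-flip permutation scheme, and the data shapes are illustrative assumptions; the actual analysis (e.g., in FieldTrip) also clusters over spatially adjacent channels:

```python
import numpy as np

def _clusters(tvals, thresh):
    """Contiguous runs of same-sign supra-threshold t-values as (start, stop)."""
    clusters, start, sign = [], None, 0
    for i, tv in enumerate(np.append(tvals, 0.0)):  # sentinel closes last run
        s = 1 if tv > thresh else (-1 if tv < -thresh else 0)
        if s != sign:
            if sign != 0:
                clusters.append((start, i))
            start = i if s != 0 else None
            sign = s
    return clusters

def cluster_perm_test(cond_a, cond_b, t_thresh=2.09, n_perm=1000, seed=0):
    """Simplified 1-D cluster-based permutation test: paired t per time
    point, clustering by temporal adjacency, max-cluster-mass null
    distribution built from random per-subject sign flips."""
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b                      # (n_subjects, n_times)
    n_sub = diff.shape[0]

    def tvals(d):
        return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n_sub))

    obs_t = tvals(diff)
    obs_clusters = _clusters(obs_t, t_thresh)
    obs_mass = [obs_t[a:b].sum() for a, b in obs_clusters]

    null_max = np.zeros(n_perm)
    for p in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        pt = tvals(diff * flips)
        masses = [abs(pt[a:b].sum()) for a, b in _clusters(pt, t_thresh)]
        null_max[p] = max(masses) if masses else 0.0

    pvals = [(null_max >= abs(m)).mean() for m in obs_mass]
    return obs_clusters, pvals
```

Each returned cluster is a time-sample interval; its p-value is the fraction of permutations whose largest cluster mass equals or exceeds the observed mass.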
For each component, we then performed a repeated-measures MANOVA using Grammaticality, Verb-form, Position, and Region of interest (ROI) as within-subject variables. We defined nine ROIs, taking the means of the electrodes shown in Figure 3.
We present the results from the cluster-based permutations first, and then the procedure and results for the MANOVAs. Note that the statistical analyses were performed on the original unfiltered data; for presentation purposes, the ERP waveforms presented in this paper were filtered using a 40-Hz low-pass filter.
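A zero-phase low-pass filter for display purposes can be approximated by zeroing FFT bins above the cutoff, as in this illustrative sketch (the actual pipeline presumably used FieldTrip's filtering routines rather than this crude spectral truncation):

```python
import numpy as np

def fft_lowpass(signal, fs, cutoff):
    """Zero-phase low-pass by zeroing FFT bins above `cutoff` Hz.

    signal : 1-D array, fs : sampling rate in Hz.
    A crude stand-in for a 40-Hz presentation filter; real pipelines
    would typically use a windowed FIR or Butterworth filter instead.
    """
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[freqs > cutoff] = 0.0          # discard energy above the cutoff
    return np.fft.irfft(spec, n=len(signal))
```

For example, `fft_lowpass(waveform, fs=500, cutoff=40)` would remove high-frequency noise from an averaged ERP trace before plotting.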
Effects of Grammaticality
The primary goal of this study was to test if adult native English speakers would be sensitive to S-V agreement violations, as often reported in previous studies, where there is generally a strong correlation between grammatical violations and the presence of the (L)AN and/or P600 in L1 adults (Molinaro et al., 2011). However, we further sought to explore whether these responses would be modulated by the relative perceptual salience of S-V agreement violations as a function of utterance position (medial vs. final) and type of agreement violation, where the verb-form occurred without an −S (errors of omission) or with a superfluous −S (errors of commission). We begin by reporting the results of the cluster-based permutation tests, which contrasted the grand average ERP waveforms of the grammatical condition with those of the ungrammatical condition (collapsed over type of agreement and utterance position). The grammaticality effects are shown at nine representative electrodes (corresponding to locations F3, Fz, F4; C3, Cz, C4; and P3, Pz, P4 in a standard 10-20 set-up) in Figure 4, which also shows the topographic maps highlighting the distribution and time course of the significant clusters.
Visual inspection of the waveforms indicates that, relative to the grammatical verbs, ungrammatical verbs elicited a bilateral negative-going waveform over the anterior-central electrodes followed by a positive-going waveform over the central-posterior electrodes. Statistical analysis using cluster-based permutation tests revealed that contrasts observed for grammatical vs. ungrammatical verbs yielded a significant negative cluster (p = 0.036) between 130 and 210 ms in the anterior-central electrodes and a significant positive cluster (p < 0.0001) between 350 and 590 ms with a centro-posterior distribution.
MANOVA: Effects of Type of Agreement and Utterance Position
Waveforms for each of the four conditions are shown in Figures 6-9. Having established the presence of a Grammaticality effect, we then performed MANOVA on the two significant time windows (130-210 ms and 350-590 ms) to test the interaction between Grammaticality and type of agreement violation, utterance position, and ROI. The results of the two MANOVAs are reported in Table 5.
130-210 ms Time Window
Consistent with the cluster analysis, the MANOVA showed a main effect of Grammaticality. However, the absence of any interactions involving Grammaticality indicates that the response to grammatical versus ungrammatical conditions was similar regardless of verb-form or position.

[Figure caption: positivity is plotted upwards. The topographic maps show brain voltage distributions for the negative and positive clusters, obtained by interpolation from 64 electrodes and computed by subtracting the grand averages of the grammatical from the ungrammatical conditions. Electrodes in the significant clusters are marked with a black circle; F3, Fz, F4, C3, Cz, C4, P3, Pz, and P4 within the significant clusters are marked with a white circle. Time windows for significant clusters are shaded in gray over the waveforms.]
The main effect of Verb-form and the interaction between Verb-form and ROI suggest that the response in this early time window differed significantly depending on the verb-form (presence or absence of −S), independent of Grammaticality. Follow-up pairwise t-tests revealed that verb-forms without −S elicited greater negativity than verb-forms with −S at the anterior-left region [t(19) = −2.118, p < 0.05] and the central-mid region [t(19) = −3.818, p < 0.005].
The significant interaction between Position and ROI suggests that the mean amplitude of the negativity also differed across the electrodes depending on utterance position. Follow-up pairwise t-tests for the Position and ROI interaction revealed that verbs in utterance-medial position elicited more negativity than those in utterance-final position at the front-mid region [t(19) = −2.494, p < 0.05], anterior-left region [t(19) = −2.438, p < 0.005], and anterior-right region [t(19) = −3.017, p < 0.005].
These results thus suggest that the distribution of the negativity observed in the cluster-based permutation varied due to verb-form and utterance position. However, the absence of grammaticality interactions in this time window suggests that, although the mean amplitude of the negativity varied across the electrodes due to type of verb-form and utterance position, the difference between grammatical and ungrammatical conditions was the same in both types of verb-form and positions.
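The follow-up pairwise comparisons reported above are standard paired t-tests over per-subject ROI means. A dependency-free sketch of the statistic (array shapes and variable names are our own, for illustration) is:

```python
import math

def paired_t(x, y):
    """Two-sided paired t-test statistic and degrees of freedom for two
    equal-length samples (e.g., per-subject mean amplitudes in one ROI
    under two conditions)."""
    d = [a - b for a, b in zip(x, y)]          # per-subject differences
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # unbiased variance
    t = mean / math.sqrt(var / n)
    return t, n - 1
```

With 20 participants, as in this study, the resulting statistic would be reported as t(19); the p-value is then looked up against the t distribution with those degrees of freedom.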
350-590 ms Time Window
The statistical analysis for this time window showed main effects of Grammaticality, Verb-form and Position. There were no interactions between Grammaticality and Verb-form or Position. There was, however, a three-way interaction between Grammaticality, Position, and ROI. To explore this, follow-up MANOVAs were performed on each ROI with Position and Grammaticality as within-subject factors. Results indicated that the interaction was significant in the anterior-mid region [Pillai's trace = 0.197, F(1,19) = 4.660, p < 0.05]. Further pairwise comparisons showed that the mean amplitude of the positivity in this region was larger for violations in utterance-final than in utterance-medial position. Overall, the interactions observed in this later time window indicate that the amplitude and distribution of the positivity were influenced by perceptual salience due to utterance-final lengthening.
DISCUSSION
This study used ERPs to investigate how Australian-English speaking adults processed S-V agreement during auditory sentence comprehension. The aim was to explore whether the LAN and P600 effects would vary as a function of the relative perceptual salience associated with utterance position and type of agreement violation (verb-form). Previous ERP studies investigating the processing of agreement have shown that different aspects of experimental design (e.g., syntactic complexity of the stimuli) can influence the on-line computation of agreement information (Molinaro et al., 2011). However, the possibility that perceptual salience may influence the computation of S-V agreement has not until now been systematically explored. Given the findings from previous S-V agreement studies, we hypothesized that S-V agreement violations will elicit LAN and/or P600 effects. However, we further hypothesized that the effect size of these effects would be moderated by both utterance position (medial versus final) and type of agreement violation (errors of omission versus commission). More specifically, we predicted that the effects would be more robust for the more perceptually salient conditions (errors of commission and utterance-final position) than for their counterparts.
Results for the overall Grammaticality effect, with all conditions collapsed, showed that S-V agreement violations elicited a bilateral negativity with an anterior-central distribution, in the early 130-210 ms time window, followed by a positivity in the 350-590 ms time window with a centro-posterior distribution. Based on the latency and scalp distribution of the negativity, we interpret the negativity to be an anterior negativity (AN) which has been traditionally taken to reflect similar processes to those reflected by the LAN-i.e., detection of morphosyntactic violations (Friederici et al., 1993;Hagoort et al., 2003;Bornkessel and Schlesewsky, 2006). We also interpreted the positivity to be a P600 effect, which has been traditionally taken to reflect repair, reanalysis or recovery from ungrammatical sentences (Osterhout and Holcomb, 1992;Osterhout and Mobley, 1995;Friederici et al., 1996;Kolk and Chwilla, 2007). The bilateral negativity and the later P600 effect observed for S-V agreement violations is in line with previous studies in the auditory modality (Hahne and Friederici, 2002;Hagoort et al., 2003;Shen et al., 2013). Having established the overall Grammaticality effects, we extracted the two significant time-windows to perform MANOVAs, exploring whether type of agreement violation and utterance position influenced participants' sensitivity to grammaticality. Contrary to our predictions, we found no interactions involving grammaticality in the early (AN) window. However, for the later (P600) window, we did find a significant three-way interaction between Grammaticality, Position and ROI. This interaction arose because the topography of the Grammaticality effect was different for medial versus final positions. Specifically, while central and parietal ROIs showed comparable P600 effects regardless of position, the P600 at frontal sites was larger for the final position compared to the medial position. 
According to Rugg and Coles (1995), such quantitative differences in the ERP effects suggest that more neural structures were activated during the processing of the stimuli.
These findings thus provide support for the hypothesis that perceptual salience due to utterance position modulates listeners' sensitivity to S-V agreement violations during on-line speech comprehension. The findings are thus broadly in line with the earlier infant perception study by Sundara et al. (2011), where infants showed a difference in looking times to grammatical vs. ungrammatical sentences when the verb and morpheme occurred utterance-finally (e.g., Now he cries vs. *Now he cry) but not when they occurred utterance-medially (He cries now vs. *He cry now). Sundara et al.'s results suggest that the effect of position in our ERP paradigm may be more clear-cut in infants, and possibly young children, than it was in the adults tested here.
However, for type of agreement violation, where the verb-form occurred without −S (errors of omission) or with a superfluous −S (errors of commission), we found no interactions between Verb-form and Grammaticality in either time window. In other words, participants appeared equally sensitive to omission and commission errors. This is, to our knowledge, the first ERP study to directly compare omission and commission errors in the context of S-V agreement (previous studies have either looked at one error type or have collapsed across both error types). Again, it is worth noting that our participants were all adults listening in their first language in a pristine auditory environment. It remains to be determined whether omission and commission errors are equally salient for other populations such as children or second-language learners, or indeed, whether L1 adults show differential sensitivity if they have hearing impairment or if the acoustic environment is more challenging.
Although we did not find the predicted interaction between Verb-form and Grammaticality, we did note a main effect of Verb-form for both the AN and the P600. That is, irrespective of Grammaticality, brain responses differed depending on the presence or absence of the −s suffix. This is an important finding from a methodological point of view, demonstrating the need to differentiate between ERP effects that reflect sensitivity to grammatical violation and those reflecting differences in the acoustic properties of the stimulus. As discussed above, a balanced design (cf. Steinhauer and Drury, 2012) is optimal for investigating the overall effect of S-V agreement, but it does not allow for a more fine-grained analysis that disentangles grammaticality and type of agreement violation. Previous studies investigating either omission or commission errors (see Table 1) have taken the opposite approach, keeping the grammatical context constant whilst manipulating the Verb-form. This was also the approach we took in our initial analyses (see Supplementary Analysis). However, it fails to disentangle grammatical and acoustic effects on the ERP. Fortunately, the balanced design of our study allowed us to reframe the analysis, contrasting the response to the same verb form in different grammatical contexts, and treating Grammaticality and Verb-form as orthogonal as opposed to confounded factors.
Implications for the Interpretation of the LAN and P600 Effects
This study does not allow us to resolve the debate on the processes underlying sentence comprehension. However, it is worth considering how our findings might be incorporated into existing theoretical accounts of the functional interpretation of the LAN/P600 components. As we discussed in the Introduction, the functional interpretation of these ERP components has been challenged by reports from agreement processing studies where the realization of these components, especially the LAN, has been shown to vary (for discussion, see Tanner and Van Hell, 2014; Tanner, 2015; Dröge et al., 2016). We argued that these inconsistencies may in part be due to confounding influences on the ERP effects during the LAN/AN time window. Importantly, our analysis collapsing across all conditions revealed both AN and P600 effects, indicating that listeners detected the morphosyntactic violation and engaged in syntactic re-analysis. The comparison of conditions we used represents what Steinhauer and Drury (2012) have referred to as a "balanced" design, with the same noun and verb forms occurring equally across conditions such that all confounding factors average out. Notably, the two other studies adopting a balanced design (De Vincenzi et al., 2003; Hasting and Kotz, 2008) also reported a biphasic LAN/P600 response, indicating that the LAN is a robust response to agreement violations if confounding factors are eliminated.
What is interesting, however, is that when we further explored effects of perceptual salience, we observed that utterance position effects only influenced S-V agreement processing in the later (P600) time window and not in the earlier (AN) window. This is arguably consistent with the modular model, otherwise known as the serial/syntactic-first view (Friederici, 2002). According to this view, syntactic and non-syntactic information interact at a later stage of sentence re-analysis, rather than during the assignment of thematic roles. As a result, P600 effects may vary depending on the non-syntactic information available during sentence re-analysis, whereas the LAN/AN is not expected to vary. The alternative interactive models would predict that effects of perceptual salience should affect both the stage of morpho-syntactic processing and that of syntactic re-analysis, given that information about the perceptual salience of the violation is available at every stage of processing (Bornkessel and Schlesewsky, 2006; Pickering and Garrod, 2013). Since we did not observe any significant effects of perceptual salience in the early time window, our data appear to support Friederici's (2002) neurocognitive model of sentence comprehension.
Overall, our results are in line with studies that have reported gradient P600 effects as a result of different agreement-violation manipulations (e.g., Coulson et al., 1998; Nevins et al., 2007). These studies also suggest that the salience of the agreement violation (e.g., due to type of morphological feature) influences sentence processing. The difference is that, unlike the previous studies, the present study explored the effects of auditory perceptual salience during S-V agreement processing. Our study is therefore the first to show that relative perceptual salience, due to utterance position effects, interacts with syntactic processing during on-line processing of S-V agreement violations and that this interaction happens during the later stage of syntactic re-analysis.
CONCLUSIONS
Studying language-related ERPs in the auditory modality is more ecologically valid for understanding the factors and processes that underlie speech comprehension. However, it raises a number of issues and brings several challenges that are not present with visual presentation. In this study, we explored the possibility that perceptual salience related to utterance position (medial vs. final) and type of agreement violation (errors of omission vs. commission) influences the computation of S-V agreement violations during speech comprehension. We found significant differences in the ERPs of native-speaking adults for violations occurring in utterance-medial versus utterance-final positions but did not find any significant differences for errors of omission versus errors of commission. We also showed that balanced experimental designs are important, especially in auditory ERP studies where grammaticality effects may be confounded with acoustic differences in the stimuli. The current findings therefore highlight the importance of deconfounding grammaticality effects on ERPs from acoustic and prosodic differences in the stimuli, as this has implications for the interpretation of the ERP components associated with morphosyntactic processing. The methodological advances outlined in this paper will be critical in future studies investigating other populations in which perceptual effects might be expected to have more of an impact on agreement processing.
AUTHOR CONTRIBUTIONS
SD: designed the experiment, collected and analyzed the data, and wrote the paper. CK: substantial contribution to data analysis, feedback on content, and feedback on the overall paper for submission. VP: substantial contribution to experimental design and analysis, and feedback on the overall paper for submission. JB: substantial feedback on experimental design, data analysis, and content. KD: substantial contribution to the design of the experiment and the theoretical issues addressed in the study, and feedback on the overall paper for submission.
Modeling the Effects of Cell Cycle M-phase Transcriptional Inhibition on Circadian Oscillation
Circadian clocks are endogenous time-keeping systems that temporally organize biological processes. Gating of cell cycle events by a circadian clock is a universal observation that is currently considered a mechanism serving to protect DNA from diurnal exposure to ultraviolet radiation or other mutagens. In this study, we put forward another possibility: that such gating helps to insulate the circadian clock from perturbations induced by transcriptional inhibition during the M phase of the cell cycle. We introduced a periodic pulse of transcriptional inhibition into a previously published mammalian circadian model and simulated the behavior of the modified model under both constant darkness and light–dark cycle conditions. The simulation results under constant darkness indicated that periodic transcriptional inhibition could entrain/lock the circadian clock just as a light–dark cycle does. At equilibrium states, a transcriptional inhibition pulse of certain periods was always locked close to certain circadian phases where inhibition on Per and Bmal1 mRNA synthesis was most balanced. In a light–dark cycle condition, inhibitions imposed at different parts of a circadian period induced different degrees of perturbation to the circadian clock. When imposed at the middle- or late-night phase, the transcriptional inhibition cycle induced the least perturbations to the circadian clock. The late-night time window of least perturbation overlapped with the experimentally observed time window, where mitosis is most frequent. This supports our hypothesis that the circadian clock gates the cell cycle M phase to certain circadian phases to minimize perturbations induced by the latter. This study reveals the hidden effects of the cell division cycle on the circadian clock and, together with the current picture of genome stability maintenance by circadian gating of the cell cycle, provides a more comprehensive understanding of the phenomenon of circadian gating of the cell cycle.
Introduction
For organisms living on the surface of the earth or in shallow aquatic biotopes, the ability to adjust their metabolic processes and behaviors according to a 24-hour periodicity, and the synchronization of their internal molecular processes, may provide important evolutionary advantages. Circadian clocks are endogenous timekeeping devices that are responsible for the ~24-hour biochemical rhythms of almost all organisms, ranging from simple unicellular prokaryotes to complex multicellular eukaryotes. Circadian clocks coordinate synchronization between internal biological processes and between environmental cues and internal biological processes.
An endogenous circadian clock consists of single or multiple autoregulatory oscillator(s) composed of interconnected transcriptional feedback loops [1][2][3][4]. These molecular feedback loops contain positive and negative elements. Positive elements activate transcription of the negative elements, while negative elements inhibit the positive elements. This regulatory regime between positive and negative elements causes oscillatory fluctuation of the concentrations of both components. Recent years have seen great advances in deciphering the molecular components and concomitant regulatory logic of circadian controlling systems in at least five model systems: the cyanobacterium Synechococcus elongatus, the filamentous fungus Neurospora crassa, the fruitfly Drosophila melanogaster, plants and mammals [5]. One important feature of the circadian clock is that it is flexible in response to environmental and physiological changes and can be entrained or reset by many environmental factors such as light, food cues and many other physiological chemical factors [6][7][8][9]. Chemicals with transcriptional inhibition activity have also been reported to be able to entrain the circadian clock [10]. With this flexibility, circadian clocks can easily adapt to environmental conditions and reconcile and coordinate various physiological processes.
The cell cycle is another fundamental clock-like periodic biological process for which interesting molecular details have been elucidated. At the molecular level, a regulatory scenario similar to the circadian clock is observed, with transcriptional and translational feedback loops underlying the cell cycle engine mechanism. The phenomena of coupling between the cell cycle and the circadian cycle were observed and investigated over 40 years ago [11,12]. In 1964, Edmunds et al. found that the autotrophic Euglena gracilis Klebs, grown on defined medium with a regime of 14 hours of light and 10 hours of darkness, doubles its cell number every 24 hours, dividing synchronously during the dark period [13]. This observation was subsequently further confirmed by Edmunds' group [12,14,15]. Such circadian-phase-specific distribution of the cell cycle phases of DNA synthesis or mitosis was also observed in mammals both in vivo and in vitro [16] and even in tumor cells [17]. In the last few decades, this phenomenon was also observed in many other organisms [18,19]. These observations were all interpreted as gating of specific events of cell division by a circadian clock [11,[20][21][22].
This prompts two questions. Why is there widespread gating of the cell cycle by a circadian clock mechanism in most organisms? And is there any reciprocal "gating" effect of the cell cycle on the circadian clock? As yet, there is no clear answer to this second question. However, recent findings by Nagoshi demonstrate that cell division can indeed influence circadian period length [23], although it is not clear whether this effect on circadian period length is a gating effect on the circadian clock. Regarding the first question, the current opinion emphasizes the role of the circadian clock in genome stability maintenance [24]. In order to obtain meaningful answers to these questions, one has to take a closer look at the molecular mechanisms of the circadian clock and the cell cycle engine. Because circadian rhythms involve complex transcriptional feedback loops, unperturbed transcriptional regulation of clock genes is critical for the stability of circadian rhythms. This is partially supported by the observation that treatment with the reversible transcription inhibitor 5,6-dichloro-1-beta-D-ribofuranosylbenzimidazole alters both circadian phases and periods in the isolated eye of Aplysia [10]. During cell cycle progression, transcriptional regulation continuously changes. The most prominent changes occur at M-phase, when the chromosomes condense into compact structures. Most factors necessary for active gene expression are inaccessible to their binding sites on DNA and cells undergo global transcriptional inhibition. In proliferating cells, this cell cycle-dependent transcriptional regulation occurs simultaneously with the transcriptional programs of the circadian regulatory machinery and, thus, transcriptional regulation events of these two molecular processes very possibly interact with each other.
In this way, the two periodic molecular clock processes may interlock, especially during the global transcriptional inhibition during M-phase, which could potentially disturb the transcriptional feedback loops of the circadian clock machinery. With this possibility in mind, we reasoned that gating of the cell division cycle might help to minimize or eliminate potential disturbance of the transcriptional feedback loops of the circadian rhythm machinery.
It is not easy to experimentally study the cell cycle mediated effects of transcription inhibition on the circadian clock. It is, however, feasible to investigate this problem with mathematical modeling. A number of modeling approaches have already been successfully employed to individually study circadian clocks and the cell cycle [1,[25][26][27][28]. Modeling can not only reveal the underlying intrinsic molecular design principles of circadian clocks and the cell cycle machinery, but also help to predict and identify unknown components and regulatory principles. For example, using mathematical modeling approaches, Locke and colleagues predicted the presence of a new regulatory loop in the plant circadian clock system, which was supported by experimental results [29].
In this study, we investigate the hypothetical effects of global transcription inhibition in the cell cycle M phase on the properties of the mammalian circadian clock and explore the implications of this effect for circadian gating of the cell cycle. Our simulation results show that transcriptional inhibition can entrain the circadian clock and that, at equilibrium entrainment, transcriptional inhibition pulses are always located at certain circadian phases, where they minimize inhibition-induced circadian perturbation.
Entrainment of Circadian Period by Transcriptional Inhibition at Constant Darkness Condition (DD Condition)
Author Summary
The circadian clock and the cell cycle are two important biological processes that are essential for nearly all eukaryotes. The circadian clock governs 24-hour periodic molecular processes and physiological behaviors across day and night, while the cell cycle controls the cell division process. It has been widely observed that cell division does not occur randomly across day and night but is instead normally confined to specific times. These observations suggest that cell cycle events are gated by the circadian clock. Regarding the biological benefit and rationale for this intriguing gating phenomenon, it has been postulated that circadian gating helps to maintain genome stability by confining radiation-sensitive cell cycle phases to the night. Bearing in mind that global transcriptional inhibition occurs at cell division and that transcriptional inhibition shifts circadian phases and periods, we postulate that confining cell division to specific circadian times benefits the circadian clock by removing or minimizing the side effects of cell division on the circadian clock. Our computational simulation results show that periodic transcriptional inhibition can perturb the circadian clock by altering circadian phases and periods, and that the magnitude of the perturbation is clearly circadian-phase dependent. Specifically, transcriptional inhibition initiated at certain circadian phases induced minimal perturbation of the circadian clock. These results support our postulation. They point to the importance of the effect of cell division on the circadian clock in the interaction between the circadian clock and the cell cycle, and suggest that this effect should be considered together with other factors when exploring circadian-cell cycle interaction, especially the phenomenon of circadian gating of the cell cycle.
Entrainment of a circadian cycle to light is a well-established biological observation. Light-induced transcriptional alteration or protein degradation contributes to such entrainment. To assess whether M-phase transcriptional inhibition can also serve as an entrainment cue for the circadian clock, we numerically simulated a mammalian circadian model modified from the model published by Goldbeter et al. [30] by incorporating periodic transcriptional inhibition (henceforth the "coupled model"), using a fourth/fifth-order Runge-Kutta method. In the coupled model, the cell cycle M-phase was mimicked by periodic transcriptional inhibition of clock genes. With this modification, the maximum transcription rates of clock genes fluctuate according to a square wave (Figure 1). The trough phase of the square wave represents M phase, where transcriptional activity drops to zero, while the peak phase represents the other cell cycle phases, where transcription proceeds unchanged. The cell cycle period was varied from 10 to 50 hours in steps of one hour, which practically covers the spectrum of mammalian cell cycle periods. Figure 2 gives an overview of the equilibrium circadian periods of the coupled system. When cells divide with a period close to 23.85 hours, the intrinsic period of the original mammalian circadian model from Goldbeter et al., the equilibrium period of the coupled system is constant and equal to the imposed cell cycle period, regardless of the circadian phase of the initiation of the M-phase transcriptional inhibition. This clearly indicates that entrainment occurs. Interestingly, such entrainment also occurred at a cell cycle period of 11 hours, approximately one half of 23.85 hours, and at periods of about 48 hours (46, 47 and 48 hours in Figure 2), roughly twice the 23.85-hour period. At other cell cycle periods, entrainment occurred irregularly and was strictly dependent on the phase of the circadian rhythm at which transcriptional inhibition was initiated (data not shown); this latter case can be referred to as conditional entrainment. Although we did not extend our simulations to cycle periods longer than 50 or shorter than 11 hours, we think the extrapolation is reasonable.
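The gating scheme described above can be sketched with a toy model. The snippet below is only an illustration under stated assumptions: a three-variable Goodwin-type negative-feedback oscillator stands in for the 16-equation clock model, its maximum transcription rate is gated by a square wave with a 30-minute trough mimicking M-phase inhibition, and integration uses a fixed-step classical Runge-Kutta scheme rather than the adaptive fourth/fifth-order method used in the paper. All parameter values are illustrative assumptions, not taken from the model.

```python
# Sketch: a Goodwin-type oscillator (stand-in for the 16-equation clock model)
# whose maximum transcription rate is periodically zeroed by an M-phase-like
# pulse. Parameter values are illustrative assumptions, not from the paper.

def inhibition(t, cell_period, pulse=0.5, phase=0.0):
    """Square-wave gate: 0 during the M-phase pulse, 1 otherwise."""
    return 0.0 if (t - phase) % cell_period < pulse else 1.0

def rhs(t, y, cell_period):
    x, yv, z = y
    v = inhibition(t, cell_period)          # gated maximum transcription rate
    dx = v * 1.0 / (1.0 + z**9) - 0.1 * x   # transcription with negative feedback
    dy = 0.1 * x - 0.1 * yv                 # translation / decay
    dz = 0.1 * yv - 0.1 * z                 # nuclear repressor / decay
    return (dx, dy, dz)

def rk4(f, y0, t0, t1, h, *args):
    """Fixed-step classical fourth-order Runge-Kutta integration."""
    t, y, out = t0, list(y0), []
    while t < t1:
        k1 = f(t, y, *args)
        k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)], *args)
        k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)], *args)
        k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)], *args)
        y = [yi + h/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
        out.append((t, tuple(y)))
    return out

# Integrate 200 h of the gated system with a 24 h cell cycle.
traj = rk4(rhs, (0.1, 0.1, 0.1), 0.0, 200.0, 0.05, 24.0)
```

In the full study, the equilibrium period of `traj` would be measured for each cell cycle period from 10 to 50 h and compared to the imposed period to detect entrainment.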
Next, we assessed the distribution of the cell cycle M-phase (the transcriptional inhibition pulse) over the circadian phase of the coupled system at equilibrium entrainment. To this end, the circadian phases at which inhibition pulses occurred were determined at equilibrium for every simulation and plotted against the cell cycle periods. As shown in Figure 3, patterns similar to those in Figure 2 emerge. At cell cycle periods close to half of 24 h, 24 h, or twice 24 h, where period entrainment occurs, inhibition pulses were also entrained to specific circadian phases. At other cell cycle periods, no such phase entrainment could be detected. Figure 4 shows the details of the simulation results for cell cycle periods of 18, 22, 23, 24 and 25 hours, where entrainment occurred at periods of 22, 23 and 24 hours. For the 22-hour cell cycle period, the circadian period was strictly entrained to 22 hours; for no circadian phase did the standard deviation of the circadian period exceed 0.1 h (data not shown). The inhibition pulse occurred at a single circadian phase close to the peak of the Per mRNA curve, which is defined as CT0. Similar strict entrainment was also observed at a period of 24 hours. In this case, the circadian period was entrained to 24 hours and the inhibition pulse occurred at a single circadian phase close to CT13. There is a subtle difference between the 23 h period and the 22 and 24 h periods. The circadian cycle at the 23 h period was still entrained to 23 hours, but equilibrium inhibition pulses occurred at two circadian phases, one close to CT0 and another close to CT13, corresponding to the entrainment phases of the 22 and 24 hour periods, respectively.
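Mapping each equilibrium inhibition pulse onto a circadian phase relative to the Per mRNA peak reduces to a modular-arithmetic computation; the sketch below uses made-up pulse times and an assumed peak time purely for illustration.

```python
# Sketch: locating the circadian phase at which inhibition pulses land,
# given pulse onset times and an entrained circadian period. The pulse
# times and reference peak time below are illustrative assumptions.

def pulse_phases(pulse_times, period, ct0=0.0):
    """Phase of each pulse, in hours after the Per mRNA peak (CT0)."""
    return [round((t - ct0) % period, 3) for t in pulse_times]

# Pulses every 22 h against a clock entrained to a 22 h period:
phases = pulse_phases([10.0, 32.0, 54.0, 76.0], period=22.0, ct0=10.0)
# All pulses map to the same circadian phase, i.e. phase locking.
```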
Clock Gene mRNA Synthesis Rate Curves
If inhibition occurs at circadian phases where clock gene mRNAs are actively synthesized, circadian rhythms will possibly be perturbed. However, if inhibition occurs at circadian phases either without clock gene mRNA expression or with balanced synthesis of two antagonistic genes, there will be no or minimal effect on the circadian clock. As shown in Figure 5, the synthesis rate curves of the two mRNA molecules intersect at two points across the circadian period. These two intersection points are close to the two locking circadian phases at which inhibition pulses occurred at equilibrium, as shown in Figure 4. Since the syntheses of the Per and Bmal1 mRNAs oscillate in anti-phase, transcriptional inhibition at any point other than these two intersection points will lead to unbalanced inhibition (the less one gene is inhibited, the more the other is), thus resulting in larger system perturbations. On the other hand, inhibition at these two points affects both molecules equally and thus perturbs the circadian clock least. This would explain why entrainment of the circadian clock by the cell division cycle always occurs at these two phase points.
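The balanced-inhibition argument can be illustrated numerically. In the toy sketch below, two anti-phasic cosine curves stand in for the Per and Bmal1 synthesis rate curves of Figure 5 (they are assumptions, not the model's actual curves), and their crossings are located by a sign-change search.

```python
import math

# Toy anti-phasic synthesis-rate curves, illustrative stand-ins for the
# Per and Bmal1 mRNA synthesis rates in Figure 5 (not the model's curves).
T = 24.0

def per_rate(t):
    return 1.0 + math.cos(2 * math.pi * t / T)

def bmal_rate(t):
    return 1.0 - math.cos(2 * math.pi * t / T)   # oscillates in anti-phase

def crossings(f, g, t0, t1, n=24000):
    """Phases where the two rate curves intersect (sign change of f - g)."""
    ts = [t0 + (t1 - t0) * i / n for i in range(n + 1)]
    d = [f(t) - g(t) for t in ts]
    return [ts[i] for i in range(n) if d[i] == 0 or d[i] * d[i + 1] < 0]

pts = crossings(per_rate, bmal_rate, 0.0, T)
# For pure anti-phasic cosines the curves cross at T/4 and 3T/4: two phases
# half a circadian period apart, mirroring the two equilibrium locking
# phases (near CT0 and CT13) found in the simulation.
```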
Differences between Transcriptional Inhibition Induced Perturbations at Different Phases of a Light-Dark Cycle (LD Condition)
Our simulations so far studied the effect of M-phase transcriptional inhibition under the DD condition. In reality, the light cycle and the cell cycle always influence the circadian cycle simultaneously. Furthermore, experiments studying circadian entrainment of cell cycle phases are all conducted under light-dark cycle conditions. To directly compare experimental results with our simulations, we had to introduce an LD cycle into our model. Our working hypothesis is that entrainment of cell cycle phases, especially of the M-phase, to certain circadian phases serves to minimize the circadian perturbation induced by cell cycle progression, in particular by M-phase global transcriptional inhibition. Our objective was to determine whether, in the presence of an LD cycle, one or more circadian phases can be identified at which the imposition of transient transcriptional inhibition does not significantly alter the circadian cycle. To this end, we conducted simulations with a model incorporating the effects of both a light-dark cycle and a transcriptional inhibition cycle. There are three ways to conduct such a simulation: the two effects can be introduced simultaneously, or one can be introduced before the other. Since mammals normally live under light-dark cycle conditions, we assume that the light cycle is a factor intrinsic to the mammalian circadian clock and that an LD cycle is the background condition for other molecular processes. Thus, we first introduced a light cycle into the model, and the transcriptional inhibition cycle was introduced after the system had reached a new equilibrium state. Since human and mouse cells in vivo normally proliferate with a periodicity of 24 h or longer, we began with a 24 h transcriptional inhibition cycle. The results show that, as under the DD condition, the transcriptional inhibition cycle altered the phase and period of the circadian clock. The magnitude of change depends on the phase of the circadian cycle at which transcriptional inhibition is imposed.
Transcriptional inhibition initiated at some circadian phases induced large changes in the system, which took a long time to relax into a new equilibrium state; in these cases, the system normally did not return to the previous equilibrium state. Imposing transcriptional inhibition at certain other circadian phases induced relatively small changes, from which the system rapidly returned to the previous equilibrium. At still other circadian phases, transcriptional inhibition induced no system changes at all. Some of these results are shown in Figure 6. At circadian phases close to 14.5 and 19.5 (phase 0 corresponds to the onset of light, CT0), little perturbation was induced by transcriptional inhibition (middle and bottom left panels of Figure 6), while at other phases, larger deviations were observed (right panels of Figure 6). At phase 1, the system simply transits into quasi-periodicity (top left panel of Figure 6). When simulations were performed with transcriptional inhibition cycles of periods other than 24 hours, no phases at which transcriptional inhibition induced minimal or no changes could be detected.
We further performed similar simulation studies with the 19-equation mammalian circadian model published by Goldbeter et al. [30] and a Drosophila circadian model published by Ueda et al. [31] to see whether this kind of phase-specific difference also exists in other circadian models. Our results clearly indicate that these models also exhibit phase-specific differences in transcriptional-inhibition-induced perturbation, although the exact phases at which transcriptional inhibition induced the least perturbation in the Drosophila model differ from those in the two mammalian models (see Figure S1 and Figure S2).
Noise Has Little Effect on the Entrainment of the Circadian Clock by Cell Division
It has been demonstrated that circadian systems are robust to molecular noise and that entrainment of the circadian clock by light cycles can occur in the presence of molecular noise [32,33]. To study the effect of noise on the entrainment of the circadian clock by transcriptional inhibition cycles, noise terms were introduced into the differential equations of the mammalian circadian model, and system trajectories were then simulated as described above. The simulations showed that the model exhibits robust periodic behavior in the presence of noise (see Figure S3), and this periodic behavior persisted when either light cycles or transcriptional inhibition cycles were imposed on the model (data not shown). Transcriptional inhibition cycles with periods close to 24 hours entrained the model more readily, as reflected by a more focused distribution of the circadian phases at which inhibition pulses occurred and by distributions of entrained circadian periods more tightly centered on the period of the transcriptional inhibition cycle (Figure 7). When 24-hour transcriptional inhibition cycles and light cycles were imposed on the model, inhibition cycles with specific phase relationships to the light cycles induced the least rhythm change in the model system (Figure 8). These results are compatible with the previous results obtained in the absence of noise.
Discussion
Interactions between the circadian clock and the cell cycle engine have been suggested by many experimental observations in various organisms [11,15,20,[34][35][36][37][38][39][40][41]. However, the interaction and communication structure between these two systems remains to be revealed. In this study, we applied a computational simulation approach to this problem. Our results show that global transcriptional inhibition during the cell cycle M-phase can shift the circadian phase and serve as an entrainment cue for the circadian clock.
Experimental observations suggesting an interaction between the circadian clock and the cell cycle are, in most cases, simply the non-random distribution of certain cell cycle events across circadian phases or fluctuations of cell cycle regulatory gene expression with circadian periodicity. Mechanistic details of this interaction are so far not known, yet in some instances specific molecular links have been proposed [35,42]. In 2003, Matsuo et al. provided the first evidence in mouse that Wee1, an important cell cycle regulatory kinase, is under direct control of circadian clock genes and that both Wee1 expression and mitosis follow a circadian rhythm. This report supports the idea that the circadian clock has a direct influence on cell cycle progression. Based on this assumption, Calzone et al. created a coupled model of the circadian clock and the cell cycle (https://hal.ccsd.cnrs.fr/docs/00/07/01/91/PDF/RR-5835.pdf). Since a potential influence of the cell cycle on the circadian clock was not considered in their coupled model, it exhibited a bias towards the effects of the circadian clock on the cell cycle, while any reverse effect was neglected.
To simulate the effects of the cell cycle on the circadian clock, appropriate molecular links have to be identified and the corresponding parameters have to be determined. Compared to the evidence for a dependence of the cell cycle on the circadian clock, evidence for the reverse effect is rare. The most pertinent evidence came from fluorescent imaging of gene expression in individual NIH3T3 mouse fibroblasts with a circadian rhythm [23]. It was found that cell division shifted the period length of the circadian clock. Although there is no direct evidence of the molecular mechanism underlying this phenomenon, the period length change after cell division was attributed to global transcriptional inhibition during cell division. Interestingly, Eskin et al. demonstrated that transient transcriptional inhibition by chemicals can alter circadian phases and periods [10]. Considering these observations and the fact that the most prominent transcriptional change during cell cycle progression is the global transcriptional inhibition associated with cell division, it is reasonable to assume that cell cycle events, in particular cell division at M-phase, exert direct effects on the circadian clock.
We thus focused here on the potential effects of M-phase global transcriptional inhibition on the circadian clock. One has to bear in mind, however, that cell cycle progression involves complicated transcriptional, translational and post-translational regulations. Consistent with Eskin's experimental observation, our simulation study confirmed that transcriptional inhibition changed both phase and period of the circadian clock.
Two interesting points emerge from our computational simulation. The first is the entrainment of the circadian period by the cell cycle. This entrainment occurs only at cell cycle periods close to one half of, equal to, or twice the intrinsic circadian model period of 23.85 h, namely 11, 22, 23, 24, 46, 47 and 48 h. At other cell cycle periods, entrainment rarely occurred. The second point is that when the circadian clock system reaches a new equilibrium state after perturbation by periodic transcriptional inhibition, the circadian phases at which transcriptional inhibition pulses are locked are focused rather than randomly distributed across the whole circadian period. For the 22-hour period, transcriptional inhibition remains at the circadian phase following the Per mRNA peak; for the 23-hour period, two steady-state phases exist, one equivalent to that of the 22-hour period and the other close to the middle between two Per mRNA peaks. For the 24-hour period, a unique steady state appears again, in this case close to the middle between two Per mRNA peaks.
Further inspection showed that these positions are close to the phases where the synthesis rate curves of the Per and Bmal1 mRNAs intersect. At the intersection points, the difference between the synthesis rates of these two molecules is zero, and transcriptional inhibition pulses influence their synthesis to the same extent. According to the accepted mechanism of circadian clock regulation, Per exerts a negative feedback on itself but positively affects Bmal1 expression; similarly, Bmal1 regulates itself negatively but regulates Per positively. This regulatory regime causes these two molecules to oscillate in anti-phase with respect to each other. When transcriptional inhibition is imposed on the circadian system, several different responses occur, depending on the circadian phase at which inhibition happens. At circadian phases where the Bmal1 synthesis rate reaches its maximum and the Per synthesis rate is zero, transcriptional inhibition induces a maximum delay in the accumulation of Bmal1 mRNA but does not affect Per mRNA synthesis; at these phases, transcriptional inhibition causes maximal perturbation of the circadian system. At other phases, transcriptional inhibition delays the accumulation of one of the two mRNAs while accelerating the accumulation of the other. The effects are also quantitatively different, depending on the exact circadian phase of transcriptional inhibition: in some phases, transcriptional inhibition delays Per mRNA accumulation but accelerates Bmal1 mRNA accumulation, while in other phases the reverse is observed. The influence on one mRNA is always accompanied by a simultaneous influence on the other, and the magnitude of the counterbalance is determined by the difference between the synthesis rates of the two molecules at that phase. The more the disturbances are balanced, the less the circadian system is affected by transcriptional inhibition at that circadian phase.
It is obvious that near the intersection points of Figure 5 the influences are more balanced than at all other points, and thus the circadian system is less perturbed by transcriptional inhibition at phases near those points. For stable entrainment of the circadian clock, two conditions must be satisfied: the circadian system must not be drastically perturbed, and the phase shift induced by the entraining cue must equal the difference between the unperturbed period and the entraining cycle period. At phases near the intersection points, the perturbations and phase shifts induced by transcriptional inhibition satisfy these two conditions for steady entrainment, while at other phases they are less likely to be met. We assume that these special characteristics of phases near the intersections may explain why, in most cases of steady entrainment of the circadian clock by periodic transcriptional inhibition, inhibition pulses were, without exception, locked at these unique circadian phases.
Still, at cell cycle periods other than those mentioned above, transcriptional inhibition pulses were also found locked to other phases, e.g., the circadian phase distributions for the 10- and 43-hour periods in Figure 3. We cannot yet explain this complex pattern; further work has to be undertaken to unravel it. In mouse fibroblast cultures, it was found that cell division mainly occurred at three phases with an interval of roughly 8 hours. The reason for this discrepancy between the observations in fibroblasts and our simulation is not clear. It may reflect differences between the endogenous fibroblast circadian clock and the circadian model we used and/or differences between in vitro and in vivo conditions. In the physiological context, a circadian clock is always under the influence of a light-dark cycle. To place our simulation in a more physiological context, we also simulated the cell cycle and circadian clock interaction in the presence of a light-dark cycle by incorporating both a light-dark cycle and the transcriptional inhibition cycle into the mammalian model. Our simulation results revealed two windows in the circadian cycle where transient transcriptional inhibition induced only transient and small alterations of the circadian clock regulatory system. With the beginning of the light cycle taken as the zero reference phase (CT0), one window is close to 15 h and the other close to 19 h, corresponding to the middle and late night, respectively. Although there is, to our knowledge, no experimental evidence in mammals supporting entrainment of the cell cycle M-phase to circadian phases close to the first window in our simulation, evidence from a mouse liver regeneration study indeed revealed entrainment of hepatocyte mitosis to phases close to the second window [42]. There are also reports of a circadian rhythm of the cell cycle M-phase in mouse and human skin and mouth mucosa epithelia [40,43].
According to one of these studies, mitosis occurs mainly at a phase roughly corresponding to the time before sunset [40]. This is in contrast to proliferating hepatocytes and to the results of our simulation. Considering that cells of different tissue origin display distinct physiological circadian rhythms, the differences in the timing of the cell cycle M-phase between skin and mucosa epithelia on the one hand and hepatocytes and our simulation on the other are not surprising. We performed similar simulations with the 19-equation mammalian circadian model from Goldbeter et al.; the results are similar to those of the 16-equation model. More interestingly, simulations with a Drosophila circadian model also revealed minimal perturbation at certain circadian phases, indicating that circadian-phase-specific minimal perturbation by transcriptional inhibition is general to circadian systems from different species. The partial overlap between the simulated circadian phases with the smallest impact of transcriptional inhibition on the circadian clock and the experimentally observed circadian phases where mitosis most frequently occurs suggests that the principle of minimal circadian perturbation might, at least partially, contribute to the phenomenon of circadian entrainment of cell cycle mitosis in mammals. We also performed simulations with transcriptional inhibition cycle periods other than 24 hours. In these cases, steady entrainment could not be detected. This means that cell cycles with periods different from the circadian period cannot result in steady entrainment and have to be gated by the circadian clock to achieve stable coupling between the circadian clock and the cell cycle.
The current view of circadian entrainment of the cell cycle is that the circadian clock helps to maintain genome stability by timing mutation-sensitive cell cycle phases to circadian phases with the least exposure to mutagens. Our simulation suggests that circadian entrainment of the cell cycle could also help to maintain circadian clock stability by minimizing cell-division-induced perturbation of the circadian clock. These two notions are not mutually exclusive; they complement each other and in combination provide a fuller picture of an elusive phenomenon.
In summary, highly regulated transcriptional processes are critical for normal functioning of the circadian clock. Global transcriptional inhibition during the M-phase of the cell cycle might perturb normal progression of the circadian clock, and there might be circadian windows where transcriptional inhibition has little influence on normal circadian progression. One could therefore expect to find a molecular mechanism, or mechanisms, that place the M-phase of the cell cycle in such windows to minimize or eliminate cell-cycle-induced perturbation. Our study is the first attempt to tackle this problem by computational simulation, and our results support this hypothesis.
Circadian Model
The circadian model used in this study is the mammalian model published by Leloup and Goldbeter in 2003 [30]. There are two versions of this model: one is composed of 16 differential equations, and the other of the same 16 equations plus three additional equations. The 16 shared equations describe the dynamics of the Per, Cry, and Bmal1 mRNAs and their corresponding proteins; the three additional equations describe the dynamics of the Rev-erbalpha mRNA (NM_145434.3) and protein. The two versions gave similar simulation results. These models reflect mRNA transcriptional regulation, protein phosphorylation regulation, and compartmental protein transport dynamics (see Figure S4 for details), and their dynamic behavior is generally in agreement with characteristic features of mammalian circadian clocks. For details of the equations and descriptions, we refer the reader to the original publication by Leloup and Goldbeter [30]. A Matlab ODE file for the modified model is also provided (see Text S1).
Incorporation of the Effect of M-Phase Transcriptional Inhibition into the Circadian Model
We did most of our simulations with the 16-equation model. To incorporate the effects of cell cycle M-phase global transcriptional inhibition on the circadian clock, we modified Leloup's mammalian circadian model by letting the parameters v_sP, v_sC and v_sB oscillate between the optimized values of the original model and zero (or other values below the optimum). The oscillation of these parameters reflects the periodic cell cycle M-phase: the period of oscillation mimics the cell cycle period, and the difference between the two oscillating values reflects the degree of M-phase transcriptional inhibition.
Although it is well known that chromosomes are highly condensed and transcription is globally inhibited during M-phase, there are no quantitative experimental results concerning the duration and extent of transcriptional inhibition in M-phase. Because the M-phase of the mammalian cell cycle lasts roughly 1-2 hours and is relatively constant compared to other cell cycle phases, we assume that the variation of these three parameters follows a square wave with a trough of relatively constant length (30 minutes) corresponding to the M-phase transcriptional inhibition pulse. We assume that transcriptional inhibition of circadian clock genes occurs at least during the middle part of M-phase. Based on this assumption, a transcriptional inhibition duration of 30-60 minutes (roughly half the length of the mammalian cell cycle M-phase) is introduced into the model.
To implement this modification, we introduced a new parameter v into the original model, whose value is governed by the following formula:

v(t) = square(period, t + p),

in which square is a square-wave function that equals 0 when (t + p) modulo period falls within the inhibition pulse and 1 otherwise, period denotes the period of transcriptional inhibition, representing the cell cycle period, t denotes time, and p denotes the circadian phase with which we can control where the inhibition pulse begins.
To simulate the oscillation of the Per, Cry and Bmal1 mRNAs, v_sP, v_sC and v_sB are each multiplied by the parameter v, so that the synthesis term of each of the three mRNA equations takes the form

dM_i/dt = v * v_si * f_i(regulators) - degradation terms, for i in {P, C, B},

where f_i denotes the Hill-type transcriptional regulation function of the original model. In this way, the decline of v_sP, v_sC and v_sB mimics transcriptional inhibition, and the period of the variation reflects the cell cycle period. We treat the two terms transcriptional inhibition and cell cycle M-phase global inhibition as interchangeable in this study.
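A minimal sketch of this gating is given below. The Hill-type synthesis term and all numeric values are illustrative assumptions; only the pattern of the gate v multiplying the maximum transcription rate v_sP follows the modification described.

```python
# Sketch of a gated Per mRNA equation. The Hill-type synthesis term and all
# numeric parameter values are illustrative assumptions; only the gating
# pattern (v multiplying the maximum rate v_sP) follows the modification.

def v_gate(t, period, p=0.0, trough=0.5):
    """1 outside the M-phase pulse, 0 during the 30-min (0.5 h) trough."""
    return 0.0 if (t - p) % period < trough else 1.0

def dMP_dt(t, MP, BN, period,
           v_sP=1.5, K_AP=0.7, n=4, v_mP=1.1, K_mP=0.31, k_dmp=0.01):
    """Gated synthesis (activated by nuclear BMAL1, BN) minus degradation."""
    synthesis = v_gate(t, period) * v_sP * BN**n / (K_AP**n + BN**n)
    degradation = v_mP * MP / (K_mP + MP) + k_dmp * MP
    return synthesis - degradation
```

During the trough the synthesis term vanishes and only degradation remains, which is exactly the transient imbalance the text attributes to M-phase inhibition.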
Introduction of Noise into the Circadian Model
To study the effect of noise on the entrainment properties of periodic transcriptional inhibition, we introduced a white-noise term into the differential equations of the original model:

dx/dt = f(x, t) + dW,

where dW = d * G, with d controlling the magnitude of the noise and G representing a Gaussian process. Noise terms were added to one or several equations to find a proper way to introduce noise into the model. In this study, we added a noise term only to the third equation, governing the dynamics of the Bmal1 mRNA concentration, which functions as an important regulatory factor of the circadian clock; that equation thus becomes dM_B/dt = f_B(x, t) + dW.
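Numerically, such a noise term can be sketched with an Euler-Maruyama step, in which the Gaussian increment is scaled by the square root of the step size (a standard choice we assume here); the one-variable drift below is an illustrative stand-in, not the Bmal1 mRNA equation.

```python
import random

# Euler-Maruyama sketch of adding a white-noise term d*G to one equation of
# an ODE system, as done here for the Bmal1 mRNA equation. The one-variable
# drift f used below is an illustrative stand-in, not the model equation.

def euler_maruyama(f, x0, t0, t1, h, d, noisy=True, seed=0):
    rng = random.Random(seed)
    t, x, out = t0, x0, [x0]
    while t < t1 - 1e-12:
        # Gaussian increment scaled by sqrt(h): standard Euler-Maruyama.
        dW = d * rng.gauss(0.0, 1.0) * h**0.5 if noisy else 0.0
        x = x + h * f(t, x) + dW      # deterministic drift + noise
        t += h
        out.append(x)
    return out

# Stand-in drift: relaxation toward 1.0 (not the Bmal1 mRNA dynamics).
path = euler_maruyama(lambda t, x: 1.0 - x, 0.0, 0.0, 50.0, 0.01, d=0.05)
```

With `noisy=False` the same routine reduces to a plain Euler integration, which makes it easy to compare noisy and noise-free trajectories as in Figure S3.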
Lists of Genes and Proteins Included in the Mammalian Circadian Models
Although the mammalian circadian models used in this study reflect general properties of the mammalian circadian clock, the parameters were mainly estimated from mouse experimental data, so we list only mouse RefSeq accession numbers for the genes and proteins. The three Per genes and proteins are collectively represented as one Per gene and one Per protein in the model, and the two Cry genes and proteins are treated likewise.
Genes

Figure S1. Transcriptional-inhibition-induced changes under LD cycle conditions in the Goldbeter mammalian circadian model with 19 equations. The LD cycle is first introduced into the circadian model, and the resulting model is simulated. When the model reaches equilibrium, transcriptional inhibition is then introduced. The system change after imposition of inhibition is depicted by the difference in Per mRNA level at light onset between pre- and post-inhibition imposition. "+" denotes the Per mRNA level at light onset before inhibition imposition; "." denotes the Per mRNA level at light onset after inhibition perturbation.

Figure S4. Molecular processes included in the mammalian circadian models used in this study (adapted from [30]). Ovals represent proteins and rectangles represent mRNA transcription. Black elements denote protein degradation. cyto(-) and nuc(-) represent cytoplasmic and nuclear proteins, respectively; -P denotes protein phosphorylation. Lines with arrows indicate activation (phosphorylation/dephosphorylation activation or transcriptional activation), while lines with bars indicate inhibition. The green molecules at the upper-left corner are included only in the 19-equation model, while the light blue molecules are included in both mammalian models.
Three siblings with familial non-medullary thyroid carcinoma: a case series
Background: In 2015, thyroid carcinoma affected approximately 63,000 people in the USA, yet it remains one of the most treatable cancers. It is mainly classified into medullary and non-medullary types. Conventionally, medullary carcinoma was associated with heritability, but increasing reports have now begun to associate non-medullary thyroid carcinoma with a genetic predisposition as well. It is important to identify a possible familial association in patients diagnosed with non-medullary thyroid carcinoma because these cancers behave more destructively than would otherwise be expected; such patients should therefore be managed aggressively, and screening of close relatives might be justified. Our case series presents a diagnosis of familial, non-syndromic, non-medullary carcinoma of the thyroid gland in three brothers diagnosed over a span of 6 years. Case presentations: We report the history, signs and symptoms, laboratory results, imaging, and thyroid histopathology of three Pakistani brothers from Sindh, aged 58, 55, and 52 years, with non-medullary thyroid carcinoma. Only Patients 1 and 3 had active complaints (of swelling and pruritus, respectively), whereas Patient 2 was asymptomatic. Patients 2 and 3 had advanced disease at presentation, with lymph node metastasis. All patients underwent a total thyroidectomy, with Patients 2 and 3 requiring a neck dissection as well. None of the patients had previous exposure to radiation. Their mother had died from adrenal carcinoma but also had a swelling in the front of her neck which was never investigated. All patients remained stable at follow-up. Conclusions: Non-medullary thyroid carcinoma is classically considered a sporadic condition. Our case report emphasizes a high index of suspicion, a detailed family history, and screening of first-degree relatives when evaluating patients with non-medullary thyroid carcinoma to rule out familial cases, which might behave more aggressively.
Background
According to the American Cancer Society, in 2015 the number of people with thyroid cancer in the USA was approximately 63,000, and it caused 2000 deaths [1]. Approximately 1.1 % of people will be diagnosed with it at some point in their life, yet it remains one of the most treatable cancers, with a median 5-year survival rate of 98 % [1]. Thyroid cancer is divided into two chief types: medullary, which arises from parafollicular C cells, and non-medullary, which arises from follicular epithelial cells. Non-medullary thyroid carcinoma (NMTC) includes papillary thyroid carcinoma, follicular thyroid carcinoma, Hürthle cell carcinoma, and anaplastic thyroid carcinoma; these make up 95 % of thyroid malignancies, of which papillary carcinoma is the most common [2]. Although medullary carcinoma is traditionally associated with a genetic predisposition, and a susceptibility gene, RET, has been identified, increasing evidence is now accumulating for the heritability of NMTC as well [2]. Familial NMTC (FNMTC) is defined as two or more first-degree relatives affected by thyroid cancer without another familial syndrome; this familial clustering has been reported in 3.5 to 10.0 % of cases [1,3]. Heritability is usually in the form of syndromes such as familial adenomatous polyposis, Cowden syndrome, and Werner syndrome, where the majority of tumors are in organs other than the thyroid. Non-syndromic FNMTC is a rare entity which most likely follows an autosomal dominant pattern with incomplete penetrance and variable expression [3].
Patient 1
A 58-year-old Muhajir Pakistani man presented to our surgery clinic with a swelling in his neck of 5 days' duration, which he had noticed while shaving. On physical examination he had a left-sided thyroid nodule, approximately 6×4 cm, with no lymphadenopathy. He was advised to have a thyroid function test, a thyroid ultrasound, and fine-needle aspiration (FNA) of the suspicious nodule. Ultrasound-guided FNA of the left lobe of his thyroid showed a follicular lesion. According to the American Thyroid Association (ATA) classification he was classified as an intermediate risk patient. A left lobectomy was planned, but perioperative frozen section examination of the left lobe revealed a follicular carcinoma (Fig. 1a, b); therefore, a total thyroidectomy was performed and the tumor was completely resected. Surprisingly, histopathology of the right lobe of the thyroid specimen showed thyroid parenchyma infiltrated by a neoplastic lesion with a papillary architecture (Fig. 2a, b). The papillary carcinoma measured 2×1.5×1 cm and was 0.2 cm away from the capsule; the follicular carcinoma measured 6×6 cm with no capsular breach. The cancer had a non-aggressive histology and no lymph nodes were involved. Well-formed papillary fronds were identified, with prominent fibrovascular cores; psammoma bodies were also seen. After the surgery, he received 5550 MBq (150 mCi) of radioactive iodine-131 (RAI-131) for remnant thyroid tissue ablation. His postoperative stimulated thyroglobulin level was 19.10 ng/dl (1.6 to 59.9) with a TSH of 39.12 (0.4 to 4.2). At 6-month follow-up, his stimulated thyroglobulin had increased to 56.56 ng/dl (1.6 to 59.9) with a TSH of 94.66 (0.4 to 4.2). An ultrasound of his neck was normal and a whole body scan was negative; therefore, no distant metastasis was present. Considering the above laboratory values, a second dose of 3700 MBq (100 mCi) of iodine-131 was given.
He has been on regular follow-ups for the last 6 years without any evidence of recurrence.
Patient 2
Patient 2 is a younger brother of Patient 1; he is a 55-year-old man from Karachi, Pakistan who underwent a thyroid ultrasound for screening purposes although he was asymptomatic. No abnormality was noted on physical examination. His ultrasound showed an enlarged right lobe, measuring 20.2×24.3×39.5 mm, as compared to the contralateral side. At least three hypoechoic nodules with predominantly solid components were seen in his right lobe, with tiny calcifications present within. The largest nodule measured 20.8×17.0 mm. Ultrasound-guided FNA revealed clusters and groups of follicular cells with architectural atypia, with a few of the cells forming papillary structures. Some nuclear grooving and intranuclear inclusions were seen with a group of Hürthle cells against a background of hemorrhage. He was classified as ATA intermediate risk and underwent total thyroidectomy with central neck dissection at another tertiary care facility. His histopathology report revealed a classic papillary thyroid carcinoma, 5.0 cm in diameter, with minimal extrathyroidal extension (right thyroid lobe). In addition, a papillary microcarcinoma, Hürthle cell variant (0.5 cm), and a follicular adenoma (left thyroid lobe) were reported, with level VI lymph node micrometastasis. However, no capsular invasion was seen. He received postoperative 5550 MBq (150 mCi) RAI 131 for remnant tissue ablation.

Patient 3

Patient 3 is the third affected brother. Considering the strong family history of papillary thyroid carcinoma, he was advised to have a thyroid ultrasound, which showed a multinodular goiter. Fine-needle aspiration cytology (FNAC) revealed papillary carcinoma of the thyroid. A few clinically suspicious lymph nodes were also present bilaterally and he was classified as a high risk patient according to ATA guidelines. He underwent total thyroidectomy with bilateral selective neck dissection: levels II, III, IV and VI.
Histopathology confirmed papillary carcinoma, classic variant, measuring 7×5.5×3 cm, with capsular invasion and lymph node metastasis to levels II, III, IV and VI bilaterally, with no distant metastasis (Fig. 3a, b). He received postoperative 6660 MBq (180 mCi) RAI 131 for remnant thyroid tissue ablation and was started on suppressive thyroid hormone therapy. His follow-up ultrasound at 6 months showed a 14×11 mm heterogeneous area in his right paratracheal region with a few lymph nodes with preserved hila. The largest lymph node was on the right side and measured 15×5 mm. His thyroglobulin level was 46 ng/dl (1.6 to 59.9) whereas a whole body RAI 131 scan was negative for residual disease. He underwent a positron emission tomography (PET) scan which showed a hypermetabolic, 8 mm right level II node with a standardized uptake value (SUV) of 5.2. He therefore underwent a second surgery for residual disease, with right-sided neck dissection, in 2015. Histopathology showed metastatic lymph nodes. He has kept regular follow-ups for 2 years.
Discussion
To the best of our knowledge this is the first case series of familial non-medullary carcinoma of the thyroid reported from Pakistan. Because more than two patients were affected, it is highly unlikely that the NMTC was due to sporadic mutations. No specific causative gene has been identified; therefore, no genetic testing is available for FNMTC, and clinicians have to rely on a strong family history of this variant of thyroid cancer to diagnose familial cases. In our case series, three brothers were affected; Charkes stated that when three or more family members are affected, the probability that this is due to sporadic mutations is less than 6 %; thus, we believe it is highly unlikely that the FNMTC in our patients was due to sporadic mutations [4]. Although some researchers argue that familial clustering could be due to environmental exposure and to bias from more aggressive screening of asymptomatic family members, increasing evidence is now accumulating on the heritability of NMTC. In our study, Patient 1 had a follicular carcinoma in the left lobe and a papillary carcinoma in the right lobe. In addition, Patient 2 had a classic papillary thyroid carcinoma along with a papillary microcarcinoma, Hürthle cell variant, and a follicular adenoma (left thyroid lobe), suggesting similar genetic mutations in the pathogenesis of FNMTC. As yet, the underlying genetic mutation involved in FNMTC has not been identified, although it has been suggested that FNMTC is a polygenic cancer syndrome, as several susceptibility genes and candidate chromosomal loci have been reported [5][6][7].
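Charkes's threshold can be set against a much simpler per-family illustration: assuming each sibling independently carries the ~1.1 % lifetime risk cited in the Background, the chance that three given brothers all develop thyroid cancer sporadically is vanishingly small. This binomial independence model is our illustration only, not Charkes's actual population-level calculation.

```python
from math import comb

def prob_at_least_k(n, k, p):
    """Probability that at least k of n relatives independently
    develop the cancer, assuming each has lifetime risk p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Lifetime thyroid-cancer risk of ~1.1% [1]; three affected brothers.
p_sporadic = prob_at_least_k(3, 3, 0.011)
print(f"P(all 3 of 3 siblings affected by chance) ≈ {p_sporadic:.2e}")
```

The result, on the order of one in a million per family, is consistent with the qualitative argument that sporadic co-occurrence in three brothers is highly unlikely.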
All three of our patients had minimal symptoms but advanced disease at presentation. Lymph node metastasis was seen in Patients 2 and 3, whereas capsular invasion was present in Patient 3 only. This aggressive picture is supported by a meta-analysis by Wang et al., which showed that FNMTC is more aggressive at presentation, with a higher degree of recurrence due to increased multifocality, extrathyroidal invasion, bilateral presentation, and lymph node involvement, and is associated with shorter disease-free survival as compared to sporadic NMTC [8]. It is also associated with anticipation, with more widespread disease at presentation and a worse outcome in later generations when compared to the first generation [9]. The best predictors of prognosis are the number of family members affected and metastasis at presentation, both of which increase mortality [10]. However, one study suggested that if treated early, FNMTC does not decrease the life expectancy of patients [11].
Total thyroidectomy was performed for all patients with additional neck dissection for Patients 2 and 3. Furthermore, all three patients were given RAI 131 . This signifies the role of aggressive treatment in the face of FNMTC. Sippel et al. recommend this approach followed by RAI and thyroid hormone suppression therapy to prevent recurrence and decrease mortality [3].
Conclusions
As specific gene testing is not available, identification of cases of FNMTC relies on a good family history and detailed pedigree analysis. In cases where clinical data suggest the presence of FNMTC, ultrasound should be used to screen close relatives for earlier diagnosis and better outcomes. Since FNMTC is known to be particularly aggressive, patients should be treated with total thyroidectomy and neck dissection and kept under close follow-up with regular evaluations to detect recurrences.
Enhanced Cementation of Co 2+ and Ni 2+ from Sulfate and Chloride Solutions Using Aluminum as an Electron Donor and Conductive Particles as an Electron Pathway
: Cobalt and nickel have become important strategic resources because they are widely used for renewable energy technologies and rechargeable battery production. Cementation, an electrochemical deposition of noble metal ions using a less noble metal as an electron donor, is an important option to recover Co and Ni from dilute aqueous solutions of these metal ions. In this study, cementation experiments for recovering Co 2+ and Ni 2+ from sulfate and chloride solutions (pH = 4) were conducted at 298 K using Al powder as an electron donor, and the effects of additives such as activated carbon (AC), TiO 2 , and SiO 2 powders on the cementation efficiency were investigated. Without additives, cementation efficiencies of Co 2+ and Ni 2+ were almost zero in both sulfate and chloride solutions, mainly because of the presence of an aluminum oxide layer (Al 2 O 3 ) on the Al surface, which inhibits electron transfer from Al to the metal ions. Addition of a nonconductor (SiO 2 ) did not affect the cementation efficiencies of Co 2+ and Ni 2+ using Al as an electron donor, while addition of (semi)conductors such as AC or TiO 2 enhanced the cementation efficiencies significantly. Surface analysis (Auger electron spectroscopy) of the cementation products obtained with the TiO 2 /Al mixture showed that Co and Ni were deposited on TiO 2 particles attached to the Al surface. This result suggests that conductors such as TiO 2 act as an electron pathway from Al to Co 2+ and Ni 2+ , even when an Al oxide layer covers the Al surface.
Introduction
Cementation, an electrochemical deposition of noble metal ions by a less noble metal as an electron donor, is usually applied to remove/recover metal ions from dilute aqueous solutions [1][2][3][4]. The advantages of cementation are (1) recovery of metals in zero-valent form, (2) simple methods, and (3) low-energy consumption [2,5]. The overall cementation reaction is given by Equation (1) [6][7][8]:

nM m+ + mN 0 → nM 0 + mN n+ (1)

The cementation reaction is divided into anodic (Equation (2)) and cathodic (Equation (3)) half-reactions:

Anodic: mN 0 → mN n+ + nm e − (2)

Cathodic: nM m+ + nm e − → nM 0 (3)

The noble metal ions (M m+ ) are deposited spontaneously on the surface of the less noble elemental metal (N 0 ), and the driving force of this reaction is mainly determined by the difference between the standard electrode potentials of the M m+ /M 0 and N n+ /N 0 redox pairs; it increases as the electrode potential of N 0 decreases.
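The sign of the potential difference in the last sentence determines whether cementation can proceed. A minimal sketch, using the Al 3+ /Al value quoted in the next paragraph and approximate literature values for the Co 2+ /Co and Ni 2+ /Ni pairs (the latter two are our assumptions, not taken from this paper):

```python
# Standard electrode potentials (V vs. SHE); the Al value is from the text,
# the Co and Ni values are approximate literature values (assumptions).
E0 = {"Al3+/Al": -1.67, "Co2+/Co": -0.28, "Ni2+/Ni": -0.26}

def driving_force(noble_pair, donor_pair="Al3+/Al"):
    """Cementation is thermodynamically favorable when E0(noble) - E0(donor) > 0."""
    return E0[noble_pair] - E0[donor_pair]

for pair in ("Co2+/Co", "Ni2+/Ni"):
    dE = driving_force(pair)
    print(f"{pair} vs. Al: ΔE = {dE:+.2f} V -> spontaneous: {dE > 0}")
```

The large positive ΔE for both pairs (well over 1 V) is why Al is attractive as an electron donor, kinetics and the oxide layer aside.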
Aluminum (Al) can be considered as a strong reductant (electron donor) used for cementation because of its extremely low standard electrode potential (i.e., E 0 Al3+/Al = -1.67 V vs. standard hydrogen electrode (SHE)) [7,[9][10][11]. The practical application of Al for cementation, however, is limited due to the presence of a dense Al oxide layer (Al 2 O 3 ) on the Al surface, which inhibits electron transfer from Al 0 to metal ions [9,12,13]. When the Al oxide layer is removed from the surface, Al can be used as an electron donor for cementation. To remove the Al oxide layer, however, high temperatures, acid/alkaline solutions, or high concentration of chloride ions are needed [2,5,9,14,15], and these extreme conditions make it difficult to use Al as an electron donor in the practical cementation processes.
Recently, the authors investigated the effects of activated carbon (AC) addition on the efficiency of cementation using Al as an electron donor for recovering gold ions from ammonium thiosulfate solution [16,17], and heavy metal ions (Co 2+ , Ni 2+ , Zn 2+ , and Cd 2+ ) from acidic sulfate and chloride solutions. The results showed that cementation efficiencies of the metal ions were significantly enhanced by the addition of AC even when an insulating Al oxide layer covered the Al surface [16,17]. This "enhanced cementation using the AC/Al-mixture" can be operated under mild conditions; i.e., it does not require extreme operating conditions such as high temperatures or high concentrations of chemical reagents such as acids, bases, or chloride ions. This new method may, therefore, provide a practical way to use Al, one of the strongest reductants (electron donors), for cementation to recover metal ions from dilute solutions.
Although the details of the mechanism of enhanced cementation using the AC/Al-mixture are not yet fully understood, the results of surface analysis of the cementation products have suggested that AC attached to the Al surface acted as an electron pathway from Al to noble metal ions, even in the presence of a surface Al oxide layer [17]. If this is the case and the essential role of AC is simply to serve as an electron pathway, enhanced cementation should occur even when AC is replaced by other (semi)conductors. On the other hand, as AC is a porous material with a very large specific surface area [18], not only the electroconductivity but also the large adsorption capacity of AC for metal ions may play an important role in the enhanced cementation using the AC/Al-mixture. If this is the case, replacing AC with another conductor of low specific surface area would not enhance the cementation using Al as an electron donor.
Cobalt (Co) and nickel (Ni) represent important strategic resources in the world market; their use is rapidly growing for renewable energy technologies and rechargeable battery production, and the importance of developing technologies for recovering and purifying Co and Ni is continuously increasing [19][20][21][22][23][24]. Therefore, this study aims to investigate whether AC could be replaced with other (semi)conductors for the recovery of Co and Ni from sulfate and chloride solutions. Titanium dioxide (TiO 2 ) was selected as a semiconductor because of its nontoxicity, nonreactivity, and high chemical stability, while silicon dioxide (SiO 2 ) was chosen as a nonconductor to clarify the mechanism(s) of the enhanced cementation using the mixture of a conductor and Al [25,26].
In the present study, batch-type cementation experiments were conducted using Al as an electron donor to recover Co 2+ and Ni 2+ from sulfate and chloride solutions and the effects of the addition of AC, TiO 2 , or SiO 2 on the recoveries of these metal ions were investigated. Surface analysis (Auger electron spectroscopy (AES)) for the cementation products were also conducted to elucidate the cementation mechanism.
Cementation Tests
The cementation tests were carried out in a 50 mL Erlenmeyer flask using a thermostat water bath shaker (Cool bath shaker, ML-10F, Taitec Corporation, Saitama, Japan) with 40 mm of shaking amplitude and 120 min −1 of shaking frequency at 25 • C for 24 h. (Note that these parameters were selected based on our preliminary experiments.) Ten milliliters of the prepared solution were added to the flask, then ultrapure nitrogen gas (99.9%) was introduced for 15 min before experiments to maintain an oxygen-free environment. One-tenth gram of Al powder and/or a predetermined amount (0.01, 0.05, 0.1, 0.2, 0.4 g) of additive (i.e., AC, TiO 2 , or SiO 2 ) was added to the solution. Ultrapure nitrogen gas (99.9%) was further introduced to the flask for 5 min, then the flask was tightly capped with a rubber cap and sealed with parafilm, and the experiment was conducted. After 24 h, the suspension was filtered using a syringe-driven membrane filter (pore size: 0.2 µm, LMS Co., Ltd., Tokyo, Japan); the final pH of the filtrate was measured. The filtrate was diluted with 0.1 M HNO 3 , and the concentrations of metal ions were analyzed by inductively coupled plasma atomic emission spectroscopy (ICP-AES, ICPE-9820, Shimadzu Corporation, Kyoto, Japan). The recovery efficiency of Co 2+ and Ni 2+ was calculated based on Equation (4):

Recovery (%) = (C i − C f )/C i × 100 (4)

where C i and C f are the initial and final concentrations of metal ions, respectively.
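The recovery calculation defined by C i and C f above is simple enough to express directly in code; a small helper with illustrative concentrations (the 0.39 mM final value is a made-up example, not a measurement from the paper):

```python
def recovery_efficiency(c_initial, c_final):
    """Recovery (%) per Equation (4): (Ci - Cf) / Ci * 100."""
    if c_initial <= 0:
        raise ValueError("initial concentration must be positive")
    return (c_initial - c_final) / c_initial * 100.0

# Example: 1 mM initially, 0.39 mM left in the filtrate after 24 h
print(round(recovery_efficiency(1.0, 0.39), 1))  # 61.0
```

Any consistent concentration unit works, since the ratio is dimensionless.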
Surface Analysis
The solid products obtained by filtration were washed 5 times with DI water, dried in a vacuum oven at 40 • C for 24 h, and then analyzed by Auger electron spectroscopy (AES) using JAMP-9500F (JEOL Co., Ltd., Tokyo, Japan). The dried residue was mounted on an AES holder using conductive carbon tape. The analysis was conducted under the following conditions: ultrahigh vacuum condition,~1 × 10 −7 Pa; probe energy, 10 kV; and probe current, 19.7 nA. The spectra were analyzed by using Spectra Investigator AES software.
Recovery of Co 2+ and Ni 2+ from Sulfate Solution
Cementation experiments for recovering Co 2+ and Ni 2+ from sulfate solutions (initial pH = 4) were conducted for 24 h using Al powder as an electron donor, and the effects of the dosage of additives (AC, TiO 2 , and SiO 2 ) on the efficiency of Co and Ni recovery were investigated. To assess the adsorption of Co 2+ and Ni 2+ on the additives, experiments without Al were also conducted.
Figures 2a-c and 3a-c show the Co and Ni recovery efficiencies and final pH as a function of SiO 2 , AC, and TiO 2 dosages, respectively. In all experiments, final pH was in the range from 5.1 to 5.6, at which Co 2+ and Ni 2+ do not precipitate as their hydroxide ( Figures S1 and S2).
As shown in Figures 2a and 3a, without Al, the efficiencies of Co and Ni recovery were almost 0% at any dosage of SiO 2 , suggesting that there was no adsorption of Co 2+ and Ni 2+ on the SiO 2 surface. Even with Al, the Co and Ni recovery efficiencies were almost 0% regardless of SiO 2 dosage, suggesting that cementation of Co 2+ and Ni 2+ using Al as an electron donor did not occur. This may be due to the presence of an Al oxide layer covering the Al surface, which inhibits electron transport from Al to Co 2+ and Ni 2+ [2,27]. Because cementation did not occur regardless of SiO 2 addition, the results also confirm that physical breakage of the Al oxide layer due to collisions of SiO 2 with the Al powder in the shaking flask did not cause enhanced cementation.
As shown in Figures 2b and 3b, even without Al, the recovery efficiency of Co 2+ and Ni 2+ increased with increasing AC dosage, suggesting that these metal ions adsorbed on the AC surface. It has been reported that there are functional groups such as carboxyl and carbonyl groups on the surface of activated carbon and that they act as adsorption sites for metal ions through the reaction described by Equation (5) [18,28,29]:

-C-COOH + M 2+ → -C-COOM + 2H + (M = Co or Ni) (5)

The increase in final pH indicates that not only Co 2+ and Ni 2+ but also protons (H + ) adsorbed on AC [30,31].

In the range between 0.05 and 0.2 g AC dosage, recovery efficiency was much higher with Al than without Al; at 0.1 g AC dosage, the efficiency was 56% for Co and 61% for Ni with Al, while it was 31% for Co and 43% for Ni without Al. The difference in metal recovery efficiency with and without Al was 25% for Co and 18% for Ni, which is too large to be attributed to experimental error. This suggests that the addition of AC enhances Co and Ni cementation using Al as an electron donor (Equations (6) and (7)), even though the Al oxide layer remained on the Al surface:

3Co 2+ + 2Al 0 → 3Co 0 + 2Al 3+ (6)

3Ni 2+ + 2Al 0 → 3Ni 0 + 2Al 3+ (7)

Following these equations, it is expected that a stoichiometric amount of Al dissolves when cementation occurs; however, the dissolved Al concentration after cementation was less than 3 ppm (Tables S1 and S2), which means that most of the Al 3+ precipitated as Al-(oxy)hydroxide [7,32].

As shown in Figures 2c and 3c, the recovery efficiency of Co 2+ and Ni 2+ without Al was almost 0% regardless of TiO 2 dosage, indicating that TiO 2 has no ability to adsorb Co 2+ and Ni 2+ . When 0.1 g of Al was used together with TiO 2 , the recovery efficiency continuously increased with increasing TiO 2 dosage and reached a maximum of 52% for Co and 71% for Ni with 0.4 g TiO 2 . As already discussed, Co 2+ and Ni 2+ do not precipitate as hydroxides in the pH range observed in this series of experiments; the enhanced recovery of Co 2+ and Ni 2+ with TiO 2 and Al therefore suggests that the addition of TiO 2 enhanced the cementation of Co 2+ and Ni 2+ by Al (Equations (6) and (7)). It was also confirmed that dissolved Ti concentrations were below the detection limit, indicating that TiO 2 is stable enough to be used as an agent to enhance cementation of Co 2+ and Ni 2+ with Al in sulfate solution (Tables S1 and S2).
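The claim that only a stoichiometric amount of Al should dissolve (Equations (6) and (7)) can be checked with a quick mass balance; a sketch assuming 10 mL of a 1 mM metal-ion solution (the 1 mM value is taken from the Supplementary figure captions):

```python
M_AL = 26.98  # g/mol, molar mass of aluminum

def al_required_g(moles_metal_ion):
    """Stoichiometric Al mass for 3 M2+ + 2 Al -> 3 M + 2 Al3+ (Eqs. 6-7)."""
    return (2.0 / 3.0) * moles_metal_ion * M_AL

# 10 mL of 1 mM Co2+ (concentration taken from the Supplementary captions)
moles_co = 1e-3 * 10e-3  # mol = concentration (mol/L) * volume (L)
print(f"Al needed: {al_required_g(moles_co):.2e} g (0.1 g was dosed)")
```

Even complete cementation would consume well under 1% of the 0.1 g Al dose, consistent with the low dissolved-Al concentrations reported.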
Recovery of Co 2+ and Ni 2+ from Chloride Solution
Cementation experiments for recovering Co 2+ and Ni 2+ from chloride solutions (initial pH = 4) were conducted for 24 h using Al powder as an electron donor, and the effects of the dosage of additives (AC, TiO 2 , and SiO 2 ) on the efficiency of Co and Ni recovery were investigated. To assess the adsorption of Co 2+ and Ni 2+ on the additives, experiments without Al were also conducted. Figures 4a-c and 5a-c show the Co and Ni recovery efficiencies and final pH as a function of AC, TiO 2 , and SiO 2 dosages, respectively.
Similar to the sulfate system (Figures 2 and 3), final pH values of the chloride solutions (Figures 4 and 5) were less than 5.5 for Co and 6.1 for Ni (Tables S3 and S4), which means that removal of Co 2+ and Ni 2+ from the solutions by the formation of cobalt and nickel hydroxide precipitation does not need to be considered in this series of experiments ( Figures S3 and S4).
It has been reported that in the presence of high concentrations of Cl − , the Al oxide layer is dissolved and removed from the Al surface [13,[33][34][35]. If the Al oxide layer had dissolved, a high concentration of dissolved Al would have been detected in the solutions, but the observed results (Tables S3 and S4) showed that concentrations of Al were less than 5 ppm under all conditions. This implies that removal of the Al oxide layer did not occur under the experimental conditions used here.
As shown in Figures 4a and 5a, when SiO 2 was used as an additive, the recovery efficiencies of Co 2+ and Ni 2+ both with and without Al were almost 0%. This indicates that in chloride solutions, Co 2+ and Ni 2+ were not adsorbed on SiO 2 , and the cementation of Co and Ni with Al did not occur.
The results shown in Figures 4b and 5b suggest that adsorption of Co 2+ and Ni 2+ on AC occurred in chloride solutions, because in the absence of Al, the recovery efficiencies of these ions increased with increasing AC dosage. As in sulfate solutions, in the presence of AC, enhancement of metal ion recovery by Al addition was confirmed (Figures 4b and 5b); e.g., at 0.1 g AC dosage, adding Al increased the efficiency from 57% to 70% for Co and from 57% to 70% for Ni. This suggests that enhanced cementation of these metal ions with AC occurred in chloride solutions. Figures 4c and 5c show that the efficiencies of Co 2+ and Ni 2+ recovery in the absence of Al were almost 0% at any dosage of TiO 2 , suggesting that adsorption of these ions on TiO 2 can be ignored. In the presence of Al, the efficiencies of Co 2+ and Ni 2+ recovery increased with increasing TiO 2 dosage; without TiO 2 , the efficiencies were almost 0% for both Co and Ni, while they increased to 61% for Co 2+ and 99.9% for Ni 2+ when 0.4 g TiO 2 was added. These results clearly suggest that the addition of TiO 2 enhanced the cementation of Co and Ni using Al as an electron donor, and indicate that AC can be replaced with TiO 2 even though its specific surface area is lower than that of AC [18,36,37].
Surface Analysis of Deposited Co and Ni
To investigate the elemental compositions of the deposited Co and Ni, residues obtained from the Co 2+ and Ni 2+ recovery experiments from chloride solutions using 0.4 g of TiO 2 and 0.1 g of Al were analyzed by AES. Figures 6 and 7 show the AES photomicrographs (Figures 6a and 7a) and scan results for Co (Figure 6b,c) and Ni (Figure 7b,c). In both AES photomicrographs, many small gray particles and light particles are attached together onto the surface of the dark particle. The wide AES spectra of the dark particle (point 1 in Figures 6b and 7b) show strong signals of Al and O, indicating that these particles correspond to Al powder. The small gray particles correspond to TiO 2 because Ti and O signals are observed at point 2 in Figures 6b and 7b. Meanwhile, the light particles observed at point 3 in Figures 6b and 7b are most likely the deposited Co and Ni, respectively.
To identify the elemental composition of the deposited Co and Ni, the narrow AES spectra in the ranges of 750-785 eV for Co and 830-858 eV for Ni were analyzed (Figures 6c and 7c). These spectra were fitted using reference spectra of Co, CoO, and Co 3 O 4 for the Co composition, and Ni and NiO for the Ni composition. The fitting results indicate that the deposited Co consisted of metallic Co (93.1%) and CoO (6.9%), while the deposited Ni was composed of metallic Ni (86.2%) and NiO (13.8%). The analysis depth of AES is 0.3-5 nm, i.e., it probes only the near surface [38], so it is speculated that only the outermost surfaces of the deposited Co and Ni were oxidized due to oxidation of metallic Co and Ni during the drying process.
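The kind of reference-spectrum fitting described above can be sketched as a linear least-squares unmixing problem. The spectra below are synthetic Gaussians standing in for real AES references, and the 93%/7% mixture simply mirrors the reported Co result; none of this is real instrument data:

```python
import numpy as np

# Synthetic "reference spectra" for two phases (e.g., metallic Co and CoO)
# over an energy axis; illustration only, not real AES references.
energy = np.linspace(750, 785, 200)
ref_metal = np.exp(-((energy - 770) / 3.0) ** 2)
ref_oxide = np.exp(-((energy - 778) / 3.0) ** 2)

# "Measured" spectrum simulated as 93% metal + 7% oxide (cf. the reported fit)
measured = 0.93 * ref_metal + 0.07 * ref_oxide

# Least-squares fit of the mixing weights
A = np.column_stack([ref_metal, ref_oxide])
weights, *_ = np.linalg.lstsq(A, measured, rcond=None)
fractions = weights / weights.sum()
print(f"metal: {fractions[0]:.1%}, oxide: {fractions[1]:.1%}")
```

With noisy data a non-negativity constraint (e.g., non-negative least squares) would be the more robust choice, since physical phase fractions cannot be negative.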
These results suggest that Co and Ni were deposited on TiO 2 particles attached to the Al surface and that TiO 2 can act as an electron pathway from Al to Co 2+ and Ni 2+ , even if the Al oxide layer remains on the Al surface. They also indicate that physical separation (e.g., ultrasonication) could be applied as a post-cementation process to separate the Co/Ni-TiO 2 particles from the Al. Afterward, it is expected that only Co and Ni would be dissolved in aqueous solution, while TiO 2 would not dissolve because TiO 2 is more stable than Co and Ni.
Conclusions
This study investigated whether activated carbon (AC) could be replaced with other additives such as TiO 2 and SiO 2 for the enhanced cementation of Co 2+ and Ni 2+ using aluminum (Al) in sulfate and chloride solutions. In summary, the Co 2+ and Ni 2+ recovery efficiencies using Al alone in sulfate and chloride solutions were almost 0% because of the presence of an Al oxide layer on the Al surface. Adsorption of Co 2+ and Ni 2+ occurred when using only AC, while it did not occur when using only TiO 2 or SiO 2 . When using an AC/Al-mixture or a TiO 2 /Al-mixture, the Co 2+ and Ni 2+ recovery efficiencies from sulfate and chloride solutions were enhanced compared to using Al alone, AC alone, TiO 2 alone, or the SiO 2 /Al-mixture. From the results of the AES analysis, Co and Ni were mostly deposited in zero-valent form on TiO 2 attached to the Al surface. This work establishes that using a conductor (AC) or a semiconductor (TiO 2 ) can enhance the recovery of Co 2+ and Ni 2+ by Al-based cementation even under mild conditions (e.g., low temperature, 25 • C; mild pH, 4-5; no or low Cl − concentration). Moreover, it is expected that other conductive materials could also be used for the removal and/or recovery of metal ions using Al.
Supplementary Materials: The following are available online at https://www.mdpi.com/2075-4701/11/2/248/s1, Table S1: The concentration of Al, Ti, and Si ions after the cementation experiment of Co 2+ in sulfate solution at initial pH 4.0 at 25 • C for 24 h, Table S2: The concentration of Al, Ti, and Si ions after the cementation experiment of Ni 2+ in sulfate solution at initial pH 4.0 at 25 • C for 24 h, Table S3: The concentration of Al, Ti, and Si ions after the cementation experiment of Co 2+ in chloride solution at initial pH 4.0 at 25 • C for 24 h, Table S4: The concentration of Al, Ti, and Si ions after the cementation experiment of Ni 2+ in chloride solution at initial pH 4.0 at 25 • C for 24 h, Figure S1: The activity-pH diagram for 1 mM Co 2+ species with 0.1 M SO 4 2− at 25 • C (created using the GWB Professional Ver. 12.0.3 software), Figure S2: The activity-pH diagram for 1 mM Ni 2+ species with 0.1 M SO 4 2− at 25 • C (created using the GWB Professional Ver. 12.0.3 software), Figure S3: The activity-pH diagram for 1 mM Co 2+ species with 0.1 M Cl − at 25 • C (created using the GWB Professional Ver. 12.0.3 software), Figure S4: The activity-pH diagram for 1 mM Ni 2+ species with 0.1 M Cl − at 25 • C (created using the GWB Professional Ver. 12.0.3 software).

Funding: This study was financially supported by the Japan Society for the Promotion of Science (JSPS) grant-in-aid for Research Activity start-up (grant number: 19K24378).
Data Availability Statement: Data available on request due to restrictions, as the research is ongoing.
Conflicts of Interest:
The authors declare no conflict of interest.
Planckian Power Spectral Densities from Human Calves during Posture Maintenance and Controlled Isometric Contractions
Background The relationship between muscle anatomy and physiology and the corresponding electromyography activity (EMGA) is complex and not well understood. EMGA models may be broadly divided into stochastic and motor-unit-based models. These models have successfully described many physiological variables, such as the muscle fiber velocity and the linear relationship between median frequency and muscle fiber velocity. However, they cannot explain the behavior of many of these variables with changes in, for instance, intramuscular temperature or muscle pH. Here, we propose that the motor unit action potential can be treated as an electromagnetic resonant mode confined at thermal equilibrium inside the muscle. The motor units comprising the muscle form a system of standing waves, or modes, where the energy of each mode is proportional to its frequency. The power spectral density of the EMGA is therefore well described and fit by Planck's law, and from this distribution we developed theoretical relationships that explain the behavior of known physiological variables with changes in, for instance, intramuscular temperature or muscle pH. Methods EMGA of the calf muscle was recorded during posture maintenance in seven participants and during controlled isometric contractions in two participants. The power spectral density of the EMGA was then fit with the Planckian distribution. We then inferred nine theoretical relationships from the distribution and compared the theoretically derived values with experimentally obtained values. Results The power spectral density of EMGA was fit by Planckian distributions and all the theoretical relationships were validated by experimental results.
Conclusions Treating the motor unit action potentials as electromagnetic resonant modes confined at thermal equilibrium inside the muscle suffices to predict known and new theoretical relationships for muscle physiological variables that other models have failed to explain.
Introduction
Electromyography activity is the electrical manifestation of neuromuscular activation. Muscle fibers are innervated by the end branches of the motor neuron axon, whose cell body is located in the anterior horn of the spinal grey matter [1]. The nerve cell body, its long axon, and its end branches constitute a motor unit. The ending of the axon on the muscle fiber defines an area known as the endplate. These endplates are usually, but not always, located near the middle of the muscle fibers. An action potential descending along the motor neuron activates almost simultaneously all the fibers of a motor unit (MU) [1]. When the postsynaptic membrane of a muscle fiber is depolarized, the depolarization propagates in both directions, from the endplate toward each end of the fiber. The movement of ions across the membrane during depolarization generates an electromagnetic field in the vicinity of the muscle fibers whose time course can be measured and from which the radiated power spectral density can be obtained.
The EMGA signal is usually measured by means of a bipolar electrode and constitutes the sum of many MU signals. The electrical potentials produced by distinct MUs occur randomly and, as a consequence, a noise-like signal is produced. This random signal is often studied in the frequency domain using the power spectral density (PSD), for example through summary statistics such as the median or mean frequency of the PSD [2,3]. Since these variables reflect the state of the muscle as a whole, they are known as global variables [4]. The PSD of the motor unit action potential (MUAP) depends, among other factors, on the location and configuration of the electrodes [5] and on the muscle fiber velocity, c m . According to two models [6,7], the relationship between c m and the PSD can be described as P(ν) = P 0 (ν/c m )/c m 2 , where P 0 (ν) is the PSD of the original waveform and ν is the linear frequency. Thus, changes in muscle fiber velocity lead to compression or stretching of the PSD. For example, when muscular fatigue occurs the muscle fiber velocity decreases and consequently the EMGA power spectrum shifts toward lower frequencies [4].
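The scaling relation P(ν) = P 0 (ν/c m )/c m 2 implies, in particular, that the median frequency is proportional to c m . A numerical sketch with an arbitrary single-peaked P 0 (the spectrum shape and the scale factor are illustrative assumptions, not real EMG data):

```python
import numpy as np

SCALE = 15.0  # arbitrary scale factor chosen to give EMG-like frequencies

def psd(freq, c_m):
    """P(nu) = P0(nu/c_m)/c_m^2 with an illustrative P0(x) = x*exp(-x);
    a smooth single-peaked stand-in, not a real EMG spectrum."""
    x = freq / (SCALE * c_m)
    return x * np.exp(-x) / c_m**2

def median_frequency(c_m, f_max=2000.0, n=200_000):
    """Numerically locate the frequency splitting the PSD area in half."""
    freq = np.linspace(0.0, f_max, n)
    cum = np.cumsum(psd(freq, c_m))
    return freq[np.searchsorted(cum, cum[-1] / 2.0)]

f1, f2 = median_frequency(4.0), median_frequency(8.0)
print(f"median frequency: {f1:.1f} Hz at c_m = 4 m/s, {f2:.1f} Hz at 8 m/s")
# Doubling c_m doubles the median frequency, as the scaling law predicts.
```

The linearity holds regardless of the particular P 0 shape, since rescaling the frequency axis rescales every quantile of the spectrum by the same factor; only the 1/c m 2 amplitude factor drops out of the normalization.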
EMGA models may be generally divided into stochastic and MU-based models. Stochastic models relate the global variables to hidden anatomical and physiological parameters in the muscle [8]. They consider the EMGA to be a random, or stochastic, process whose amplitude correlates with the level of muscle activation [4]. The success of a stochastic model depends on how the stochastic process is described mathematically. For instance, the instantaneous amplitude of EMGA signals measured with electrodes located on the skin can reasonably be treated as a Gaussian random variable, even though a Gaussian random process is inadequate for describing the whole EMGA signal [7]. This assumption is usually made because it significantly simplifies the modeling while yielding reasonable results. Moreover, it is frequently assumed that the PSD of the EMGA signal can be written as a rational function of frequency, that is, as a ratio of two polynomials in frequency in which the odd powers of frequency have zero coefficients [7]. This final assumption permits the modeled signal to be treated as filtered white noise, or colored noise. The coefficients and orders of the polynomials are determined empirically from the PSD of EMGA signals [7].
MU-based models of the MUAP consider a muscle as consisting of individual muscle fibers arranged in motor units and aligned in the same direction [6]. The fibers and the extracellular fluid comprise the medium in which the EMGA signals are distributed by volume conduction [6]. These models also treat the fibers and the extracellular fluid as an isotropic, homogeneous and ohmic medium [6], conditions that are not always fulfilled. They are capable of explaining the spatio-temporal distribution of single MUAPs throughout the muscle, which gives detailed information about MUs' architecture. However, motor-unit-based models have not completely succeeded in obtaining data concerning firing rates and recruitment across the full span of contraction [4]. This is largely due to the increase in the number of MUAPs involved in the contraction and the resulting difficulty of treating the problem mathematically.
These models have successfully described many muscle physiological variables, such as the value of the muscle fiber velocity c m and the linear relationship between the median frequency v med and c m [7,9]. However, they cannot explain the behavior of many of these parameters with changes in intramuscular temperature T M or muscle pH, nor clearly identify the proportionality constants between v med and c m . For example, if the intramuscular temperature decreases, then v med [10], the initial percentage median frequency %Iv med [11], and c m [12] decrease linearly as well. There is also no EMGA model describing the finding that the pH of the extracellular fluid is the dominant factor in reducing c m and v med [13,14]. Studies in rats showed that both the initial muscle fiber velocity Ic m and the initial median frequency Iv med decreased linearly when the pH of a nerve-muscle preparation was decreased.
Another important topic where a theoretical model is needed is muscle aging. Muscle aging is marked by a decline in performance and in the ability to adapt to changing circumstances. The progressive inability of the regeneration machinery to replace damaged muscle is a sign of aging related to sarcopenia, or muscle loss [15]. This characteristic is shared with other conditions that involve muscle decline, such as AIDS, amyotrophic lateral sclerosis and cancer, all distinguished by physiological and metabolic alterations [15]. A person's metabolism is the set of all chemical reactions by which the body produces and expends energy. The rate of decrease of the internal energy of the body is known as the metabolic rate. The basal metabolic rate (BMR) is defined similarly to the metabolic rate, but it is measured in the morning while the subject is awake, at ambient temperature, lying in bed and without having ingested any meals. Resting skeletal muscle accounts for approximately one fifth of the BMR [16]. Moreover, the BMR per surface area (BMRS) decreases as we age [17]. An intriguing experimental result is presented in a recent aging study [18] where neuromuscular performance in young and aged subjects was studied by tracking the changes in the EMGA PSD v med and c m while the subjects performed maximal voluntary contractions (MVC). Both v med and c m were lower in aged subjects than in young subjects. A decreasing metabolism due to aging may decrease T M , with the consequent decrease in v med and c m . A new EMGA PSD theoretical model should include these metabolic effects; currently there is no model that explains the shift in v med and c m with aging.
Furthermore, as early as 1912, Piper [1] discovered that the PSD of the EMGA compresses and shifts towards low frequencies during a sustained contraction. Present-day models predict both the compression of the EMGA PSD during sustained contractions and the shift of the spectrum towards lower frequencies [6,7]. Any new EMGA PSD model must therefore also describe muscular fatigue if it is to be considered a good model. There is thus a clear need to improve the current EMGA PSD theoretical models so that they explain the aforementioned experimental relationships while also describing what is already known, such as muscle fiber fatigue.
Planck's radiation law describes the spectral distribution of energy in a cavity that absorbs all radiant energy impinging upon it, reaches thermal equilibrium, and then re-emits that energy as quickly as it absorbs it (blackbody or cavity radiation). The radiated energy can be considered the product of standing waves, or resonant modes, of the radiating cavity. The occurrence of standing waves in muscle fibers is well known. Since the pioneering work of Eben in 1936 [19], it has been shown that the spiking rate of motor nerve endplates regulates the arrangement of mechanical pressure waves emanating through the muscle fibers, conveyed as cross-striations, and that these waves are reflected at both ends of the fibers. The interaction of incident and reflected waves forms a complex stationary wave system in both the transverse and longitudinal directions of the fiber. Therefore, the energy state of a muscle can be characterized by specifying the different possible types of standing waves. As the energy increases, the frequency of muscle fiber oscillations increases and the wavelengths become shorter. When the energy decreases, there is a corresponding decrease in wave frequency accompanied by longer wavelengths. The standing waves in muscle fibers can be related to resonant modes in a radiative cavity through Planck's law. We will therefore consider that EMGA power spectral densities may obey the Planckian distribution. Planck's radiation law does not depend on any random process or variable, and it is not the result of a ratio of two arbitrary polynomials in frequency whose order and coefficients are determined empirically. It does not require the assumption that muscle fibers are organized in one direction or that the medium in which the EMGA signals are conducted be homogeneous, isotropic, and ohmic. It only requires thermal equilibrium of the resonant modes within the cavity.
So, it does not matter how complex the cavity is or what it is made of. These properties would help explore different levels of isometric contractions in muscle fibers by just tracking distribution changes with each contraction.
For isometric contractions, the Planckian distribution can be obtained by considering that the MUAPs behave as electromagnetic resonant modes confined at thermal equilibrium in a muscle temperature range from 10 up to 37°C [11], where they form a system of standing waves and where the energy of each mode is proportional to the standing wave frequency. The theoretical formula (see S1 File) for the power radiated through a surface of a certain area at some temperature is given by the Planckian distribution

P(v) = (2πShv³/c m ²) × 1/(exp(v/(a dT)) − 1), (1)

where S is the area of the emission surface in m², h is the equivalent of Planck's constant in V²/Hz, c m is the muscle fiber velocity in m/s, given by 2la dT, a is a constant in Hz/°C or Hz/K, and v is the frequency in Hz; the parameter l represents the average fiber length. Notice that Eq 1 is inversely proportional to c m ², as in the models presented in [6,7]. The term a dT represents the muscle characteristic frequency v 0 . When this characteristic frequency is multiplied by the constant h, the product ha dT is proportional to the muscle thermal energy E TH , where dT is the difference between T M in °C or K and the intramuscular absolute initial temperature T M0 in °C or K, i.e. dT = (T M − T M0 ). The value for T M0 can be taken as 0°C or 273 K (see S1 File). Given that most physiological experiments involving temperature use degrees Celsius, in the following sections we will use only degrees Celsius instead of Kelvin.
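A numerical sketch of the Planckian distribution of Eq 1 (written here with the standard blackbody prefactor 2πSh/c m ², which only sets the overall scale; the peak location is fixed by the exponential term alone) confirms the Wien-like displacement of the maximum at 2.821a dT. All parameter values below are illustrative:

```python
import numpy as np

def planck_psd(v, S, h, a, dT, l):
    """Eq 1: P(v) = 2*pi*S*h*v**3 / (c_m**2 * (exp(v/(a*dT)) - 1)),
    with the muscle fiber velocity c_m = 2*l*a*dT."""
    c_m = 2.0 * l * a * dT
    return 2.0 * np.pi * S * h * v**3 / (c_m**2 * np.expm1(v / (a * dT)))

# Illustrative (hypothetical) parameter values, in the units of the text.
S, h, a, dT, l = 1.0e-4, 1.0e-12, 1.19, 32.0, 0.063

v = np.linspace(0.1, 1000.0, 1000000)            # Hz
v_peak = v[np.argmax(planck_psd(v, S, h, a, dT, l))]
# Wien-like displacement law: the peak is expected at 2.821*a*dT.
```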
The present work is divided into six phases. First, we show that the EMGA power spectral density of the Gastrocnemius medial muscle is fitted very well by a Planckian distribution during posture maintenance and controlled isometric contractions. The distribution requires only a few parameters (S, h, a, dT) whose meaning is relatively straightforward. Second, we experimentally obtained c m by hypothesizing the existence of standing waves, or modes, and then compared its value with known results presented in [2,7]. We also obtained the relationship between c m and dT and compared it with data found in [12]. Third, we experimentally obtained v max , v med , %Iv med , and the irradiance I, then performed multiple regression analyses to test which of the parameters S, h, a, dT were significant predictors of v max , v med , %Iv med , and I. We found dT to be the best predictor and performed a linear fit of v max , v med , and %Iv med vs. dT, and a power fit of I vs. dT. We also performed multiple regression analyses to test which of the parameters S, h, a, c m were significant predictors of v med . We found c m to be the best predictor and generated a linear fit of v med vs. c m . Fourth, from the Planckian distribution we developed theoretical relationships for v max , v med , %Iv med and I vs. dT, v med vs. c m , Iv med and Ic m vs. muscle pH, v med and c m vs. human aging, and finally v med , c m and the PSD amplitude at v med vs. muscle fatigue. Fifth, we compared the regression models against the theoretical relationships for v max , v med , %Iv med , and I vs. dT, and v med vs. c m . The relationship v med vs. dT was also compared with the experimental relationship found in [10]. The relationship %Iv med vs. dT was also compared with data extracted from [11]. The relationship v med vs. c m was also compared with data found in [9]. Sixth, we used the Planckian distribution to develop theoretical relationships for Iv med and Ic m vs. muscle pH, v med and c m vs.
human aging, and v med , c m and the PSD amplitude at v med vs. muscle fatigue, and compared these theoretical results against experimental studies found in [14], [18] and [1,13], respectively. If the agreement is good in all these comparisons, then a Planckian distribution not only fits EMGA power spectral densities well but also extends our current knowledge of isometric contractions, because this distribution predicts new theoretical relationships that other models have failed to provide. Moreover, previously known theoretical relationships obtained from other models are also well described by a Planckian distribution.
Materials and Methods
General
An EMGA electrode was inserted into the right Gastrocnemius medial muscle of N = 9 participants. The electrode was placed far from the innervation zones, which are located either at the perimeter of the muscle or at one end of the muscle [13,20]. In the first condition (Sharpened Romberg position), participants were required to maintain balance while standing in a tandem heel-to-toe position with eyes open and arms folded against the chest [21]. This position is inherently unstable and allowed us to examine different contraction ranges of the muscle. In the second condition, we asked N = 2 participants to execute isometric contractions of their right Gastrocnemius medial at 10% of the maximum voluntary contraction (MVC) while sitting comfortably with eyes open. This allowed us better control over the contraction range of the muscle.
The Bagnoli (DELSyS, Inc.) handheld EMGA tracking system was used to measure EMGA. For the Sharpened Romberg Position task, EMGA was tracked for 10 trials each lasting 35 seconds (one-minute pause between trials) and for the maximum voluntary contraction task, EMGA was tracked for 10 trials, each consisting of 10 contractions of 1 second with one-minute pauses between trials.
The study obtained ethics approval from the CERES (Comité d'éthique de la recherche en santé) of Université de Montréal, where all the testing took place. Informed written consent was obtained from all participants of the study.
Planckian distribution fittings (I)
The PSD was calculated by means of an FFT algorithm for each trial and then averaged for each subject. All the averaged PSDs were then fitted with parameters S, h, a, and dT from Eq 1. The parameter l, which represents the average Gastrocnemius medial fiber length, was taken from reference [22]. For the fitting we used the non-linear least squares method implemented in Matlab's curve-fitting tool. We used a parametric nonlinear regression model of Eq 1: the dependent variable, or response, is the EMGA PSD; the independent variable, or predictor, is the linear frequency v; and the non-linear parameters are S, h, a, and dT. To determine the nonlinear parameter estimates, we used the function y i = P(v i ; S, h, a, dT) + ε i , where y i , v i and ε i represent the i-th numerical PSD value, frequency and residual (error). The function minimized is the sum of squared residuals Σ i (y i − P(v i ; x))², where x is the parameter vector x = (S, h, a, dT). Nonlinear models are more difficult to fit than linear models because the coefficients cannot be estimated using simple matrix techniques. Instead, an iterative approach is required: start with an initial estimate for each coefficient, produce the fit for the current set of coefficients, then adjust the coefficients and determine whether the fit improves. The direction and magnitude of the adjustment depend on the fitting algorithm. Here, we used the trust-region algorithm [23,24] (see S1 File).
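The same fitting strategy can be sketched outside Matlab. Note that the parameters of Eq 1 enter only through the combinations A = 2πSh/c m ² and v 0 = a dT (since c m = 2la dT), so this hypothetical example fits those two identifiable combinations to synthetic data with SciPy's trust-region-reflective solver:

```python
import numpy as np
from scipy.optimize import curve_fit

def planck_shape(v, A, v0):
    """Eq 1 reduced to its identifiable combinations:
    A = 2*pi*S*h/c_m**2 (overall scale) and v0 = a*dT (characteristic frequency)."""
    return A * v**3 / np.expm1(v / v0)

# Synthetic "measured" PSD generated from known parameters plus 2% noise.
rng = np.random.default_rng(0)
v = np.linspace(5.0, 500.0, 400)                       # Hz
A_true, v0_true = 2.0e-3, 38.0                         # hypothetical ground truth
y = planck_shape(v, A_true, v0_true) * (1.0 + 0.02 * rng.standard_normal(v.size))

# Iterative trust-region-reflective fit from a rough initial guess.
popt, _ = curve_fit(planck_shape, v, y, p0=(1.0e-3, 25.0), method='trf')
A_fit, v0_fit = popt
```

The recovered v 0 can then be converted back to a dT (and hence dT, for a given a), which is how the characteristic temperature change enters the analyses below.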
c m values and theoretical relationship between c m and dT (II)
c m was obtained experimentally from the EMGA PSD for each subject. c m was estimated by working in the frequency region where v ≈ a dT. There, we can consider that all the MUAP frequencies fluctuate around an average value of v 0 = a dT. We can then assume that v 0 corresponds to the frequency of a standing wave with wavelength λ = 2l, where l is the average muscle fiber length. From this assumption, we can calculate the muscle fiber velocity as c m = 2la dT. The parameter l, which represents the average Gastrocnemius medial fiber length, was taken from reference [22]. The expression c m = 2la dT represents a linear relationship between c m and dT with a theoretical sensitivity, or slope, of 2la in m/s°C.
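A minimal sketch of this estimate, using l from [22], the average fitted value of a, and a hypothetical temperature change dT:

```python
def fiber_velocity(l, a, dT):
    """Standing-wave estimate: the characteristic frequency v0 = a*dT is taken
    as the fundamental mode of wavelength 2*l, giving c_m = (2*l)*(a*dT)."""
    return 2.0 * l * a * dT

# l = 0.063 m from [22], a = 1.1891 (average fit value);
# dT = 27.6 degrees C is a hypothetical temperature change chosen for illustration.
c_m = fiber_velocity(0.063, 1.1891, 27.6)
# The result falls inside the physiological range of 2-6 m/s quoted in [2,7].
```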
Regression Models (III)
v max vs. S, h, a, dT. v max was obtained experimentally from the EMGA PSD for each subject; at this frequency the EMGA PSD reaches its maximum. We then performed multiple regression analyses using SPSS to test which of the parameters S, h, a, dT of the Planckian distribution are significant predictors of v max . Pearson correlations and ANOVA methods were used to determine the best predictor (dT). Once found, we performed a linear fit between v max and the best predictor.

v med vs. S, h, a, dT. v med was obtained experimentally from the EMGA PSD for each subject; at this frequency the area under the EMGA PSD curve is halved (see S1 File). We then performed multiple regression analyses using SPSS to test which of the parameters S, h, a, dT of the Planckian distribution are significant predictors of v med . Pearson correlations and ANOVA methods were used to determine the best predictor (dT). Once found, we performed a linear fit between v med and the best predictor.

%Iv med vs. S, h, a, dT. v med was obtained experimentally as explained above. Without loss of generality, we can choose any v med value as the reference value v medI ; here we chose the highest value found in Table 1 (154 Hz). We then divided each v med value by v medI and multiplied the result by 100 to obtain the initial percentage median frequency %Iv med . We then performed multiple regression analyses using SPSS to test which of the parameters S, h, a, dT of the Planckian distribution are significant predictors of %Iv med . Pearson correlations and ANOVA methods were used to determine the best predictor (dT). Once found, we performed a linear fit between %Iv med and the best predictor.

v med vs. S, h, a, c m . v med and c m were obtained experimentally as described above. We then performed multiple regression analyses using SPSS to test which of the parameters S, h, a, c m of the Planckian distribution are significant predictors of v med . Pearson correlations and ANOVA methods were used to determine the best predictor (c m ). Once found, we performed a linear fit between v med and the best predictor.

I vs. S, h, a, dT. I was obtained experimentally from the EMGA PSD for each subject. To obtain the irradiance I, we performed a numerical integration of every subject's average PSD and divided the result by its corresponding emission surface area S. Multiple linear regression analysis was used to develop a model for predicting the logarithm of the irradiance from the logarithms of the parameters S, h, a, dT. Pearson correlations and ANOVA methods were used to determine the best predictor (dT). Once found, we performed a power fit between I and the best predictor.
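The screening step can be sketched on synthetic data (hypothetical parameter values, with v med generated to depend on dT alone), using ordinary least squares in place of SPSS:

```python
import numpy as np

# Synthetic illustration of the predictor screening: v_med is generated to
# depend on dT only (slope 3.503*a with a = 1.19), while S, h and a vary
# independently. All values below are hypothetical.
rng = np.random.default_rng(1)
n = 9                                                # one value per participant
S  = rng.uniform(0.5, 1.5, n)
h  = rng.uniform(0.9, 1.1, n)
a  = rng.uniform(1.1, 1.3, n)
dT = rng.uniform(25.0, 37.0, n)                      # degrees C
v_med = 3.503 * 1.19 * dT + rng.normal(0.0, 2.0, n)  # Hz, noisy

# Ordinary least squares with all four candidate predictors plus an intercept.
X = np.column_stack([np.ones(n), S, h, a, dT])
beta, *_ = np.linalg.lstsq(X, v_med, rcond=None)

# Pearson correlation of the dominant predictor with the response.
r_dT = np.corrcoef(dT, v_med)[0, 1]
```

In this construction only dT carries signal, so its Pearson correlation with v med is close to 1 while the remaining predictors contribute only noise, mirroring the screening outcome reported below.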
Theoretical relationships inferred from Planck's distribution (IV)
v max vs. dT. The Planckian distribution has its maximum at a frequency determined by v max = 2.821a dT (see S1 File), which represents a linear relationship with a sensitivity, or slope, of 2.821a Hz/°C.

v med vs. dT. The Planckian distribution has its median at a frequency given by v med = 3.503a dT (see S1 File), which represents a linear relationship with a sensitivity, or slope, of 3.503a Hz/°C.

%Iv med vs. dT. Theoretically, %Iv med is given by 100dT/(T MI − T M0 ) (see S1 File), where T MI is the initial reference temperature; the sensitivity (slope) is then given by 100/(T MI − T M0 ).
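The dimensionless constants 2.821 and 3.503 can be verified numerically from the spectral shape x³/(eˣ − 1), where x = v/(a dT):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Dimensionless spectral shape of the Planckian distribution.
f = lambda x: x**3 / np.expm1(x)

# Total area equals pi**4/15 (standard blackbody integral).
total, _ = quad(f, 0.0, np.inf)

# Median: the point where the cumulative area reaches half of the total.
x_med = brentq(lambda xm: quad(f, 0.0, xm)[0] - 0.5 * total, 1.0, 10.0)

# Peak: setting d/dx [x**3/(exp(x)-1)] = 0 reduces to 3*(1 - exp(-x)) = x.
x_max = brentq(lambda x: 3.0 * (1.0 - np.exp(-x)) - x, 1.0, 5.0)
```

Restoring units via x = v/(a dT) gives v max = x_max·a dT ≈ 2.821a dT and v med = x_med·a dT ≈ 3.503a dT, the constants quoted above.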
v med vs. c m . We can substitute the relationship c m = 2la dT into the relationship v med = 3.503a dT to obtain v med = 3.503c m /2l, revealing that the median frequency v med depends linearly on the muscle fiber velocity c m with a slope of 3.503/2l in Hz/(m/s).

I vs. dT. The irradiance is obtained by integrating the Planckian distribution over the range of all possible frequencies and then dividing the result by the surface area S (see S1 File). The final result is that I grows as a power of dT, namely I ∝ dT² (see S1 File).

Iv med and Ic m vs. muscle pH. Now, by assuming that the parameter a is no longer constant but a function of pH, we can write v med = 3.503a(pH)dT and c m = 2la(pH)dT. From these two expressions, we obtain that the initial median frequency and the initial muscle fiber velocity depend on the muscle pH as Iv med = Ic m = a(pH)/a(pH I ), where pH I represents the initial pH condition (see S1 File). If we assume a simple linear approximation a(pH) = αpH, where α is a constant, we should expect Iv med = Ic m = pH/pH I , which represents two linear relationships with identical slopes given by 1/pH I .

v med and c m vs. muscle aging. If we assume that the thermal energy E TH is proportional to the BMR (see S1 File), then any relative change δE TH /E TH of this energy should equal δBMR/BMR, and because E TH , c m and v med are proportional to dT, it follows that δBMR/BMR = δv med /v med = δc m /c m . Since BMR = (BMRS)BSA, where BSA is the human body surface area, we have δBMR/BMR = δBMRS/BMRS + δBSA/BSA, so the relative change of BMR due to aging is given by the BMRS relative change plus the BSA relative change (related to sarcopenia or muscle loss). Knowing these relative changes with age, we can obtain the relative changes with age of v med and c m . That is, any relative change in the basal metabolic rate due to aging should be reflected in exactly the same proportion in v med and c m .
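Given Eq 1 and c m = 2la dT, the integral over all frequencies scales as (a dT)⁴ while c m ² scales as dT², so the irradiance grows as dT². This can be checked by direct numerical integration (illustrative parameter values; h only scales I and is set to 1):

```python
import numpy as np
from scipy.integrate import quad

def irradiance(a, dT, l, h=1.0):
    """I = (1/S) * integral of Eq 1 over frequency; S cancels, leaving
    2*pi*h/c_m**2 times the integral of v**3/(exp(v/(a*dT)) - 1)."""
    c_m = 2.0 * l * a * dT
    integral, _ = quad(lambda v: v**3 / np.expm1(v / (a * dT)), 0.0, np.inf)
    return 2.0 * np.pi * h * integral / c_m**2

# Doubling dT should quadruple I if the dT**2 scaling holds.
I_20 = irradiance(1.19, 20.0, 0.063)
I_40 = irradiance(1.19, 40.0, 0.063)
```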
v med , c m and the PSD amplitude evaluated at v med vs. muscle fatigue. The Planckian distribution can predict the fatigue effects on muscle fibers by considering 1) that the EMGA PSD median frequency and the muscle fiber velocity decrease with muscle fatigue, and 2) that the compression of the EMGA PSD results in an increase of the PSD amplitude evaluated at the median frequency. Mathematically (see S1 File), the changes δc m of the muscle fiber velocity, δv med of the median frequency, and δP med of the Planckian distribution amplitude evaluated at the median frequency can be used to describe fatigue effects in muscle fibers when δv med ≤ 0, δc m ≤ 0 and δP med ≥ 0. This last inequality gives 3|δv med /v med | ≤ −2δc m /c m , where the bars denote the absolute value.
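A first-order sketch of these fatigue conditions: since P(v med ) ∝ v med ³/c m ² (the exponential factor is constant when evaluated at the median), the relative amplitude change is 3δv med /v med − 2δc m /c m , which is non-negative whenever 3|δv med /v med | ≤ −2δc m /c m . The numbers below are hypothetical:

```python
def dP_med_rel(dv_rel, dc_rel):
    """First-order relative change of the PSD amplitude at the median frequency,
    from P(v_med) proportional to v_med**3 / c_m**2."""
    return 3.0 * dv_rel - 2.0 * dc_rel

# Hypothetical fatigue scenario: v_med drops by 5% while c_m drops by 10%.
# Then 3*|dv/v| = 0.15 <= -2*dc/c = 0.20, so the amplitude at v_med rises.
change = dP_med_rel(-0.05, -0.10)
```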
Results
Planckian distribution fits (I)
Fig 1 shows the fit results for the SR position task (four participants) and the MVC task (two participants), respectively. All parameters and the R-squared coefficients are shown in Table 1. We can observe that the fits are very good, a fact corroborated by the high R-squared values. Moreover, it can be seen that the best-fit values for h and a are very similar across participants, indicating that these parameters are in fact constants.
In order to show that this distribution is capable of reproducing known electrophysiological results or predicting new ones, we now take all the theoretical relationships inferred from the Planckian distribution and compare them against either the fit obtained with multiple regression analysis or experimental results obtained elsewhere. A summary of these results is shown in Table 2.
Comparison of c m values obtained here vs. other studies, and comparison of the theoretical relationship between c m and dT with other studies (II)
Experimentally, c m values range from 2 to 6 m/s [2,7], with an average value of 4 m/s [1,2]. Table 1 shows the values we obtained for c m . The values ranged from 3.28 to 5.54 m/s with an average of 4.13±0.35 m/s, which is close to the average value that is normally used. Working with the human Vastus medialis muscle, Morimoto and colleagues [12] found a linear relationship between the muscle fiber velocity and the intramuscular temperature, with a sensitivity of 0.2 m/s°C in the range of 17-31°C. Using a length for the human Vastus medialis fibers [25] of 9.53±0.63 cm and the average value of 1.1891 for a (see Table 1), we found a theoretical value for the slope 2la of 0.23±0.02 m/s°C, which is close to the experimental value obtained by Morimoto and colleagues.
Comparison between regression models vs. theoretical relationships inferred from Planck's distribution (V)
v max vs. dT. To evaluate the relationship between the model parameters S, h, a, dT and v max , we conducted multiple linear regression analysis with v max as the dependent variable; Table 3 shows the results of these analyses. Only dT had a significant (p < 0.01) Pearson correlation with v max and only dT was found to be a significant (p < 0.01) predictor in the full linear regression model. The predictor model accounted for 92.7% of the variance in v max , F(1,8) = 21.27, p = 0.003, R² = 0.927. Thus, v max depends largely on dT. We performed a linear fit between v max and dT and found a slope value of 3.21±0.09 Hz/°C. This result supports the theoretical equation v max = 2.821a dT. If we use the average value of 1.1891 for a (see Table 1), we obtain a slope value of 3.36 Hz/°C, which is of the same order of magnitude as 3.21±0.09 Hz/°C. The experimental data and the linear fit are shown in Fig 2.

v med vs. dT. Multiple linear regression analysis was used to develop a model for predicting v med from the parameters S, h, a, and dT. Table 4 shows the results of these analyses. Only dT had a significant (p < 0.01) Pearson correlation with v med and only the dT predictor had a significant effect (p < 0.01) in the full model. The predictor model accounted for 96.5% of the variance in v med , F(3,8) = 45.63, p < 0.001, R² = 0.965. As a consequence of this result, we performed a linear fit between v med and dT only and obtained a slope of 4.37±0.11 Hz/°C. This result supports the theoretical equation v med = 3.503a dT. If we use the average value of 1.1891 for a (see Table 1), we obtain a slope value of 4.17 Hz/°C, which is of the same order of magnitude as 4.37±0.11 Hz/°C. The experimental data and the linear fit are shown in Fig 3. In a recent study [10], Petrofsky and Laymon studied the effect of temperature on the EMGA PSD amplitude.
The EMGA was measured over several muscles, including the one we have studied here (Gastrocnemius medial). Short (3 s) isometric contractions were executed at different tensions ranging between 20 and 100% of each subject's maximum strength, with the arm or leg of the subject placed in water. The water bath temperature was controlled at 24, 27, 34, and 37°C during those contractions. The results showed that the median frequency of the EMGA PSD was directly proportional to the temperature of the Gastrocnemius medial muscle during the brief isometric contractions [10], with a sensitivity of 4.05±0.16 Hz/°C.

%Iv med vs. dT. Multiple linear regression analysis was used to develop a model for predicting %Iv med from the parameters S, h, a, and dT. Table 5 shows the results of these analyses. Only dT had a significant (p < 0.01) Pearson correlation with %Iv med and only the dT predictor had a significant effect (p < 0.01) in the full model. The predictor model accounted for 96.5% of the variance in %Iv med , F(3,8) = 45.63, p < 0.001, R² = 0.965. As a consequence of this result, we performed a linear fit between %Iv med and dT only and obtained a slope of 2.84±0.07%/°C. This result supports the theoretical equation 100dT/(T MI − T M0 ). If we use the value for dT MI associated with v medI = 154 Hz, which equals 36.97°C (see Table 1), we obtain a slope value of 100/(36.97−0) = 2.71%/°C, which is of the same order of magnitude as 2.84±0.07%/°C. The experimental data and the linear fit are shown in Fig 4. We also used the data reported by Merletti and colleagues [11] from experiments in which they cooled the first dorsal interosseous muscle of human subjects down to 10°C. Subjects were asked to perform isometric constant-force abduction contractions of the index finger at 20% and 80% MVC.
The relationship of the initial median frequency percentage %Iv med versus intramuscular temperature was found to be linear, with sensitivities (slopes) of 3.03%/°C and 3.48%/°C for 20% and 80% MVC, respectively [11]. Since %Iv med is given by 100dT/(T MI − T M0 ), using the reference temperature value of 33°C as in [11] and an absolute initial temperature of 0°C, we obtain a sensitivity, or slope, of 3.03%/°C, consistent with the experimental results [11].
v med vs. c m . Multiple linear regression analysis was used to develop a model for predicting v med from the parameters S, h, a, and c m . Table 6 shows the results of these analyses. Only c m had a significant (p < 0.01) Pearson correlation with v med and only the c m predictor had a significant effect (p < 0.01) in the full model. The predictor model accounted for 96.5% of the variance in v med , F(3,8) = 45.63, p < 0.001, R² = 0.965. As a consequence of this result, we performed a linear fit between v med and c m only. We found a slope value of 29.19±0.75 Hz/(m/s). This result supports the theoretical equation v med = 3.503c m /2l. If we use in v med = 3.503c m /2l the average value for the Gastrocnemius medial fiber length of 6.3±1.2 cm as reported in [22], we obtain a slope of 27.8 Hz/(m/s), which is of the same order of magnitude as 29.19±0.75 Hz/(m/s). Using the fiber length reported in [26] in the expression v med = 3.503c m /2l, we obtained v med = (25.8 Hz/(m/s))c m . In reference [9] the experimental relationship found for the same muscle is v med = (23.1 Hz/(m/s))c m , which is of the same order as what we found theoretically.

I vs. dT. Multiple linear regression analysis was used to develop a model for predicting the logarithm of the irradiance from the logarithms of the parameters S, h, a, and dT; Table 7 shows the results of these analyses. Only dT had a significant (p < 0.01) Pearson correlation with the irradiance and only the dT and a predictors had significant (p < 0.05) partial effects in the full model.

Table 4. Pearson correlation and beta values for predicting the median frequency v med from the parameters S, h, a, and dT.
Substituting the fitted parameters h and a and the average value for the Gastrocnemius medial fiber length of 6.3±1.2 cm as reported in [22], we obtained a value of 2.88×10⁻¹⁰±1.10×10⁻¹⁰, which is of the same order of magnitude as 8.99×10⁻¹⁰. Fig 6 shows the experimental data and the power fit.

Comparison between experimental studies on muscle pH, aging and fatigue vs. theoretical relationships inferred from Planck's distribution (VI)
Iv med and Ic m vs. muscle pH. Experimentally, the initial median frequency Iv med decreased from 1.000±0.008 (pH 7.4) to 0.948±0.027 (pH 7.0) to 0.854±0.029 (pH 6.6) [14]. The initial muscle fiber velocity Ic m , calculated similarly to Iv med , decreased from 1.000±0.012 (pH 7.4) to 0.947±0.033 (pH 7.0) to 0.863±0.035 (pH 6.6) [14]. The plot of the experimental values Iv med vs. Ic m gave a straight line with a slope of 1.07 (r > 0.99) [14]. In methods we showed that Iv med = Ic m , which predicts a straight line with a slope of 1, close to this experimental value.

v med and c m vs. muscle aging. We have shown in methods that the relative change of the BMR is given by the BMR per surface area (BMRS) relative change plus the body surface area (BSA) relative change, or δBMR/BMR = δBMRS/BMRS + δBSA/BSA. The relative change of BSA due to aging can be estimated from the body surface area formula [17] BSA(cm²) = WEIGHT^0.425 (kg) × HEIGHT^0.725 (cm) × 71.84. We can then estimate the corresponding relative changes in v med and c m and compare them with the aging results reported in [18].

v med , c m and the PSD amplitude evaluated at v med vs. muscle fatigue. As discussed in the methods section, fatigue effects can be described by the Planckian distribution when δv med ≤ 0, δc m ≤ 0 and 3|δv med /v med | ≤ −2δc m /c m , where δv med and δc m represent the physiological changes in the median frequency and velocity due to fatigue. Under these conditions, v med and c m decrease, the EMGA PSD evaluated at v med should increase, and the PSD should shift towards lower frequencies due to the sustained contraction of the muscle.
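The linear a(pH) approximation, Iv med = Ic m = pH/pH I , can be checked directly against the Iv med values quoted above from [14] (with pH I = 7.4):

```python
# Comparison of the linear a(pH) prediction Iv_med = pH/pH_I against the
# initial-median-frequency values reported in [14].
ph_i = 7.4
measured = {7.4: 1.000, 7.0: 0.948, 6.6: 0.854}      # Iv_med from [14]
predicted = {ph: ph / ph_i for ph in measured}       # pH/pH_I
errors = {ph: abs(predicted[ph] - measured[ph]) for ph in measured}
# e.g. at pH 7.0 the prediction is 0.946 vs. the measured 0.948.
```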
Fig 8 shows the comparison between the experimental data extracted from [1,13] and Eq 1. As can be seen in Fig 8, the Planckian distribution provides a very good fit to the experimental results.
Discussion and Conclusions
From all the above results we can conclude that the power spectral density of isometric contractions in muscle fibers can be described by a Planckian distribution. As we mentioned in the introduction, Planck's distribution describes the electromagnetic radiation emitted by a black body in thermal equilibrium at a definite temperature, and the radiated energy can be considered the product of standing waves, or resonant modes, of the radiating cavity.
The standing waves in muscle fibers can be related to resonant modes in a radiative cavity through the phenomenon known as the size principle in muscles [27][28][29]. The size principle describes the observation that, as more force is demanded, MUs are recruited in a specific order according to the amplitude of their force output. Henneman's group [27][28][29] first showed that the size of a MU is proportional to the number of muscle fibers it innervates. They also found that larger impulses represented the firing of larger MUs, and that the smallest MUs, represented by the smallest impulse amplitudes, fired first and had lower thresholds for stretch, while larger MUs fired last and had higher thresholds [30]. Another demonstration of this principle was a study of 78 subjects asked to perform isometric maximal voluntary contractions of the Quadriceps femoris muscle [31]. Results showed that as the force generated by the muscle increased, the MUAP amplitude and firing frequency increased, while the number of recruited MUs decreased. This indicates that slower, low-force MUs were recruited first, while fast, high-force MUs were recruited later.
We propose that the firing frequency of any given MU can be thought of as an electromagnetic resonant mode, whose energy is proportional to its firing frequency. A muscle can be likened to a radiating cavity, where its set of frequency transitions is defined by the resonant modes of its MUs. Just as in the radiating-body case, where exciting the upper modes is less likely, it is less probable to excite MUs with high frequencies. Fig 9 demonstrates this relationship. Fig 9 (top) shows the firing-rate behavior of 8 motor units as a function of the size of an isometric voluntary contraction, from an experiment conducted with 60 motor units by Monster and Chan [32]. The isometric voluntary contraction level is represented as force measured with a strain gauge assembly placed perpendicular to the dorsal surface of the middle finger on the distal end of the first phalanx. The activation of the 8 motor units shown in Fig 9 (top) represents a force of 10 gram-force. As more force is exerted, either a single MU increases its firing rate to higher levels or, alternatively, additional slow-firing MUs are solicited. The diagram in Fig 9 (bottom) represents the data from Fig 9 (top) in terms of resonant modes. The force of 10 gram-force can be described as having 11 frequency levels v_i, and every MU can be seen as an oscillator that reaches different frequency levels, where each frequency level represents a resonant mode. Thus, the first MU (brown) makes 4 transitions from level v_1 to v_11, a second MU (green) makes 3 transitions, the third MU (red) makes only two transitions, etc. The numbers on the circles indicate the temporal sequence of each transition. Following the blackbody analogy, we can hypothesize that the energy should be proportional to the respective characteristic oscillation frequency v of the oscillator.
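The claim that upper modes are less likely to be excited can be made concrete with the standard Bose–Einstein mean occupation number from Planck's treatment of the radiating cavity. The sketch below uses the physical Planck and Boltzmann constants and illustrative frequencies purely to show the monotonic fall-off; it does not use the paper's fitted effective parameters h and dT:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
H_PLANCK = 6.62607015e-34  # Planck constant, J*s

def mean_occupancy(freq_hz: float, temp_k: float) -> float:
    """Bose-Einstein mean occupation number 1/(exp(h*f/kT) - 1) of a mode."""
    return 1.0 / math.expm1(H_PLANCK * freq_hz / (K_B * temp_k))

# At body temperature (~37 deg C), occupancy falls monotonically with frequency,
# i.e. higher-frequency modes are less likely to be excited.
T = 310.0
occ = [mean_occupancy(f, T) for f in (1e12, 5e12, 1e13)]
print(occ)
```

The extra energy cost proportional to hν is exactly what suppresses the occupancy of upper modes, which is the occupancy argument invoked in the text.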
In summary, the fact that a Planckian distribution successfully describes many physiological relationships involving physiological variables and the fitting parameter dT means that the MUAPs may be seen as electromagnetic resonant modes confined at thermal equilibrium inside the muscle, where they form a system of standing waves and the energy of each mode is proportional to its frequency. The temperature range over which we expect the Planckian distribution to apply is from 10 up to 37°C. The size principle establishes that as the force generated in a muscle increases, the MUAP amplitude and firing frequency increase, meaning that MUs are recruited from slow, low-force to fast, high-force, with the number of MUs decreasing as the frequency increases. Moreover, the individual MU frequency response is not continuous but is rather a series of discrete values. Every discrete value should represent a mode with energy proportional to hv. The current results show that it is less probable to excite upper MU frequencies or modes, because occupying these modes requires an extra energy proportional to hv, just as the Planckian distribution predicts. It is interesting to note that, from thermodynamical arguments, Eq 1 represents a body that radiates the largest quantity of energy per frequency [33].
Finally, the Planckian distribution requires five parameters: S, h, a, l and dT. Our fitting and multiple regression analyses revealed that h and a can be considered constants, because of the low variance they presented compared with the rest of the parameters. The constant h value we obtained here is 1.59×10⁻¹³ V²/Hz², but it can also be estimated from references [31], h = 2.759×10⁻¹³ ± 0.347×10⁻¹³, and [32], h = 10.48×10⁻¹³ ± 1.92×10⁻¹³ (see S1 File and S1 Fig).
Both values are of a similar order of magnitude to the value we estimated from our experimental data. The parameter l can be determined either by the fitting process or by using its averaged value, given that it represents the fiber length. Here we have followed the latter approach because averaged fiber lengths are well known for all human muscles. The parameter dT can be measured with a thermocouple needle inside the muscle or can also be obtained from the fitting process. Only the parameter S cannot be measured directly and needs to be inferred from the fitting process.

Fig 9. (Top) MU firing frequencies versus the force measured with a strain gauge assembly placed perpendicular to the dorsal surface of the middle finger on the distal end of the first phalanx. As more force is exerted, a particular MU can increase its firing rate from a low firing rate, or more MUs are solicited at low firing rates. Only 8 of 60 MUs are shown (adapted from [32]). (Bottom) Frequency diagram for the 8 MUs shown at the top. In this diagram every MU can be seen as an oscillator with different frequency levels v_i. All the transitions involved up to a force of 10 gram-force are shown. The numbers on the circles indicate the temporal sequence of each transition. The frequencies v_i range from 8 Hz up to 19 Hz.

doi:10.1371/journal.pone.0131798.g009
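The fitting procedure the parameter discussion refers to can be illustrated with a toy example. The exact form of the paper's Eq 1 is defined earlier in the paper and is not reproduced here, so the sketch below fits the textbook Planck spectral shape S·v³/(exp(v/t) − 1) in reduced units, with a brute-force least-squares grid search standing in for the paper's nonlinear fitting:

```python
import math

def planck_shape(v: float, s: float, t: float) -> float:
    """Textbook Planck-like spectral shape S*v^3/(exp(v/t) - 1).
    A stand-in for the paper's Eq 1; v and t in arbitrary reduced units."""
    return s * v**3 / math.expm1(v / t)

# Synthetic "PSD" generated from known parameters
S_TRUE, T_TRUE = 2.0, 1.5
freqs = [0.1 * i for i in range(1, 100)]
psd = [planck_shape(v, S_TRUE, T_TRUE) for v in freqs]

# Brute-force least squares over an (S, T) grid -- a minimal stand-in
# for the nonlinear fitting used in the paper.
best = min(
    ((s, t) for s in [1.8 + 0.05 * i for i in range(9)]
            for t in [1.3 + 0.05 * j for j in range(9)]),
    key=lambda p: sum((planck_shape(v, p[0], p[1]) - y) ** 2
                      for v, y in zip(freqs, psd)),
)
print(best)  # should recover values near (2.0, 1.5)
```

In practice a gradient-based nonlinear least-squares routine would replace the grid search, but the residual being minimized is the same.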
In conclusion, the current study uncovers the physical fundamentals of isometric muscle contractions. This is important to the scientific community because considering MUAPs as electromagnetic resonant modes confined at thermal equilibrium inside the muscle not only accounts for previously known theoretical relationships, but also provides explanations to previously observed phenomena and makes novel predictions. However, more elaborate theoretical models would be needed to explain more complex movements such as dynamic contractions and gait.
Reproductive Biology and Endocrinology

Role of Oxidative Stress in Female Reproduction
In a healthy body, ROS (reactive oxygen species) and antioxidants remain in balance. When the balance is disrupted towards an overabundance of ROS, oxidative stress (OS) occurs. OS influences the entire reproductive lifespan of a woman and even thereafter (i.e. menopause). OS results from an imbalance between prooxidants (free radical species) and the body's scavenging ability (antioxidants). ROS are a double-edged sword – they serve as key signal molecules in physiological processes but also have a role in pathological processes involving the female reproductive tract. ROS affect multiple physiological processes from oocyte maturation to fertilization, embryo development and pregnancy. It has been suggested that OS modulates the age-related decline in fertility. It plays a role during pregnancy and normal parturition and in the initiation of preterm labor. Most ovarian cancers appear in the surface epithelium, and repetitive ovulation has been thought to be a causative factor. Ovulation-induced oxidative base damage and damage to DNA of the ovarian epithelium can be prevented by antioxidants. There is growing literature on the effects of OS in female reproduction, with involvement in the pathophysiology of preeclampsia, hydatidiform mole, free radical-induced birth defects and other situations such as abortions. Numerous studies have shown that OS plays a role in the pathophysiology of infertility and assisted fertility. There is some evidence of its role in endometriosis, tubal and peritoneal factor infertility and unexplained infertility. This article reviews the role OS plays in normal cycling ovaries, follicular development and cyclical endometrial changes. It also discusses OS-related female infertility and how it influences the outcomes of assisted reproductive techniques.
The review comprehensively explores the literature for evidence of the role of oxidative stress in conditions such as abortions, preeclampsia, hydatidiform mole, fetal embryopathies, preterm labour and gestational diabetes. The review also addresses the growing literature on the role of nitric oxide species in female reproduction. The involvement of nitric oxide species in the regulation of endometrial and ovarian function, the etiopathogenesis of endometriosis, the maintenance of uterine quiescence, the initiation of labour and the ripening of the cervix at parturition is discussed, as is the complex interplay between cytokines and oxidative stress in the etiology of female reproductive disorders. The review also highlights how the oxidant status of the cell modulates angiogenesis, which is critical for follicular growth, corpus luteum formation, endometrial differentiation and embryonic growth. Strategies to overcome oxidative stress and enhance fertility, both natural and assisted, are delineated. Early interventions being …
Oxidative Stress

Free radicals
Free radical species are unstable and highly reactive. They become stable by acquiring electrons from nucleic acids, lipids, proteins, carbohydrates or any nearby molecule, causing a cascade of chain reactions that results in cellular damage and disease ([1][2][3][4]; Figure 1). There are two major types of free radical species: reactive oxygen species (ROS) and reactive nitrogen species (RNS).
Reactive oxygen species
The three major types of ROS are superoxide (O2•−), hydrogen peroxide (H2O2), and the hydroxyl radical (OH•). The superoxide radical is formed when electrons leak from the electron transport chain [5]. The dismutation of superoxide results in the formation of hydrogen peroxide. The hydroxyl radical is highly reactive and can modify purines and pyrimidines and cause strand breaks, resulting in DNA damage [6]. Some oxidase enzymes can generate hydrogen peroxide directly.
ROS have been implicated in more than 100 diseases [7][8][9][10]. They have both physiological and pathological roles in the female reproductive tract. Numerous animal and human studies have demonstrated the presence of ROS in the female reproductive tract: ovaries [11][12][13][14][15], fallopian tubes [16] and embryos [17]. ROS are involved in the modulation of an entire spectrum of physiological reproductive functions such as oocyte maturation, ovarian steroidogenesis, corpus luteal function and luteolysis [11,12,18]. ROS-related female fertility disorders may have common etiopathogenic mechanisms. ROS may also originate from embryo metabolism and from its surroundings.
Reactive nitrogen species
Nitric oxide (NO) is synthesized during the enzymatic conversion of L-arginine to L-citrulline by nitric oxide synthase (NOS) [19][20][21]. With an unpaired electron, NO, which is a highly reactive free radical, damages proteins, carbohydrates, nucleotides and lipids and, together with other inflammatory mediators, results in cell and tissue damage, low-grade, sterile inflammation and adhesions [20]. NO potently relaxes arterial and venous smooth muscles and, less strongly, inhibits platelet aggregation and adhesion. NO donors, acting as vasodilating agents, are therefore a possible therapeutic approach [22]. NO acts in a variety of tissues to regulate a diverse range of physiological processes, but excess of NO can be toxic [1,20,21,23].
Reactive nitrogen species have been associated with asthma, ischemic/reperfusion injury, septic shock and atherosclerosis [24][25][26][27]. The two common examples of reactive nitrogen species are nitric oxide (NO) and nitrogen dioxide [1,3]. NO is produced by the enzyme NO synthase. There are three nitric oxide synthase (NOS) isoenzymes in mammals: endothelial NO synthase (NO synthase 3), neuronal NO synthase (NO synthase 1) and inducible NO synthase (NO synthase 2). Neuronal NO synthase (nNOS) and endothelial NO synthase (eNOS) are constitutive NO synthases responsible for the continuous basal release of NO. Inducible NO synthase (iNOS) is present in mononuclear phagocytes (monocytes and macrophages) and produces a large amount of NO; it is expressed in response to proinflammatory cytokines, such as interleukin-1 and TNF-α, and lipopolysaccharides [21,23,28]. Endothelial NO synthase is expressed in thecal cells, granulosa cells, and on the surface of the oocyte during follicular development. In pathological conditions, inducible NO synthase may play a major role in NO production; in most organs it is expressed only in response to immunological stimuli [29].
Antioxidants
Under normal conditions, scavenging molecules known as antioxidants convert ROS to H2O to prevent overproduction of ROS. There are two types of antioxidants in the human body: enzymatic antioxidants and non-enzymatic antioxidants [1,3].
Enzymatic antioxidants
Enzymatic antioxidants, also known as natural antioxidants, neutralize excess ROS and prevent them from damaging cellular structures. They comprise superoxide dismutase, catalase, glutathione peroxidase and glutathione reductase; glutathione peroxidase also reduces hydrogen peroxide to water and hydroperoxides to alcohols.
Non-enzymatic antioxidants
Non-enzymatic antioxidants are also known as synthetic antioxidants or dietary supplements. The body's complex antioxidant system is influenced by dietary intake of antioxidant vitamins and minerals such as vitamin C, vitamin E, selenium, zinc, taurine, hypotaurine, glutathione, beta carotene, and carotene [1][2][3][30]. Vitamin C is a chain-breaking antioxidant that stops the propagation of the peroxidative process. Vitamin C also helps recycle oxidized vitamin E and glutathione [31]. Taurine, hypotaurine and transferrin are mainly found in tubal and follicular fluid, where they protect the embryo from OS [17]. Glutathione is present in the oocyte and tubal fluid and has a role in improving the development of the zygote beyond the 2-cell block to the morula or blastocyst stage [32].
Figure 1. Mechanisms of oxidative stress-induced cell damage.
Oxidative stress in female reproduction
Cells have developed a wide range of antioxidant systems to limit production of ROS, inactivate them and repair cell damage [1][2][3][33]. OS influences the entire reproductive span of a woman's life and even thereafter (i.e. menopause). It has been suggested that the age-related decline in fertility is modulated by OS [34]. It plays a role during pregnancy [35] and normal parturition [36,37] and in the initiation of preterm labor [38,39]. The pathological effects are exerted by various mechanisms including lipid damage, inhibition of protein synthesis, and depletion of ATP [40]. There is some understanding of how ROS affect a variety of physiologic functions (i.e. oocyte maturation, ovarian steroidogenesis, ovulation, implantation, formation of the blastocyst, luteolysis and luteal maintenance in pregnancy) [14,15,18,19,41].
ROS are a double-edged sword – they serve as key signal molecules in physiological processes but also have a role in pathological processes involving the female reproductive tract. Since the balance is maintained by the presence of adequate amounts of antioxidants, measuring levels of the antioxidants, individually or as total antioxidant capacity (TAC), has also been examined [15,18,42,43]. The superoxide dismutase (SOD) enzymes Copper-Zinc SOD (Cu-Zn SOD) and Manganese superoxide dismutase (MnSOD) have been localized in the granulosa and thecal cells of the growing follicle. Selenium-dependent glutathione peroxidase activity has been demonstrated in the follicular fluid and serum of patients undergoing IVF. The expression profiles of transcripts of antioxidant enzymes such as superoxide dismutase, glutathione peroxidase and gamma-glutamylcysteine synthetase in both human and mouse oviducts and oocytes have also been examined [16]. There is growing literature on the effects of OS in female reproduction, with involvement in the pathophysiology of pre-eclampsia [44,45], hydatidiform mole [46][47][48], free radical-induced birth defects [49] and other situations such as abortions [50].
Metabolites of NO (nitrite and nitrate) in peritoneal fluid are determined by nitrate reductase and the Griess reaction [20,23]. Total NO (nitrite and nitrate) levels in serum and follicular fluid are measured with a rapid-response chemiluminescence analyzer [29]. Various biomarkers of oxidative stress have been determined in the placenta by immunohistochemistry or western blot analysis (Table 2). The oxidative DNA adduct 8-hydroxy-2′-deoxyguanosine has been studied by immunostaining in the placenta in patients with IUGR (intrauterine growth retardation) and in patients with both preeclampsia and IUGR [45]. The basal levels of ROS in leukocytes in whole blood can be determined using dihydroethidium and dichlorodihydrofluorescein diacetate probes (Table 2).
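Colorimetric assays of this kind, such as the Griess reaction for nitrite, report concentration by reading sample absorbance against a standard curve. A minimal sketch of that conversion in plain Python, using hypothetical calibration data and an ordinary least-squares line (the concentrations, absorbances and wavelength are illustrative, not taken from the review):

```python
# Hypothetical nitrite standard curve: (concentration in uM, absorbance at 540 nm)
standards = [(0.0, 0.002), (12.5, 0.061), (25.0, 0.118), (50.0, 0.231), (100.0, 0.455)]

def fit_line(points):
    """Ordinary least-squares slope and intercept for (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

slope, intercept = fit_line(standards)

def absorbance_to_conc(a540: float) -> float:
    """Invert the calibration line to report nitrite concentration (uM)."""
    return (a540 - intercept) / slope

print(absorbance_to_conc(0.150))  # a sample absorbance of 0.150 -> roughly 32 uM
```

Real assay software additionally checks the linear range of the curve and subtracts reagent blanks, but the core conversion is this inversion of the standard-curve fit.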
Oxidative Stress & Female Infertility
Infertility is a disease defined as "the inability to conceive following 12 or more months of unprotected sex before an investigation is undertaken, unless the medical history and physical findings dictate earlier evaluation and treatment" [67]. The prevalence of female infertility ranges from 7% to 28%, depending on the age of the woman. In general, an estimated 84% of couples conceive after 1 year of intercourse, and 92% of couples conceive after 2 years [68]. Although the frequency and origin of different forms of infertility vary, 40%-50% of the etiology of infertility studied is due to female causes [69]. A primary diagnosis of male factor infertility is made in 30% of infertile couples. Combined female and male factor infertility is responsible for 20%-30% of cases. Finally, unexplained infertility affects 15% of couples [70]. If the results of a standard infertility examination are normal, a diagnosis of unexplained or idiopathic infertility is assigned [70]. Data from the National Survey for Family Growth indicate that the number of women with impaired fecundity increased by 35% from 1982 to 1995. Approximately 1.3 million American couples receive medical advice or treatment for infertility every year [71]. OS has a role in the etiopathogenesis of endometriosis, tubal factor infertility, and unexplained infertility. The impact of OS on ART is discussed in later sections.
Pathophysiology of oxidative stress in female reproduction
Oxygen toxicity is an inherent challenge to aerobic life [72]. ROS can modulate cellular functions, and OS can impair the intracellular milieu, resulting in diseased cells or endangered cell survival. The role of ROS in various diseases of the female reproductive tract has been investigated. ROS can affect a variety of physiological functions in the reproductive tract, and excessive levels can result in precipitous pathologies affecting female reproduction. The oxidant status can influence early embryo development by modifying key transcription factors and hence gene expression [73]. Concentrations of ROS may also play a major role in both the implantation and the fertilization of eggs [72]. There is increased interest in examining the role of OS in female reproduction because it may be a major link in the infertility puzzle, as well as in some reproductive organ diseases such as endometriosis.
Recently, OS has been reported to have an important role in the normal functioning of the female reproductive system and in the pathogenesis of female infertility [33,74].
Cytokines, oxidative stress and female reproduction
The control of ovarian stromal cell and germ cell function is a diverse paradigm, and oxidative stress may be one of the modulators of ovarian germ cell and stromal cell physiology. A number of autocrine and paracrine factors affect the modulation of various ovarian functions and steroidogenesis. Cytokines are polypeptides or glycoproteins secreted into the extracellular compartment by leukocytes [75]. Mammalian ovulation, or follicular rupture, was proposed to result from vascular changes and a proteolytic cascade [54]. The crosstalk between these two cascades is mediated by cytokines, vascular endothelial growth factor (VEGF), and ROS (both reactive nitrogen and oxygen radicals). Interleukin-1β causes nitrite to accumulate in rat ovarian dispersates, demonstrating the close interaction between cytokines and NOS [76]. OS and cytokines are proposed to be interlinked and to act as intercellular and intracellular messengers in the ovary. A number of investigators have studied the synthesis of NOS and ROS in the ovaries [21,55,58].
Defective placentation leads to placental hypoxia and reperfusion injury due to ischemia, and the resultant OS triggers the release of cytokines and prostaglandins, which results in endothelial cell dysfunction and plays an important role in the development of pre-eclampsia [77,78]. TNF-α, a plasma cytokine, has been demonstrated to cause endothelial cell injury [79]. A link between OS and the expression of cytokine receptors in the cytotrophoblast, vascular smooth muscle cells and endometrial cells has also been proposed, further suggesting that hyperactivation of ROS may result in pre-eclampsia [80].
The activation of mononuclear phagocytes can be triggered in endometriosis by a number of factors, including damaged red blood cells and apoptotic endometrial cells. A positive correlation between concentrations of tumor necrosis factor (TNF)-α in the peritoneal fluid and endometriosis has been reported [75]. Cytokines released by macrophages influence the redox status of the ectopic endometrium in patients with endometriosis [81]. Superoxide dismutase activity, glutathione peroxidase activity and lipid peroxidation levels were measured in ectopic endometrial tissue obtained from ovarian endometriomas. Superoxide dismutase activity was significantly higher in ectopic than in eutopic endometrium, and a positive correlation was seen between malondialdehyde levels and plasma 17-beta estradiol levels. TNF-α has been shown to upregulate expression of Manganese (Mn) superoxide dismutase in the endometrium in vitro [82]. The antioxidant MnSOD neutralizes superoxide anions generated by the cytokine TNF-α, a self-protective mechanism against TNF-α-induced oxidative stress. Estrogen and progesterone withdrawal leads to stimulation of prostaglandin F2α production via ROS-induced NF-κB activation [83]. The mechanism of menstruation is unclear, and activation of the transcription factor NF-κB may be a piece in the puzzle.
Ovarian epithelial cancer is the most common type of ovarian cancer. Ovarian epithelial inflammation has been suggested as an etiological factor in ovarian epithelial cancer [11,84]. The mechanisms that bring about follicular rupture result in the exposure of the ovarian surface epithelial cells to deleterious agents (e.g. free radicals and TNF-α) [85,86]. Thus, incessant ovulation and its complex articulation with OS, inflammation and cytokines, repeated cyclically, may be involved in the etiopathogenesis of ovarian cancer [86]. Factors that inhibit ovulation, such as oral contraceptives, reduce the risk of epithelial ovarian cancer [87,88]. Recent studies point towards a role for genes active in the metabolism of oxidation products in the etiology of ovarian cancer [89].
Reactive oxygen species and mediators of angiogenesis
Angiogenesis is a pathophysiological process involving the formation of blood vessels from preexisting vessels. The induction of angiogenesis occurs when there is a deficiency of oxygen in tissues. This process of neovascularization results from hypoxia and the induction of various angiogenic factors, and it plays a role in physiological processes such as follicular development, endometrial growth, embryo development, growth of placental vessels and wound repair [90,91]. Angiogenesis is important for the cyclical regeneration of the endometrium in the menstrual cycle. A complex cytokine influence at the maternal-fetal interface creates the conditions necessary to support embryo implantation in the endometrium [92,93]. Any imbalance between cytokines and angiogenic factors could result in implantation failure and pregnancy loss [94]. Critical changes occur in the vascular system, and these changes accompany follicular growth. Follicular growth, selection of the dominant follicle, corpus luteum formation, endometrial differentiation and embryo formation are key processes dependent on neovascularization [90,95]. As the endometrium grows during the menstrual cycle, vessel regeneration occurs (i.e. spiral arterioles and capillaries) [96]. Estrogens promote angiogenesis in the endometrium by controlling the expression of factors such as VEGF [97]. ROS generated from NAD(P)H oxidase are critical for VEGF signaling in vitro and angiogenesis in vivo [98]. Small amounts of ROS are produced by endothelial NAD(P)H oxidase activated by growth factors and cytokines.
ROS that are generated in and around the vascular endothelium may play a role in normal cellular signaling mechanisms. They may also be an important causative factor in the endothelial dysfunction that leads to the development of atherosclerosis, diabetes complications and ischemia-reperfusion injury [98,99]. The molecular mechanism by which the oxidant status of cells modulates angiogenesis is not completely understood. As our understanding of the role that ROS-induced angiogenesis plays in atherosclerosis and myocardial angiogenesis grows, future studies should investigate the role ROS play in angiogenesis in the female reproductive tract.
Reactive oxygen species and the endometrium
There is a cyclical variation in the expression of superoxide dismutase (SOD) in the endometrium. SOD activity decreases in the late secretory phase while ROS levels increase [100]. These changes have been hypothesized to be important in the genesis of menstruation and endometrial shedding. The levels of prostaglandin F2α increase towards the late secretory phase, and ROS trigger the release of prostaglandin F2α in vitro [101]. Stimulation of the cyclooxygenase enzyme is brought about by ROS via activation of the transcription factor NF-κB, suggesting a mechanism for menstruation [83].
ROS and endometriosis
Increased generation of ROS by activated peritoneal macrophages has been reported in the peritoneal fluid [102].
Conflicting results were reported in further studies with larger patient numbers, which failed to demonstrate an imbalance between oxidants and antioxidants [74,103]: ROS levels in the peritoneal fluid of patients with endometriosis were not significantly higher than in controls.
An increased titer of autoantibodies related to OS has been reported in women with endometriosis, with an increase in serum autoantibody titers to oxidatively modified low density lipoproteins [104]. An OS-induced increase in autoantibody titers in the peritoneal fluid has also been demonstrated in women with endometriosis. Elevated levels of the lipid peroxidation marker lysophosphatidyl choline, a potent chemotactic factor for monocytes/T-lymphocytes, were seen in the peritoneal fluid of women with endometriosis [105]. Non-terminal oxidation may have a role in the pathophysiology of endometriosis. Minimally oxidized low density lipoprotein (M-LDL) is present in the peritoneal fluid of women with endometriosis in place of terminally oxidized LDL (Ox-LDL) [106]. The ratio of lysophosphatidyl choline, a breakdown product of Ox-LDL, to phosphatidyl choline suggests M-LDL rather than Ox-LDL. Modest levels of OS induced proliferation of endometrial stromal cells in vitro; this proliferation was inhibited by antioxidants [107]. RU486, a potent antiprogestational agent with antioxidant activity, also decreased proliferation of epithelial and stromal cells [108].
Reactive oxygen species and the ovary
Markers of oxidative stress such as superoxide dismutase, Cu-Zn superoxide dismutase, Mn superoxide dismutase, glutathione peroxidase, γ-glutamyl synthetase and lipid peroxides have been investigated by immunohistochemical localization, mRNA expression and the thiobarbituric acid method [4,14,41]. The expression of various biomarkers of OS has been demonstrated in normal cycling human ovaries [13,14]. All follicular stages have been examined for SOD expression, including primordial, primary, preantral and nondominant antral follicles in the follicular phase, as well as dominant and atretic follicles [14]. ROS may have a regulatory role in oocyte maturation, folliculogenesis, ovarian steroidogenesis and luteolysis. There is a delicate balance between ROS and antioxidant enzymes in ovarian tissues. The antioxidant enzymes neutralize ROS production and protect the oocyte and embryo.
Immunohistochemistry revealed intense superoxide dismutase staining in the theca interna cells of antral follicles [13]. An antibody to Ad4-binding protein (Ad4BP) was used to localize Ad4BP in the nuclei of theca and granulosa cells. Ad4BP is a steroidogenic transcription factor that induces transcription of the steroidogenic P450 enzyme and thus controls steroidogenesis in the ovaries. The correlation between Ad4BP and superoxide dismutase expression suggests an association between OS and ovarian steroidogenesis [14].
Both human granulosa and luteal cells respond to hydrogen peroxide with an extirpation of gonadotropin action and inhibition of progesterone secretion [11]. The production of both progesterone and estradiol hormones is reduced when hydrogen peroxide is added to a culture of human chorionic gonadotropin-stimulated luteal cells.
Hydrogen peroxide lowers both cAMP-dependent and cAMP-independent steroidogenesis [109]. The role of hCG (human chorionic gonadotropin) in the expression of the antioxidant enzyme superoxide dismutase (SOD) has been investigated in corpora lutea collected from patients at hysterectomy and at surgery for ectopic pregnancy [14]. Cu-Zn SOD expression in the corpora lutea paralleled progesterone levels, which rose from the early to mid luteal phase and decreased during regression of the corpus luteum. However, in corpora lutea from pregnant patients, mRNA expression of Cu-Zn superoxide dismutase was significantly higher than in midcycle corpora lutea. This enhanced expression of luteal Cu-Zn SOD may be due to hCG, which may therefore have an important role in the maintenance of corpus luteal function in pregnancy.
Levels of three oxidative stress biomarkers, conjugated dienes, lipid hydroperoxides and thiobarbituric acid, were determined in preovulatory follicles. A concentration gradient was found to exist, as levels of all three markers were significantly lower in follicular fluid than in serum [15]. The preovulatory follicle has a potent antioxidant defense, which is depleted by intense peroxidation [15]. Glutathione peroxidase may also maintain low levels of hydroperoxides inside the follicle and thus play an important role in gametogenesis and fertilization [42].
Nitric Oxide synthase in female reproduction
The production of a viable oocyte is modulated by a complex interaction of endocrine, paracrine and autocrine factors leading to follicular maturation, granulosa cell maturation, ovulation and luteinization. Many hormonal and paracrine factors determine oocyte competence and embryo quality. Steroid hormones and local autocrine and paracrine factors influence ovarian stromal cells. The gonadotropins act through complex, multiple local signaling pathways. Cyclic AMP is thought to be the second messenger mediating the effects of luteinizing hormone and follicle stimulating hormone [110]. In turn, cyclic AMP may activate other signaling pathways. Cyclic guanosine monophosphate (cGMP), another cyclic nucleotide, has also been proposed as a second-messenger pathway. The effects of NO are proposed to be mediated through cGMP as a second messenger or by generation of ROS resulting from the interaction of NO with superoxide radicals [111].
NO generated by macrophages in response to invading microbes acts as an antimicrobial agent [21]. Neurons, blood vessels and cells of the immune system are integral parts of the reproductive organs, and in view of the important functional role that NO plays in those systems, it is highly likely that NO is an important regulator of the biology and physiology of the reproductive system. NO has established itself as a polyvalent molecule that plays a decisive role in regulating multiple functions within the female as well as male reproductive system [21]. As a final immune effector, NO generated by inducible NO synthase, kills pathogens and abnormal cells but may play a detrimental role by damaging normal host tissue and cells, especially when inducible NO synthase is persistently expressed [20].
Nitric oxide synthase and fallopian tubes
The presence of NO synthase enzymes, both the constitutive and the inducible forms, was delineated by immunohistochemistry, together with NADPH-diaphorase activity, in human tubal cells [112,113]. The production of NO was demonstrated by positive NADPH-diaphorase activity in the human fallopian tube, indicating that an endogenous NO system exists in the fallopian tubes [114]. NO has a relaxing effect on smooth muscle and has similar effects on tubal contractility. Deficiency of NO may lead to tubal motility dysfunction, resulting in retention of the ovum, delayed sperm transport and infertility. Infertility associated with urogenital tract infections is associated with diminished sperm motility and viability. Increased NO levels in the fallopian tubes are cytotoxic to invading microbes and may also be toxic to spermatozoa [114].
Nitric oxide synthase, endometrium, and myometrium
Expression of endothelial and inducible NO synthase have been demonstrated in the human endometrium [115], and the endometrial vessels [116]. Endothelial NO synthase, originally identified in vascular endothelial cells, is distributed in glandular surface epithelial cells in the human endometrium. NO also regulates the microvasculature of the endometrium and is important in menstruation.
Endothelial NOS-like immunoreactivity has been reported in the endothelial cells lining the vessels, the endometrium and endometrial glandular epithelial cells, and the myometrium [117]. Inducible NOS-like immunoreactivity was detected in decidualised stromal cells and was also expressed in tissues from the first trimester of pregnancy.
Thus, NO has a role to play in decidualisation of the endometrium and preparation of the endometrium for implantation.
NO plays a significant role in pregnancy and labor. Expression of inducible NOS was highest in patients with preterm pregnancy and not in patients in term labor.
Nitric oxide synthase and endometriosis
Endometriosis is found in about 35% of infertile women who have laparoscopy as part of their infertility workup [71]. Production of ROS by peritoneal fluid mononuclear cells was reported to be associated with endometriosis [75]. Low levels of NO are important in ovarian function and implantation and cause relaxation of oviduct musculature [112]. High levels of NO are reported to have deleterious effects on sperm motility, to be toxic to embryos, and to inhibit implantation [124,125]. In vitro fertilization, a process that avoids contact of gametes and embryos with the potentially toxic peritoneal and oviductal factors associated with endometriosis (e.g., NOS, ROS), improves the chances of conception in these women. NO is a free radical with deleterious effects and is an important bioregulator of apoptosis [126]. Activation of polymorphonuclear leucocytes and macrophages leads to increased production of ROS [102]. An increase in the number and activity of macrophages is accompanied by release of more cytokines and other immune mediators, such as NO. This was initially implicated in low-grade inflammation, while elevated peritoneal NO is consistent with the increased number and activity of macrophages [20]. High levels of NO, such as those produced by macrophages, can negatively influence fertility in several ways. Changes in peritoneal fluid, an environment that hosts the processes of ovulation, gamete transportation, sperm-oocyte interaction, fertilization, and early embryonic development, might affect all these steps of reproduction [2,20,127]. Studies investigating the association of nitric oxide levels with lipid peroxides and reactive oxygen species in peritoneal fluid did not find any significant difference between patients with or without endometriosis [103,128,129]. Conflicting results were obtained in studies conducted by Szczepanska et al. [2].
The total antioxidant capacity was reduced, and individual antioxidant enzymes such as superoxide dismutase were significantly lower, in the peritoneal fluid of women with endometriosis-associated infertility. Lipid peroxide levels were highest among patients with endometriosis, suggesting a role of ROS in the development of the disease process [2]. There is cyclical expression of NOS mRNA in the epithelial glands of the human endometrium. Higher amounts of NO and NOS are seen in the endometrium of women with endometriosis [28,130,131]. NOS expression in the ectopic endometrium of patients with adenomyosis is continuous throughout the menstrual cycle [132].
Peritoneal fluid NO levels, peritoneal macrophage NOS activity, and peritoneal macrophage inducible NOS protein expression have been examined in women with endometriosis-associated infertility. Peritoneal macrophages express higher levels of NOS, have higher NOS enzyme activity, and produce more NO in response to immune stimulation in vitro [23]. High levels of NO adversely affect sperm, embryos, implantation, and oviductal function, indicating that reducing peritoneal fluid NO production or blocking NO effects may improve fertility in women with endometriosis [23].
However, generation of peroxynitrite by ectopic endometrium has been reported in patients with adenomyosis. Expression of endothelial and inducible NO synthase and peroxynitrite generation were markedly reduced after GnRH agonist therapy, supporting their potential role in the pathophysiology of adenomyosis [132]. Serum NO levels are suppressed by GnRH agonists and upregulated by gonadotropin stimulation during controlled ovarian stimulation in female partners from couples with male factor infertility [133]; maximal levels were measured at the time of ovulation in the same study. Elevated NO production was not demonstrated in patients with ovarian hyperstimulation.
Increased levels of NO were demonstrated in the peritoneal fluid of patients with endometriosis [20,23]. It has also been hypothesized that ROS may have a role in formation of adhesions associated with endometriosis [134].
Patients with endometriosis show a higher 8-hydroxy-2-deoxyguanosine index compared with patients with other causes of infertility, such as tubal, male factor or idiopathic causes [52].
Expression of NOS is elevated in patients with endometriosis, and a common polymorphism of exon 7 at nucleotide 894 in the endothelial NOS gene may be associated with endometriosis [135]. Hence, variations in the expression of the eNOS gene may be involved in endometrial angiogenesis and thus modulate the process of endometriosis.
Expression of endothelial NO synthase in the endometrium of patients with endometriosis or adenomyosis is persistently marked throughout the menstrual cycle [132]. Many investigators have reported increased expression of endothelial NOS in the glandular endometrium of patients with endometriosis [28,130]. The inducible NOS isoform is elevated in tissues of patients with endometriosis [131]. Endometrial development affects embryo implantation, and asynchrony between endometrium and embryo could impede implantation. Nitric oxide affects fecundity in endometriosis and adenomyosis [136]. Significant differences in uterine hyperperistalsis and dysperistalsis are seen in patients with endometriosis compared with control groups, and this may be responsible for disturbed sperm transport and reduced fertility [137].
Various cytokines secreted from endometrial cells, immune cells, or macrophages stimulate endothelial NO synthase to release NO [3,28,136]. These abnormal immune responses might eventually stimulate macrophages and/or endometrial cells to persistently produce a large amount of NO and inhibit implantation [138]. Increased expression of endothelial NO synthase has been reported throughout the menstrual cycle in the endometrium of women with endometriosis [139].
Nitric oxide synthase and the ovary
Ovarian folliculogenesis involves not only gonadotropins and steroids but also local autocrine and paracrine factors. The nitric oxide radical is one of the local factors involved in ovarian folliculogenesis and steroidogenesis. Nitric oxide acts through activation of various iron-containing enzymes. It binds to the heme-containing enzyme guanylate cyclase, which generates the cyclic nucleotide cGMP [110]. Plasma concentrations of nitrate monitored during the follicular cycle have revealed peak levels at ovulation [133,140]. Nitric oxide inhibits ovarian steroidogenesis [52]. The presence of endothelial NO synthase in human corpora lutea has been reported, with expression in the early and mid luteal phases and to a lesser extent in the late luteal phase [53]. Nitric oxide inhibits steroidogenesis in the corpus luteum and has a luteolytic action mediated through increased prostaglandins and by apoptosis [53,141].
Follicular fluid NO appears to be produced by either endothelial or inducible NO synthase. Under normal physiological conditions, however, follicular fluid NO appears to be synthesized by granulosa cells via endothelial NO synthase, since at least 90% of isolated human follicular cells are granulosa cells, even though macrophages and lymphocytes are also present.
In patients undergoing IVF, a positive correlation was found between follicular fluid nitrate/nitrite levels and follicular volume as well as serum estradiol concentration [142]. In contrast, Manau et al. found no association between follicular fluid nitrite/nitrate levels and parameters of ovarian response [143]. Biomarkers such as serum nitric oxide measurements cannot be used to predict success with IVF [143,144]; serum nitrate concentration may not be a good biomarker because of the short half-life of NO. Follicular blood flow was found to be a better prognostic factor for predicting successful IVF outcomes than follicular NO levels [138]. Follicular fluid NO levels were altered in patients with infertility-associated diseases: they were significantly higher in patients with endometriosis or hydrosalpinx than in patients with tubal obstruction [29]. No correlation was reported between follicular NO levels and follicle maturity or quality.
Some studies have examined the relationship between NO concentrations, follicular growth and programmed follicular cell death (apoptosis). Folliculogenesis involves both growth of the follicle and apoptosis, and nitric oxide regulates both of these processes [21]. Sugino et al. studied the role of nitric oxide in follicular atresia and apoptosis in patients undergoing IVF and found that smaller follicles had a significantly elevated percentage of apoptotic granulosa cells with nuclear fragmentation [58]. Low concentrations of NO may prevent apoptosis; however, pathologically high concentrations of NO, as well as increased superoxide generation by NO synthase due to lack of arginine, may promote cell death through peroxynitrite generation [21]. Nitric oxide involvement in various ovarian functions has been suggested, and the presence of NO in follicular fluid and the expression of NO synthase in follicles and the corpus luteum have been reported [19,141,143,145].
Plasma concentration of NO increases in the follicular phase compared with the secretory phase and peaks at midcycle [140]. Enhancing NO availability elicited a positive effect in women with poor ovarian response undergoing controlled ovarian stimulation [146]. Upregulated NO is harmful to implantation and pregnancy among patients with tubal factor infertility after controlled ovarian stimulation [147]. Serum NO levels were elevated among nonpregnant patients with tubal or peritoneal factor infertility [124].
Follicular fluid NO level is not associated with the maturity or quality of the oocyte, and no significant differences in follicular fluid NO concentrations were seen among large, medium, or small follicles. Higher TNF-α concentrations in follicular fluid correlated with poor oocyte quality [29]. In contrast, follicular fluid nitrite or nitrate levels were significantly lower in follicles containing mature oocytes that fertilized compared with those that did not [148]. Follicular NO has been reported to correlate negatively with embryo quality and the rate of embryo cleavage [124,147,148]. The beneficial effects of NO donors in patients with intrauterine growth retardation (IUGR) and in the inhibition of preterm labor have been studied [149,150]. Use of a nitroglycerine (NTG) patch, an NO donor, did not significantly affect the final outcome in patients undergoing in vitro fertilization; in addition, neither placebo nor the nitroglycerine patch improved the flow resistance in the uterine artery [22]. NO donors and elevated serum NO were associated with implantation failure, resulting in decreased fertility [138].
Assisted reproduction
Assisted reproductive technology (ART) involves the direct manipulation of human oocytes, sperm or embryos outside the body to establish a pregnancy. A variety of causes of infertility can be indications for ART, e.g., tubal factor, endometriosis, male factor and unexplained infertility [151,152]. Assisted reproductive techniques offer infertile couples excellent opportunities for achieving pregnancy. There may be multiple sources of ROS in an IVF setting, including the oocytes, the cumulus cell mass, or the spermatozoa used for insemination [153].
Oxidative stress and its impact on ART
The follicular fluid microenvironment has a crucial role in determining the quality of the oocyte, which in turn affects the fertilization rate and embryo quality. Oxidative stress markers have been localized in the follicular fluid of patients undergoing IVF/embryo transfer (ET) [4,51,154,155]. Low intrafollicular oxygenation has been associated with decreased oocyte developmental potential, as reflected by an increasing frequency of oocyte cytoplasmic defects, impaired cleavage and abnormal chromosomal segregation in oocytes from poorly vascularized follicles [156]. ROS may be responsible for increased embryo fragmentation resulting from increased apoptosis [157]. Thus, elevated ROS levels are not conducive to embryo growth and result in impaired development. Current studies are focusing on the ability of growth factors, normally found in the fallopian tubes and endometrium, to protect in vitro cultured embryos from the detrimental effects of ROS such as apoptosis. The factors being investigated include insulin-like growth factor (IGF)-1 and epidermal growth factor (EGF) in mouse embryos, which in many respects are similar to human embryos [158].
Exogenous gonadotropin has a stimulatory effect on the follicular content of iron, a potent oxidant that catalyses the generation of free radicals in the Haber-Weiss reaction. Iron overload in thalassemia acts as a redox-active center, with a resultant increase in the production of free radicals [159]. An increase in free radicals was reported in the follicular fluid of patients with thalassemia. The spectrum of initial hypogonadism and later gonadal failure in thalassemia results from injury mediated by free radicals.
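For reference, the iron-catalysed Haber-Weiss chemistry invoked here is standard textbook radical chemistry (not specific to the cited studies) and can be summarised as:

```latex
% Fenton reaction: the iron-catalysed step generating hydroxyl radical
\mathrm{Fe^{2+} + H_2O_2 \rightarrow Fe^{3+} + OH^{\bullet} + OH^{-}}
% Superoxide regenerates the catalytic ferrous iron
\mathrm{Fe^{3+} + O_2^{\bullet -} \rightarrow Fe^{2+} + O_2}
% Net Haber-Weiss reaction
\mathrm{O_2^{\bullet -} + H_2O_2 \rightarrow O_2 + OH^{\bullet} + OH^{-}}
```

This is why iron overload (as in thalassemia) and metal ions in culture media are repeatedly linked to free radical generation in this review.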
An increase in total antioxidant capacity (TAC) was seen in the follicular fluid of oocytes that were later successfully fertilized; therefore, lower TAC is predictive of decreased fertilization potential [154]. Higher levels were associated with increased viability of the embryos until the time of transfer, and fertilization potential decreased with decreasing concentrations of total antioxidants. Similarly, mean glutathione peroxidase levels were increased in follicles yielding oocytes that were subsequently fertilized [42]. Levels of ROS were reported to be significantly lower in patients who did not become pregnant compared with those who became pregnant [4]. Thus, intrafollicular ROS levels may be used as a potential marker for predicting success with IVF. Studies determining normal TAC levels of the follicular fluid in unstimulated cycles are lacking.
In addition, levels of selenium in the follicular fluid of women with unexplained infertility were lower than those in women with tubal factor or male factor infertility [42]. Higher levels of superoxide dismutase activity were present in fluid from follicles whose oocytes did not fertilize compared with those that did [12]. These discrepancies may be because the studies measured different parameters. The effects of follicular OS on oocyte maturation, fertilization and pregnancy have also been studied [51]. Patients who became pregnant following IVF or ICSI had higher lipid peroxidation levels and TAC, although neither marker was able to predict embryo quality. Pregnancy rates demonstrated a positive correlation with levels of lipid peroxidation and TAC.
OS in follicular fluid from women undergoing IVF was inversely correlated with the women's age [160]. Using a thermochemiluminescence assay, the slope was found to positively correlate with maximal serum estradiol levels, number of mature oocytes and number of cleaved embryos and inversely with the number of gonadotropin ampoules used. The pregnancy rate achieved was 28% and all pregnancies occurred when the thermochemiluminescence amplitude was small. This is in agreement with another study that reported minimal levels of OS were necessary for achieving pregnancy [51]. Follicular fluid ROS and lipid peroxidation levels may be markers for success with IVF.
Oocyte quality is a very important determinant of the outcome of IVF/ET. 8-hydroxy-2-deoxyguanosine is a reliable indicator of DNA damage caused by oxidative stress and is used as a marker of OS in various other disease processes, e.g., renal carcinogenesis and diabetes mellitus. Higher levels of 8-hydroxy-2-deoxyguanosine were associated with lower fertilization rates and poor embryo quality [52]. Higher levels of 8-hydroxy-2-deoxyguanosine are also seen in the granulosa cells of patients with endometriosis, and this may impair oocyte quality.
Other OS markers such as thiobarbituric acid-reactive substances, conjugated dienes and lipid hydroperoxides have been studied in the preovulatory follicular fluid [15]. No correlation was seen between these markers and IVF outcome (fertilization rates or biochemical pregnancies) [15]. A potent antioxidant system may be present in the follicular fluid as indicated by low levels of all 3 biomarkers of oxidative stress in the follicular fluid. A recent chemiluminescence study examined the follicular fluid obtained from patients undergoing IVF. Hydrogen peroxide was utilized for the induction of chemiluminescence. The study found that a delicate balance was maintained by pro-oxidant/antioxidants in the follicular fluid [161].
Smoking has been associated with prolonged, dose-dependent adverse effects on ovarian function [162]. According to a meta-analysis, the overall odds ratio for the risk of infertility associated with smoking was 1.60 [95% confidence interval (CI) 1.34-1.91]. ARTs, including IVF, are further shedding light on the effects of smoking on follicular health. Intrafollicular exposure to cotinine increases lipid peroxidation in the follicle [155]. Carotenoids have gained attention because, like vitamin E, they are very potent antioxidants and react with ROS; the presence of carotenoids has been demonstrated in follicular fluid [163]. Concentrations of carotenoids, retinol and alpha-tocopherol were found to be significantly higher in follicular fluid than in plasma.
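As an aside on how such meta-analytic intervals are reported: an odds ratio CI is computed on the log scale, so the published bounds imply a standard error of the pooled log-OR. A minimal sketch (illustrative only; the pooled estimate of 1.60 and the 1.34-1.91 interval are taken from the meta-analysis cited above, and the standard error is back-calculated rather than taken from the original data):

```python
import math

def or_ci(odds_ratio, se_log_or, z=1.96):
    """95% confidence interval for an odds ratio, computed on the log scale."""
    log_or = math.log(odds_ratio)
    return math.exp(log_or - z * se_log_or), math.exp(log_or + z * se_log_or)

# Back-calculate the standard error of the log-OR implied by the
# reported interval (1.34-1.91): the bounds span 2*z standard errors.
se = (math.log(1.91) - math.log(1.34)) / (2 * 1.96)

lo, hi = or_ci(1.60, se)
print(round(lo, 2), round(hi, 2))  # recovers approximately 1.34 and 1.91
```

This also makes explicit why an OR of 1.60 with a lower CI bound above 1.0 is read as a statistically significant increase in infertility risk.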
Melatonin was investigated as a drug to improve oocyte quality in patients failing to get pregnant in earlier IVF cycles because of poor quality oocytes [164]. A significant reduction in the number of degenerate oocytes was reported, and the number of fertilized embryos increased. Increased follicular concentrations of melatonin reduced lipid peroxide concentration and may have prevented DNA damage.
Redox and early embryo development
Physiological levels of redox activity may be important for embryogenesis. Overproduction of ROS is detrimental to the embryo, resulting in an impaired intracellular milieu and disturbed metabolism [17,165]. Superoxide anion, hydrogen peroxide and the hydroxyl radical can have detrimental effects on the fetus. Oxidative stress can be generated by spermatozoa and leucocytes, and by events such as sperm-mediated oocyte activation and activation of the embryonic genome [165]. ROS generation can result from oxidative phosphorylation occurring in the mitochondria: electrons leak from the electron transport chain at the inner mitochondrial membrane and are transferred to molecular oxygen, leaving an unpaired electron and generating the superoxide radical. The other sites of ROS generation are cytoplasmic NADPH oxidase, cytochrome P450 enzymes and xanthine oxidoreductase. Excessive OS can have deleterious effects on the cellular milieu and can result in impaired cellular growth in the embryo or in apoptosis leading to embryo fragmentation. Thus, OS-mediated damage of macromolecules plays a role in fetal embryopathies. Deficient folate levels in the mother result in elevated homocysteine levels; homocysteine-induced oxidative stress has been proposed as a potential factor causing apoptosis, disrupting palate development and causing cleft palate [166]. Oxidative stress-mediated damage of macromolecules has also been proposed as a mechanism of thalidomide-induced embryopathy and other embryopathies [167,168].
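The mitochondrial steps described above can be written as standard textbook reactions (general radical biochemistry, not specific to the cited studies):

```latex
% One-electron reduction of oxygen by electrons leaking from the
% mitochondrial electron transport chain yields superoxide
\mathrm{O_2 + e^{-} \rightarrow O_2^{\bullet -}}
% Dismutation of superoxide, catalysed by superoxide dismutase,
% yields hydrogen peroxide
\mathrm{2\,O_2^{\bullet -} + 2\,H^{+} \rightarrow H_2O_2 + O_2}
```

The hydrogen peroxide produced here feeds the iron-catalysed chemistry discussed earlier, generating the highly reactive hydroxyl radical.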
Hyperglycemia/diabetes-induced downregulation of cyclooxygenase-2 gene expression in the embryo results in low PGE2 levels and diabetic embryopathy [169]. The protective role of the enzyme G6PD (glucose-6-phosphate dehydrogenase) against oxidative stress has been demonstrated in an animal study, in which embryopathies were prevented by protecting the embryos against oxidative stress [170].
Effect of oxygen concentration on in-vitro embryo development
Early embryo development in mammals, from fertilization through differentiation of the principal organ systems, occurs in a low-oxygen environment [168]. A marginal improvement in preimplantation embryonic viability has been reported under low oxygen concentrations in patients undergoing IVF and ICSI [171]. Lower oxygen concentrations during in vitro culture of porcine embryos decreased the H2O2 content and reduced DNA fragmentation, thereby improving developmental ability [172]. A higher oxygen concentration of 20% has been associated with lower developmental competence, whereas accelerated development was seen under low (5%) oxygen concentrations.
Role of ROS in IVF media
ROS may be generated endogenously or exogenously; either way, it can affect the oocyte and embryo. IVF culture media may be an exogenous site of ROS generation affecting the oocytes and the preimplantation embryo. Some specific events in embryo development are associated with a change in the redox state: it has been suggested that redox may have a causative role in sperm-mediated oocyte activation, embryonic genome activation and embryonic hatching from the zona pellucida [165]. Higher day 1 ROS levels in culture media were associated with delayed embryonic development, high fragmentation and development of morphologically abnormal blastocysts after prolonged culture. A significant correlation was reported between increased ROS levels in day 1 culture media and lower fertilization rates in patients undergoing ICSI [153]. Lower ROS levels were associated with higher fertilization rates, indicating the physiological relevance of low levels of ROS.
Incubation of poor-quality embryos was associated with a decline in TAC in the preimplantation embryo culture medium after 24 hours of incubation. Poor-quality embryos may be associated with increased generation of ROS [173]. HEPES [4-(2-hydroxyethyl)piperazine-1-ethanesulfonic acid] was found to be the most potent protector against DNA damage occurring in spermatozoa, compared with human tubal fluid media and polyvinyl alcohol, as determined by the plasmid relaxation assay, which measures plasmid DNA damage [174]. IVF media supplemented with human serum albumin (10 mg/mL), glucose (2.78 mM), 1% polyvinyl alcohol, 5% polyvinylpyrrolidone, sucrose (100 mM), 60% Percoll, human tubal fluid, human tubal fluid media, catalase (1 and 10 IU), and HEPES (21 mM) scavenge ROS and confer protection from DNA damage [174].
Strategies to overcome oxidative stress in infertility/ART
Considerable interest has been generated in the use of antioxidants to overcome the adverse and pathological effects of OS. Oxidative stress leads to luteal regression [43], resulting in a lack of luteal support for a pregnancy [33]. OS can damage oocytes in developing follicles, oocytes and spermatozoa in the peritoneal cavity, or the embryo in the fallopian tube [17,153], acting through a redox (pro-oxidant/antioxidant) imbalance. OS can be overcome by reducing the generation of ROS or by increasing the amount of antioxidants available. The literature contains studies that used nutritional supplements and antioxidants, such as vitamin C, to protect against ROS and OS. However, there is a lack of consensus on the type and dosage of antioxidants to be used, and clinical evidence on the benefits of antioxidant supplementation is equivocal.
Current evidence supports the use of systemic antioxidants for the management of selected cases of male infertility [175]. Randomized controlled trials investigating antioxidants in female infertility are few and lack power because of small patient numbers. In a recent randomized, controlled, multi-center study, the effect of vitamin C supplementation (750 mg/day) in patients with a luteal phase defect was reported; pregnancy rates were higher in the treatment group than in the controls [176]. Similarly, concentrations of antioxidants were found to be significantly lower in women with a history of recurrent miscarriages and luteal phase defects than in healthy women [177]. Vitamin C concentrations were higher in the follicular fluid of patients supplemented with vitamin C than in that of controls. The pregnancy rate was higher in the supplemented group than in the control group, although the difference was not statistically significant [178].
In a double-blind, placebo-controlled pilot study, the impact of a nutritional supplement containing vitamin E, iron, zinc, selenium and L-arginine was examined [179]. The mean mid-luteal progesterone level increased from 8.2 ng/mL to 12.8 ng/mL (p = 0.08), and the patients had a significant increase in ovulation and pregnancy rates (33% pregnant, p < 0.01) [179]. In a study of short-term supplementation with high doses of ascorbic acid during the luteal phase in IVF, the clinical pregnancy rate and implantation rate did not improve [180]. There is a lack of consensus on antioxidant supplementation in idiopathic infertility, and randomized controlled trials need to be designed with sufficient power and patient numbers to investigate this issue.
Fertilization and embryo development in vivo occur in an environment of low oxygen tension [168]. During ART, it is important to avoid conditions that promote ROS generation and expose gametes and embryos to ROS. During culture, low oxygen tension is more effective at improving implantation and pregnancy rates than higher oxygen tension [181]. Similarly, higher implantation and clinical pregnancy rates are reported when antioxidant-supplemented media are used rather than standard media without antioxidants. Metal ions can promote oxidant production and can increase the generation of ROS directly through the Haber-Weiss reaction; it may therefore be useful to add metal ion-chelating agents to the culture media to decrease the production of oxidants [181].
Amino acids added to IVF media also have antioxidant properties. Adding ascorbate during cryopreservation reduces hydrogen peroxide levels, and thus oxidative stress, in mammalian embryos [182]. As a consequence, embryo development improved, with enhanced blastocyst development rates. A significant negative association has been reported between the duration of smoking and fertilization rates in IVF procedures. Eliminating smoking would help improve fertility and ART outcomes [178]. Because a history of smoking is associated with high levels of OS, antioxidant supplementation can be recommended in infertile women who smoke [155].
Follicular vascularity determines the intrafollicular oxygen content and the developmental potential of the oocyte [156,183]. Intrafollicular hypoxia results in chromosomal segregation disorders and deleterious mosaicisms in the embryo. Sildenafil, a phosphodiesterase inhibitor, prevents the breakdown of cGMP and thereby potentiates the effects of NO on vascular smooth muscle. Vaginal sildenafil and L-arginine have been investigated for their potential to improve intrafollicular and uterine blood flow by augmenting NO-induced vasodilatation. A recent study reported that sildenafil, administered on day 3 of the menstrual cycle, appeared effective in improving uterine artery blood flow and endometrial development [184]. The same group, in a subsequent cohort of 105 patients with infertility and previous IVF failures, achieved higher implantation and pregnancy rates with vaginal sildenafil [185]. Oral L-arginine supplementation during controlled ovarian stimulation in poor-responder patients may improve ovarian response, endometrial receptivity and pregnancy rate by increasing perifollicular and uterine blood flow [146]. Follicular fluid concentrations of nitrite/nitrate correlated inversely with embryo quality. Although embryo quality was poor, L-arginine supplementation in normally responding patients resulted in higher follicular fluid arginine levels compared with poor responders and increased follicular recruitment [147]. NO derivatives at higher doses in follicular fluid may cause cytostatic and cytotoxic effects and may have detrimental consequences for embryo quality, implantation and pregnancy rate.
Mechanical removal of ROS in IVF/ET has been examined [186]. Cumulus oophorus rinsing is performed to overcome the deleterious effects of ROS in patients with ovarian endometriosis [186]. ROS has deleterious effects on both oocyte and embryo quality. The rinsing procedure prevented the deleterious effects of TNF-α and reactive oxygen species, which are increased in the peritoneal fluid of patients with endometriosis and unexplained infertility.
Critical review of OS, ovary and ART
A comprehensive review of the published literature reveals that the role of oxidative stress remains controversial, owing to differences in the materials examined (i.e., follicular fluid, embryos, and culture media). The number of articles on oxidative stress in the last 5 years has increased significantly compared with the previous 5 years, indicating that more studies are being conducted to understand the role of oxidative stress in female reproduction. The effects of ROS on female reproduction have been studied using various endpoints relating to the oocyte, fertilization, the embryo, and pregnancy. Different markers of oxidative stress are reported across studies, and the sensitivity and specificity of these biomarkers are not known. While some research focuses on antioxidant capacity, other work measures levels of oxidative stress markers. Studies measuring the amount and type of antioxidants have also assumed an inverse correlation between oxidative stress markers and antioxidants. These studies vary further because some measured total antioxidant capacity while others measured individual enzymes such as superoxide dismutase. Further studies need to be designed to validate earlier results, eliminating the various sources of bias; doing so will make comparisons across studies acceptable and support an evidence-based approach. The biomarkers of oxidative stress studied should be similar across studies to make results comparable. Prospective, randomized controlled trials with stringent inclusion criteria are needed to determine the effects of antioxidants in overcoming redox imbalance in infertility patients.
Age related fertility decline, Menopause and ROS
There is an age-related decline in the number and quality of follicles in females, and ROS may damage the oocytes [187]. The age-related decline in oocyte quality also results in an increased incidence of congenital anomalies in children. Ageing of the oocytes affects many biochemical pathways, with deleterious effects on pre- and post-implantation development of the embryo [188]. Pre- and post-ovulatory ageing of the oocytes has also been associated with congenital anomalies, behavioral alterations and learning disabilities in later life, and with constitutional diseases such as diabetes mellitus and schizophrenia. Oxidative stress occurs at menopause because of the loss of estrogens, which have an antioxidant effect on low-density lipoproteins. Estrogens confer cardioprotection through their antioxidant properties and by lowering protein oxidation [189]. Diminished antioxidant defense is associated with osteoporosis in post-menopause. Modulation of the estrogen receptors α and β has been reported to be affected in vitro by oxidative stress [190].
Oxidative stress and pre-eclampsia
Pre-eclampsia is associated with severe maternal and fetal morbidity and mortality [191]. Overall, pre-eclampsia complicates 5% of all pregnancies and 11% of all first pregnancies. Recent evidence suggests a role for oxidative stress in pre-eclampsia: there is a reduced antioxidant response in patients with pre-eclampsia [192,193], together with reduced levels of antioxidant nutrients [194] and increased lipid peroxidation [45,194].
Placental oxidative stress and Pre-eclampsia
Incomplete trophoblast invasion leads to failure of conversion of the thick-walled, tortuous spiral arteries into low-resistance, flaccid sinusoidal vessels [195,59], resulting in impaired placental perfusion. The hypoxia/reperfusion injury leads to increased expression of xanthine oxidase and NAD(P)H oxidase, with a resultant increased generation of superoxide anion. This increased generation of pro-oxidants tilts the balance in favor of oxidative stress, which results in increased lipid peroxidation; biomarkers of lipid peroxidation are elevated in the placenta [45,60].
Interventions to overcome oxidative stress in pre-eclampsia
There is currently no accepted method of preventing pre-eclampsia. The antioxidants vitamin C and vitamin E have been studied in some trials for preventing pre-eclampsia. Early intervention at 16-22 weeks of pregnancy with supplementation of vitamin E and C resulted in a significant reduction of pre-eclampsia in the supplemented group [196], whereas supplementation in women with established pre-eclampsia did not result in any benefit [197]. A recent report of a randomized trial failed to find beneficial effects of vitamin C and E supplementation in preventing pre-eclampsia [198].
Redox and miscarriage
The human placenta is classified as hemomonochorial: maternal blood directly bathes the fetal trophoblast. Establishment of the maternal placental circulation is influenced by trophoblastic invasion. Extravillous trophoblastic invasion transforms the small-caliber, high-resistance spiral arteries into large-caliber, low-resistance, high-capacitance uteroplacental arteries. Abnormal placentation has been implicated in the pathogenesis of pre-eclampsia and miscarriage [199]. Pre-eclampsia is unique to the human species, and miscarriage is very rare in other species [200]. Abnormal placentation leads to placental oxidative stress, with resultant detrimental effects on the syncytiotrophoblast, and this has been proposed as a mechanism involved in the etiopathogenesis of abortion. A sharp peak in the expression of markers of oxidative stress in the trophoblast was detected in normal pregnancies, and this oxidative burst, if excessive, was speculated to be a cause of early pregnancy loss [168].
The etiology of recurrent pregnancy loss remains unclear and is a scientific challenge. Oxidative stress may play a role in recurrent pregnancy loss of unexplained etiology. Glutathione and the glutathione transferase family of enzymes have been investigated in patients who experience recurrent abortions [201,202]. Glutathione and glutathione peroxidase are both antioxidants that neutralize free radicals and lipid peroxides to maintain intracellular homeostasis and redox balance.
The etiology of recurrent pregnancy loss is multifactorial and involves genetic and environmental factors [203]. In a large case-control study, gene polymorphisms of enzymes of the glutathione family, glutathione S-transferase class mu (GSTM1), were studied. An elevated risk of recurrent pregnancy loss was found to be associated with the GSTM1 null genotype polymorphism. Elevated glutathione levels in pregnant patients with a history of recurrent pregnancy loss were associated with poor outcomes (i.e. abortion) [201].
Term labor and the role of oxidative stress
There is increased generation of the free radicals superoxide and nitric oxide in pregnancy, which results in oxidative stress [35]. Term labor induces increased lipid peroxidation, as evidenced by increased levels of the biomarker malondialdehyde [37]. In a case-control study, serum levels of hydroperoxides were higher in patients in labor than in controls who were not in labor [36]. Term labor was demonstrated to cause an upregulation of the antioxidant reserve in the fetal compartment [66]. The role of oxidative stress in the initiation of labor is not known.
F2-isoprostanes, reliable biomarkers of oxidative stress were shown to be significantly elevated in plasma of neonates compared to adults [204]. The study also demonstrated an inverse correlation between gestational age and plasma isoprostane levels.
Interventions to overcome oxidative stress during pregnancy
Based on the understanding of the pathophysiological role of NO in the female reproductive tract, NO donors have been studied for cervical ripening at term. In a randomized controlled study, vaginal administration of isosorbide dinitrate induced cervical ripening at term [205]. Oxidative stress leads to focal collagen damage in the fetal membranes and results in preterm labor [39,206]. Antioxidant supplementation has been investigated in preterm labor and pre-eclampsia for beneficial effects [196,207]. Another randomized, double-blinded, placebo-controlled trial, initiated in 2003, examines women with type-1 diabetes randomized to receive antioxidant supplementation with vitamin C and vitamin E. The benefits of antioxidant supplementation and the incidence of pre-eclampsia in this group of patients will be studied [208]. In addition, the secondary outcomes of birth weight centile and endothelial activation, indicated by the PAI-1/PAI-2 (plasminogen activator inhibitor-1/plasminogen activator inhibitor-2) ratio, will also be studied [208].
Future perspectives in antioxidant therapy
Antioxidants prevent free radicals from oxidizing their substrates. Studies conducted in humans have aimed at delineating the association of the total antioxidant capacity (TAC) content of food with the incidence of chronic diseases [209]. The nutrients being studied for their effects on chronic diseases are vitamin C, vitamin E, carotenoids and selenium. In large prospective studies, pregnant women with HIV infection and selenium deficiency, or micronutrient deficiencies such as vitamin C and vitamin A, were found to have adverse clinical outcomes [210,211], and there is an increasing argument for raising the selenium intake of these patients. There is emerging enthusiasm for the use of antioxidants, natural or synthetic. Small molecules that mimic antioxidant enzymes are new tools being developed in the antioxidant armamentarium [212]; unlike natural superoxide dismutase, they are cell-membrane permeable. Antioxidants targeting cellular organelles such as mitochondria are also being investigated. Gene polymorphisms of the glutathione S-transferase family and myeloperoxidase, and their association with endometriosis, are a promising area of recent interest [26].
Conclusion
The literature provides some evidence of oxidative stress influencing the entire reproductive span of a woman, including the menopausal years. OS plays a role in multiple physiological processes from oocyte maturation to fertilization and embryo development. There is a burgeoning literature on the involvement of OS in the pathophysiology of infertility, assisted fertility and female reproduction. Infertility is a problem of large magnitude. In this review we attempted to examine the various causes of female infertility and the role of OS in the various etiologies of infertility. OS can arise as a result of excessive production of free radicals and/or an impaired antioxidant defense mechanism. An increasing number of published studies have pointed towards the importance of the role of OS in female reproduction. Clearly, we have much to learn, but what we do know is that the role of OS in female reproduction cannot be underestimated. There is evidence that OS plays a role in conditions such as abortion, pre-eclampsia, hydatidiform mole, fetal embryopathies, preterm labor and gestational diabetes, which carry an immense burden of maternal and fetal morbidity and mortality. The review addresses the issue that both NOS and ROS species can lead to infertility problems and a spectrum of female reproductive disorders. We emphasize that free radicals have important physiological functions in the female reproductive tract, but that excessive free radicals precipitate female reproductive tract pathologies.
Reference values for ROS and NOS, minimum safe concentrations and physiologically beneficial concentrations have not yet been defined. Patients should be assessed according to the etiological factors and analyzed separately. Most of the published studies on oxidative stress are either observational or case-control studies. Newer studies should be designed with larger patient numbers, similar outcome parameters and uniform study populations, so that results can be more easily compared. Measurement of OS in vivo is controversial: the sensitivity and specificity of the various oxidative stress markers are not known, and measurement of biomarkers of OS is subject to interlaboratory variations and interobserver differences. A uniform method with comprehensive assessment of OS biomarkers should be used so that results can be compared across studies. Treatment strategies of antioxidant supplementation, directed toward reducing OS, need to be investigated in randomized controlled trials. Antioxidants may be advised when a specific etiology cannot be identified, as in idiopathic infertility, because there is no other evidence-based treatment for idiopathic infertility and reports indicate the presence of OS. Strategies to overcome OS under in vitro conditions, balancing the in vivo and in vitro environments, can be utilized in ART to treat infertility successfully. Interventions for overcoming oxidative stress in conditions such as abortion, pre-eclampsia, preterm labor, gestational diabetes and intrauterine growth retardation are still investigational, with various randomized controlled trials in progress.
Legend
Reprinted from an article in Reproductive BioMedicine Online, with permission from Reproductive Healthcare Ltd [33].
Rapid decline of bacterial drug-resistance in an antibiotic-free environment through phenotypic reversion
Antibiotic resistance typically induces a fitness cost that shapes the fate of antibiotic-resistant bacterial populations. However, the cost of resistance can be mitigated by compensatory mutations elsewhere in the genome, and therefore the loss of resistance may proceed too slowly to be of practical importance. We present our study on the efficacy and phenotypic impact of compensatory evolution in Escherichia coli strains carrying multiple resistance mutations. We have demonstrated that drug-resistance frequently declines within 480 generations during exposure to an antibiotic-free environment. The extent of resistance loss was found to be generally antibiotic-specific, driven by mutations that reduce both resistance level and fitness costs of antibiotic-resistance mutations. We conclude that phenotypic reversion to the antibiotic-sensitive state can be mediated by the acquisition of additional mutations, while maintaining the original resistance mutations. Our study indicates that restricting antimicrobial usage could be a useful policy, but for certain antibiotics only.
Introduction
Steady antibiotic overuse has led to the rise and spread of multidrug-resistant bacteria, and can potentially reduce the number of therapeutic options against several dangerous human pathogens. Resistance seriously impacts the effectiveness of treatment, and increases the risk of complications and fatal outcome (Mensah Abrampah et al., 2018). Strict policies that aim to restrict antimicrobial usage in clinical settings may offer a solution to this problem (Guay, 2008). Such strategies implicitly presume that resistance leads to reduced bacterial fitness in an antibiotic-free environment, and therefore these resistant populations should be rapidly outcompeted by antibiotic-sensitive variants. In theory, the extent of fitness costs determines the long-term stability of resistance, and consequently, the rate by which the frequency of resistant bacteria decreases in an antibiotic-free environment. Resistance mutations frequently incur fitness costs in the laboratory (Andersson and Levin, 1999;Levin et al., 1997), as such mutations impair essential cellular processes, such as transcription, translation, or cell-wall biogenesis. There is a negative correlation between measured fitness costs and prevalence in clinical settings (Basra et al., 2018;Praski Alzrigat et al., 2017;Trindade et al., 2009). This would suggest that fitness costs shape the propagation of antibiotic resistant bacteria in the clinics. However, in other cases, such deleterious side effects of resistance mutations are undetectable, and resistance can even confer benefits in specific, antibiotic-free environmental settings (Maharjan and Ferenci, 2017).
It is frequently assumed that compensatory mutations mitigate the fitness costs of resistance mutations without affecting the level of resistance. Because the range of targets for compensation is much broader, compensatory mutations are more likely to arise than reversions of the resistance mutations themselves. If compensatory mutations are indeed widespread, pathogens can reach both a high level of resistance and high fitness. For these reasons, reversion to the original antibiotic-sensitive state under prudent antibiotic use may proceed so slowly that it has no practical importance in the clinic (Andersson, 2006; Andersson and Hughes, 2010; Nicoloff et al., 2019; Schrag et al., 1997).
Several prior laboratory studies support the argument above. It has been reported that antibiotic resistance is stably maintained as a result of compensatory mutations (Björkman et al., 1999; Johanson et al., 1996; Marcusson et al., 2009); however, instances of compensatory mutations have not been generalized to clinical settings. Several published clinical studies indicate that limited antibiotic use can cause rapid changes in the frequency of resistance. In certain cases, resistance tends to decline both in individual patients and at the community level following restricted antibiotic use (Butler et al., 2007; Dagan et al., 2008; Gottesman et al., 2009; Kristinsson, 1997). For example, the frequency of erythromycin-resistant Streptococcus pyogenes steadily declined as a result of reduced use of macrolides in Finnish hospitals (Seppälä et al., 1997). By contrast, months of reduced clinical use of sulfamethoxazole in the United Kingdom failed to yield reduced resistance levels (Enne et al., 2001).
This disagreement between clinical observations and laboratory studies could have multiple reasons. First, antibiotic treatments frequently fail to eradicate antibiotic-sensitive bacteria from the population completely; following treatment, antibiotic-sensitive bacteria with high fitness could rapidly spread in the population, leading to rapid loss of resistance. Second, compensatory evolution may be limited in nature and in clinical settings (Brandis et al., 2012; Hall and MacLean, 2011; Vogwill and MacLean, 2015). Indeed, most laboratory studies focused on resistance to a single drug (Brandis et al., 2012; Maisnier-Patin et al., 2002; Qi et al., 2016), and thus compensatory evolution in multidrug-resistant bacteria has remained largely unexplored (Vogwill and MacLean, 2015; but see Moura de Sousa et al., 2017). This latter shortcoming is especially noteworthy because several multidrug-resistant bacteria are difficult to treat with currently available antibiotics, making the issue of compensatory evolution especially relevant. Indeed, there is an urgent need to understand the mechanisms that mitigate the costs of multiple, potentially epistatically interacting, mutations in multidrug-resistant pathogens (Trindade et al., 2009; Wong, 2017), which clearly cannot be achieved in studies focusing on single mutants only.
In this work, we have studied 23 drug-resistant E. coli strains that carry 2 to 13 mutations (Lázár et al., 2014). We have found that 60 days of laboratory evolution under antibiotic-free conditions led to a rapid decline of resistance to certain, but not all, antibiotics. This decline in resistance did not result from strict reversion mutations that recapitulate wild-type bacteria at the molecular level. Rather, the mutated genes are functionally related, but not identical, to those conferring antibiotic resistance. Notably, these mutations perturb membrane permeability and the activity of regulons involved in defense against drugs and related stresses. They partially restore wild-type fitness in the antibiotic-free medium, and also reduce the level of antibiotic resistance, at least against certain antibiotics. These considerations could help to identify antibiotics for which restricting antimicrobial usage could be a useful policy.
Evolution of resistance and fitness cost
In previous work, we performed adaptive laboratory evolution under gradually increasing antibiotic dosages (Lázár et al., 2014). Parallel evolving populations of E. coli K-12 BW25113 were exposed to 1 of 12 antibiotics (Table 1). The adapting populations were found to display up to 328-fold increases in their minimum inhibitory concentrations (MICs) relative to the wild-type strain. To elucidate the underlying molecular mechanisms of resistance, 60 antibiotic-resistant strains (one clone per population) were previously isolated and subjected to whole-genome sequencing analysis. The resistance mutations identified generally affected the drug targets, efflux pumps, porins and proteins involved in cell envelope biogenesis.
In the current study, we estimated the fitness of each of the 60 antibiotic-resistant strains and the corresponding wild-type strain individually in an antibiotic-free medium (Figure 1-figure supplement 1). In total, 38% of the antibiotic-resistant strains (N = 23) showed significantly reduced growth compared to the ancestral wild-type strain (Figure 1-source data 1). Fitness cost varied substantially across antibiotics (Figure 1). Most notably, laboratory-adapted aminoglycoside-resistant strains displayed especially low fitness in the antibiotic-free medium, but the reasons are unclear. However, we note that, akin to clinically observed small-colony variants, the major targets of selection under aminoglycoside stress were the translational machinery and a broad class of genes that shape the electrochemical potential of the outer membrane (Lázár et al., 2013; Vestergaard et al., 2016).
Laboratory evolution of antibiotic-resistant strains in an antibiotic-free medium
To investigate potential changes in resistance phenotypes upon evolution in an antibiotic-free environment, we initiated parallel laboratory evolution experiments with 23 out of the 60 antibiotic-resistant strains (1 to 4 strains per antibiotic, see Materials and methods). All these strains exhibited significant fitness costs in the antibiotic-free environment. Six parallel populations per antibiotic-resistant strain were cultivated in a standard, antibiotic-free medium, resulting in 138 independently evolving populations. The populations were propagated for 60 transfers (approximately 480 generations) by diluting 1% of the saturated cultures into fresh medium every 24 hr. From each evolved population, we selected a single, representative clone for further analysis (see Materials and methods). Throughout the paper, T0 and T60 refer to the 23 antibiotic-resistant strains and the corresponding 138 evolved lines from the final day of the antibiotic-free evolutionary experiment, respectively.
Table 1. Antibiotics employed and their modes of action. Functional classification (molecular target) is based on previous studies (Girgis et al., 2009; Yeh et al., 2006). These antibiotics are widely used in clinical practice, are well characterized and cover a wide range of modes of action. For three-letter codes (abbreviations), the AAC's standard (https://aac.asm.org/content/abbreviations-and-conventions) was used.
As in our previous work, the fitness of each of the 23 T0 strains and the 138 T60 lines was measured by estimating the area under the growth curve recorded in the same liquid antibiotic-free medium. 51% of the T60 lines were found to exhibit significantly improved fitness, and some of these lines even approximated wild-type fitness (Figure 2-source data 1). The analysis revealed major differences in relative fitness across strains adapted to different antibiotics (Figure 2), indicating that adaptation is mainly driven by the set of resistance mutations present in the T0 strains.
Figure 1. Fitness cost of antibiotic-resistant T0 strains. The fitness of each strain was measured as the area under the bacterial growth curve recorded in an antibiotic-free medium. Fitness cost was calculated from the absolute fitness of the T0 and wild-type strains using the following equation: 1 - W_T0/W_wild-type, where W_T0 and W_wild-type indicate the fitness of the T0 and wild-type strains, respectively. Strains adapted to two aminoglycosides, kanamycin (KAN) and tobramycin (TOB), generally exhibit an especially high fitness cost, while adaptation to erythromycin (ERY) and nalidixic acid (NAL) evoked no measurable fitness cost. For all further analyses, only the T0 strains exhibiting a significant fitness cost were used. For antibiotic abbreviations, see Table 1. Boxplots show the median, first and third quartiles, with whiskers showing the 5th and 95th percentiles of the fitness cost per antibiotic. Individual data points represent the median of three biological replicates (five technical measurements) for each of the 60 antibiotic-resistant strains (4-6 strains per antibiotic). Source file is available as Figure 1-source data 1.
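The area-under-the-growth-curve (AUC) fitness measure and the cost formula above (cost = 1 - W_T0/W_wild-type) can be sketched in a few lines. This is only an illustrative reconstruction: the optical-density readings below are invented numbers, not data from the study.

```python
# Sketch of the AUC-based fitness cost described above. The OD time
# series are invented illustrative values, not the study's measurements.

def growth_auc(times_h, od_values):
    """Trapezoidal area under an optical-density growth curve."""
    auc = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        auc += dt * (od_values[i] + od_values[i - 1]) / 2.0
    return auc

def fitness_cost(w_t0, w_wildtype):
    """Cost of resistance relative to the wild type: 1 - W_T0 / W_wt."""
    return 1.0 - w_t0 / w_wildtype

times = [0, 4, 8, 12, 16, 20, 24]                        # hours
od_wildtype = [0.05, 0.15, 0.55, 0.95, 1.10, 1.15, 1.15]
od_t0       = [0.05, 0.10, 0.30, 0.60, 0.85, 0.95, 1.00]

w_wt = growth_auc(times, od_wildtype)   # 18.0
w_t0 = growth_auc(times, od_t0)         # 13.3
print(round(fitness_cost(w_t0, w_wt), 3))   # 0.261
```

A cost of 0 means the resistant strain grows as well as the wild type; values approaching 1 indicate severe growth impairment in the antibiotic-free medium.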
Figure 2. For antibiotic abbreviations, see Table 1. Boxplots show the median, first and third quartiles, with whiskers showing the 5th and 95th percentiles of the relative fitness of all T60 evolved lines originally adapted to different antibiotics. Individual data points represent the median relative fitness of each of the 138 T60 evolved lines (three biological and five technical replicates each). Source file is available as Figure 2-source data 1.
Rapid loss of drug resistance in an antibiotic-free medium
Next, we investigated how laboratory evolution in an antibiotic-free medium shapes antibiotic resistance. For this purpose, we first measured the minimum inhibitory concentrations (MICs) in 71 T60 lines showing significant fitness improvement, as well as in the corresponding 20 T0 strains, against a set of 11 antibiotics (see Table 1 and Materials and methods). Using the CLSI resistance break-point cut-offs (CLSI Approved Standard M100, 29th Edition), we categorized each strain as being resistant (R), intermediate (IM) or susceptible (S) to each investigated antibiotic. As expected, the T0 strains generally displayed reduced susceptibility to multiple antibiotics (MIC > R or MIC > IM, see Figure 3-source data 1). Based on this categorization, we next examined whether resistance to these antibiotics was maintained or reduced during the laboratory evolution in the antibiotic-free medium. For this purpose, we compared the resistance levels of the 71 T60 lines that displayed significant fitness improvement in the antibiotic-free environment to those of the corresponding T0 strains. We focused on antibiotics to which the corresponding T0 strain exhibited resistance, leading to a total of 195 antibiotic-T60 line combinations. We found that resistance declined in as many as 54.8% of the antibiotic-T60 line combinations following evolution under antibiotic stress-free conditions (Figure 3A and B, Figure 3-source data 1). However, the extent of resistance decline depended on the antibiotic considered.
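The R/IM/S categorization and the decline criterion described above can be sketched as follows. Note that the numeric break-point values in this snippet are illustrative placeholders, not actual CLSI M100 cut-offs, and the antibiotic codes are used only as dictionary keys.

```python
# Sketch of MIC categorization against break-point cut-offs, in the style
# of CLSI M100. The numeric break-points are placeholders, NOT real CLSI
# values; consult the standard for actual cut-offs.

BREAKPOINTS = {
    # antibiotic: (susceptible_max, resistant_min), in ug/mL (placeholders)
    "TET": (4.0, 16.0),
    "KAN": (16.0, 64.0),
}

def categorize(antibiotic, mic):
    """Return 'S', 'IM' or 'R' for a measured MIC."""
    s_max, r_min = BREAKPOINTS[antibiotic]
    if mic <= s_max:
        return "S"
    if mic >= r_min:
        return "R"
    return "IM"

def resistance_declined(cat_t0, cat_t60):
    """A decline is any transition R->IM, R->S or IM->S."""
    order = {"S": 0, "IM": 1, "R": 2}
    return order[cat_t60] < order[cat_t0]

print(categorize("TET", 32.0), categorize("TET", 8.0))   # R IM
print(resistance_declined("R", "S"))                     # True
```

Applying `categorize` to the T0 and T60 MICs of one antibiotic, and then `resistance_declined` to the pair, reproduces the maintained/reduced bookkeeping used for the 195 antibiotic-T60 line combinations.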
For example, doxycycline and tetracycline resistance was frequently lost, while aminoglycoside resistance was generally maintained in the T60 lines (Figure 3C and D). Overall, 64.7% of the T60 lines displayed a significant decline in resistance to at least one antibiotic, and many displayed loss of resistance to multiple drugs (Figure 3C and D, Figure 3-source data 1). We found a significant negative correlation between relative fitness and resistance level of the T60 lines (Spearman's correlation test, r = -0.35, p=0.0031), indicating that fitness compensation was partly associated with a decline in the original antibiotic resistance level (Figure 4A and B, see also Figure 4-figure supplement 1, Figure 4-source data 1). In summary, approximately 480 generations of evolution in an antibiotic-free medium had a considerable impact on the levels of resistance to multiple antibiotics.
Phenotypic reversion via compensatory mutations dominates
To gain insights into the underlying molecular mechanisms of resistance loss, 15 independently evolved T60 strains displaying increased fitness were subjected to whole-genome sequencing (see Materials and methods for selection criteria). Using the Illumina platform and established bioinformatics protocols (see Materials and methods), we aimed to identify mutations relative to the genome of the corresponding T0 strains. Altogether, 43 independent mutational events were identified, including 16 single nucleotide polymorphisms (SNPs), 16 deletions and 13 insertions (Supplementary file 1). We screened the full bacterial genome to identify resistance-conferring SNPs in the T0 population that revert back to the wild-type sequence in the corresponding T60 strains, but found no such cases. Therefore, the fitness gain in the T60 strains is not due to the molecular reversion of the antibiotic-resistance mutations (Durão et al., 2018). Rather, compensatory mutations elsewhere in the genome contribute to the rapid fitness improvement in the evolved strains. A rigorous statistical analysis to test functional relationship between the mutations detected in T0 and T60 was not feasible due to the low number of mutations that have accumulated during the course of laboratory evolution. Nevertheless, we noted several examples on functional relatedness between resistance genes mutated in T0 and genes mutated during lab evolution (i.e. found in T60 strains only, Figure 5-figure supplement 1, Figure 5-source data 1).
Pleiotropic side effects of a compensatory mutation in marR
Do compensatory mutations simultaneously shape fitness in antibiotic-free medium and antibiotic resistance level? To investigate this issue, we conducted a genetic analysis to explore the potential fitness costs of compensatory mutations. In particular, we studied the impact of selected mutations on growth rate and antibiotic resistance in multiple genetic backgrounds. Several putative compensatory mutations were found to have accumulated in functionally related transcriptional regulatory proteins involved in anti-drug defense (Supplementary file 1). In particular, some of these proteins control efflux pumps (marR; Alekshun and Levy, 1999; Duval et al., 2013; Ferenci and Phan, 2015; Seoane and Levy, 1995), lipopolysaccharide biosynthesis (Seo et al., 2015), and outer membrane diffusion pores in response to changes in medium osmolarity (envZ/ompR; Knopp and Andersson, 2015; Phan and Ferenci, 2013).
Here we first focused on a promoter mutation of marR, a member of the mar regulon (marR*), not least because this specific gene has clinical relevance: mutations in marR have been reported to cause multidrug resistance in clinical E. coli isolates in several prior studies (Komp Lindgren et al., 2003; Mazzariol et al., 2000). This mutation was found in a T60 strain, indicating that it had accumulated during adaptation to the antibiotic-free environment. This specific mutation (marR*) was inserted individually into wild-type E. coli and the corresponding antibiotic-resistant T0 strain. First, we measured the growth rates of the wild-type and the antibiotic-resistant T0 strain with and without marR*. In the absence of antibiotic, the phenotypic effects of this mutation depended on the genetic background: it increased growth rate by 4.4% in the corresponding T0 strain, but reduced wild-type fitness (Figure 5A, Figure 5-source data 1). This epistatic effect suggests that marR* reduces the deleterious side effects of antibiotic-resistance mutations, while it has a fitness cost in the wild-type strain (Knopp and Andersson, 2015; Praski Alzrigat et al., 2017). Next, we studied the impact of the marR* compensatory mutation on resistance level using standard E-test assays. The corresponding T0 strain was found to display detectable resistance to multiple antibiotics, including doxycycline, ampicillin, chloramphenicol, nalidixic acid and tetracycline, while the corresponding T60 strain lost resistance to all studied antibiotics. This can mainly result from the presence of marR* in the T60 genome, as the introduction of marR* to the T0 strain recapitulated the same
Figure 3. (A) For antibiotic abbreviations, see Table 1. Source file is available as Figure 3-source data 1. (B) The figure depicts qualitative changes in resistance level across 195 tested antibiotic-T60 line combinations. Using the CLSI resistance break-point cut-offs, we categorized each T60 and the corresponding T0 strains as being resistant (R), intermediate (IM) or susceptible (S) to each investigated antibiotic. Arrows indicate whether the resistance level was maintained (e.g. R->R) or reduced (e.g. R->S) during the course of laboratory evolution in the antibiotic-free medium. The numbers of antibiotic-T60 line combinations in each category are indicated on the arrows. Source file is available as Figure 3-source data 1. (C and D) The figures show the number of T60 lines with resistance maintained/lost to the range of antibiotics tested, as absolute number (3C) versus ratio (3D), respectively. Doxycycline (DOX) and tetracycline (TET) resistance was frequently lost (Fisher's Exact test: odds ratio = 0.05, p<0.001 for DOX and odds ratio = 0.20, p<0.01 for TET), while aminoglycoside (kanamycin, KAN; tobramycin, TOB) resistance was generally maintained in the T60 lines (Fisher's Exact test: odds ratio = 14.29, p<0.001 for KAN; odds ratio = 6.5, p<0.01 for TOB). For antibiotic abbreviations, see Table 1. Source file is available as Figure 3-source data 1. DOI: https://doi.org/10.7554/eLife.47088.008 The following source data is available for figure 3: Source data 1. Changes of cross-resistance interactions following 60-day evolution in an antibiotic-free environment. DOI: https://doi.org/10.7554/eLife.47088.009
Figure 4. Fitness recovery and resistance loss after compensatory evolution. (A) The scatterplot shows the relative resistance level and relative fitness improvement of individual T60 strains compared to the corresponding T0 strains (each data point is one strain; the colors indicate the antibiotic used in the analysis).
Relative resistance was estimated by the minimum inhibitory concentration of the T60 line relative to that of the T0 line. There is a significant negative correlation between relative fitness improvement and relative resistance level (Spearman's correlation test, r = À0.35, p=0.0031). Blue line with gray shaded area represents linear regression line with 95% confidence interval. For antibiotic abbreviations, see Table 1. Source file is available as Figure 4-source data 1. (B) Resistance loss as a function of relative fitness in antibiotic-free environment. The T60 strains were classified into three main categories based on their resistance-profiles: the resistance level declined against all tested antibiotics (complete), declined towards at least one antibiotic (partial), or the resistance level was maintained (no). Using the CLSI resistance break-point cut-offs (CLSI Approved Standard M100, 29th Edition), we categorized each T60 and the corresponding T0 strains as being resistant (R), intermediate (IM) or susceptible (S) to each investigated antibiotic. A decline in resistance was defined by transitions R->IM, R->S or IM->S. We observed a significant association between the relative fitness in the antibiotic-free medium and the decline in resistance level (Mann-Whitney U-test: **** indicates p<0.0001, ns indicates that the p value is nonsignificant). Boxplots show the median, first and third quartiles, with whiskers showing the 5th and 95th percentiles. Source file is available as Figure 5. Phenotypic effects of a compensatory mutation in the marR promoter region. The figure shows the (A) relative fitness, (B) relative resistance level and (C) Relative Hoechst probe accumulation (a proxy of membrane permeability) in the doxycycline-resistant T0 and the corresponding T60 strain harboring a compensatory mutation in the marR promoter region (marR*). 
Additionally, marR* was introduced into the wild-type and T0 genetic backgrounds as well, yielding WT + marR* and T0 + marR* strains, respectively. (A) Fitness was measured as the area under the growth curve in an antibiotic-free medium, and was normalized to wild-type fitness. Boxplots show the median, first and third quartiles, with whiskers showing the 5th and 95th percentiles (2 biological and five technical replicates per each genotype). We observed a significant variation in relative fitness across the strains (Tukey's post-hoc multiple comparison tests, * indicates p<0.05). Source file is available as Figure 5-source data 1. (B) Resistance level of all five strains against six antibiotics. Minimum inhibitory concentration (MIC) was measured by the standard E-test assay, and was normalized to that of the wild-type strain. Only the T0 strain can be considered resistant to each antibiotic tested according to the CLSI resistance break-point cut-off. Source file is available as Figure 5-source data 1. (C) Membrane permeability across five strains. Membrane permeability was estimated by measuring the intracellular accumulation of a fluorescent probe (Hoechst 33342) in eight biological replicates per each strain or condition. Intracellular accumulation of the probe in the corresponding strains was normalized to that of the wild-type strain. Wild-type cells treated with a protonophore chemical agent (carbonyl cyanide m-chlorophenyl hydrazone, CCCP) served as a positive control, displaying an 88% larger membrane permeability value compared to that of the non-treated wild-type strain. T0 showed an exceptionally low level of Hoechst-dye accumulation compared to all other strains studied, while T0 + marR* displayed a 166% larger membrane permeability value compared to that of the T0 strain (Tukey's post-hoc multiple comparison tests: **** indicates p<0.0001). Boxplots show the median, first and third quartiles, with whiskers showing the 5th and 95th percentiles. 
Source file is available as pattern: the engineered strain lost resistance ( Figure 5B). Finally, we hypothesized that marR* shapes resistance and cellular fitness through antagonistic effects on drug uptake. This hypothesis was tested by measuring the intracellular accumulation of a fluorescent probe (Hoechst 33342) as a proxy for membrane permeability in the resistant T0 strain, in the T60 line and in the wild-type strain with and without the marR* compensatory mutation. Decreased intracellular level of the probe indicates either decreased porin activity or enhanced efflux-pump activity (Coldham et al., 2010). This is exactly what we found in the resistant T0 strains compared to the wild-type. Importantly, marR* restored membrane permeability of T0 to the wild-type level ( Figure 5C). In summary, a compensatory mutation in the promoter region of marR* increased bacterial fitness in a specific, antibiotic-resistant genotype only. As a side effect, the same mutation increased bacterial susceptibility to multiple antibiotics, probably through elevating membrane permeability. Similar patterns held for a compensatory mutation in envZ, a central regulatory protein involved in osmoregulation ( Figure 5-figure supplement 2A and B, Figure 5-source data 1).
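The odds ratios and rank correlations quoted above can be reproduced with standard SciPy routines. The sketch below uses made-up contingency counts and fitness/resistance values, not the study's data, purely to illustrate the form of the tests:

```python
from scipy import stats

# Hypothetical 2x2 contingency table (illustrative counts, not the study's data):
# rows = focal antibiotic vs. all other antibiotics,
# columns = resistance maintained vs. lost across T60 lines.
table = [[2, 18],   # focal antibiotic: maintained, lost
         [60, 25]]  # other antibiotics: maintained, lost
odds_ratio, p_value = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")

# Spearman correlation between relative fitness improvement and relative
# resistance level (again, made-up example values).
fitness = [1.05, 1.10, 1.22, 1.30, 1.41]
resistance = [0.9, 0.8, 0.5, 0.3, 0.2]
rho, p = stats.spearmanr(fitness, resistance)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```

An odds ratio below 1 corresponds to resistance being lost more often than maintained for the focal antibiotic, matching the convention used in the Figure 3 legend.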
Discussion
It is an open issue whether restricting antimicrobial usage would contribute to the elimination of multidrug-resistant bacteria. Although resistance mutations frequently have associated fitness costs, such costs may decline subsequently through the accumulation of compensatory mutations. It has been argued that such compensatory mutations mitigate the fitness costs of resistance mutations without affecting the level of resistance (Andersson and Hughes, 2010), suggesting that limiting antibiotic usage may not have much practical utility in clinical settings. However, most prior laboratory studies focused on bacteria carrying a single resistance mutation, whereas antibiotic-resistant clinical isolates usually carry multiple resistance mutations (Vogwill and MacLean, 2015). This issue is all the more relevant, as epistasis is prevalent between antibiotic-resistance mutations (Wong, 2017).
A specific case of compensation is molecular reversion. In this case, the mutation responsible for molecular reversion restores the wild-type, antibiotic-susceptible genetic sequence, and thereby eliminates the fitness costs associated with the resistance mutation. However, molecular reversion is assumed to be generally rare, as it requires very specific mutational events (Andersson and Hughes, 2010; Durão et al., 2018). On the other hand, case studies indicate that phenotypic reversion can also occur, when the original resistance mutation is maintained but the acquisition of additional mutations simultaneously reduces fitness costs and increases antibiotic susceptibility. For example, streptomycin resistance is frequently mediated by resistance mutations in the ribosomal protein gene rpsL, but compensatory mutations in other genes involved in translation yield reversion to streptomycin sensitivity (Moura de Sousa et al., 2017). As the molecular targets for phenotypic reversion are relatively broad, reversion to the antibiotic-susceptible state may be far more likely than previously appreciated.
To test the theory of phenotypic reversion, we studied laboratory-evolved drug-resistant E. coli strains carrying 2 to 13 mutations and initially displaying reduced fitness compared to the wild-type strain. We found that 60 days of laboratory evolution in an antibiotic-free environment led to a rapid fitness improvement in 51% of the antibiotic-resistant lineages (some of which approximated wild-type fitness). Fitness may increase during the course of laboratory evolution as a result of general adaptation to the environment and/or accumulation of compensatory mutations that mitigate the deleterious side effects of resistance. The second option is more realistic, as several mutations that had accumulated during laboratory evolution affected genes involved in bacterial defense mechanisms against antibiotics or in general stress responses (e.g. the rpoS promoter region/nlpD gene (Stoebel et al., 2009), potD (Shah et al., 2011), and soxSR (Jain and Saini, 2016); Supplementary file 1). An in-depth genetic analysis has also demonstrated epistatic interactions between resistance and putative compensatory mutations in marR and envZ. These mutations had accumulated during laboratory evolution and increased fitness in the respective antibiotic-resistant strain, but reduced fitness in the wild-type (Figure 5A and Figure 5-figure supplement 2A, Figure 5-source data 1). This latter finding also suggests that compensatory mutations themselves have associated fitness costs, preventing antibiotic-resistant bacteria from reaching the full fitness of sensitive variants.
Crucially, we have demonstrated that drug-resistance declines in an antibiotic-free laboratory environment. In as few as 480 generations, 64.7% of drug-resistant E. coli strains showed elevated susceptibilities to at least one antibiotic investigated (Figure 3-source data 1). We did not observe bona fide reversion mutations in laboratory evolved bacteria (Durão et al., 2018). This is not unexpected, as compensation via the accumulation of additional mutations elsewhere in the genome is far more likely to occur.
Detailed genetic analysis of the mar regulon also supports the phenotypic reversion hypothesis. MarR is a transcriptional regulatory protein that controls the activity of the mar regulon in E. coli through the repression of marA. The mar regulon participates in controlling several genes involved in antibiotic resistance, including the AcrA/AcrB/TolC multidrug-efflux system (Figure 6). In response to antibiotic stresses (e.g. doxycycline or ciprofloxacin), marR is regularly mutated both in clinical and in laboratory settings, leading to increased expression of marA and other members of the mar regulon (Praski Alzrigat et al., 2017). Here we have focused on a multidrug-resistant laboratory-evolved E. coli strain that carries a mutation in the protein-coding sequence of marR. This resistance mutation has an associated fitness cost (Figure 5-source data 1), promoting the accumulation of further mutations. Our study indicates that this can be achieved by a compensatory mutation in the promoter region of the mar operon. This compensatory mutation increases both bacterial fitness and susceptibility to multiple antibiotics, and restores wild-type-like membrane permeability, probably through changing the activity of the mar regulon. In follow-up work, we plan to study this phenomenon in detail.
It is important to emphasize that loss of antibiotic resistance is not equally likely across antibioticresistant strains. For example, in our study, resistance to doxycycline and tetracycline was frequently lost, while aminoglycoside-resistance was generally maintained during the course of laboratory evolution ( Figure 3C and D).
Our findings appear to be consistent with clinical data. For instance, a retrospective study assessed the proportion of quinolone-susceptible E. coli urine isolates before and after a nationwide restriction of ciprofloxacin use was implemented in Finland. The research revealed that reduced consumption of quinolone antibiotics resulted in a significant decrease in quinolone resistance of E. coli (Gottesman et al., 2009). Our laboratory study also shows that ciprofloxacin resistance declined in 66% of initially resistant populations following long-term exposure to antibiotic-free medium (Figure 3D). Another study examined the impact of a 24-month voluntary restriction on the use of trimethoprim-containing drugs in Sweden on the prevalence of trimethoprim-resistant E. coli isolated from urinary-tract infections. All clinical isolates were found to retain their resistance levels and carried a mutation in folA, the target gene of trimethoprim, even after 24 months of trimethoprim restriction (Brolund et al., 2010). In agreement with this clinical study, we have found that all five trimethoprim-resistant E. coli strains with a folA resistance mutation maintained their resistance following laboratory evolution in antibiotic-free medium (Figure 3-source data 1, Figure 4-source data 1). These considerations must be taken with some caution, as comparison of clinical and laboratory data is not straightforward. For instance, restricted usage of certain antibiotics in hospitals cannot completely eliminate antibiotic selection in a given region, due to lack of isolation and cross-resistance between antibiotics.
In summary, three main patterns indicate that phenotypic reversion to an antibiotic-susceptible state could be common during compensatory evolution. In our study, we have i) observed rapid fitness increase in antibiotic-free medium, ii) observed an associated loss of antibiotic resistance, and iii) identified specific mutations that simultaneously change both characteristics.
Our findings suggest that restricting antimicrobial usage could be a useful policy to control the rise and spread of multidrug-resistant bacteria, but it may work for certain antibiotics only. We should also emphasize, however, that all our evolutionary experiments were performed in a medium devoid of any antibiotics. The next logical step is to study the occurrence of phenotypic reversion during exposure to temporally changing antibiotic treatments or sublethal antibiotic dosages, as may occur in real-life situations (Andersson and Hughes, 2014). Our study leaves open the question of the extent to which the initial fitness costs of resistance mutations and/or the intensity of antibiotic selection shape subsequent compensatory evolution and the associated loss of resistance. Finally, our study has focused on chromosomal resistance mutations. It is still unclear whether resistance-conferring plasmids are generally lost or whether the associated fitness costs are mitigated by genomic mutations (Dahlberg and Chao, 2003).
Figure 6. Hypothetical mechanism of compensation by a marR compensatory mutation. The mar regulon participates in controlling several genes involved in resistance to antibiotics, including the AcrA/AcrB/TolC multidrug-efflux system (panel A). MarR is a transcriptional regulatory protein that controls the activity of the mar system in wild-type E. coli through the repression of marA. In response to antibiotic stresses (e.g. doxycycline or ciprofloxacin), marR is mutated (indicated by a red star), leading to increased expression of marA and, subsequently, other members of the mar regulon (Praski Alzrigat et al., 2017). However, the elevated activity of the mar regulon is harmful in antibiotic-free conditions, promoting the accumulation of further mutations. Our study indicates that this can be achieved by a compensatory mutation in the promoter region of the mar operon (indicated by a yellow star). The compensatory mutation putatively restores the activity of the mar regulon to the wild-type level (panel B).
Antibiotic-resistant strains
The 60 multidrug-resistant strains used in this study were derived from our previous work (Lázár et al., 2014), where parallel evolving populations of E. coli K12 BW25113 were adapted to increasing dosages of one of 12 antibiotics (Figure 1-figure supplement 1). The antibiotics employed were the following: ampicillin (AMP), cefoxitin (FOX), chloramphenicol (CHL), ciprofloxacin (CIP), doxycycline (DOX), erythromycin (ERY), kanamycin (KAN), nalidixic acid (NAL), nitrofurantoin (NIT), tetracycline (TET), trimethoprim (TMP) and tobramycin (TOB). The evolutionary experiment was continued for ~240-384 generations, at which point the evolving populations reached up to a 328-fold increase in resistance compared to the wild-type ancestor. In all cases (except for the CPR9 strain, which displayed an intermediate level of resistance), the resistance levels were above the current clinical breakpoints for resistance according to the Clinical and Laboratory Standards Institute (CLSI Approved Standard M100, 29th Edition) guidelines. In spite of single-antibiotic pressure, the evolution of multidrug resistance was a frequent phenomenon. The 60 antibiotic-resistant strains (4-6 strains per antibiotic) were previously subjected to whole-genome sequencing. The identified resistance mutations affected drug targets, cell permeability or efflux pumps.
Antibiotics
In order to measure the resistance profiles of the T60 and corresponding T0 evolved lines, we used 11 of the above-mentioned 12 antibiotics (Table 1). Erythromycin (ERY) was excluded from the analysis, as none of the studied strains displayed cross-resistance to it. Standard E-test strips for all remaining antibiotics were purchased from bioMérieux. Powder stocks of antibiotics were purchased from Sigma-Aldrich, except for DOX (AppliChem). Antibiotic solutions were freshly prepared on a weekly basis from powder stocks, kept at -20˚C and filter-sterilized before use.
Laboratory evolutionary experiment
The laboratory evolution experiment followed an established protocol (Lázár et al., 2014). Briefly, we started with 23 antibiotic-resistant E. coli strains that displayed a significant fitness cost. We excluded erythromycin- (ERY) and nalidixic acid- (NAL) resistant strains from the evolutionary experiment, as these strains did not show a significant fitness cost. Six parallel lines were initiated from each antibiotic-resistant strain and were propagated in 96-well microtiter plates in antibiotic-free MS medium for 60 days. All parallel lines were inoculated into randomly selected positions of these 96-well plates. The plates also contained control wells in several positions that were not inoculated with cells, to help plate identification and orientation as well as to avoid cross-contamination of parallel evolving lines. Using a hand-held 96-pin replicator (VP407, V and P Scientific), roughly 1.2 μl of each stationary-phase culture was transferred every day to 100 μl of fresh medium. At every 120 transfers, a fraction of the overnight culture was kept at -80˚C as a glycerol stock. Cross-contamination events were regularly checked by visual inspection of empty wells.
High-throughput fitness measurements and determination of growth rate
Fitness measurements
Established protocols were used to measure fitness in bacterial lines. Starter cultures were inoculated from frozen samples into 96-well plates. The starter plates were grown for 24 hr under conditions identical to those of the evolutionary experiment. For growth-curve recording, 384-well plates filled with 60 μl of MS minimal medium per well were inoculated from the starter plates using a pintool with 1.58 mm floating pins. The pintool was moved by a Microlab Starlet liquid-handling workstation (Hamilton Bonaduz AG) to provide a uniform inoculum across all samples. The 384-well plates were incubated at 30˚C in an STX44 (LiCONiC AG) automated incubator, with shaking speed alternating every minute between 1,000 rpm and 1,200 rpm. Plates were transferred by a Microlab Swap 420 robotic arm (Hamilton Bonaduz AG) to Powerwave XS2/HT plate readers (BioTek Instruments Inc) every 20 min, and cell growth was followed by recording the optical density at 600 nm. Five technical replicates of three biological replicate measurements were executed on all strains sampled from each time-point of the evolutionary experiment.
Growth curve analysis
Fitness was approximated by calculating the area under the growth curve (AUGC). AUGC has previously been used as a proxy for fitness (Hasenbrink et al., 2005) and has the advantage of integrating multiple fitness parameters, such as the slope of the exponential phase (growth rate) and the final optical density (yield). AUGC was calculated from the obtained growth curves over a 1000 min time interval following the end of the lag phase. The end of the lag phase was identified according to an established protocol. To eliminate potential within-plate effects that might cause measurement bias, relative fitness was normalized by the fitness of the neighboring reference wells that contained wild-type controls. For each line and each evolutionary time point, relative fitness was calculated as the median of the normalized AUGC of the technical replicates divided by the median fitness of the wild-type controls. At day 0, the technical replicate measurements of the isogenic, independently evolving lines were combined to calculate median fitness for the ancestral antibiotic-resistant strain, since at that time these populations had no independent evolutionary history. Stringent criteria were used to define the set of antibiotic-resistant strains with a substantial fitness defect: significance was determined by the Mann-Whitney U-test. Significance of fitness increase for the evolving lines derived from antibiotic-resistant strains with an initial fitness defect was also calculated by the Mann-Whitney U-test. All tests were corrected for false discovery rate (FDR-corrected p value < 0.05).
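The AUGC-based fitness estimate described above can be sketched in a few lines. The lag-end value, the toy growth curve, and the normalization against wild-type controls below are illustrative simplifications of the cited protocol:

```python
import numpy as np

def augc(times_min, od, lag_end_min, window_min=1000):
    """Area under the growth curve over a fixed window after the lag phase.

    times_min: sampling times in minutes; od: blank-corrected OD600 values.
    The 1000-min window mirrors the interval quoted in the text."""
    t = np.asarray(times_min, dtype=float)
    y = np.asarray(od, dtype=float)
    mask = (t >= lag_end_min) & (t <= lag_end_min + window_min)
    tm, ym = t[mask], y[mask]
    # Trapezoidal integration over the selected window.
    return float(np.sum(0.5 * (ym[1:] + ym[:-1]) * np.diff(tm)))

def relative_fitness(sample_augc, wild_type_augcs):
    """Normalize by the median AUGC of neighboring wild-type control wells."""
    return sample_augc / np.median(wild_type_augcs)

# Toy logistic-like growth curve sampled every 20 min, as in the protocol.
t = np.arange(0, 1500, 20)
od = 1.0 / (1.0 + np.exp(-(t - 400) / 80.0))
a = augc(t, od, lag_end_min=200)
rel = relative_fitness(a, [a, 1.1 * a, 0.9 * a])
print(rel)  # 1.0 by construction (the median control equals the sample)
```

In the actual protocol the lag end is detected automatically and the controls are real wild-type wells; here both are hard-coded for illustration.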
As expected, fitness estimated by AUGC showed a significant positive correlation with fitness estimated by growth rate (Pearson's correlation, r = 0.41, p<2.2e-16) or yield (Pearson's correlation, r = 0.78, p<2.2e-16). Additionally, we found that AUGC is more robust than yield, that is, it shows less variation across biological replicates (median CV values for AUGC and yield are 8.3% and 14.1%, respectively).
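The robustness comparison reduces to comparing coefficients of variation across replicates. The replicate values below are illustrative, not the study's measurements:

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variation (sample standard deviation / mean) in percent."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Illustrative replicate measurements (not the study's data): AUGC varies
# less across biological replicates than yield does in this toy example.
augc_reps = [100.0, 104.0, 98.0, 102.0]
yield_reps = [1.00, 1.20, 0.85, 1.10]
print(cv_percent(augc_reps))   # ~2.6%
print(cv_percent(yield_reps))  # ~14%
```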
Determination of the minimal inhibitory concentration (MIC)
Minimal inhibitory concentrations (MICs) were determined using standard E-test strips (bioMérieux) according to the manufacturer's instructions. Briefly, overnight cultures of bacteria were diluted to an optical density (OD 600) of 0.6. 100 μl of the diluted inoculum was spread on each MS agar plate, and the plates were incubated at 30˚C for 24-48 hr. MICs were read directly from the E-test strips according to the instructions of the manufacturer. Based on the MIC results, we categorized each strain as being resistant (R), intermediate (IM) or susceptible (S) to each investigated antibiotic according to the Clinical and Laboratory Standards Institute (CLSI Approved Standard M100, 29th Edition) guidelines.
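The R/IM/S categorization against breakpoint concentrations can be sketched as follows. The breakpoint values in the example are hypothetical placeholders; real cut-offs must be looked up per antibiotic in the CLSI M100 tables:

```python
def categorize(mic, susceptible_max, resistant_min):
    """Map a MIC (ug/ml) to S/IM/R given breakpoint concentrations.

    susceptible_max and resistant_min are placeholders; actual values
    come from the CLSI M100 tables and differ per antibiotic/species."""
    if mic <= susceptible_max:
        return "S"
    if mic >= resistant_min:
        return "R"
    return "IM"

def resistance_declined(t0_cat, t60_cat):
    """A decline is any of the transitions R->IM, R->S or IM->S."""
    order = {"S": 0, "IM": 1, "R": 2}
    return order[t60_cat] < order[t0_cat]

print(categorize(16, susceptible_max=4, resistant_min=16))  # 'R'
print(resistance_declined("R", "S"))                        # True
```

The `resistance_declined` helper encodes the same transition rule used in the Figure 4 legend.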
Hoechst 33342 (Bisbenzimide H 33342) accumulation assay
To estimate changes in membrane permeability and efflux-pump activity, we implemented a scalable fluorescence assay (Coldham et al., 2010). This method is based on the intracellular accumulation of the fluorescent probe Hoechst 33342 (Bisbenzimide H 33342, Sigma-Aldrich). Strains were cultured in eight biological replicates overnight at 30˚C, and stationary-phase cultures were regrown in fresh medium to an optical density (OD 600) of 0.6 at 30˚C. Bacterial cells were collected by centrifugation at 4000 g and resuspended in 1 ml phosphate-buffered saline (PBS). The optical density (OD 600) of all suspensions was adjusted to 0.1, and 0.18 ml of each suspension was transferred to 96-well plates (CellCarrier-96 Black Optically Clear Bottom, supplied by Sigma-Aldrich). Plates were incubated in a Synergy 2 microplate reader at 30˚C, and 25 μM Hoechst 33342 was added to each well. The wild-type E. coli K12 BW25113 treated with an efflux inhibitor agent (carbonyl cyanide m-chlorophenyl hydrazone, CCCP, Life Technologies) served as a positive control. The OD 600 and fluorescence curves were recorded for 1 hr with 75 s delays between readings. Fluorescence was read from the top of the wells using excitation and emission filters of 355 and 460 nm, respectively. The first 15 min were excluded from further analysis due to the high standard deviation between replicates. Blank-normalized OD 600 values were calibrated by applying the following transformation: OD_calibrated = OD600 + 0.49312 × OD600^3. Data curves were smoothed, and fluorescence per OD 600 ratio curves were calculated. Finally, the areas under these ratio curves were determined.
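The calibration and ratio-curve analysis can be expressed numerically. The transformation below is the one quoted in the text; the toy fluorescence and OD curves are illustrative:

```python
import numpy as np

def calibrate_od(od600):
    """Blank-normalized OD calibration quoted in the text:
    OD_cal = OD600 + 0.49312 * OD600**3."""
    od600 = np.asarray(od600, dtype=float)
    return od600 + 0.49312 * od600 ** 3

def fluorescence_per_od_auc(t_min, fluo, od600, skip_min=15.0):
    """Area under the fluorescence/OD ratio curve, discarding the first
    15 min (high replicate variance, as stated in the text)."""
    t = np.asarray(t_min, dtype=float)
    ratio = np.asarray(fluo, dtype=float) / calibrate_od(od600)
    mask = t >= skip_min
    tm, rm = t[mask], ratio[mask]
    return float(np.sum(0.5 * (rm[1:] + rm[:-1]) * np.diff(tm)))

# Toy curves sampled every 75 s over 1 hr, as in the assay description.
t = np.arange(0, 3600, 75) / 60.0   # minutes
fluo = 100.0 + 5.0 * t              # rising probe signal (illustrative)
od = np.full_like(t, 0.1)           # near-constant cell density
auc = fluorescence_per_od_auc(t, fluo, od)
print(auc > 0)  # True
```

A lower AUC relative to the wild type would correspond to the reduced probe accumulation reported for the T0 strain.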
Allelic replacements
Utilizing multiplex automated genome engineering (MAGE) (Nyerges et al., 2016), we reconstructed two candidate T60 compensatory mutations and the corresponding T0 resistance mutations in the wild-type genetic background (separately and in combination), as well as the compensatory mutations in the corresponding initial antibiotic-resistant strains. Each MAGE cycle consisted of the following steps: upon reaching OD 600 = 0.4-0.6, cells were transferred to a 42˚C shaking water bath to induce λ-Red protein expression for 15 min at 250 rpm. Cells were then immediately chilled on ice for at least 10 min. Electrocompetent cells were made by washing and pelleting the cells twice in 10 ml of ice-cold dH2O. 40 μl of cell suspension was mixed with 1 μl of 100 μM oligonucleotide. Electroporation was done on a BTX (Harvard Apparatus) CM-630 Exponential Decay Wave Electroporation System in 1 mm gap VWR Signature Electroporation cuvettes (1.8 kV, 200 Ω, 25 μF). Immediately after electroporation, 1 ml TB + 2 ml LB medium was added to the cells to allow recovery. Cells diluted 100,000-fold were spread onto solid medium and incubated at 30˚C for 24 hr. Allelic replacement frequencies for all strains were measured at each locus by allele-specific PCR. Selected clones carrying the desired modifications were verified by capillary sequencing.
Whole-genome sequencing
To identify potential compensatory mechanisms, 15 T60 adapted lines derived from a total of 10 antibiotic-resistant T0 strains were chosen for whole-genome sequencing. These T60 lines were chosen to represent diverse patterns of resistance loss, as well as to cover T0 strains adapted to a variety of antibiotics. E. coli genomic DNA was prepared with the GenElute Bacterial Genomic DNA Kit (Sigma-Aldrich) and quantified using the Qubit dsDNA BR assay in a Qubit 2.0 fluorometer (Invitrogen). 200 ng of genomic DNA was fragmented in a Covaris M220 focused-ultrasonicator (peak power: 55 W, duty factor: 20%, 200 cycles/burst, duration: 45 s) using Covaris AFA screw-cap fiber microTUBEs. Fragment size distribution was analyzed by capillary gel electrophoresis using the Agilent High Sensitivity DNA kit in a Bioanalyzer 2100 instrument (Agilent), and indexed sequencing libraries were then prepared using the TruSeq Nano DNA LT kit (Illumina) following the manufacturer's instructions. This, in short, includes end repair of DNA fragments, fragment size selection, ligation of indexed adapters and library enrichment with limited-cycle PCR. Sequencing libraries were validated (library sizes determined) using the Agilent High Sensitivity DNA kit in a Bioanalyzer 2100 instrument, then quantified using the qPCR-based NEBNext Library Quant Kit for Illumina (New England Biolabs) with a PikoReal Real-Time PCR System (Thermo Fisher Scientific) and diluted to a 4 nM concentration. Groups of 12 indexed libraries were pooled, denatured with 0.1 N NaOH, and after dilution loaded in a MiSeq Reagent kit V2-500 (Illumina) at 8 pM concentration. 2 × 250 bp paired-end sequencing was done with an Illumina MiSeq sequencer; primary sequence analysis was done in the BaseSpace cloud computing environment with the GenerateFASTQ 2.20.2 workflow.
Paired-end sequencing data were exported in FASTQ file format. The reads were trimmed using Trim Galore (Babraham Bioinformatics) and cutadapt (Martin, 2011) to remove adapters and bases where the PHRED quality value was less than 20. Trimmed sequences were removed if they became shorter than 150 bases. The FastQC program (https://www.bioinformatics.babraham.ac.uk/projects/fastqc/) was used to evaluate the quality of the original and trimmed reads.
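The quality-trimming criteria (remove low-quality bases below PHRED 20, discard reads shorter than 150 bases) can be mimicked with a toy filter. This sketch illustrates the 3'-end trimming logic rather than reimplementing Trim Galore/cutadapt:

```python
def quality_trim(seq, quals, min_q=20, min_len=150):
    """Trim low-quality bases from the 3' end and drop short reads.

    seq: base string; quals: per-base PHRED scores. Returns the trimmed
    read, or None if it falls below min_len. A simplification of the
    Trim Galore/cutadapt behavior described in the text."""
    end = len(quals)
    while end > 0 and quals[end - 1] < min_q:
        end -= 1
    trimmed = seq[:end]
    return trimmed if len(trimmed) >= min_len else None

read = "A" * 200 + "T" * 10
quals = [30] * 200 + [10] * 10       # last 10 bases are low quality
print(len(quality_trim(read, quals)))        # 200
print(quality_trim("ACGT" * 10, [30] * 40))  # None (too short after filter)
```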
The breseq program was used with default parameters for all samples. The gdtools utility was used to annotate the effects of mutations and to compare multiple samples. The GenBank-formatted reference genome BW25113.gb was used as the reference genome in the analysis.
Data availability
Sequencing data have been deposited in the NCBI Sequence Read Archive (SRA) under accession number PRJNA529335 (URL: https://www.ncbi.nlm.nih.gov/sra/PRJNA529335).
The following dataset was generated:
Ensembles of Convolutional Neural Networks for Survival Time Estimation of High-Grade Glioma Patients from Multimodal MRI
Glioma is the most common type of primary malignant brain tumor. Accurate survival time prediction for glioma patients may positively impact treatment planning. In this paper, we develop an automatic survival time prediction tool for glioblastoma patients along with an effective solution to the limited availability of annotated medical imaging datasets. Ensembles of snapshots of three-dimensional (3D) deep convolutional neural networks (CNN) are applied to Magnetic Resonance Image (MRI) data to predict the survival time of high-grade glioma patients. Additionally, multi-sequence MRI images were used to enhance survival prediction performance. A novel way to leverage the potential of ensembles to overcome the limited availability of labeled medical images is shown. This new classification method separates glioblastoma patients into long- and short-term survivors. The BraTS (Brain Tumor Image Segmentation) 2019 training dataset was used in this work. Each patient case consisted of three MRI sequences (T1CE, T2, and FLAIR). Our training set contained 163 cases, while the test set included 46 cases. A prediction accuracy of 74% on the unseen test set was achieved, the best reported for this type of problem.
Introduction
A glioma is the most common primary tumor of the brain. It originates from glial cells. About 80% of malignant brain tumors are gliomas. According to the 2021 WHO Classification of Tumors of the Central Nervous System [1], glioblastoma is a sub-category of adult-type diffuse gliomas. It is considered a WHO grade IV glioma which represents approximately 57% of all gliomas and 48% of all primary malignant central nervous system (CNS) tumors [2]. Glioblastoma is highly invasive and can quickly spread throughout every region of the brain. Standard therapy consists of resection (surgery) followed by a combination of radiation and chemotherapy. Despite aggressive multimodal treatment, patients diagnosed with glioblastoma multiforme (GBM) have a dismal prognosis with an average survival time of slightly more than 1 year (15-16 months) [3].
Neuroimaging remains the cornerstone of tumor diagnosis and assessment during therapy. While treatment response continues to be assessed simply on changes in tumor size, new image analytic tools have been introduced to access additional information from clinical scans. Radiomics is a non-invasive method for the extraction of quantitative features from medical imaging that may not be apparent through traditional visual inspection [4]. By far the most common neuroimaging tool is magnetic resonance imaging (MRI). MRI provides an isotropic three-dimensional (3D) picture of the brain with excellent soft tissue contrast and resolution without potentially harmful radiation. Typically, MR imaging is performed in three different planes: axial, coronal, and sagittal. Figure 1 shows a slice of the human brain in three planes. Commonly used MRI sequences include contrast-enhanced T1-weighted (T1CE), T2-weighted, and fluid attenuation inversion recovery (FLAIR), as shown in Figure 2. These sequences, because they are sensitive to different components of the tumor biology, play an integral role in providing refined anatomic and physiological detail. For instance, T1-weighted post-contrast images are sensitive to the enhancing core and necrotic tumor core, whereas FLAIR and T2 highlight the peritumoral edema. Recently, deep learning methods have been applied in computer vision and biomedical image analysis to improve the ability to extract features from images automatically and increase predictive power. They rely on advances in neural network architectures, GPU computing, and the advent of open-source frameworks such as Tensorflow [5] and Pytorch [6]. When trained with sufficient data, deep neural networks can attain high accuracy in medical image analysis. The present state of the art is largely dominated by convolutional neural networks (CNN), with a number of effective architectures such as VGG-Net [7], Inception networks [8], and ResNet [9].
Instead of relying on manually engineered features, convolutional neural networks use learned convolutions to extract an enormous number of features from input images automatically. Recent reviews [10] show the dominance of convolutional neural network techniques over other approaches in brain magnetic resonance imaging (MRI) analysis.
Survival time after a patient is diagnosed with high grade glioma depends on several factors such as age, treatment, tumor size, behavior, and location, as well as histologic and genetic markers [11]. Integration of automated image analytics can provide insights for diagnosis, therapy planning, monitoring, and prognosis prediction of gliomas. A new study [12] by UT Southwestern shows that deep learning models can automatically classify IDH mutation status in brain gliomas from 3D MR imaging with more than 97% accuracy. This technology can potentially eliminate the current need for brain cancer patients to undergo invasive surgical procedures to help doctors determine the best treatment for their tumors. It represents an important milestone towards non-surgical strategies for histological determination, especially in highly eloquent brain areas such as the brain stem.
Applying deep learning algorithms to multi-sequence MR images is challenging for several reasons. First, the 3D data are large. Second, it is difficult to design an appropriate neural network architecture for the desired objective. Third (and most critically), deep learning experts have limited clinical knowledge, and large medical image datasets with labels are scarce. Finally, the predictive power of deep convolutional neural network models depends heavily on a large training dataset; training a deep learning model on a small number of cases can over-fit the training data, so that generalization to future cases is poor.
Here, we address the issues of limited and heterogeneous data sets in medical imaging using an ensemble learning method applied to survival time prediction from glioblastoma patient data. Its efficacy is demonstrated through application to MR images from the BraTS dataset.
We present our investigation as follows: Section 2 discusses related research studies and recent trends in survival analysis and radiomics using deep learning. In Section 3, we describe our proposed survival prediction system in detail, including data acquisition, preparation and augmentation, region-of-interest segmentation, model construction, and training. Our experimental results are presented in Section 4. The results are discussed in Section 5, and conclusions are drawn from the experimental outcomes in Section 6.
Related Work
The task of survival prediction of glioma from MRI images is challenging. Several studies applying various methods and approaches using the BraTS dataset are reviewed in this section.
Authors of [13][14][15] proposed the use of handcrafted and radiomics features extracted from automatically segmented volumes with region labels to train a random forest regression model that predicts the survival time of GBM patients in days. These studies achieved 52%, 51.7%, and 27.25% accuracy, respectively, on the validation set of the BraTS (Brain Tumor Segmentation) 2019 challenge.
Authors of [16] performed a two-category (short- and long-term) survival classification task using a linear discriminant classifier trained with deep features extracted from a pre-trained convolutional neural network (CNN). The study achieved 68.8% accuracy with 5-fold cross validation on the BraTS 2017 dataset.
Ensemble learning was used by authors in [17]. They extracted handcrafted and radiomics features from automatically segmented MRI images of high grade gliomas and created an ensemble of multiple classifiers, including random forests, support vector machines, and multilayer perceptrons, to predict overall survival (OS) time on the BraTS 2018 testing set. They obtained an accuracy of 52%.
The authors of [18] achieved first place in the BraTS 2020 challenge for the overall survival prediction task (61.7% accuracy). They extracted segmentation features along with the patient's age to classify the patients into three groups (long-, mid-, and short-term survivors) using an ensemble of a linear regression model and a random forest classifier.
Over the last decade, there has been increasing interest in ensemble learning for tumor segmentation tasks as well. Ensemble learning was ubiquitous in the BraTS 2017-2020 challenges, being used in almost all of the top-ranked methods. The winner of the BraTS 2017 challenge for GBM tumor segmentation [19] was an ensemble of two fully convolutional network (FCN) models and a U-Net, each generating separate class confidence maps; the final map for each class was then created by averaging the confidence maps of the individual ensemble models for each voxel. This study reached Dice scores of 0.90, 0.82, and 0.75 for the whole tumor, tumor core, and enhancing tumor, respectively, on the BraTS 2017 validation set. Authors in [20] built an ensemble of U-Net-based deep networks trained in a multi-fold setting to perform segmentation of brain tumors from the T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) sequences. They achieved a Dice score of 0.882 on the BraTS 2018 set.
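For reference, the Dice score used to rank these segmentation methods measures the volume overlap between a predicted mask and the ground truth. A minimal NumPy sketch, with a hypothetical toy example:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks.

    Dice = 2*|P ∩ T| / (|P| + |T|); defined as 1.0 when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 3D example: two 4x4x4 cubes overlapping in a 2x2x2 block (8 voxels)
p = np.zeros((10, 10, 10), dtype=bool); p[2:6, 2:6, 2:6] = True  # 64 voxels
t = np.zeros((10, 10, 10), dtype=bool); t[4:8, 4:8, 4:8] = True  # 64 voxels
print(dice_score(p, t))  # 2*8/(64+64) = 0.125
```

A perfect segmentation gives a Dice score of 1.0, so the 0.90 whole-tumor score quoted above indicates very high overlap.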
BraTS Dataset
The Brain Tumor Segmentation (BraTS) Challenge is critical for benchmarking and has contributed greatly to advancing machine learning applications in glioma image analysis. The BraTS challenge was first held in 2012 and has taken place annually as part of the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference ever since. It focuses on evaluating state-of-the-art methods for the segmentation of brain tumors in multimodal magnetic resonance imaging (MRI) scans. The glioma sub-regions considered for segmentation evaluation are: (1) the enhancing tumor (ET), (2) the tumor core (TC), and (3) the whole tumor (WT). The enhancing tumor sub-region is described by areas that are typically hyper-intense in T1CE when compared to T1. The tumor core consists of the bulk of the tumor, which is what is typically resected; it encompasses the ET as well as the necrotic (fluid-filled) and non-enhancing (solid) parts of the tumor. The WT corresponds to the complete extent of the disease, as it consists of the TC and the peritumoral edema (ED) [21].
To evaluate proposed segmentation methods, participants upload their segmentation labels for regions in the image (e.g., edema, tumor) as a single multi-label file in the NIfTI format. Metrics such as the Dice score and the Hausdorff distance are then computed to rank team performances. In 2017, the BraTS challenge began to include another task: the prediction of overall survival time. Participants use their produced segmentation in combination with other features to attempt to predict patient overall survival. Evaluation of this task is based on the accuracy of a three-category classification: long-term survivors (e.g., >15 months), short-term survivors (e.g., <10 months), and mid-term survivors (e.g., between 10 and 15 months).
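The three-class assignment from an overall survival figure in days can be written as a small helper. The 10- and 15-month cut-offs follow the challenge description above; the days-per-month conversion used here is only an illustrative assumption:

```python
def survival_class(days, days_per_month=30):
    """Map overall survival in days to the BraTS three-class scheme
    (short < 10 months, mid 10-15 months, long > 15 months)."""
    months = days / days_per_month
    if months < 10:
        return "short"
    if months > 15:
        return "long"
    return "mid"

print(survival_class(250))  # short (~8.3 months)
print(survival_class(380))  # mid  (~12.7 months)
print(survival_class(500))  # long (~16.7 months)
```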
The Brain Tumor Segmentation (BraTS) challenge dataset is the largest publicly available glioma imaging dataset. It includes multi-institutional, pre-operative, clinically acquired multi-sequence MRI scans of glioblastoma (GBM/HGG) and lower grade glioma (LGG) with overall survival data. Images were acquired from different institutions using MR scanners with different field strengths (1.5 T and 3 T). Segmentation ground-truth labels were produced by expert board-certified neuroradiologists. Scans are provided in NIfTI format and comprise (a) native (T1), (b) post-contrast T1-weighted (T1CE), (c) T2-weighted (T2), and (d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) sequences with 1-6-mm slice thickness. The BraTS data are made available after pre-processing, i.e., co-registered to the same anatomical template, interpolated to the same resolution, and skull-stripped. The overall survival data are given in days.
Data Acquisition
In this study, we used subsets of glioblastoma (GBM) cases from the BraTS 2019 training dataset. We randomly split the data into train/test subsets: the training subset consisted of 163 cases and the testing set of 46 cases. Among the 163 training cases, there were 81 long-term and 82 short-term survival cases based on a 12-month cut-off (the middle threshold of the mid-term survivor class in the BraTS data). Among the 46 test cases, there were 23 long-term and 23 short-term survivors. The data can be downloaded online from CBICA's Image Processing Portal (IPP) after requesting permission from the BraTS team. Figure 3 shows the Kaplan-Meier survival graph corresponding to the training dataset.
Data Pre-Processing and Augmentation
The first step is tumor segmentation; we used the publicly available pre-trained model of Wang et al. [22] to automatically segment the training and testing sets used in this study. The segmented volumes were used as a starting point for our task of survival class prediction.
Each MRI volume is high resolution, with a size of 240 × 240 × 155, which makes it difficult to fit into GPU memory. To address this issue, we removed empty voxels and null slices that are unnecessary for building a good predictive model.
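Removing the empty background can be done by cropping each volume to the bounding box of its non-zero voxels; a minimal NumPy sketch (the brain region below is purely illustrative):

```python
import numpy as np

def crop_to_brain(volume):
    """Crop a 3D MRI volume to the bounding box of its non-zero voxels,
    discarding empty background slices that carry no information."""
    nz = np.nonzero(volume)
    if nz[0].size == 0:          # entirely empty volume: nothing to crop
        return volume
    slices = tuple(slice(idx.min(), idx.max() + 1) for idx in nz)
    return volume[slices]

vol = np.zeros((240, 240, 155))
vol[40:200, 30:210, 10:140] = 1.0    # hypothetical brain region
print(crop_to_brain(vol).shape)      # (160, 180, 130)
```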
Data augmentation is a fundamental way to reduce model over-fitting when dealing with a small amount of training data. Hence, we applied transformations to each 2D slice consisting of vertical flipping and rotations in 10-degree increments from 0 to 180 degrees. In addition, we applied elastic deformation by generating smooth deformations using random displacement fields sampled from a Gaussian distribution with a standard deviation of 17 (in pixels); the new value for each pixel was then calculated using bicubic interpolation. The augmentation was applied to each 2D slice, and the slices were then stacked back together to form a 3D volume. These transformations increased the size of the training data from 163 images per sequence to roughly 10,000 augmented images, including the originals. All images were then resized to a uniform size of 160 × 160 × 110. Furthermore, we used the Keras [23] data generator to feed data in batches to our model instead of loading all data into memory at once. Finally, as part of pre-processing, the input images were normalized to zero mean and unit standard deviation before being fed to the model.
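The elastic-deformation step can be sketched as follows, assuming SciPy is available. Displacements are drawn per pixel from a Gaussian with the standard deviation quoted above, and cubic (order-3) resampling in `map_coordinates` stands in for the bicubic interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def elastic_deform(image, sigma=17.0, seed=0):
    """Elastic deformation of a 2D slice: per-pixel random displacements
    drawn from a Gaussian (std in pixels), resampled with cubic
    interpolation, as in the augmentation step described above."""
    rng = np.random.default_rng(seed)
    dx = rng.normal(0.0, sigma, image.shape)
    dy = rng.normal(0.0, sigma, image.shape)
    y, x = np.meshgrid(np.arange(image.shape[0]),
                       np.arange(image.shape[1]), indexing="ij")
    coords = np.array([y + dy, x + dx])
    return map_coordinates(image, coords, order=3, mode="reflect")

slice2d = np.random.default_rng(1).random((240, 240))
warped = elastic_deform(slice2d)
print(warped.shape)  # (240, 240) -- geometry preserved, content warped
```

In the pipeline above, each warped slice would then be stacked back with its neighbors to rebuild the 3D volume.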
Overall Survival Prediction System
Our survival prediction system is next described in detail including data acquisition, preparation and augmentation, region of interest segmentation, model construction, and training.
System Outline
As shown in Figure 4, the overall work flow of our system consists of three stages. First, raw 3D MRI images acquired from the BraTS platform (after securing permission) are sent as inputs to the pre-trained segmentation network [22], producing a 3D segmented tumor in NIfTI format. Next, image augmentation and pre-processing are performed before the inputs are passed to our 3D convolutional neural network for automatic feature extraction and classifier training. Finally, ensemble learning is used to evaluate the prediction performance of the trained networks, yielding the overall survival prediction. More details on each phase are provided in the following sub-sections.
Snapshot Learning for Survival Prediction
An ensemble of deep neural networks is known to be more accurate and robust than a single network. However, training multiple deep networks is computationally expensive. Huang et al. [24] introduced snapshot learning, a fast and effective way of creating an ensemble of models without additional training cost by saving snapshots of one model's weights at different points over the course of training. Our preliminary results in [25] showed the effectiveness of an ensemble of snapshots in boosting prediction performance on our limited-size dataset. Snapshots of the model are saved at every training iteration, and a selected subset of them is used to evaluate prediction performance on a separate testing set. Majority voting was used to combine their predictions: each snapshot model is evaluated and a classification outcome is obtained for each of the 46 test cases. Each sample can be correctly or incorrectly classified by a particular snapshot model, and each prediction provides a vote; our combination approach takes the class with the most votes for each test example. The voting rule is as follows: if more than 50% of the snapshot models correctly classify test example X, it is counted as correctly classified; otherwise, it is counted as incorrectly classified. The overall final prediction accuracy is the percentage of correctly predicted examples among the 46 test cases.
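The voting rule above can be sketched as a small function; the snapshot predictions here are hypothetical:

```python
import numpy as np

def snapshot_vote(snapshot_preds, true_labels):
    """Majority-vote ensemble accuracy over model snapshots.

    snapshot_preds: (n_snapshots, n_samples) array of 0/1 class predictions.
    A test case counts as correct when more than half of the snapshots
    predict its true label; returns the ensemble accuracy.
    """
    preds = np.asarray(snapshot_preds)
    votes = (preds == np.asarray(true_labels)).sum(axis=0)
    correct = votes > preds.shape[0] / 2
    return correct.mean()

# Hypothetical example: 5 snapshots, 4 test cases
preds = np.array([[1, 0, 1, 0],
                  [1, 1, 1, 0],
                  [0, 0, 1, 1],
                  [1, 0, 0, 0],
                  [1, 0, 1, 1]])
labels = [1, 0, 1, 1]
print(snapshot_vote(preds, labels))  # 0.75 -- the last case gets only 2 of 5 votes
```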
Main CNN Model Construction and Training
Finding the most efficient network parameters for a given dataset is important; parameters such as batch size, number of epochs, regularization, and learning rate can significantly affect final performance, and there is no predefined method to select the ideal architecture and parameters. Therefore, we tried multiple combinations of settings and parameters. Our model consists of three 3 × 3 × 3 convolutional layers (Conv) with a stride of 1 × 1 × 1 and the ReLU activation function, followed by a 2 × 2 × 2 max-pooling layer with a stride of 1 × 2 × 2. The first two convolutional layers have 20 filters and the third has 40. Padding was used for the convolutional layers. A fully connected layer (FC) of 64 units follows a global average pooling operation. Finally, an output unit with sigmoid activation generates the prediction. To reduce over-fitting, a dropout of 50% was added after the fully connected layer. The model has a total of 35,709 parameters.
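The stated parameter total can be checked with simple bookkeeping (pooling, dropout, and global average pooling contribute no trainable parameters):

```python
def conv3d_params(k, c_in, c_out):
    """Weights plus biases for a 3D convolution with a k x k x k kernel."""
    return k**3 * c_in * c_out + c_out

def dense_params(n_in, n_out):
    """Weights plus biases for a fully connected layer."""
    return n_in * n_out + n_out

total = (conv3d_params(3, 1, 20)     # Conv1: 1 input channel -> 20 filters
         + conv3d_params(3, 20, 20)  # Conv2: 20 -> 20 filters
         + conv3d_params(3, 20, 40)  # Conv3: 20 -> 40 filters
         + dense_params(40, 64)      # FC on the 40 global-average-pooled features
         + dense_params(64, 1))      # sigmoid output unit
print(total)  # 35709
```

The count matches the 35,709 parameters reported for the model.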
The implementation of the model was carried out with Keras 2.3.1 using TensorFlow 1.2 as backend. Code is available on GitHub at https://github.com/kbenahmed89/Glioma-Survival-Prediction, accessed on 14 December 2021.
Experiments and Results
The experimental settings and results of this work are reported in this section. The measures used are accuracy, area under the ROC curve (AUC), sensitivity, and specificity.
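Accuracy, sensitivity, and specificity follow directly from the binary confusion matrix; a plain-Python sketch with hypothetical labels (AUC additionally requires continuous scores, so it is omitted here):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (TPR), and specificity (TNR) from 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

# Hypothetical predictions on 8 cases (1 = long-term, 0 = short-term)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
m = binary_metrics(y_true, y_pred)
print(m)  # accuracy 0.75, sensitivity 0.75, specificity 0.75
```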
Parameter Exploration Results
In this section, we report some preliminary experiments performed to determine the best parameters for our dataset. Here, for simplicity, we only used a T1 contrast enhanced (T1CE) sequence as input.
Choice of Snapshots
The original snapshot learning method showed that using a cosine-annealing cyclic learning rate was most effective; it works by saving a snapshot at the end of each cycle (a fixed number of training epochs). Here, we compare the performance of the original method against using a fixed learning rate and selecting snapshots based on the evolution of training accuracy. When not using cyclic learning, the models were trained with the stochastic gradient descent optimizer, with the starting learning rate set to 10−3 and momentum set to 0.9. The network state was saved at each epoch, and we selected snapshots based on the training accuracy graph. The accuracy of the model increases rapidly at the beginning due to the high initial learning rate; it then slows down and keeps increasing at a lower rate with high stability. During the last period of training, the training accuracy has already reached 100%, thus over-fitting the training data. Therefore, we avoided the last set of training epochs and focused on the earlier phase of training. To create an ensemble of snapshots, we picked the weights every five epochs to allow a fair amount of change in the network weights: starting from the epoch at which the training accuracy reaches 80% (epoch 30), we proceed backwards, choosing every fifth epoch as a snapshot (epochs 30, 25, 20, 15, and 10). The performance results of testing the five individual snapshots on the 46-example testing dataset are shown in Table 1, along with the results of the majority voting-based ensemble of snapshots. To see whether the original approach to snapshots might be better, we also trained using a cosine-annealing cyclic learning rate with a maximum learning rate of 10−3; training was done for 500 epochs, with snapshots saved at the end of each of the 10 cycles, each 50 epochs long.
The majority voting-based ensemble of the five snapshots saved at the end of the last five cycles achieved 52.17% accuracy on the unseen test set of 46 cases using only the T1CE MRI sequence. This was not as good as our approach of choosing snapshots based on the training accuracy curve. Of course, a different set of parameters might give better performance, but we had little data to test with and wanted to leave the test set untouched until parameters were chosen.
Based on this experiment, we decided that our choice of snapshots based on the training graph works best for this particular dataset. Therefore, we used this snapshot choice method for the remainder of the paper.
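This selection rule, stepping backwards from the epoch at which training accuracy first crosses the chosen threshold, can be written as a small helper; the epoch values below match those quoted in the text, including the later experiments that reuse the rule with different anchors and step sizes:

```python
def pick_snapshots(anchor_epoch, step=5, n=5):
    """Select n snapshot epochs by stepping backwards from the epoch
    at which training accuracy first crosses the chosen threshold."""
    return [anchor_epoch - i * step for i in range(n)]

print(pick_snapshots(30))            # [30, 25, 20, 15, 10]
print(pick_snapshots(59))            # [59, 54, 49, 44, 39]
print(pick_snapshots(183, step=25))  # [183, 158, 133, 108, 83]
```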
Ensemble of Convolutional Neural Networks vs. Snapshot Learning
This experiment compared the performance of an ensemble of five CNNs with an ensemble of five snapshots from the same CNN. The CNNs have the main architecture described in Section 3.3 but different random initializations. We tested at the epoch where the models reach a training accuracy of 80% to avoid over-fitting. The results, presented in Table 2, are roughly equivalent to the ensemble of five fully trained models, with much less training time.

In a further experiment, due to the high computational cost and time required for 3D image processing, we investigated whether the third dimension added significant value over 2D analysis of the images. We decomposed each 3D scan into its 155 separate slices, manually inspected each slice, and chose the one with the largest visible tumor area. Consequently, the new training set in this experiment consists of 163 images of size 240 × 240 × 1. To increase the size of the dataset, image augmentation was done using the same transformations explained in Section 3.1.3. We created a 2D version of the main CNN, keeping the same architecture and parameters. For this dataset, after observing the training accuracy curve, we noticed that the model reaches 80% training accuracy quickly (epoch 28). Therefore, we started from the epoch at which training accuracy reaches 90% (epoch 59) and chose every fifth epoch backwards as a snapshot; thus, epochs 59, 54, 49, 44, and 39 were used. The performance of the snapshot ensemble using the 2D CNN model is compared to the 3D CNN in Figure 5.
Combination of Multi-Sequence Data for Survival Prediction
In this section we describe two methods attempted in order to merge the three MRI sequences (T1CE, T2, and FLAIR) to create a multi-sequence dataset for our model.
Ensemble of Ensembles Using T1CE, T2, and FLAIR MRIs
Before merging the three sequences, we first trained our CNN model, with the same settings and training procedure described in Section 3.3, on each of the other two sequences (T2 and FLAIR) as input instead of T1CE. Results are reported for each sequence individually.
To create a combined decision, we used an ensemble-of-ensembles approach in which 5 snapshots from each of the three sequences (T1CE, T2, and FLAIR) together build a final outcome prediction. To create each of the 5 initial ensembles, we used the individually trained snapshots of each sequence: in a given ensemble, for each sample, one snapshot from each of the three sequences votes. We repeat this process for all 5 snapshots, then combine the 5 voted decisions and again use majority voting at the sample level to form the final outcome.
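This two-stage vote can be sketched as follows, with hypothetical 0/1 predictions shaped (snapshots × sequences × samples):

```python
import numpy as np

def majority(bits):
    """Majority vote over a sequence of 0/1 votes."""
    return int(sum(bits) > len(bits) / 2)

def ensemble_of_ensembles(preds):
    """preds: (n_snapshots, n_sequences, n_samples) binary predictions.

    Stage 1: for each snapshot index, the sequence models (e.g., T1CE,
    T2, FLAIR) vote per sample.  Stage 2: the per-snapshot decisions
    are combined by a second majority vote per sample.
    """
    preds = np.asarray(preds)
    n_snap, _, n_samp = preds.shape
    stage1 = np.array([[majority(preds[s, :, i]) for i in range(n_samp)]
                       for s in range(n_snap)])
    return [majority(stage1[:, i]) for i in range(n_samp)]

# Hypothetical example: 5 snapshots x 3 sequences x 2 test samples
preds = np.zeros((5, 3, 2), dtype=int)
preds[:, :, 0] = 1           # every model votes 1 for sample 0
preds[:3, :, 1] = [1, 0, 0]  # snapshots 0-2: only one sequence votes 1
preds[3:, :, 1] = [1, 1, 0]  # snapshots 3-4: two sequences vote 1
print(ensemble_of_ensembles(preds))  # [1, 0]
```

For sample 1, only two of the five per-snapshot decisions are 1, so the second-stage vote settles on 0.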
Multi-Branch CNN Training with Multi-Sequence 3D MRIs
In this experiment, we combine the three sequences (T1CE, T2, and FLAIR) together to train a multi-branch 3D CNN and test it on the multi-sequence MRI test set. The architecture of the CNN is illustrated in Figure 6.
Separate CNN models operate on each sequence; each has two 3D convolutional layers, each followed by a max-pooling layer. The outputs of the three branches are then concatenated for interpretation and the final prediction. The CNN was evaluated using a snapshot ensemble with majority voting, as described above for snapshot learning. Here, we used a lower learning rate (10−4) than in the previous experiments, so the model makes only small changes between epochs; we therefore enlarged the spacing between chosen epochs, starting from the epoch at which training accuracy reaches 90% (epoch 183) and choosing every 25th epoch backwards as a snapshot (epochs 183, 158, 133, 108, and 83). The performance results of the multi-branch CNN are presented in Figure 7 and compared to the performance of each individual sequence and to the first ensemble method described in Section 4.2.1.
Comparison with Other Methods
To further assess the prediction performance of our method, we compared our approach to the method proposed in [16]. In that paper, the authors used the BraTS 2017 training dataset with a threshold of 18 months to classify cases into short-term and long-term survival. They extracted volumetric and location features from the 163 input images and trained a logistic regression classifier, achieving an accuracy of 69% using 5-fold cross validation. The BraTS 2017 training dataset is a subset of the 2019 training dataset used here. To compare our results with their method, we used the same 163 cases and the same 18-month threshold to divide the data into two classes, short- and long-term survival. We then augmented the data and partitioned it for the five-fold cross validation experiment, applying our snapshot ensemble on each of the five folds. The average accuracy across the five folds was 75%, 6 percentage points higher than in [16].
Discussion
As demonstrated in Table 2, the ensemble of CNNs outperformed individual CNNs on overall survival prediction: it was 67% accurate, whereas the best we could achieve with a single CNN was 63%.
From Table 1, a snapshot ensemble can indeed perform as well as an ensemble of multiple CNNs while being faster and less expensive to train; training five separate models (sequentially) is five times slower than saving snapshots of a single model. Using a snapshot ensemble, we achieved a prediction accuracy of 70%. The performance of an ensemble is sometimes the same as that of a single snapshot, but it usually outperforms individual snapshots; generally, even when it cannot improve performance, it does not cause a clear decrease in accuracy.
As seen in Figure 5, comparing these results to the performance of a CNN trained on 2D images, we can conclude that even though 3D processing is expensive and time consuming, it brings relevant extra information, increasing the model's performance from 61% to 70%. Figure 7 summarizes the performance of models trained on the T1CE, T2, and FLAIR sequences alone, along with the results of the two combining methods. The accuracy of the voting ensemble was 63% for T2 and 65% for FLAIR; the 70% prediction accuracy using the T1CE sequence was slightly higher. This result probably reflects the sensitivity of T1CE to the size of the tumor core area, which is likely the most important biological predictor of overall survival in GBM patients. We then experimented with combining the three sequences. The first combination method, a voting ensemble across sequences, achieved 72% prediction accuracy. The second method, a multi-branch CNN in which each sequence is input to a separate branch and the results are combined in the final layers, reached 74% prediction accuracy.
The results of the two sequence-combination experiments demonstrate the potential power of combining multiple sequences, each contributing unique information relevant to overall survival prediction. We can also conclude that all tumor areas are important for determining patient survival, though the tumor core area perhaps contains the most information.
Since the BraTS testing dataset is not released by the BraTS challenge team and the validation set is only available at the time of the challenge, we were unable to directly compare our method to those applied to the BraTS test/validation sets. However, our prediction accuracy of 74% on an unseen testing set from BraTS 2019 suggests good generalization ability to other datasets. In addition, in the experiment conducted in Section 4.3, our method achieved a prediction accuracy of 75%, outperforming the approach proposed in [16], which had 69% accuracy on the same dataset.
Conclusions and Future Work
In summary, we demonstrated overall survival prediction for glioblastoma patients at 74% accuracy using a multi-branch 3D convolutional neural network, where each branch processes a different MR image sequence. The system classified patients into two classes (long- and short-term survival). In a comparison on the same data with the best known predictor, our approach was 6 percentage points more accurate. An important contribution of this work is the effective use of snapshot ensembles, in a novel fashion, as a solution to the limited availability of labeled images in many medical imaging datasets. Snapshots are a fast, low-cost way to generate an ensemble of classifiers.
Based on recent studies [26][27][28][29][30][31][32][33][34], improved communication and information exchange between radiology and histopathology is needed more than ever. Many papers have shown benefits of integrating several markers for more accurate OS prediction in patients with glioma. A study [29] used deep artificial neural networks on histopathologic images of gliomas and found an association of OS with several histopathologic features, such as pseudopalisading and geographic necrosis, and inflammation. Recent work [26,32,33] has shown the integrated potential of MR and histopathologic images, which has provided diagnostically relevant information for the prediction of OS in glioma. Similarly, it has been demonstrated that combining radiomics with multiomics can offer additional prognostic value [34]. Another study confirmed higher prediction accuracy when a combination of histopathologic images and genomic markers (isocitrate dehydrogenase [IDH], 1p/19q) was used [27]. Moreover, it has been shown that adding a radiomics model to clinical and genetic profiles improved survival prediction for glioblastoma compared with models containing clinical and genetic profiles alone [28].
In the future, we intend to create a more advanced version of our deep learning model, combining radiologic features from MR imaging with pathologic, clinical, and genetic markers to develop more informed and better predictors of overall survival of GBM patients.
This study has the limitation of using a pre-operative dataset to perform OS prediction, which leaves out several prognostic factors that become available only after histological confirmation. In the literature, the most common prognostic factors for survival in glioblastoma patients have been found to be: the age of the patient, extent of resection, recursive partitioning analysis (RPA) class [35], performance status (using the Karnofsky Performance Scale (KPS) or the Eastern Cooperative Oncology Group (ECOG)/World Health Organization (WHO) performance status) [36], and postoperative chemotherapy and/or radiotherapy [37].
In addition to IDH mutations [38] and 1p/19q codeletion [39], the methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) gene promoter has been shown to be a strong predictor of the survival of glioblastoma patients [40]. Other prognostic molecular markers exist for which diagnostic evaluation is not routinely performed, such as G-CIMP methylation, TERT promoter mutations, EGFR alterations, BRAF V600E mutations, and histone mutations such as H3K27, which can occur in histone H3.1 or H3.3 [41].
On the other hand, it may be possible to identify additional prognostic information using image analysis of post-operative or post-treatment MRIs. A recent study has demonstrated an association between the post-operative residual contrast-enhancing tumor volume in the post-surgical MRI and overall survival in newly diagnosed glioblastoma [42]. Furthermore, in our recent work [43], we showed that deep features extracted from post-treatment MRI scans, obtained during or after therapy, can carry relevant information for overall survival prediction. Therefore, to lend further validity to our model, we aim to combine data from before and after therapy or resection, when available.
Recently, there has been increased interest in integrating non-invasive imaging techniques with clinical care in brain tumor patients. For instance, the use of amino acid PET has been shown to better identify the most biologically aggressive components of heterogeneous low- and high-grade gliomas [44,45]. As another future direction, we may explore the potential of non-invasive metabolic imaging as a complement to conventional MRI for predicting survival outcomes of GBM patients using convolutional neural networks.
Conflicts of Interest:
The authors declare no conflict of interest.
Differential Mechanisms of Recognition and Activation of Interleukin-8 Receptor Subtypes*
We have probed an epitope sequence (His18-Pro19-Lys20-Phe21) in interleukin-8 (IL-8) by site-directed mutagenesis. This work shows that single and double Ala substitutions of His18 and Phe21 in IL-8 reduced up to 77-fold the binding affinity to IL-8 receptor subtypes A (CXCR1) and B (CXCR2) and to the Duffy antigen. These Ala mutants triggered neutrophil degranulation and induced calcium responses mediated by CXCR1 and CXCR2. Single Asp or Ser substitutions, H18D, F21D, F21S, and double substitutions, H18A/F21D, H18A/F21S, and H18D/F21D, reduced up to 431-fold the binding affinity to CXCR1, CXCR2, and the Duffy antigen. Interestingly, double mutants with charged residue substitutions failed to trigger degranulation or to induce wild-type calcium responses mediated by CXCR1. Except for the H18A and F21A mutants, all other IL-8 mutants failed to induce superoxide production in neutrophils. This study demonstrates that IL-8 recognizes and activates CXCR1, CXCR2, and the Duffy antigen by distinct mechanisms.
Interleukin-8 (IL-8)1 is a chemokine secreted in response to injury and infection that selectively attracts and activates neutrophils. IL-8 is a 72-79-amino acid peptide that belongs to the CXC chemokine subfamily because its first two Cys are separated by a single residue. The NMR and crystal structures of IL-8 reveal that the polypeptide chain is folded into three antiparallel β-sheets and a C-terminal α-helix (1,2). IL-8 receptors belong to the superfamily of seven transmembrane G protein-coupled receptors. Several IL-8 receptor subtypes have been identified. Two structurally homologous receptors, CXCR1 and CXCR2, are expressed in neutrophils (3)(4)(5). CXCR1 selectively binds IL-8, whereas CXCR2 binds IL-8 and the structurally related CXC chemokines Neutrophil Activating Protein-2 (NAP-2) and MGSA (6-8). The Duffy antigen of red blood cells is a promiscuous chemokine receptor that binds several chemokines including IL-8 (9,10). Moreover, G protein-coupled receptors encoded by the Kaposi's sarcoma-associated herpesvirus bind IL-8 plus other chemokines (11). The mechanisms of recognition and activation of IL-8 receptor subtypes are not well defined.
Studies with IL-8 mutants created by site-directed mutagenesis and synthetic peptides have demonstrated that the N-terminal triad, ELR, is a major determinant for binding affinity and activation of IL-8 receptors in neutrophils (12, 13). Further studies with chemokine chimeras have shown that the region between Cys 7 and Cys 50 is also important for binding to the IL-8 receptors (14, 15). In particular, this region contains a surface-exposed hydrophobic pocket formed by Phe 17 , Phe 21 , Ile 22 , and Leu 43 , which is separated by over 20 Å from the N-terminal ELR triad, and it has been suggested that residues in or adjacent to the hydrophobic pocket are major determinants for binding selectivity to both CXCR1 and CXCR2 (16,17). This hydrophobic pocket may be essential for recognition of CXCR1. In this work, we have created IL-8 mutants in which residues corresponding to the epitope of blocking anti-IL-8 mAbs were substituted by Ala, polar, or charged residues. We determined that this epitope plays a major role in the differential recognition mechanisms of CXCR1, CXCR2, and the Duffy antigen. Most importantly, this epitope also plays a role in the activation mechanism of CXCR1.
Preparation of Human Neutrophils, Red Blood Cells, and Rabbit Neutrophil Membranes-Human blood was drawn from healthy donors and layered onto Mono-Poly Resolving Medium (ICN Biochemical, Inc., Aurora, OH). Neutrophils and red blood cells were isolated in accordance with the manufacturer's instructions. Neutrophils were suspended in physiological buffer containing 140 mM NaCl, 4 mM KCl, 1 mM MgCl2, 1 mM CaCl2, 1 mM Na2HPO4, 5 mM glucose, 20 mM HEPES (pH 7.4), and 1 mg/ml bovine serum albumin. Red blood cells were stored at 4°C in Alsever's solution consisting of 114 mM dextrose, 27 mM sodium citrate, 71 mM NaCl (pH 6.1). Rabbit neutrophil membranes were prepared as described (18).
Expression of CXCR1 and CXCR2-CXCR1 was subcloned into the retrovirus vector MSX, and virus stocks were produced from amphotropic packaging cell lines (19). HL-60 cells were infected with these virus stocks and selected by limiting dilution and binding to 125I-IL-8. The HL-60 cell line was selected for this study because it expresses a high density of CXCR1 (19) but undetectable levels of CXCR2. HL-60 cells expressing recombinant CXCR2 were kindly provided by Dr. Richard Ye (University of Illinois, Chicago).
Protein Expression and Purification-IL-8 mutants were created by site-directed mutagenesis as described previously (20). The mutant constructs were subcloned into the thioredoxin-based vector GM-TRX for expression of thioredoxin-IL-8 mutant fusion proteins. Escherichia coli GI724 transformed with the mutant constructs was induced with 100 μg/ml tryptophan for 4 h at 30°C. The cells were lysed by French press, and the fusion protein was purified by fractionation on a QAE anion-exchange column followed by a G-75 gel filtration column. Fractions enriched with thioredoxin fusion proteins were digested with enterokinase to release the IL-8 mutant protein. The digestion products were further purified on an SP-650 cation-exchange column.
125I-IL-8 Binding Assays-IL-8 was iodinated by the chloramine-T procedure as described (3). Rabbit neutrophil membranes were suspended at a concentration of 200 μg/ml in binding buffer (phosphate-buffered saline containing 0.1% (w/v) bovine serum albumin and 20 mM HEPES (pH 7.4)) and incubated for 30 min at room temperature in the presence of 2 nM 125I-labeled IL-8 and several concentrations of unlabeled IL-8 or IL-8 mutants. The binding reaction was terminated as described. (This work was supported by National Institutes of Health Grant AI34031.)
Red blood cells, or HL-60 cells expressing CXCR1 or CXCR2, were suspended at a concentration of 2.5 × 10^7 cells/ml or 1 × 10^7 cells/ml in binding buffer containing 2 nM 125I-labeled IL-8 and several concentrations of unlabeled IL-8 or IL-8 mutants and then incubated for 2 h on ice. The reaction was terminated by overlaying the incubation mixture on top of a 10% (w/v) sucrose solution followed by centrifugation. Radioactivity in the pellet was measured in a γ-counter. The Ki for each IL-8 mutant was calculated according to the equation derived by Cheng and Prusoff (21): Ki = IC50/(1 + [L]/Kd), where IC50 is the concentration of mutant that produces 50% inhibition of IL-8 binding to the receptor subtype, [L] is the concentration of radiolabeled IL-8, and Kd is the dissociation constant of IL-8 binding to the receptor subtype.
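For reference, the Cheng-Prusoff conversion used here is straightforward to apply in code. The snippet below is an illustrative sketch; the numbers are hypothetical and are not values from Table I.

```python
def cheng_prusoff_ki(ic50, radioligand_conc, kd):
    """Cheng-Prusoff equation: Ki = IC50 / (1 + [L]/Kd).
    All arguments must be in the same concentration units (e.g., nM)."""
    return ic50 / (1.0 + radioligand_conc / kd)

# Hypothetical competition assay: IC50 = 30 nM measured with 2 nM
# radiolabeled IL-8 against a receptor with Kd = 2 nM
# gives Ki = 30 / (1 + 2/2) = 15 nM.
print(cheng_prusoff_ki(30.0, 2.0, 2.0))  # 15.0
```

Note that when the radioligand concentration is well below Kd, the correction factor approaches 1 and Ki ≈ IC50.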
Intracellular [Ca2+] Measurements-Neutrophils were suspended in physiological buffer at a density of 1 × 10^7 cells/ml and loaded with 5 μM Indo-1/AM (Molecular Probes, OR) for 30 min at room temperature in the dark. Neutrophils were subsequently washed once with ice-cold phosphate-buffered saline and then resuspended in physiological buffer at 1 × 10^7 cells/ml. HL-60 cells expressing CXCR1 or CXCR2 were suspended in physiological buffer at a density of 1 × 10^7 cells/ml and loaded with Indo-1/AM for 30 min at 37°C in the dark. HL-60 cells were subsequently washed once with ice-cold physiological buffer and resuspended in physiological buffer containing 1 mM probenecid at 1 × 10^7 cells/ml. Neutrophils and HL-60 cells were placed in a continuously stirred cuvette maintained at 37°C in a 650-10S spectrofluorometer (Perkin-Elmer) and stimulated with IL-8 or IL-8 mutants. Fluorescence intensity was measured using an excitation wavelength of 330 nm and an emission wavelength of 405 nm (22).
Superoxide Production Assay-Superoxide release was monitored by continuous fluorometric measurement of 2,2′-dihydroxybiphenyl-5,5′-diacetate produced from p-hydroxyphenylacetate by the enzymatic reduction of H2O2 by horseradish peroxidase (HRP) (24, 25). Neutrophils were suspended at 1 × 10^7 cells/ml in physiological buffer containing 130 mM NaCl, 4.6 mM KCl, 1.1 mM KH2PO4, 1 mM CaCl2, 5 mM glucose, and 20 mM HEPES (pH 7.4). Neutrophils were placed in physiological buffer containing 1 mM p-hydroxyphenylacetate, 20 units of horseradish peroxidase, and 100 μM sodium azide in a continuously stirred cuvette maintained at 37°C in a spectrofluorometer. Neutrophils were stimulated with IL-8 or IL-8 mutants, and fluorescence intensity was measured using an excitation wavelength of 334 nm and an emission wavelength of 425 nm.
Mutations of the IL-8 Epitope Region and Receptor Recognition
Previous studies (20) have shown that the epitope consensus sequence of the blocking anti-IL-8 mAb is H-X-K-F, corresponding to residues His 18 -Pro 19 -Lys 20 -Phe 21 in IL-8. These residues are in or adjacent to the surface-exposed hydrophobic pocket in IL-8 that has previously been implicated in the mechanisms of recognition of the IL-8 receptors (16, 17). Moreover, binding of IL-8 to the N-terminal fragment of CXCR1 causes perturbations in the NMR chemical shifts of His 18 and Phe 21 (26), suggesting that these residues bind to the N-terminal domain of the IL-8 receptors. Single and double IL-8 mutants were created by substituting His 18 and Phe 21 with Ala, Ser, or Asp. All of these mutants exhibited reduced binding to the anti-IL-8 mAb (20). In this study we expressed recombinant IL-8 mutants as fusion proteins linked to thioredoxin and purified them to near homogeneity. Fig. 1 shows the purity of the IL-8 mutants H18D and H18A/F21A as determined by SDS-polyacrylamide gel electrophoresis. To examine whether His 18 and Phe 21 play a role in the mechanisms of recognition and activation of the IL-8 receptors, we initially performed displacement of 125I-IL-8 bound to neutrophil membranes by IL-8 mutants. As shown in Table I, the H18A and F21A mutants exhibited 4- and 3-fold increases in the Ki, respectively, compared with wild-type IL-8, suggesting a moderate reduction in the binding affinity of these mutants toward IL-8 receptors. Substitution of His 18 with Asp, or of Phe 21 with Ser or Asp, resulted in larger increases in the Ki (up to 40-fold) (Table I). To determine whether these two residues interact with the receptors independently or in an overlapping manner, we examined the effect of double substitution mutants. As shown in Table I, mutants with double substitutions exhibited increases in the Ki of up to 636-fold relative to wild-type IL-8.
Since these binding experiments were carried out with neutrophil membranes co-expressing both CXCR1 and CXCR2, we performed similar binding experiments with cells expressing either CXCR1 or CXCR2 to determine whether the IL-8 mutants exhibit receptor-subtype selectivity. Interestingly, as with neutrophil membranes, large increases in the Ki, up to 431-fold, were observed upon binding of IL-8 mutants to cells expressing CXCR1 (Table I). In contrast, a clearly distinct binding profile with less dramatic increases in the Ki, up to 61-fold, was observed upon binding of IL-8 mutants to cells expressing CXCR2 (Table I). Further studies focused on determining the binding profile of these IL-8 mutants to other IL-8 receptors. We chose to test binding of the IL-8 mutants to the promiscuous chemokine receptor of red blood cells, the Duffy antigen, which binds IL-8, MGSA, RANTES (Regulated on Activation, Normal T cell Expressed and Secreted), and Monocyte Chemotactic Protein-1 (MCP-1) (9, 10). Binding of IL-8 to red blood cells showed a Ki = 13.2 ± 0.4 nM (Table I), similar to the previously reported Kd (10). We found that single and double substitution IL-8 mutants showed up to 241-fold increases in the Ki (Table I). Together, these findings indicate that IL-8 recognizes CXCR1, CXCR2, and the Duffy antigen by distinct mechanisms.
Activation of IL-8 Receptors
Activation of IL-8 receptors was evaluated by measuring the agonist-dependent rise in intracellular calcium in neutrophils and in HL-60 cells expressing either CXCR1 or CXCR2. All single and double mutants triggered maximal calcium responses in neutrophils (data not shown), suggesting that the activation mechanisms of the IL-8 mutants are preserved. However, neutrophils co-express both CXCR1 and CXCR2, and it is possible that some mutants selectively activate CXCR1, CXCR2, or both. Mobilization of intracellular calcium was therefore examined in HL-60 cells expressing either CXCR1 or CXCR2. We found that mutants with single substitutions, and the double mutant H18A/F21A, elicited calcium responses in a dose-dependent fashion (Figs. 2 and 3). Maximal calcium responses were achieved at concentrations of IL-8 mutants near their Ki, indicating that full receptor occupancy is not necessary for a maximal calcium response, as previously observed with the fMLP chemoattractant receptor (27). Furthermore, because these mutants elicited maximal calcium responses equal to those of wild-type IL-8, it is likely that these mutated sites are major determinants of binding affinity to the receptors but not of the activation mechanisms of the receptors. On the other hand, double mutants containing either polar and charged or two charged residues showed a unique activation profile. For example, the mutant H18A/F21D triggered a dose-dependent calcium response in HL-60 cells expressing CXCR2, with a maximal response equal to that of IL-8 (Figs. 3 and 4B). In contrast, this mutant elicited only a weak calcium response in HL-60 cells expressing CXCR1 (Figs. 2 and 4A). This observation indicates that double mutants containing one or two charged residues are full agonists of CXCR2 but partial agonists of CXCR1. These data suggest that IL-8 activates CXCR1 and CXCR2 by distinct mechanisms.
Neutrophil Responses
Release of β-Glucuronidase-Except for the double mutants H18A/F21D and H18D/F21D, all single and double mutants triggered release of β-glucuronidase in neutrophils (Fig. 5). This finding suggests that the rise in intracellular calcium mediated by binding of H18A/F21D or H18D/F21D to CXCR2 is not sufficient to elicit neutrophil degranulation.
Superoxide Production-Previous studies have indicated that MGSA, in contrast to IL-8, is a poor activator of superoxide production in neutrophils (28). We tested the effect of the single and double mutants on superoxide production. IL-8 mutants at concentrations ranging from 100 nM to 2 μM failed to trigger superoxide production; the H18A and F21A mutants were only partially active (Fig. 6). These data indicate that activation of calcium responses mediated by both IL-8 receptor subtypes in neutrophils is not sufficient to trigger superoxide production.
DISCUSSION
The results of this work indicate that His 18 and Phe 21 are involved in the mechanisms of recognition by the anti-IL-8 mAb (20), by CXCR1 and CXCR2, and by the Duffy antigen of red blood cells. The data show that IL-8 recognizes CXCR1, CXCR2, and the Duffy antigen by different mechanisms. Furthermore, despite the high degree of sequence homology between CXCR1 and CXCR2, they appear to be differentially activated by IL-8. This is consistent with our previous work indicating that IL-8 binding to IL-8 receptor subtypes causes a higher rate of internalization of CXCR2 than of CXCR1 (29), suggesting that agonist binding to IL-8 receptor subtypes triggers different internalization signals. Finally, this work shows that a transient rise of intracellular calcium is not sufficient to trigger neutrophil responses, including release of β-glucuronidase and superoxide production.
His 18 and Phe 21 are in or adjacent to the surface-exposed hydrophobic pocket that has been argued to drive the association of IL-8 with the receptor through hydrophobic interactions (17). In the present work, substitution of His 18 and Phe 21 with Ala caused modest changes in binding affinity, calcium responses, and release of β-glucuronidase. These observations indicate that the hydrophobic nature of these residues does not play a major role in the mechanisms of recognition and activation of the IL-8 receptors. However, these single Ala mutants are poor activators of superoxide production, suggesting that they are not full agonists in terms of superoxide production. Single or double substitutions of His 18 and Phe 21 with polar or charged residues produced major changes in binding affinity to CXCR1 and the Duffy antigen. In contrast, minor changes were observed with CXCR2. These findings support the idea that this epitope plays a role in determining binding selectivity among IL-8 receptors.
Substitution of His 18 and Phe 21 with Ala, polar, or charged residues could in principle lead to localized, long-range, or gross structural changes. In our studies, the mutants do not appear to show gross structural changes, because all of the mutants trigger full calcium responses in neutrophils and exhibit different binding affinities toward CXCR1, CXCR2, and the Duffy antigen. In addition, the NMR chemical shifts of the mutant F21A are indistinguishable from those of the wild type, suggesting that this mutation causes a localized change (17). His 18 and Phe 21 are in close proximity to each other on the protein (Fig. 7). Whether these two residues overlap or interact independently with the IL-8 receptor subtypes remains to be established.
In summary, our binding data generated with this set of IL-8 mutants strongly argue that distinct mechanisms operate in the recognition of IL-8 by CXCR1, CXCR2, and the Duffy antigen. Binding of IL-8 to the N-terminal fragment of CXCR1 causes perturbations in the NMR chemical shifts of His 18 and Phe 21 (26). This finding, together with our binding data, suggests that the selective recognition of IL-8 receptor subtypes by IL-8 is mediated by the interaction of His 18 and Phe 21 with the variable N-terminal domains of the IL-8 receptors (6).
On the basis of the patterns of calcium responses elicited by this set of IL-8 mutants, we can distinguish at least two sets of mutants. One set, composed of H18A, F21A, H18D, F21D, F21S, H18A/F21A, and H18A/F21S, exhibits wild-type calcium responses and release of β-glucuronidase. The other set, composed of H18A/F21D and H18D/F21D, comprises full agonists of CXCR2 but partial agonists of CXCR1. This finding suggests that the chemical nature of residues 18 and 21 in IL-8 is a major determinant of the calcium responses mediated by CXCR1 but not of those mediated by CXCR2. It is likely that IL-8 contains distinct motifs for activation of the IL-8 receptor subtypes. Interestingly, H18A and F21A release β-glucuronidase like wild-type IL-8 but are poor activators of superoxide production in neutrophils. This finding supports the idea that the single Ala mutants are partial agonists in terms of superoxide production. This study provides the framework for further elucidation of the activation motifs in IL-8 that trigger biological responses in neutrophils, including superoxide production, phagocytosis, and degranulation.
The Relationship of ST Segment Changes in Lead aVR with Outcomes after Myocardial Infarction; a Cross Sectional Study.
Introduction
Among the 12 leads studied in electrocardiography (ECG), lead aVR is perhaps the most neglected, often dismissed as merely the mirror image of the other leads. Therefore, the present study was designed to evaluate the prevalence of ST segment changes in lead aVR and their relationship with patient outcome.
Methods
In this retrospective cross-sectional study, the medical profiles of patients who had presented to the emergency department with a final diagnosis of myocardial infarction (MI) over a 4-year period were evaluated for ST segment changes in lead aVR and their relationship with in-hospital mortality, the number of vessels involved, infarct location, and cardiac ejection fraction.
Results
288 patients with a mean age of 59.00 ± 13.14 (range 18-91) years were evaluated (79.2% male). 168 (58.3%) patients had the mentioned changes (79.2% male). There was no significant relationship between the presence of ST changes in lead aVR and infarct location (p = 0.976), number of vessels involved (p = 0.269), or ejection fraction on admission (p = 0.801). However, ST elevation ≥ 1 mV in lead aVR had a significant relationship with mortality (Odds = 7.72, 95% CI: 3.07-19.42, p < 0.001). Sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios of ST elevation ≥ 1 mV for prediction of in-hospital mortality were 41.66 (95% CI: 22.79-63.05), 91.53 (95% CI: 87.29-94.50), 31.25 (95% CI: 16.74-50.13), 94.44 (95% CI: 90.65-96.81), 0.45 (95% CI: 0.25-0.79), and 0.05 (95% CI: 0.03-0.09), respectively.
Conclusion
Based on the results of the present study, the prevalence of ST segment changes in lead aVR was estimated to be 58.3%. There was no significant relationship between these changes and the number of vessels involved in angiography, infarct location, or cardiac ejection fraction. However, the presence of ST elevation ≥ 1 mV in lead aVR was associated with an approximately 8-fold increase in in-hospital mortality risk.
Introduction
Chest pain caused by myocardial infarction (MI) is among the most common causes of patients visiting emergency departments (EDs). An 80% increase in coronary artery disease cases has been predicted by 2020 in developing regions such as India, Latin America, the Middle East, and sub-Saharan Africa (1, 2). Predicting the outcome of these patients can help with more appropriate triage, informing their relatives, and more intensive care for the more severe cases. The final outcome of these patients depends on various factors such as age, size and location of the infarction, residual left ventricular function, presence of underlying illnesses, and delay in reperfusion of the affected arteries (3-6). Numerous studies have examined the relationship of ECG evidence with angiography findings and the final outcome of these patients; among them, ST segment changes in lead aVR have been associated with poor in-hospital outcome in patients with MI (7-9). In a study by Kukla et al., patients with ST elevation in lead aVR had an in-hospital mortality rate 1.5 times that of those with ST depression and about 30 times that of those without ST segment changes (10). In addition, Senaratne et al. found that the mortality rate was 16 times higher in MI patients with ST depression in lead aVR (11). They noted that although ST segment depression in lead aVR is difficult to diagnose, it can be good evidence of ischemia or injury to the apex and inferolateral regions of the heart. Presence of ST segment depression indicates that a larger area of the cardiac muscle is involved and suggests the need for more invasive interventions (12, 13). Additionally, ST elevation in lead aVR has been associated with increased probability of recurrent infarction, development of cardiac failure, and increased need for coronary artery bypass surgery (12).
A study by Kosuge et al. indicated 78% sensitivity and 86% specificity of more than 0.05 mV ST segment elevation in lead aVR for predicting involvement of all 3 cardiac vessels (14). To date, no similar studies have been carried out on Iranian patients; therefore, the present study was designed to evaluate the relationship of ST segment elevation in lead aVR with the final outcome of patients with MI presenting to the ED.
Study design
In this retrospective cross-sectional study, the medical profiles of patients hospitalized in Taleghani Hospital, Tehran, Iran, between 2012 and 2015 with a final diagnosis of MI were evaluated to assess the prevalence of ST segment changes in lead aVR and their relationship with patient outcome. The study was approved by the ethics committee of Shahid Beheshti University of Medical Sciences. Researchers adhered to the principles of the Helsinki Declaration and kept patients' data confidential.
Participants
Patients hospitalized with a final diagnosis of MI in the mentioned period were evaluated without any age or sex limitations. Inclusion criteria consisted of a time interval of less than 24 hours between onset of chest pain and presentation to the ED, and confirmation of MI based on clinical findings and a rise in cardiac enzymes (CPK-MB or troponin) in the first 48 hours of admission. Patients with a history of coronary artery bypass grafting (CABG), heart blocks including left bundle branch block (LBBB) and right bundle branch block (RBBB), or a pacemaker, as well as cases with incomplete information, were excluded from the study.
Data gathering
Data were gathered using a pre-designed checklist by reviewing patients' medical profiles. A senior cardiology resident was in charge of gathering the data. Studied variables included demographic data (age and sex), vital signs on admission to the ED, medical history, drug history, left ventricular ejection fraction based on echocardiography findings, ECG findings regarding ST segment changes in lead aVR, angiography findings regarding the number and location of involved coronary arteries, and patient mortality. The infarct zone was localized based on ECG findings with echocardiographic confirmation.
Statistical Analysis
The sample size required for the study was estimated at 288 cases, considering a 25% probability of changes being present in lead aVR, a type 1 error of 5%, 80% power, and d = 10%. SPSS 21 software was used for statistical analysis. Variables are reported as mean ± standard deviation or frequency and percentage in tables and a chart. Qualitative variables were compared using chi-square or Fisher's exact tests, and quantitative variables using the t-test. Additionally, sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, and the area under the receiver operating characteristic (ROC) curve of ST elevation ≥ 1 mV in lead aVR were calculated and reported with 95% confidence intervals using MedCalc software. P < 0.05 was considered the significance level.
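As an illustration of how these screening indices are derived from a 2 × 2 table of test result versus outcome, a minimal sketch follows. The counts used below are hypothetical, chosen only for the example; they are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test indices from a 2x2 table:
    rows = test (e.g., ST elevation >= 1 mV present/absent),
    columns = outcome (e.g., died / survived)."""
    sens = tp / (tp + fn)              # sensitivity
    spec = tn / (tn + fp)              # specificity
    ppv = tp / (tp + fp)               # positive predictive value
    npv = tn / (tn + fn)               # negative predictive value
    lr_pos = sens / (1 - spec)         # positive likelihood ratio
    lr_neg = (1 - sens) / spec         # negative likelihood ratio
    odds_ratio = (tp * tn) / (fp * fn)
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "lr+": lr_pos, "lr-": lr_neg, "or": odds_ratio}

# Hypothetical counts: 10 deaths with the ECG sign, 14 deaths without it,
# 22 survivors with the sign, 242 survivors without it.
m = diagnostic_metrics(tp=10, fp=22, fn=14, tn=242)
print({k: round(v, 3) for k, v in m.items()})
```

Confidence intervals for these indices (as reported in the Results) additionally require the cell counts, e.g., a Wilson interval for proportions or a log-scale (Woolf) interval for the odds ratio.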
Baseline characteristics
288 patients with a mean age of 59.00 ± 13.14 (range 18-91) years were evaluated (79.2% male). Mean duration of hospitalization was 6.11 ± 4.82 (1-45) days. 247 (85.8%) of the MI cases were ST-elevation MI (STEMI) and 41 (14.2%) were non-STEMI. Table 1 and Figure 1 show the baseline characteristics of the studied patients. More than half of the patients were over 60 years old (50.3%). Only 11 (3.8%) patients had unstable hemodynamics on presentation to the ED, and all but 10 (3.5%) patients had some degree of decreased cardiac ejection fraction on admission. Based on the results of angiography, 93 (32.3%) cases had three-vessel involvement, 85 (29.5%) had two-vessel involvement, 63 (21.9%) had single-vessel involvement, and in 4 cases angiography was normal. Table 2 depicts the location of the vascular lesion based on angiography results. 168 (58.3%) patients had ST segment changes in lead aVR (79.2% male). Findings regarding ST segment elevation in lead aVR are summarized in Table 3. There was no relationship between the presence of these changes and age (p = 0.260), sex (p = 0.977), or type of MI (p = 0.247).
Relationship of ST segment changes with outcome
There was no significant relationship between presence of ST changes in lead aVR with infarct location (p = 0.976), number of vessels involved (p = 0.269) and cardiac ejection fraction on admission (p = 0.801). Out of the 32 (11.1%) patients who finally died, 25 (78.1%) had ST changes in lead aVR (Odds = 2.72, 95% CI: 1.13 -6.52, p < 0.020).
Relationship of ST elevation ≥ 1 mV with outcome
There was no relationship between the presence of ST elevation ≥ 1 mV in lead aVR and infarct location (p = 0.466), number of vessels involved (p = 0.206), or cardiac ejection fraction on admission.
Relationship of ST depression ≥ 1 mV with outcome
There was no significant relationship between ST depression ≥ 1 mV in lead aVR and infarct location (p = 0.160), number of vessels involved (p = 0.521), cardiac ejection fraction on admission (p = 0.309), or mortality (p = 0.546).
Discussion
Based on the results of the present study, the prevalence of ST segment changes in lead aVR was estimated to be 58.3%. There was no significant relationship between these changes and the number of vessels involved in angiography, infarct location, or cardiac ejection fraction. However, the presence of ST elevation ≥ 1 mV in lead aVR was associated with an approximately 8-fold increase in in-hospital mortality risk. ECG, as a cheap and non-invasive method, has been used for more than 70 years around the world to diagnose cardiac tissue ischemia and MI. Among the 12 leads studied in ECG, lead aVR is perhaps the most neglected, often dismissed as merely the mirror image of the other leads. In recent decades, however, this lead has been regaining its place among cardiologists as an important part of the ECG. ST segment changes can be considered the most important ECG finding in the diagnosis and evaluation of MI. Lead aVR provides a good view of events in the upper and right side of the heart (15). The findings of this lead are usually covered by the information from the left-sided leads such as aVL, II, V5, and V6, which is why it has been neglected. Yamaji et al. (17) reported that more than 0.5 mV ST segment depression in aVR was associated with a higher mortality rate during the 30 days after anterior MI. The last chest lead (V6) is placed in the midaxillary line, and a lead V7 in the posterior axillary line can show widespread ischemia of the cardiac apex more clearly. This underscores the importance of the mirror image in aVR, which reflects ischemia of the cardiac apex and mirrors V7 more than any other lead. In other words, deeper ST depression in aVR is a sign of greater ST elevation not only in leads V5 and V6 but also in V7 (17).
Using the ST segment, T wave, and Q wave in lead aVR to evaluate the current or past condition of patients, such as previous or current MI, has been suggested in various studies (18). Although Wong believes that ST segment changes in lead aVR are not significantly related to mortality of MI patients, in 2012 Kukla et al. reported that among 320 patients with inferior MI who had ST segment changes in various ECG leads, these changes occurred in lead aVR in half of the patients and had a significant relationship with poor prognosis (10, 17). In the present study, a statistically significant relationship was found between ST segment changes in lead aVR and patient mortality: the higher the elevation, the higher the mortality. ST segment depression and an isoelectric ST segment were associated with progressively lower mortality. Some studies have suggested that ST segment elevation in aVR is more important than depression of this segment (12). Kukla et al., in their study of 320 individuals, observed that patients with inferior wall MI and ST elevation in aVR had a mortality rate 1.5 times that of those with ST depression and 30 times that of those with an isoelectric ST in aVR (10). In contrast, Senaratne in 2003 showed that ST segment depression in aVR was associated with 16-fold higher patient mortality compared to other cases (11). Regarding the anatomy of coronary artery involvement and its relationship with changes in lead aVR, no significant relationship was found in our study; however, ST segment elevation in aVR was accompanied by more frequent involvement of all 3 coronary arteries. In addition, although not statistically significant, proximal obstruction of the LAD and medial obstruction of the left circumflex artery were accompanied by higher mortality.
The mortality rate in the present study was 11.1%, which is higher than the mean in-hospital mortality rate (7%) reported in a 2014 meta-analysis by Sorita et al. (19). That meta-analysis, which covered 48 studies and more than 1.5 million patients, reported 30-day mortality of MI patients to be 12%. Although this study was done on 288 MI patients and the sample size and power were acceptable, its only statistically significant finding was the relationship between ST segment elevation in lead aVR and in-hospital mortality, which is of course highly important; ST changes in lead aVR did not vary significantly with LVEF, coronary artery involvement, or its location. It seems that prospective cohort studies considering all of the identified risk factors for predicting MI patients' outcome could give a more accurate picture of the role of lead aVR findings in predicting outcome and estimating the location or extent of necrosis following ischemia.
Limitation
This study was done retrospectively by evaluating patients' medical profiles and therefore has the inherent limitations of such studies, including missing information and uncertainty about the accuracy of the records. Other known risk factors for patient outcome were not evaluated; thus, multivariate analysis to identify independent factors could not be performed.
Conclusion
Based on the results of the present study, the prevalence of ST segment changes in lead aVR was estimated to be 58.3%. There was no significant relationship between these changes and the number of vessels involved on angiography, infarct location, or cardiac ejection fraction. However, ST elevation ≥ 1 mV in lead aVR was associated with an 8-fold increase in in-hospital mortality risk.
Quality of internal representation shapes learning performance in feedback neural networks
A fundamental feature of complex biological systems is the ability to form feedback interactions with their environment. A prominent model for studying such interactions is reservoir computing, where learning acts on low-dimensional bottlenecks. Despite the simplicity of this learning scheme, the factors contributing to or hindering the success of training in reservoir networks are in general not well understood. In this work, we study non-linear feedback networks trained to generate a sinusoidal signal, and analyze how learning performance is shaped by the interplay between internal network dynamics and target properties. By performing exact mathematical analysis of linearized networks, we predict that learning performance is maximized when the target is characterized by an optimal, intermediate frequency which monotonically decreases with the strength of the internal reservoir connectivity. At the optimal frequency, the reservoir representation of the target signal is high-dimensional, de-synchronized, and thus maximally robust to noise. We show that our predictions successfully capture the qualitative behaviour of performance in non-linear networks. Moreover, we find that the relationship between internal representations and performance can be further exploited in trained non-linear networks to explain behaviours which do not have a linear counterpart. Our results indicate that a major determinant of learning success is the quality of the internal representation of the target, which in turn is shaped by an interplay between parameters controlling the internal network and those defining the task.
Introduction
A fundamental feature of the brain, and biological networks in general, is the ability to form closed-loop interactions with their environment. Such interactions are often implemented through a dimensionality bottleneck: while networks typically consist of large numbers of units, signals exchanged with the environment are low-dimensional. In fact, external stimuli can often be represented in terms of a few scalar variables (e.g. the angle and speed of a tennis ball approaching); these low-dimensional variables are encoded in the high-dimensional activity of a large population of neurons [1,2] before being again transformed into low-dimensional decision variables and motor outputs (e.g. the angle and speed of the hand holding the racket).
Simple but effective models for studying closed-loop interactions are feedback networks. These models implement a simple form of closed-loop interaction: the output (or readout) signal, which is extracted from a reservoir of randomly connected units as a linear combination of unit activities, is directly injected back into the reservoir as external input [3,4]. By adjusting the weights which specify how reservoir activity is mapped to the output, feedback networks can be trained to produce the desired readout signal. In the most common training algorithms [5,6,7], readout weights are updated through least-squares (LS) regression; this can be performed only once, by using a complete batch of activity samples [5], or in an online fashion, by recursively integrating activity samples as they are simulated [7,8].
What kind of closed-loop dynamics can feedback networks implement? Despite some theoretical advancement [4,9,10,11,12], the computational properties of feedback networks are still poorly understood.
Early theoretical work has indicated that most feedback models are expected to be able to approximate readout signals characterized by arbitrarily complex dynamics [4]. However, it has been reported that not all feedback architectures and target dynamics result in the same performance: trained networks can experience dynamical instabilities [10,11], and converge to fragile solutions for certain choices of the feedback architecture and parameters [7,13].
For a fixed task, several studies have reported that training performance is strongly influenced by the overall strength of recurrent connections in the reservoir [14,7,6]. Specifically, performance is high when recurrent connections are strong, but not strong enough to lead to the appearance of chaotic activity [15], a parameter region named the edge-of-chaos [16]. Intuitively, the edge-of-chaos defines an optimal tradeoff point where the internal reservoir dynamics are rich but stable.
Reservoir activity, however, is not determined by connectivity alone: because the system is coupled to the environment, activity depends also on the statistics and dynamics of the target signal, which specify the task. How the internal reservoir dynamics interact with the target in determining the performance of trained networks is a fundamental question in feedback systems which is still not well understood [17]. In particular: are there specific target features which optimize performance, and how do they depend on internal properties of the reservoir network? For given values of the target parameters, what are the properties of reservoir activity that support optimal training? How sensitive is the optimal performance to the learning algorithm? To date, these questions remain largely unsolved.
In this work, we consider a simple setup consisting of a non-linear reservoir of rate units which is trained to sustain a sinusoidal output with given frequency ω. Consistently across three different training techniques, we find that learning performance is maximized at a finite "preferred" frequency ω, which in turn depends on reservoir connectivity: as the connectivity strength is increased towards the edge of chaos, ω decreases towards zero. This nontrivial dependence of performance, even in a simple task, provides a test case to study the interplay between reservoir and target properties and its effects on learning.
To gain analytical insight into this phenomenon, we consider a simplified setup where reservoir dynamics are linearized, and perform exact mathematical analysis. By averaging over the ensemble of random reservoir networks, we characterize reservoir activity in response to the target signal, and show that a "resonance" frequency ω* emerges, which decreases with the connectivity strength. At this frequency, the dimensionality of neural activity is maximal and synchrony across different units in the reservoir is minimal. When training the network to output the target signal, feedback interactions are most robust in the vicinity of the resonance frequency, thus resulting in optimal performance. Moreover, this behaviour is predicted to be qualitatively consistent across different training algorithms, even if performance itself is sensitive to the algorithm used. We show that our theoretical predictions correctly capture the qualitative behaviour of learning performance observed numerically in non-linear network models. Overall, our results shed light on the learning capacity of recurrent network architectures by quantifying how learning precision is determined by the interaction between internal reservoir connectivity and target dynamics.
Emergence of a preferred frequency in trained feedback networks
We consider a reservoir network consisting of N units characterized by the evolution dynamics ẋ(t) = −x(t) + J Φ(x(t)) + m u(t), where Φ(x) = tanh(x) is applied to the activation vector x element-wise. Recurrent weights J are fixed, and are generated independently from the normalized Gaussian distribution N(0, g²/N) [15,7], so that the parameter g controls the strength of reservoir connectivity. The one-dimensional external signal u(t) acts as a forcing on the reservoir through input weights m, which are fixed and drawn as independent standard Gaussian variables.
The output of the reservoir network is a one-dimensional readout signal, defined as z(t) = nᵀΦ(x(t)) through a set of decoding weights n that are assumed to be plastic. The feedback is realized by using the output signal as input: u(t) = z(t) (see Fig. 1A for an illustration), which yields the final autonomous dynamics ẋ(t) = −x(t) + (J + mnᵀ)Φ(x(t)). During training, the vector n is updated until the output z(t) best matches the desired target f(t). The target function that we consider is a simple sinusoidal wave of frequency ω, i.e. f(t) = A cos(ωt).
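These dynamics can be sketched numerically with a simple forward-Euler integration of the closed-loop system. This is a minimal illustration, not the authors' code: all parameter values are illustrative, and the readout n is left random (untrained) just to show how the feedback loop is wired.

```python
import numpy as np

# Sketch: Euler integration of x' = -x + (J + m n^T) tanh(x),
# written in open form with the readout fed back as input u(t) = z(t).
rng = np.random.default_rng(0)
N, g, dt, T = 200, 0.8, 0.05, 2000

J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # reservoir weights ~ N(0, g^2/N)
m = rng.normal(0.0, 1.0, size=N)                  # input (feedback) weights
n = rng.normal(0.0, 1.0 / np.sqrt(N), size=N)     # placeholder readout (untrained)

x = rng.normal(0.0, 0.1, size=N)                  # initial condition
z_trace = np.empty(T)
for t in range(T):
    phi = np.tanh(x)
    z = n @ phi                                   # readout z(t) = n^T tanh(x)
    x = x + dt * (-x + J @ phi + m * z)           # feedback: input u(t) = z(t)
    z_trace[t] = z
```

With an untrained readout the output is not the target sinusoid, of course; the point is only the structure of the loop, which the training algorithms below act upon by choosing n.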
We trained multiple instances of this feedback architecture and analyzed how performance depends on the frequency of the target signal ω and on the internal coupling strength g (Fig. 1). Three common training algorithms (least-squares (LS) regression, ridge regression [18], and recursive least-squares (RLS) [19,7]) were used (training details are reported in Appendix 4.1). We quantified the error as the mismatch between the target f(t) and the readout z(t), averaged over a finite number of target cycles in the post-training activity.
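The batch LS variant can be sketched as follows; this is a hedged illustration with made-up parameter values, not the paper's exact protocol. The reservoir is driven in open loop with the target, activity samples are collected after a transient, and the readout is obtained in one regression step.

```python
import numpy as np

# Sketch of batch least-squares readout training in the open loop:
# drive with f(t) = A cos(w t), collect tanh(x(t)), regress n so that
# n^T tanh(x(t)) ~ f(t). Parameters are illustrative assumptions.
rng = np.random.default_rng(1)
N, g, dt, T, A, w = 200, 0.8, 0.05, 4000, 1.0, 0.5

J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
m = rng.normal(0.0, 1.0, size=N)

x = np.zeros(N)
X, F = [], []
for t in range(T):
    f = A * np.cos(w * t * dt)
    x = x + dt * (-x + J @ np.tanh(x) + m * f)   # open loop: input u(t) = f(t)
    if t > T // 2:                               # discard transient
        X.append(np.tanh(x))
        F.append(A * np.cos(w * (t + 1) * dt))
X, F = np.array(X), np.array(F)

n, *_ = np.linalg.lstsq(X, F, rcond=None)        # batch LS solution
train_err = np.sqrt(np.mean((X @ n - F) ** 2)) / A
```

The fit error on the training window is essentially negligible here; as the paper stresses, the interesting question is what happens when this readout is used in closed loop.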
We observe that, for fixed reservoir connectivity g, the accuracy of signal reconstruction by the output strongly depends on the target frequency: while training on one frequency results in a highly precise readout for many cycles, others result in a runaway from the target signal (Fig. 1B). For every value of g, the error has a non-monotonous dependence on ω, and reaches a minimum at a finite frequency that we name ω (Fig. 1C). Each curve, corresponding to a different value of g, has a different optimal frequency: specifically, ω decreases as the strength of reservoir connectivity g increases from zero towards the edge-of-chaos (Fig. 1D; see Appendix 4.11 for a characterization of the edge-of-chaos in our framework).
Although the exact value of the preferred frequency ω is found to be algorithm-dependent, the same qualitative behaviour is observed consistently across the three different algorithms we used for training.
It is also observed for both small and large amplitudes of the target signal A, which are expected to elicit, respectively, weakly or strongly non-linear activity in the reservoir.
The observations from Fig. 1 provide a striking example of the non-trivial interplay between reservoir features (the connectivity parameter g) and external task parameters (the target frequency ω) in determining learning performance. Because the network is completely random, one might naively think that its dynamics do not exhibit a typical timescale, and are thus blind to the signal frequency; instead, the network appears to have its preference even for a simple task. In the rest of this paper, we aim to understand this observation in detail through mathematical analysis.
To this end, we consider a simplified model which greatly eases the analysis: the case of linear reservoir dynamics (Φ(x) = x). The analysis strategy we use consists of two steps [10]. To begin with, we examine the feedback network in an open-loop setup (Fig. 1A, yellow), where the encoding of the input and the decoding of the output signals can be analyzed separately. In the encoding phase, we take the input to the reservoir network to be identical to the target function, u(t) = f(t), and characterize analytically the reservoir response x(t) both at the level of single units and the population as a whole (Section 2.2). In the decoding phase, we use the reservoir response to pick a readout n which allows the network to reconstruct the correct output: nᵀx(t) = f(t) (Section 2.3). At that point, our feedback architecture admits the desired target as a solution; to investigate the success of such solutions in performing the task, in Section 2.4 we close the loop (Fig. 1A, purple), and characterize the stability of the dynamics.
Open loop: encoding the target signal
We begin our analysis by examining encoding in the open-loop framework: this corresponds to a random reservoir with linear dynamics driven by the target signal. The time evolution is described by ẋ(t) = −x(t) + Jx(t) + m f(t); here J is a Gaussian random matrix as defined above, and to avoid dynamic instabilities we consider g < 1 [20]. The linear dynamics are indifferent to the amplitude of the input, so we set A = 1; in response to the periodic input f(t) = cos(ωt), the stationary solution for t → ∞ is x(t) = x₊e^{iωt} + x₋e^{−iωt}, where x± = ½[(1 ± iω)I − J]⁻¹m are complex conjugate vectors representing the reservoir activity in Fourier space (see Appendix 4.2).
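This stationary solution can be verified numerically. The sketch below (illustrative parameters) integrates the driven linear dynamics with forward Euler and compares the late-time state with the Fourier-mode formula:

```python
import numpy as np

# Check: integrate x' = -x + J x + m cos(w t) and compare the late-time
# state with x(t) = x+ e^{iwt} + x- e^{-iwt}, x+ = (1/2)[(1+iw)I - J]^{-1} m.
rng = np.random.default_rng(2)
N, g, w, dt = 150, 0.5, 0.8, 0.005

J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
m = rng.normal(0.0, 1.0, size=N)
I = np.eye(N)
x_plus = 0.5 * np.linalg.solve((1 + 1j * w) * I - J, m)

x = np.zeros(N)
steps = int(40.0 / dt)                      # long enough for transients to decay
for t in range(steps):
    x = x + dt * (-x + J @ x + m * np.cos(w * t * dt))
t_end = steps * dt

# x- = conj(x+), so x(t) = 2 Re(x+ e^{iwt})
x_theory = 2.0 * np.real(x_plus * np.exp(1j * w * t_end))
rel_err = np.linalg.norm(x - x_theory) / np.linalg.norm(x_theory)
```

The residual mismatch is of the order of the Euler discretization error, confirming that the Fourier-mode expression is the driven steady state.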
A geometric description. The stationary solution may be written as x(t) = v₊cos(ωt) + v₋sin(ωt), showing that activity occupies the plane spanned by the two vectors v±, given by the real and imaginary parts of x₊: v₊ = 2R(x₊) and v₋ = −2I(x₊). In this plane, the state-space trajectory is a closed, elliptic curve (Fig. 2A), with geometry determined by the spanning vectors.
The spanning vectors v±, in turn, depend both on the recurrent connectivity J and on the driving frequency ω (Eq. (6)). Their geometry is self-averaging in the limit of large networks, and can be computed by averaging over the ensemble of randomly connected reservoir networks (see Appendix 4.3).
Fig. 2B shows the dependence on ω of the norms ‖v±‖. For very small frequencies, the trajectory follows the drive adiabatically and v₋ ≈ 0; there is practically only one spanning vector. As the frequency increases, the response acquires a phase shift and the second spanning vector v₋ becomes non-negligible.
At high frequencies, both norms decrease due to the filtering property of the network; the norm of the second spanning vector is therefore maximal at an intermediate frequency.
We quantify the elliptical trajectory by its linear dimensionality, i.e. the participation ratio computed from the principal components of reservoir activity [21,22]. Denoting the activity cross-correlation matrix by C := (1/T)∫₀ᵀ x(t)x(t)ᵀ dt and its eigenvalues by νᵢ, the trajectory dimensionality d is defined as d = (Σᵢνᵢ)² / Σᵢνᵢ². By using Eq. (7), and by integrating out time, we find that C is a rank-two matrix, C = ½(v₊v₊ᵀ + v₋v₋ᵀ), whose non-zero eigenvalues (which we take to be ν₁, ν₂) are identical to those of the 2 × 2 reduced cross-correlation matrix C_R, with entries (C_R)ₐᵦ = ½ vₐᵀvᵦ for a, b ∈ {+, −} [23]. Explicitly computing the eigenvalues of C_R yields the expression d = (1 + r²)² / (1 + r⁴ + 2r²cos²θ). We observe that the linear dimensionality, which is bounded between 1 and 2, is insensitive to the overall trajectory magnitude, but depends on the ratio of norms r = ‖v₋‖/‖v₊‖ and on the angle θ between the spanning vectors. The ratio r indicates how much the curve is squeezed along a single direction, with both extremes (r very small or very large) resulting in trajectories squeezed along the dominant spanning vector. For a fixed angle, as the ratio passes through r = 1, the trajectory goes through a shape which is most similar to a circle and has maximal dimensionality. For a fixed r, the angle θ determines to what degree the curve is skewed relative to a perfect ellipse; the dimensionality increases monotonically as θ opens up from zero to π/2.
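The closed-form dimensionality can be sanity-checked against the participation ratio computed directly from a densely sampled elliptic trajectory. This is a sketch with arbitrary random spanning vectors, not the paper's simulation setup:

```python
import numpy as np

# Check d = (1+r^2)^2 / (1 + r^4 + 2 r^2 cos^2(theta)) against the
# participation ratio of C computed from x(t) = v+ cos(wt) + v- sin(wt).
rng = np.random.default_rng(3)
N = 50
v_plus = rng.normal(size=N)
v_minus = rng.normal(size=N)

r = np.linalg.norm(v_minus) / np.linalg.norm(v_plus)
cos_th = v_plus @ v_minus / (np.linalg.norm(v_plus) * np.linalg.norm(v_minus))
d_formula = (1 + r**2) ** 2 / (1 + r**4 + 2 * r**2 * cos_th**2)

ts = np.linspace(0, 2 * np.pi, 2000, endpoint=False)   # one full cycle
X = np.outer(np.cos(ts), v_plus) + np.outer(np.sin(ts), v_minus)  # T x N
C = X.T @ X / len(ts)                     # empirical cross-correlation matrix
nu = np.linalg.eigvalsh(C)
d_pr = nu.sum() ** 2 / (nu**2).sum()      # participation ratio
```

The two values agree to numerical precision, and the formula stays within the stated bounds 1 ≤ d ≤ 2 for any r and θ.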
Examination of the vector norms ‖v±‖ in Fig. 2B indicates that they intersect at a frequency value that we name ω*, where r = 1. Fig. 2C shows how the angle θ varies as a function of frequency; surprisingly, we find that it displays a maximum at ω*. These dependencies are reflected in the behaviour of the dimensionality (Fig. 2D), which itself attains a maximum at frequency ω*. Our mathematical analysis reveals that (see Appendix 4.4) ω* = √(1 − g²), i.e. the resonance frequency ω* decreases to zero as g increases towards the instability boundary (g = 1).
This analytic result is in excellent agreement with finite network simulations, as shown in Fig. 2E.
A single-unit description. The analysis above considered the geometry of trajectories spanned by the reservoir population in its high-dimensional activity space, and revealed that the trajectory dimensionality is maximized at the resonance frequency ω*. An alternative viewpoint is obtained by considering the statistics of single-unit activity profiles across the population. As we shall see, this alternative perspective reveals that the optimal frequency ω* has a second natural interpretation in terms of population synchrony.
To do so, we derive a self-consistent expression for x₊ by inserting Eq. (5) into the evolution equations (Eq. (4)): x₊ = (m/2 + Jx₊)/(1 + iω). This form highlights that the vector x₊ is given by the sum of two contributions: one associated with the external forcing via the input vector m, and one associated with the reservoir response via the recurrent input Jx₊. Since J is random, the direction of the latter contribution is random (i.e., it varies across realizations of J), but its amplitude is self-averaging and depends on the strength of recurrent connectivity g [15].
We use Eq. (13) to gain intuition about how the network encodes the external oscillatory signal at the level of single-unit activity. To this end, we visualize the entries of the x₊ vector as points in the complex plane: (x₊)ᵢ = Rᵢe^{iφᵢ}, where Rᵢ and φᵢ represent the amplitude and phase with which a single unit responds to the forcing input (Fig. 3A). How are points corresponding to different units distributed on the complex plane? When recurrent connections are very weak (g ≈ 0), different units behave as uncoupled filters of the input; we have x₊ ∝ m/(1 + iω), implying that the real and imaginary parts of (x₊)ᵢ for different i are proportional to each other. As a consequence, points on the complex plane are collinear (Fig. 3A left), and the phases of all units are identical. Responses of different units are thus synchronized (Fig. 3B left). As g grows from 0, the second term in Eq. (13), which originates from recurrent interactions, starts spreading the real and imaginary parts of (x₊)ᵢ away from this line (Fig. 3A right), and introduces variability in response phases (Fig. 3B right).
For fixed values of g and ω, the distribution of dots on the complex plane is a bivariate Gaussian (Fig. 3A); a narrow distribution corresponds to highly synchronized units, and its broadening at stronger coupling indicates their desynchronization. As both m and J are generated from a centered Gaussian distribution, the mean of the distribution vanishes. The covariance is determined by the spanning vectors: the variances of the real and imaginary parts are proportional to ‖v₊‖² and ‖v₋‖², and their covariance to −v₊ᵀv₋, implying that the shape of the distribution is controlled by the statistics of the spanning vectors v₊ and v₋.
The similarity between this covariance matrix and the reduced cross-correlation matrix C_R (Eq. (9)) analyzed in the previous paragraph suggests that synchrony in single-unit responses and dimensionality of state-space trajectories are deeply related properties of reservoir activity. To formalize this relationship, we compute the spread of the phases φᵢ across the reservoir population, i.e. the standard deviation Δφ = (∫ dφ p(φ)(φ − ⟨φ⟩)²)^{1/2}, where p(φ) is the probability distribution of phases for a bivariate Gaussian distribution ([24], see Appendix 4.5). The phase spread for different values of the recurrent strength g and frequency ω is plotted in Fig. 3C. These results show that it monotonically increases with g; for any fixed g, it reaches a maximum at a finite frequency value, given again by ω* = √(1 − g²) (Fig. 3D).
To conclude, we have examined the behaviour of single-unit activity in response to a sinusoidal forcing input. In line with classical mean-field studies, we have analyzed the statistical distribution of single-unit activity profiles across the reservoir population [15,25,9]. This approach has revealed that, for fixed g, ω* corresponds to the frequency at which single-unit activity is maximally desynchronized.
Note that historically, desynchronized single-unit profiles have been pointed out as a desirable feature of reservoir activity, as temporally heterogeneous profiles form a rich set of basis functions from which complex target functions can be reconstructed [26].
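The desynchronization picture can be illustrated numerically. The following sketch (illustrative parameters, not the paper's code) computes the single-unit phases from x₊ for weak and strong recurrence; since collinear points come in antipodal pairs (the entries of m have random signs), the spread is measured on phases modulo π, as in standard axial statistics.

```python
import numpy as np

# Sketch: circular spread of single-unit response phases phi_i = arg((x+)_i),
# for weak (g = 0.1) vs strong (g = 0.8) recurrent coupling.
rng = np.random.default_rng(4)
N, w = 400, 0.6
m = rng.normal(size=N)
I = np.eye(N)

def phase_spread(g):
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    x_plus = 0.5 * np.linalg.solve((1 + 1j * w) * I - J, m)
    phi = np.angle(x_plus)
    # axial statistics (phases mod pi): double the angles before averaging
    R = np.abs(np.mean(np.exp(2j * phi)))   # mean resultant length
    return 0.5 * np.sqrt(-2.0 * np.log(R))  # circular standard deviation

spread_weak = phase_spread(0.1)
spread_strong = phase_spread(0.8)
```

Consistent with the analysis above, the spread grows substantially with g: stronger recurrence desynchronizes the single-unit responses.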
Open-loop setup: decoding the internal representation
After having characterized the reservoir activity during stimulus encoding, we turn to the decoding step of the open-loop analysis. Decoding corresponds to finding a readout vector n ∈ Rᴺ which satisfies nᵀx(t) = f(t): the projection of the driven reservoir activity along n needs to match the target f(t) (Fig. 4A, yellow).
In terms of the Fourier-space representation (Eq. (5)), n is a solution to the set of two linear equations given by nᵀx₊ = nᵀx₋ = ½. When g = 0, interactions vanish and the equations above read nᵀm = 1 ± iω, which cannot be satisfied by any real n. This scenario corresponds to completely synchronized reservoir activity or, equivalently, activity spanning one-dimensional state-space trajectories. For any g > 0, on the other hand, this system of equations is under-determined, since it fixes only 2 among the N degrees of freedom in n.
We explore the effect of these degrees of freedom by defining a family of readout vectors n parametrized by an integer k, where k = 2, ..., N; k indicates the number of reservoir units from which the readout signal is reconstructed. We term such solutions from-k regression. To obtain such a solution, we set all elements of n except for the first k to zero, and then solve Eq. (17) by considering the least-squares (LS) solution of minimal norm, which can be computed through the pseudo-inverse (see Appendix 4.6). When k = N, we obtain the full LS solution, which reads n_LS = C⁺(1/T)∫₀ᵀ x(t)f(t)dt or, in terms of the v± vectors, n_LS = ½(P₁₁v₊ + P₂₁v₋), where we defined the short-hand notation P := C_R⁻¹. All the readouts within the from-k family exactly solve the task in the open-loop setup. However, it is not clear a priori whether all of them are equivalent when closing the loop, i.e. when the feedback network is required to autonomously generate the target signal (Eq. (3)). In the following, we assess the dynamics and stability of closed-loop networks corresponding to the different choices of the readout n.
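A from-k solution can be sketched directly: in the Fourier representation, the two constraints nᵀx₊ = nᵀx₋ = ½ are equivalent to the real pair nᵀv₊ = 1, nᵀv₋ = 0, which is solved on the first k entries of n via the pseudo-inverse. Parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch of "from-k" regression: minimum-norm solution of the two
# open-loop constraints n^T v+ = 1, n^T v- = 0 using only k units.
rng = np.random.default_rng(5)
N, g, w, k = 300, 0.6, 0.7, 10
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
m = rng.normal(size=N)

x_plus = 0.5 * np.linalg.solve((1 + 1j * w) * np.eye(N) - J, m)
v_plus, v_minus = 2 * x_plus.real, -2 * x_plus.imag

A = np.vstack([v_plus[:k], v_minus[:k]])    # 2 x k constraint matrix
b = np.array([1.0, 0.0])
n = np.zeros(N)
n[:k] = np.linalg.pinv(A) @ b               # minimum-norm solution on k units
```

For any k ≥ 2 this exactly satisfies the open-loop constraints; as the text discusses next, the differences between such solutions only appear once the loop is closed.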
Closing the loop: autonomous signal generation
In the previous two sections, we have analyzed how random networks encode a one-dimensional periodic signal, and how the network response can be used to reconstruct the same signal as output. Ultimately, we want the encoding and the decoding steps to be self-consistent, i.e. we require u(t) = z(t) = nᵀx(t), which is equivalent to transforming our problem from an open-loop to a closed-loop setup, where the dynamics are autonomous and follow Eq. (3) with linear interactions: ẋ(t) = −x(t) + (J + mnᵀ)x(t), with n satisfying Eq. (16). This step is illustrated in Fig. 4A by the purple feedback arrow connecting the reservoir output to the input. If closing the loop does not perturb activity in the reservoir by changing its stability properties, then at every time point the readout nᵀx(t) = cos(ωt) is fed back into the system, and the solution obtained through the open-loop setup is self-consistent.
The solutions to Eq. (21) and their stability are fully characterized by the eigenspectrum of J + mnᵀ (the leak term in the dynamics contributes by uniformly shifting the spectrum by −1).
For N sufficiently large, the eigenvalues of J are distributed uniformly in a disk of radius g < 1 [20].
The position of some or all of the eigenvalues can, however, be modified by the rank-one perturbation mnᵀ; we refer to these as outliers. In order for the closed-loop system to stably sustain the periodic activity we found in the encoding step, the eigenspectrum of J + mnᵀ must satisfy two key requirements: (i) a pair of complex outlier eigenvalues with values λ± = 1 ± iω (which ensures that a periodic trajectory of frequency ω is realized), and (ii) a stable bulk of remaining eigenvalues, R(λ) < 1 for all λ ≠ λ± (which ensures that no runaway activity is generated along other directions).
All eigenvalues of J + mnᵀ are roots of the characteristic polynomial det[(J + mnᵀ) − λI] = 0. The Matrix Determinant Lemma [13,12] allows us to decompose this polynomial into two factors, corresponding to the two sets of eigenvalues: det[(J + mnᵀ) − λI] = [1 + nᵀ(J − λI)⁻¹m] det(J − λI). It is seen that the second factor vanishes on the spectrum of J, whereas the first factor vanishes for the outlier eigenvalues. The outliers therefore satisfy nᵀ(λI − J)⁻¹m = 1. If n satisfies Eq. (16), then λ± = 1 ± iω are indeed solutions, implying that condition (i) is satisfied.
Note that the eigenvectors corresponding to λ± are identical to the vectors x±, as (J + mnᵀ)x± = λ±x±. Thus, fixing n in the open-loop framework is equivalent to directly controlling the value of the target-relevant eigenvalues in the eigenspectrum of the closed-loop network. We next examine whether this pair of eigenvalues are the only outliers generated by closing the loop: while Eq. (24) is guaranteed to have λ± = 1 ± iω as solutions, other solutions might be admitted which could violate requirement (ii). Such potential solutions depend on the overlap between the vectors n and x_λ = (λI − J)⁻¹m. Note that if the readout n were random (and thus uncorrelated with J and m), this overlap would vanish and no additional outliers would be generated.
In the case of the full LS solution (k = N, Eqs. (18)-(19)), the readout vector n_LS is contained in the plane spanned by the vectors v±. As a consequence, the overlap between n and x_λ can be expanded in terms of x±ᵀx_λ. In the limit N → ∞, these terms have a simple form which can be evaluated analytically (see Appendix 4.8), yielding a quadratic equation in λ whose coefficients involve P₁₁ and P₂₁, the elements of the first column of P = C_R⁻¹, and thus depend on g and ω. As the equation is quadratic, it admits λ = λ± as its unique solutions. Therefore, in large networks, LS training is guaranteed to result in stable dynamics, as no additional outliers are generated in the eigenspectrum other than the task-relevant ones. This is confirmed by numerical simulation in the left panels of Fig. 4B.
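The construction can be checked at finite size. This sketch (illustrative parameters) builds the full LS readout inside the v± plane, closes the loop, and verifies that the spectrum of J + mnᵀ contains the target outlier pair 1 ± iω:

```python
import numpy as np

# Sketch: full LS readout n_LS = (1/2)(P11 v+ + P21 v-), P = C_R^{-1},
# and the resulting outlier eigenvalues of the closed-loop matrix.
rng = np.random.default_rng(6)
N, g, w = 300, 0.6, 0.7
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
m = rng.normal(size=N)

x_plus = 0.5 * np.linalg.solve((1 + 1j * w) * np.eye(N) - J, m)
v_plus, v_minus = 2 * x_plus.real, -2 * x_plus.imag
V = np.stack([v_plus, v_minus], axis=1)
C_R = 0.5 * V.T @ V                          # reduced cross-correlation matrix
P = np.linalg.inv(C_R)
n = 0.5 * (P[0, 0] * v_plus + P[1, 0] * v_minus)   # full LS readout

eigs = np.linalg.eigvals(J + np.outer(m, n))
dist = np.min(np.abs(eigs - (1 + 1j * w)))   # distance to the target outlier
```

By construction nᵀx₊ = ½, so x₊ is an exact eigenvector of J + mnᵀ with eigenvalue 1 + iω, and the numerical spectrum indeed contains the target pair.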
In the more general case of from-k LS regressors with k < N, readout vectors might contain extra components that are correlated with J and m and are not fully contained within the spanning plane; as a consequence, more than two outlier eigenvalues and unstable dynamics can be expected. The right panels of Fig. 4B show an example of such a situation (in the simulation, k = 2). One outlier eigenvalue with R(λ) > 1 is seen in the top panel, which induces the dynamic instability seen in the bottom panel.
As k decreases from the maximal value (N) to the minimal one (2), the component of the readout vector n outside of the v± plane becomes larger (Fig. 4C). Numerical analysis indicates that, correspondingly, the fraction of networks with unstable dynamics increases (Fig. 4D).
In summary, we have shown that, although the open-loop setup admits multiple exact solutions, different solutions are not equivalent in terms of dynamical stability when the loop is closed.
Stability properties are related to the orientation of the readout vector relative to the driven open-loop trajectory. In the case of the full LS solution (Eq. (18)), the readout n_LS is completely aligned with the trajectory plane, and closed-loop dynamics are guaranteed to be stable. Other solutions generally contain components outside of this plane, which can cause activity to diverge.
Predicting performance of trained linear networks
We now turn back to the problem of understanding performance in trained feedback networks and its dependence on the target frequency ω. We start by considering linear feedback networks which are trained as in Fig. 1C. Consider first the encoding phase of learning (Section 2.2), where reservoir activity is stimulated.
Because of noise, learning algorithms may not have access to the true spanning vectors v±. Rather, we assume that corrupted versions ṽ± = v± + ξ± (where the entries of ξ± are independent Gaussian noise) are measured. The estimated LS readout then reads ñ_LS = ½(P̃₁₁v₊ + P̃₂₁v₋) + ½(P̃₁₁ξ₊ + P̃₂₁ξ₋), where P̃ is the inverse reduced cross-correlation matrix which includes the noise disturbance.
As in Section 2.4, we can characterize the closed-loop dynamics by computing the outlier eigenvalues of J + mñ_LSᵀ. The second term on the r.h.s. of Eq. (27) is random and uncorrelated with m and J, and therefore does not affect the position of the outlier eigenvalues. In contrast, the first term is a vector fully aligned with the noise-free spanning vectors v±, which generates two outlier eigenvalues λ̃±. Because of the noise, their values deviate from the target eigenvalues λ±; they are solutions of an equation identical to Eq. (26), but with P replaced by P̃. For every noise realization, the inverse reduced cross-correlation matrix P̃ is perturbed in a random direction, yielding random modifications to the target eigenvalues λ±. We can estimate the average mismatch between λ̃± and λ± from the sensitivity of the matrix P to perturbations, which is quantified by the condition number of the reduced correlation matrix C_R, i.e. the ratio between its largest and smallest eigenvalue [27,17]: c = ν_max/ν_min. The value of c and its dependence on ω and g can be computed by taking the limit N → ∞ and averaging over the ensemble of networks. Fig. 5A shows that, for fixed connectivity strength g, the condition number is a non-monotonic function of the forcing frequency ω, and attains a minimum at the resonance frequency ω* = √(1 − g²) (see Appendix 4.9). Thus, when training linear feedback networks through noisy LS regression, we expect that, for fixed g, the readout would be closest to the desired one at ω = ω*, where the reduced cross-correlation matrix is most robust to noise. This robustness directly reflects the properties of the internal representation of the target signal within the reservoir, which is characterized by maximal dimensionality and minimal synchrony at ω = ω*.
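The non-monotonic behaviour of the condition number can be probed in a finite network; this sketch (illustrative parameters) compares c at the predicted resonance with values at much lower and much higher frequencies:

```python
import numpy as np

# Sketch: condition number of the reduced cross-correlation matrix C_R
# as a function of the driving frequency w.
rng = np.random.default_rng(7)
N, g = 600, 0.6
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
m = rng.normal(size=N)
I = np.eye(N)

def cond_CR(w):
    x_plus = 0.5 * np.linalg.solve((1 + 1j * w) * I - J, m)
    V = np.stack([2 * x_plus.real, -2 * x_plus.imag], axis=1)  # [v+, v-]
    nu = np.linalg.eigvalsh(0.5 * V.T @ V)
    return nu.max() / nu.min()

w_star = np.sqrt(1 - g**2)          # predicted resonance frequency (= 0.8 here)
c_res = cond_CR(w_star)
c_low, c_high = cond_CR(0.05), cond_CR(5.0)
```

At very low frequencies v₋ nearly vanishes and at very high frequencies the two spanning vectors become nearly collinear, so C_R is close to singular at both extremes; near ω* the matrix is well conditioned.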
We tested this prediction on finite-size trained networks. The examples in Fig. 5B confirm that the task-related eigenvalue pair λ̃± deviates from the target one. As in the case of non-linear networks (Fig. 1B-C), we find that the error is frequency dependent (Fig. 5C, left); furthermore, for a fixed strength of the internal connectivity g, we observe that the error is minimized at a frequency ω which is very close to ω* (Fig. 5D, left).
As a second way to characterize performance, as in Fig. 1, we considered feedback networks trained via ridge regression [18]. In this case, the readout vector is deterministic; in Fourier space, it can be expressed as (see Appendix 4.10) n_Ridge = ½(P₁₁v₊ + P₂₁v₋), where now P = (C_R + Nσ²I)⁻¹ and I is the 2 × 2 identity matrix. As in the case of noisy LS regression, this readout vector also generates only two outlier eigenvalues λ̃±, whose values can again be computed through Eq. (26); in this case a closed-form expression can be obtained (Appendix 4.10).
For moderate values of the regularization parameter σ, the resulting λ̃± are complex conjugates that deviate somewhat from λ± (see Supp. Fig. 10 for the full bifurcation diagram). Specifically, their real part is always smaller than 1, implying that the resulting autonomous dynamics are always stable (see Appendix 4.10). The amplitude of the mismatch between the real and imaginary parts of λ̃± and the target eigenvalues λ± depends both on g and ω (see Appendix 4.10), and is minimized at a finite frequency ω which monotonically decreases with increasing g (Fig. 5D center, solid lines). Importantly, the value of ω is predicted to behave similarly (although not identically) to ω*. Fig. 5D (middle) shows an excellent match between these predictions and simulation results.
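The effect of regularization on the task-relevant outliers can be sketched in the reduced description. In the snippet below, eps plays the role of the regularization term (an illustrative assumption, set to a small fraction of the trace of C_R); compared to the unregularized LS readout, the outlier pair is displaced from 1 ± iω but remains in its vicinity:

```python
import numpy as np

# Sketch: ridge-like regularization of the readout in the v± plane and
# its effect on the outlier eigenvalues of the closed-loop matrix.
rng = np.random.default_rng(9)
N, g, w = 300, 0.6, 0.7
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
m = rng.normal(size=N)

x_plus = 0.5 * np.linalg.solve((1 + 1j * w) * np.eye(N) - J, m)
v_plus, v_minus = 2 * x_plus.real, -2 * x_plus.imag
V = np.stack([v_plus, v_minus], axis=1)
C_R = 0.5 * V.T @ V

def outlier_dist(eps):
    P = np.linalg.inv(C_R + eps * np.eye(2))          # regularized inverse
    n = 0.5 * (P[0, 0] * v_plus + P[1, 0] * v_minus)  # readout
    eigs = np.linalg.eigvals(J + np.outer(m, n))
    return np.min(np.abs(eigs - (1 + 1j * w)))        # distance to target outlier

d0 = outlier_dist(0.0)                      # unregularized: target hit exactly
d_ridge = outlier_dist(0.05 * np.trace(C_R))
```

The regularized readout trades exact placement of the outliers for robustness, which is the bias discussed in the text.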
As a third and final example, we considered linear networks trained via the RLS algorithm [19,7]. In this case, an analytical description of the closed-loop spectrum and resulting dynamics is much harder to obtain; we thus computed the value of the preferred frequency ω from simulations. We found that the mismatch between λ̃± and λ± displays a strong, non-monotonic dependence on the target frequency (Fig. 5C, right); the preferred frequency ω is, again, quite close to ω* (Fig. 5D, right). To conclude, we analyzed performance in linear feedback networks; as for non-linear networks (Fig. 1), we found that performance is maximized at a preferred frequency ω which decreases with the connectivity strength g. By analysing how the simple LS readout solution interacts with noise, we predicted that the preferred frequency ω should lie close to ω*, i.e. the resonance frequency at which the encoding dynamics have maximal dimensionality and are minimally synchronized. This prediction is exactly verified in networks trained via LS regression, but also carries over in a qualitative fashion to networks trained via different training algorithms. In fact, we showed that different algorithms are affected by different kinds of biases, whose effect is to shift the value of the preferred frequency ω away from ω* without changing its overall qualitative behaviour.
Internal representation in non-linear networks
We finally turn back to the original problem of analyzing training performance in non-linear feedback networks (Fig. 1). Our analysis of linear networks revealed that a key feature which determines training performance is the quality of representation of the target signal within the reservoir. This representation can be characterised by its dimensionality or, equivalently, by the synchrony of activity across units in the reservoir.
Guided by these insights, we examined the properties of open-loop dynamics (Eq. (1)) in non-linear networks. Because of the non-linearity, the neural trajectory x(t) is in this case not planar, but curved along many dimensions (Fig. 6A); most of its variance, however, is still explained by two directions (Fig. 6B). We investigated numerically the properties of non-linear target representations by using the same measures as for linear networks, namely the dimensionality and the spread of phases across units. Although in non-linear systems these are not equivalent measures, we find that their behaviour is qualitatively similar to one another, and to the behaviour of their analogues in linear systems (Fig. 6C-D left). First, both measures increase monotonically with the connectivity strength g. Second, for any fixed value of g, both measures display a maximum at an intermediate frequency ω*.
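Both measures can be computed directly from simulated open-loop trajectories. The sketch below (a minimal illustration assuming a tanh non-linearity and simple Euler integration; all parameter values are illustrative, not the paper's exact settings from Appendix 4.1) drives a random reservoir with a sinusoid and evaluates the participation ratio of the resulting trajectory.

```python
import numpy as np

def simulate_open_loop(J, m, A, omega, T=60.0, dt=0.01, discard=30.0):
    """Euler-integrate dx/dt = -x + J tanh(x) + m*A*cos(omega*t)."""
    N = J.shape[0]
    x = np.zeros(N)
    traj = []
    for step in range(int(T / dt)):
        t = step * dt
        x = x + dt * (-x + J @ np.tanh(x) + m * A * np.cos(omega * t))
        if t >= discard:                      # drop the initial transient
            traj.append(x.copy())
    return np.array(traj)                     # shape (L, N)

def participation_ratio(traj):
    """d = (sum_i lam_i)^2 / sum_i lam_i^2 over PCA eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(traj.T))
    lam = np.clip(lam, 0.0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
N, g = 200, 0.8
J = rng.normal(0.0, g / np.sqrt(N), (N, N))   # random reservoir, std g/sqrt(N)
m = rng.normal(0.0, 1.0, N)
traj = simulate_open_loop(J, m, A=1.0, omega=0.6)
d = participation_ratio(traj)
```

The phase-spread measure is obtained analogously by fitting a_i cos(ωt + φ_i) to each unit's activity and taking the circular variance of the fitted phases {φ_i}.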
In the middle panels of Fig. 6C-D, we display the resonance frequency ω* computed from both measures of non-linear representations (left panels) across various values of g and for three target amplitudes (legend). As in the linear case, we find that the value of ω* decreases with the connectivity strength g; unlike the linear case, however, it depends on the target amplitude A. For small target amplitudes (light gray), both measures of non-linear representations yield values of ω* which are quantitatively very close to the values predicted by the linear theory, i.e. √(1 − g²) (yellow line). This is expected, as for low-amplitude driving the reservoir activity mostly remains in the vicinity of the origin, a region where the non-linear dynamics are approximately linear. In the non-linear case, however, as the target amplitude A increases (darker shades of gray, see legend), the resonance frequency ω* also increases. The decrease of ω* with g is retained, but to a lesser extent.
In the right panels of Fig. 6C-D, we compare the resonance frequency ω* predicted from analyzing non-linear representations to the preferred frequency ω̃ which minimizes the training error (Fig. 1). Although the two quantities do not exactly coincide, they display significant correlations. Remarkably, the value of ω* correctly captures the behaviour of the preferred frequency ω̃ with the target amplitude A: like ω*, ω̃ increases with A, as can be seen by the clustering of different shades of grey in the right panels of Fig. 6C-D.
Discussion
Ubiquitously across biology, complex high-dimensional systems interact with their environment through low-dimensional channels. The computational modelling of such setups has advanced considerably in the past two decades with the emergence of reservoir computing techniques [3, 26], where learning acts on such low-dimensional bottlenecks. Despite the simplicity of this learning scheme, the factors contributing to or hindering the success of training in reservoir networks are in general not well understood [17]. In particular, a theory is lacking for predicting, based on the characteristics of the reservoir and the target function, the dynamics and performance of trained feedback networks.
In this work, we studied the learning performance of feedback networks trained to self-sustain a sinusoidal readout signal. Through mathematical analysis, we showed that learning performance is mostly controlled by the quality of the internal representation of the target signal. This quality can be quantified by analyzing the open-loop dynamics and measuring the condition number of their cross-correlation matrix, a number that characterizes to what extent the network dynamics is robust to training noise.
We found that the condition number displays, like training performance, a complex dependence on the parameters controlling the reservoir internal properties (strength of reservoir connectivity g) and the readout target function (frequency ω). The parameter values where the condition number is minimized define an optimal spot for learning. At this optimal point, internal representations are characterized by maximal dimensionality and minimal synchrony, which are two ways of quantifying the richness of the dynamic repertoire available to the learning algorithm. Our insights were derived by studying linearized dynamics and were later tested on non-linear networks, where they successfully capture non-trivial aspects of training performance.
The condition number of the cross-correlation matrix has been pointed out in several studies as a key quantity in determining performance [17, 27]. Our work analytically quantifies those empirical observations in the framework of networks trained on a simple task via common LS-based algorithms.
We have shown, however, that performance might depend on other features, such as closed-loop stability, for other non-standard algorithms (see Fig. 4).

[Fig. 6D caption: Using phase spread of driven trajectories to predict performance. Unit activities x_i(t) were fitted with sinusoidal functions of the driving frequency ω, and the variance of the phase distribution was measured. Left, center and right panels are the same as in C.]
Importantly, our analysis differentiates between two properties that might hinder network performance: high-norm readouts and non-normality. In a number of classic studies [7, 17, 12], large norms of the readout vector have been associated with impaired performance. In addition, recent observations indicate that training performance is low in parameter regions where the open-loop dynamics is highly non-normal [13], and link low performance to large readout vectors. In our framework, the two properties can be analyzed separately. Non-normality can be measured from the angle θ between the two activity eigenvectors v±; Fig. 2C indicates that non-normality is minimal at the resonance frequency ω*. The norm of the readout vector n_LS can instead be derived from Eq. (16) (see Appendix 4.7); we show in Supp. Fig. 9 that, for every value of the connectivity g, the norm of the readout vector is monotonic in the target frequency ω. We conclude that these two quantities are not equivalent predictors of learning performance; in our setting, training performance is optimal close to ω*, so that non-normality is identified as the dominating factor in controlling performance.
Several studies have supported the hypothesis that learning capability is maximized in the parameter region where the dynamics is close to the boundary between ordered and chaotic activity, i.e. the edge of chaos [14, 7, 6]. Our findings are consistent with this hypothesis: we have shown that the condition number (and, consequently, the training error) monotonically decreases as the strength of reservoir connectivity g is increased from 0 towards its critical value. However, our analysis has shown that, together with the strength of internal connectivity, learning performance is crucially shaped by the properties of the target function. By analysing non-linear networks, furthermore, we have found that the parameter region characterized by maximally high-dimensional and de-synchronized internal representations does not necessarily coincide with the edge of chaos; the two regions in fact diverge as the target amplitude A is increased and activity becomes strongly non-linear (Figs. 6 and 11). Specifically, as A increases, the critical frequency where activity becomes chaotic moves to very high values [25] (Fig. 11), while the resonance frequency ω* (which measures activity dimensionality and synchrony) remains close to the training-preferred frequency ω̃ (Fig. 6). This result suggests that future research should focus on characterizing the properties of driven non-linear activity rather than analysing the transition to chaos per se.
The numerical analysis of non-linear networks (Fig. 6), which was led by the insights gained from the linear theory, suggests that representation quality is a major determinant of closed-loop performance also in the case of non-linear networks. Exploiting the link between the two, we were able to predict the dependence of the preferred frequency ω̃ on both the internal connectivity g and the target amplitude A, which plays no role in the linear counterpart. This is despite the fact that the non-linearity of the dynamics introduces, in trained networks, new qualitative behaviours which do not exist in linear networks. In particular, we observe that the training error (and, consequently, the value of ω̃) strongly depends on the hyper-parameters controlling the stability of the limit cycle which constitutes the internal representation (see Appendix 4.1 and Supp. Fig. 8). In this respect, a more detailed analysis is called for; we hope that future work will extend our analytic framework to cover non-linear reservoirs.
and Srdjan Ostojic and Manuel Beiran for their feedback on a previous version of the manuscript. LS would like to thank Friedrich Schuessler for helpful discussions.
Training of feedback networks
In the following, we report the procedures used to train feedback architectures (Figs. 1 and 5). Procedures are detailed for the general case of non-linear networks; the case of linear networks corresponds to taking Φ(x) = x. Results are averages across 1000 different network and training realizations. In order to regularize the cross-correlation matrix and to ease local stability in non-linear networks, white noise is then added on top of activity: Φ ← Φ + σ_LS ξ, where ξ is an L × N matrix of standard Gaussian variables. The trained readout vector n is finally computed as:
LS regression training
In linear networks, training performance is measured in terms of the mismatch between the target outlier eigenvalues λ± (see Section 2.4) and the outlier eigenvalues λ̃±, defined as the pair of complex conjugate eigenvalues of J̃ = J + mnᵀ whose real part is maximally close to one. In non-linear networks, performance is measured on closed-loop activity. To this end, the closed-loop dynamics (Eq. (3)) is simulated from t = 0 to t = T_tot. The initial condition is taken to be equal to the activity in the last time step of the open-loop simulation; on top of it, an N-dimensional vector of white noise of amplitude σ_pert A is added. The latter perturbation was used to take into account training error generated by unstable local dynamics; we take σ_pert = 0 in linear networks. To measure the test error, we fitted a sinusoidal function F(t) of fixed amplitude A and frequency ω to the readout signal z = nᵀΦ obtained in the closed-loop simulation, yielding a novel L-dimensional vector F̃. The readout error is finally measured as a time-averaged mismatch between the two, where the average is taken over all the integration time points from t = 0 to t = T_tot. If the fit fails, we set the readout error to 1.
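A minimal sketch of the noisy LS step (with a rank-two synthetic trajectory standing in for the open-loop activity matrix Φ; dimensions and parameter values are illustrative, not the paper's): noise of amplitude σ_LS is added to the activity matrix, and the readout is then obtained with a standard least-squares solver.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, omega = 1000, 50, 0.6
t = np.linspace(0.0, 20 * np.pi / omega, L)

# Stand-in for open-loop reservoir activity: a noise-free planar trajectory,
# mimicking the (rank-two) trajectory of a driven linear reservoir.
v1, v2 = rng.normal(size=(2, N))
Phi = np.outer(np.cos(omega * t), v1) + np.outer(np.sin(omega * t), v2)
F = np.cos(omega * t)                      # target readout signal

# Noise regularization, then least-squares readout
sigma_LS = 0.01
Phi_noisy = Phi + sigma_LS * rng.standard_normal((L, N))
n, *_ = np.linalg.lstsq(Phi_noisy, F, rcond=None)

# Readout error on the clean activity (relative RMS mismatch)
err = np.linalg.norm(Phi @ n - F) / np.linalg.norm(F)
```

The added noise makes the cross-correlation matrix well-conditioned, so the solver returns a stable solution even though the clean trajectory is exactly rank two.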
RLS training
Training is performed in the closed-loop setup, from t = 0 to t = T_tot, with T_tot = N_tot 2π/ω and N_tot = 20 (Fig. 1) or 10 (Fig. 5). At t = 0, an N × N matrix P is initialized as P = I/α, where I indicates the N-dimensional identity matrix and α is a free parameter. Matrix P represents a running estimate of the inverse of the activity cross-correlation matrix [7]. The readout vector n is further initialized with zero entries. At every learning step, closed-loop activity is simulated from t₀ to t₀ + τ (Eq. (3)), with τ = (2π/ω)/500. Activity at t = t₀ + τ is stored in an N-dimensional vector Φ.
Matrix P is then updated as in [7], and the readout vector n is updated accordingly, where the error e is measured as e = z(t₀ + τ) − f(t₀ + τ). Once training is completed, performance is measured as in the LS case. Parameters used in Fig. 1 are N = 400, α = 1 and σ_pert = 0.1. Parameters used in Fig. 5 are N = 400, α = 1 and σ_pert = 0.
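The per-step updates of P and n follow the standard recursive least-squares scheme of [7]: a Sherman-Morrison update of the inverse-correlation estimate P, followed by an error-proportional correction of the readout n. A minimal stand-alone sketch, with random features and a linear target standing in for the closed-loop network activity (all values illustrative), is:

```python
import numpy as np

rng = np.random.default_rng(2)
N, alpha = 10, 1.0
w_true = rng.normal(size=N)        # hypothetical "ideal" readout to recover

P = np.eye(N) / alpha              # running estimate of inverse cross-correlation
n = np.zeros(N)                    # readout vector, initialized at zero

errors = []
for step in range(500):
    phi = rng.normal(size=N)       # stand-in for reservoir activity at this step
    f = w_true @ phi               # target value at this step
    e = n @ phi - f                # readout error before the update
    Pphi = P @ phi
    c = 1.0 / (1.0 + phi @ Pphi)
    P -= c * np.outer(Pphi, Pphi)  # Sherman-Morrison update of P
    n -= c * e * Pphi              # error-proportional readout update
    errors.append(abs(e))
```

As the estimate P converges towards the true inverse cross-correlation, the per-step error shrinks rapidly; the residual bias set by the regularization α decays over training.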
Analysis of linear open-loop reservoirs
For a general input f(t), the system of linear equations Eq. (4) admits the asymptotic solution (t → ∞)

x(t) = ∫₀^∞ dτ e^{(J−I)τ} m f(t − τ),

which for a complex exponential f(t) = e^{st}, with s ∈ ℂ, simplifies to

x(t) = ((1 + s)I − J)^{−1} m e^{st}.

In particular, for f(t) = cos(ωt) = ½(e^{iωt} + e^{−iωt}), one finds the expression in Eq. (5) in the main text. Note that the x± defined in Eq. (5) correspond to the amplitudes of the peaks of the Fourier transform of x(t). In deriving Eq. (7), we defined: A.
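This asymptotic solution can be checked numerically: assuming the linear open-loop dynamics ẋ = −x + Jx + m f(t) (the Φ(x) = x case of Eq. (1)), one can integrate the driven system past its transient and compare the state with Re[((1 + iω)I − J)^{−1} m e^{iωt}]. A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
N, g, omega = 100, 0.5, 0.8
J = rng.normal(0.0, g / np.sqrt(N), (N, N))
m = rng.normal(size=N)

# Euler integration of dx/dt = -x + J x + m cos(omega t)
dt, T = 0.002, 40.0
x = np.zeros(N)
steps = int(T / dt)
for step in range(steps):
    t = step * dt
    x = x + dt * (-x + J @ x + m * np.cos(omega * t))
t_end = steps * dt

# Analytic steady state: x(t) = Re[((1 + i*omega) I - J)^{-1} m e^{i*omega*t}]
xc = np.linalg.solve((1.0 + 1j * omega) * np.eye(N) - J, m)
x_theory = np.real(xc * np.exp(1j * omega * t_end))

rel_err = np.linalg.norm(x - x_theory) / np.linalg.norm(x_theory)
```

With g < 1 the transient decays at rate at least 1 − g, so by t = 40 the simulated state matches the analytic particular solution up to integration error.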
Statistics of the spanning vectors v+ and v−
In this section, we characterize the geometry of the vectors v+ and v− in terms of their norms and overlap.
We start by evaluating the dot product of v_a and v_b, with a, b ∈ ℂ. If the eigenvalues of J/a and J/b have real part smaller than one, we can use the power series expansion of the matrix inverse, so that Eq. (38) becomes a sum over powers of J. Since J is random, the value of this expression randomly fluctuates across different realizations of the recurrent connectivity J. We thus turn to a statistical characterization, and evaluate Eq. (38) by computing its mean and variance with respect to different realizations of J.
The mean yields, to the leading order in N [12], the expression in Eq. (41). We have used the fact that Jᵖ is a random matrix which is uncorrelated with J^q for q ≠ p and whose entries have variance g^{2p}/N; from this we obtain Eq. (42).
The variance can be computed in a similar way. Like the mean, the variance is characterized by O(N) scaling [12]; as a consequence, variability due to different realizations of J does not enter the dot product Eq. (38) to the leading order in N, and dot products can be replaced with their mean (Eq. (41)) when N → ∞.
We can now compute the mean norm of the spanning vectors by combining Eqs. (37) and (41); the calculation of the norm of v− is very similar, and only differs in the sign of the first summand. Finally, when computing the dot product between the two vectors, the cross terms cancel. With these expressions, the angle θ between v+ and v− can be written in closed form. Denoting ε = 1 − g², we can summarize the statistics of the spanning vectors v± and the angle between them.
Analysis of geometric properties of driven trajectories
We can use the expressions computed in Appendix 4.3 to evaluate the participation ratio d; the resulting expression, again, implies a nontrivial maximum of d at ω*(g) = √(1 − g²).
Finally, we observe that g = 1 is a singular point: in the limit of low frequency, the overlap vanishes and the participation ratio attains its global maximum. Note that the limits lim_{g→1, ω→0} d and lim_{g→1, ω→0} cos(θ) do not exist, since they depend on the order in which the limits are taken. To see this, compare Eqs. (57) and (62).
Distribution of response phases
The phase spread in the response of different units (Eq. (15)) was computed as a numerical integral performed over the probability distribution p(φ), whose analytical form is available in [24]; we used ρ = cos(θ), together with additional expressions from [24].
From-k regression
In this section, we explain how from-k least-squares regression (Fig. 4) is performed.
For 2 ≤ k ≤ N, we define the cropped spanning vectors v^k±, where [v±]ᵢ indicates the i-th element of the vectors v±. For every k, the from-k LS regressor n^k_LS is given by the pseudo-inverse, which yields the LS regressor of minimum norm. Note that n^N_LS = n_LS.
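The minimum-norm property invoked here can be illustrated numerically (toy dimensions, with generic random vectors standing in for the cropped spanning vectors): for an underdetermined system, numpy's pseudo-inverse returns the least-squares solution of minimum norm, and adding any null-space component fits equally well but increases the norm.

```python
import numpy as np

rng = np.random.default_rng(4)
L, N = 2, 20                      # underdetermined: 2 equations, 20 unknowns
V = rng.normal(size=(L, N))       # stand-in for the (cropped) spanning vectors
b = rng.normal(size=L)

n_min = np.linalg.pinv(V) @ b     # minimum-norm least-squares solution

# Any vector in the null space of V can be added without changing V @ n
null_dir = rng.normal(size=N)
null_dir -= np.linalg.pinv(V) @ (V @ null_dir)   # project out the row space
n_alt = n_min + null_dir

fit_min = np.linalg.norm(V @ n_min - b)
fit_alt = np.linalg.norm(V @ n_alt - b)
```

Both candidate readouts reproduce the target exactly, but only the pseudo-inverse solution stays entirely within the row space of V, which is what keeps its norm minimal.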
In the following, we show that any from-k readout vector n^k_LS with k < N is not fully contained in the plane spanned by the vectors v±. The from-k readout vector n^k_LS is contained in the plane spanned by the cropped vectors v^k±. Consider now any vector a which is orthogonal to the reservoir trajectory plane spanned by v±. The cropped vectors v^k± have a nonzero overlap with the vector a; as a result, the readout vector n^k_LS also has a nonzero overlap with a.
Analysis of least-squares regression: norm
We analytically compute the norm of the least-squares readout solution n_LS (k = N, Eqs. (18) and (19)).
We start from Eq. (19). The vector within the parentheses is contained in the activity-spanning plane and is orthogonal to v−; from its norm, the norm of n_LS follows directly. We find that the resulting expression is monotonically increasing in both g and ω, as shown in Supp. Fig. 9.
Analysis of least-squares regression: outlier eigenvalues
In this section, we compute the outlier eigenvalues of J̃ which result from full LS regression (k = N, Eqs. (18) and (19)). As derived in the main text, the outliers obey a self-consistency equation. Because of Eq. (17), we know that this equation admits the solutions λ = λ± = 1 ± iω. In the following, we show that λ± are in fact the only solutions admitted. To this end, we use Eq. (19) to rewrite the equation using the short-hand notation P := C_R^{−1}. A little algebra then yields a quadratic equation in λ; the dot products appearing in it can be evaluated by following Eq. (41), which was derived in Appendix 4.3 by averaging over the random connectivity J. As we know that the quadratic equation is satisfied by λ = λ±, we conclude that Eq. (72) cannot admit other solutions beyond these two.
Condition number of the cross-correlation matrix
The condition number of the reduced cross-correlation matrix C_R is defined as c = ν₁/ν₂, where ν₁ and ν₂ are the two eigenvalues of C_R. Their value can be computed as a function of the statistics of the spanning vectors v±, which in turn depend on ω and g (see Appendix 4.3).
In this section, we show that for fixed g, the condition number c is minimized at the same value of ω which maximizes the participation ratio d; this frequency coincides with ω* (see Appendix 4.4). The participation ratio d is maximized when the quantity ∆γ is minimized, which is precisely where the condition number c is minimized.
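For a 2 × 2 positive-definite matrix, the two quantities are tied by a simple identity: writing c = ν₁/ν₂ ≥ 1, the participation ratio of the eigenvalues is d = (1 + c)²/(1 + c²), which decreases monotonically for c ≥ 1, so minimizing c is equivalent to maximizing d. A quick numerical check (illustrative code, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

def cond_and_pr(C):
    """Condition number and eigenvalue participation ratio of a 2x2 PD matrix."""
    nu = np.linalg.eigvalsh(C)              # eigenvalues, ascending order
    c = nu[1] / nu[0]                       # condition number (>= 1)
    d = nu.sum() ** 2 / (nu ** 2).sum()     # participation ratio (in [1, 2])
    return c, d

results = []
for _ in range(100):
    A = rng.normal(size=(2, 2))
    C = A @ A.T + 0.1 * np.eye(2)           # random positive-definite matrix
    results.append(cond_and_pr(C))
```

At c = 1 the identity gives d = 2 (isotropic, maximally two-dimensional activity); as c grows, d decays towards 1, i.e. an effectively one-dimensional representation.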
Analysis of ridge regression
We start by computing the readout vector which performs ridge regression in the Fourier space. The ridge regressor of Eq. (17) can be written in terms of the v± spanning vectors as in [18], with P̃ = (C_R + Nσ²I)^{−1}.
This yields the ridge regressor readout, in which v̂ indicates a normalized vector. By comparison with the LS regressor, Eq. (70), we observe that σ has two effects on the readout: first, it reduces the norm, as expected from a regularizer. Second, it biases the readout vector towards v+.
The outlier eigenvalues λ̃± imposed by the ridge regressor can be found by utilizing the same strategy as in Appendix 4.8. We insert P₁₁ = P̃₁₁, P₂₁ = P̃₂₁ and λ± = 1 ± iω into Eq. (77) to obtain the equation for the outlier eigenvalues λ̃. Depending on the values of σ, ω and g, this equation admits real or complex conjugate eigenvalues (see Supp. Fig. 10). For low frequencies, the dynamics are characterised by two real eigenvalues λ̃±. As ω increases, a complex conjugate pair of eigenvalues is formed. Importantly, their real part is always smaller than 1, yielding stable closed-loop dynamics. To see this, we approximate the real part of the solution to Eq. (86) by assuming that σ ≪ 1. By use of Eq. (48), we can then show that the pre-factor of the σ² term in the denominator is always larger than 1; as a consequence, σ > 0 always reduces the real part.
To conclude, note that the analysis above allows us to predict the behaviour of the outlier eigenvalues when ridge regression is performed in the Fourier space (i.e. from the 2-dimensional system of equations in Eq. (17)). In Figs. 1 and 5, however, regression is performed in the temporal domain, on a higher-dimensional (L′-dimensional) set of equations (see Appendix 4.1). In order to compare the analytical prediction with trained networks, in Fig. 5 we thus scale the regularization parameter w.r.t. the value of σ which is used to derive the analytical predictions, i.e. we set (σ_R)² = σ² · L′/2 (see Appendix 4.1).
Characterization of the edge-of-chaos in non-linear networks
In analysing non-linear networks (Figs. 1 and 6), we varied the strength of internal connectivity g in such a way that the open-loop dynamics driven by the target function f(t) remains non-chaotic [15] for every value of the forcing frequency tested. The critical value of connectivity strength g_c at which the open-loop dynamics becomes chaotic depends on the target frequency ω and amplitude A [25], and was investigated numerically (Fig. 11).
In order to find the critical values g_c, we start by computing the Lyapunov dimension d_L of driven activity [28, 29], which is defined based on the Lyapunov spectrum Λ = {µᵢ}, i = 1, …, N. Intuitively,
Figure 1:
Figure 1: Emergence of preferred frequency in non-linear feedback networks trained to sustain a sinusoidal output. A. Illustration of network architecture used in open (yellow) and closed (purple) loop. B. Example readout signal (dark grey; target shown in yellow) for three learning trials corresponding to the frequency values indicated in green in C (ω = 0.1, 0.7, 2.1; g = 1). Other parameters as in C, except (for illustration purposes) training is performed on a smaller number of target cycles (N_tot = 4 and N_tr = 2, see Appendix 4.1). LS regression was used for training; example trials for Ridge and RLS are reported in Supp. Fig. 8. C. Readout error as a function of ω, for a range of g values (blue shades), for networks trained via LS (left), ridge regression (middle) and RLS (right). We take here A = 1. Training details and parameters are reported in Appendix 4.1. D. Error-minimizing frequency ω̃ as a function of g, for the three learning algorithms as in C. Three different target amplitudes A were tested (grey shades).
Figure 2:
Figure 2: Encoding of the target signal: geometry of activity trajectories. A. Projection of one example trajectory x(t) (Eq. (7)) on the plane spanned by vectors v±. B. Norm of the two vectors v± as a function of the target frequency ω. C. Angle between the two spanning vectors v±. In A-B-C we used g = 0.5. In B-C, the black vertical line indicates the resonance frequency ω* where the two norms are equal (r = 1, B) and the angle is maximized (C). D. Linear dimensionality: participation ratio computed from the principal components of activity. We plot results for five increasing values of g (blue shades); black stars indicate the position of ω* for every value of g. E. Resonance frequency ω*. In all panels, continuous lines indicate the analytical predictions. In D, dots show results averaged over 20 simulations of finite-size networks, N = 2000.
Figure 3:
Figure 3: Encoding of the target signal: single-unit description. A. Entries of vector x+ in the complex plane. Left: g = 0.1, right: g = 0.5; ω = 0.6 for both panels. Continuous lines are contour lines of the bivariate Gaussian distribution predicted by the theory. The most external contour indicates a probability of 0.01. Grey points are results of a finite network simulation (N = 400). B. Sample of activity from four randomly selected units chosen from the corresponding panel in A. C. Spread of response phases across the population (Eq. (15)) for increasing values of g and as a function of ω. Black stars indicate the maximum value. D. Value of the frequency which maximizes the spread of phases from C.
Figure 4:
Figure 4: Closing the loop. A. Transforming the open-loop encoding/decoding setup (yellow) into a closed-loop system (purple). B. Sample networks trained through full LS (from-N, left) or from-2 (right) regression. The top panels show the eigenspectra of the closed-loop connectivity matrix J̃ (red dots); small black dots indicate the unperturbed eigenspectrum of J. The bottom panels show the output generated by the corresponding networks. Here we used parameters g = 0.8, ω = 0.6 and N = 400. C. Overlap between the readout n and the principal components (PC) of driven reservoir activity (Eq. (7)), of which the first two span the v± plane. The same parameters as in B were used. D. Fraction of unstable closed-loop systems (over 2000 sample networks) as a function of connectivity strength g, for several values of k, measured over 1000 different realizations. We used N = 1000 and ω = 0.6.
Figure 5:
Figure 5: Training performance in linear networks is maximized at ω*. A. Condition number of the reduced cross-correlation matrix C_R computed analytically for different values of g (blue shades in C). Stars denote minimal condition number. B. Closed-loop outlier eigenvalues λ̃± for example networks from three learning trials corresponding to the frequencies marked by green triangles in C (ω = 0.1, 0.4, 1.5; g = 0.9). C. Spectrum error as a function of ω, for a range of g values (blue shades), for networks trained via noisy LS (left), ridge regression (middle) and RLS (right). The error is measured from the imaginary part of the outlier eigenvalues as |I(λ̃±) − I(λ±)|/ω, and similarly for the real part; the spectrum error is an average of the two. Training details and parameters are reported in Appendix 4.1. D. Frequency ω̃ minimizing the error in the real and imaginary part of λ̃± as a function of g, for the three training algorithms. Solid orange lines in the middle panel show the theoretical prediction for ridge regression (see Appendix 4.10).
Figure 6:
Figure 6: Using open-loop reservoir dynamics to predict training performance in non-linear networks. A. Reservoir trajectories x(t) in driven non-linear networks: example trajectory projected onto the first three PCs of network activity (note the scale of the PC3 axis). B. Variance explained by projecting trajectories on the first two PC axes (note the scale of the variance). C. Using representation dimensionality to predict training performance. Left: participation ratio of the driven trajectory as a function of ω for a range of g values (blue shades). Results are averages over 20 simulations of networks of size N = 2000, with A = 1. Center: frequency ω*, measured as the position of maximum dimensionality, as a function of g for three different values of A (grey shades). Right: error-minimizing frequency ω̃ (from Fig. 1) plotted against ω* (from center panel). Results are shown for three training algorithms (legend).
Training is performed in the open-loop setup. In a first phase, open-loop activity (where we enforce u(t) = A cos(ωt) in Eq. (1)) is simulated by using the SciPy odeint routine from t = 0 to t = T_tot, with T_tot = N_tot 2π/ω. Activity is stored in an L × N matrix Φ, where L indicates the number of time points used for integration. The L-dimensional vector F is constructed by computing the target function f(t) at the same time points. Activity and target function from t = 0 to T_tr = N_tr 2π/ω are later discarded, resulting in L′ × N and L′ × 1 matrices Φ′ and F′, where L′ indicates the number of time points kept after discarding the transient. We used N_tot = 20 and N_tr = 8. Parameters used in Fig. 1 are N = 400, σ_LS = 0.01 and σ_pert = 0.1. Parameters used in Fig. 5 are N = 400, σ_LS = 0.01 and σ_pert = 0.

Ridge regression training

As in the LS case, training is performed in the open-loop setup. The L × N open-loop activity matrix Φ is obtained as above. The trained readout vector n is then computed as:

n = (ΦᵀΦ + (σ_R)² I)⁻¹ ΦᵀF    (31)

where I indicates the N-dimensional identity matrix. Training performance is measured as in the LS case. Parameters used in Fig. 1 are N = 400, σ_R = 1 and σ_pert = 0.1. Parameters used in Fig. 5 are N = 400, (σ_R)² = L′/2 · σ² and σ_pert = 0; σ² = 10⁻⁷ is the regularization parameter used for regression in the Fourier space (see Appendix 4.10), which was used to compute the theoretical prediction for the outlier eigenvalues.
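Eq. (31) is standard ridge regression and can be sketched directly (the Φ and F below are random stand-ins for the open-loop activity matrix and target vector; dimensions are illustrative). As expected from a regularizer, the resulting readout has a smaller norm than the plain least-squares one:

```python
import numpy as np

rng = np.random.default_rng(6)
L, N = 500, 40
Phi = rng.normal(size=(L, N))     # stand-in for the open-loop activity matrix
F = rng.normal(size=L)            # stand-in for the target vector

sigma_R = 1.0
n_ridge = np.linalg.solve(Phi.T @ Phi + sigma_R ** 2 * np.eye(N), Phi.T @ F)
n_ls, *_ = np.linalg.lstsq(Phi, F, rcond=None)   # the sigma_R -> 0 limit

norm_ridge = np.linalg.norm(n_ridge)
norm_ls = np.linalg.norm(n_ls)
```

In the paper's setting, Φ would be the transient-discarded open-loop activity and F the sampled target function; the regularizer shrinks every singular-direction coefficient of the solution, which is why the norm decreases for any σ_R > 0.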
Figure 7:
Figure 7: Emergence of preferred frequency in non-linear feedback networks; supplementary results obtained for different hyper-parameters. A. LS training. Parameters are as in Fig. 1, except σ_pert = 0.01 (top) and σ_LS = 0.001 (bottom). B. Ridge training. Parameters are as in Fig. 1, except σ_pert = 0.01 (top) and σ_R = 0.1 (bottom). C. RLS training. Parameters are as in Fig. 1, except N_tot = 10 (top and bottom), σ_pert = 0.01 (top) and α = 5 (bottom). In the bottom row, we have removed from the plot the preferred frequencies ω̃ at low g values in the cases where training fails (i.e. is characterized by very high error) for every value of ω tested.
Figure 8:
Figure 8: Emergence of preferred frequency in non-linear feedback networks, example trials. Example trials as in Fig. 1B, where training is performed via Ridge regression (A) or RLS (B). Parameters are as in Fig. 1B, except N_tot = 3 in B.
Figure 9:
Figure 9: Norm of the LS-trained readout vector n, computed via Eq. (71) for a range of values of g and ω.
1 = (N/2) [ (P₁₁ + iP₂₁)/(λ₊λ − g²) + (P₁₁ − iP₂₁)/(λ₋λ − g²) ]    (77)

which can be re-cast as a quadratic equation in λ:

(1 + ω²)λ² − [2g² + N(P₁₁ + ωP₂₁)]λ + g⁴ + Ng²P₁₁ = 0.    (78)

The analysis of the previous sections has shown that linear networks trained via LS regression can exactly implement the feedback task with stable dynamics. Due to noise, however, real-life LS regression never converges to this ideal solution. Noise arises in training from multiple sources, such as finite sampling of training data, variability due to different initial conditions, or regularization noise. In order to characterize training performance, we thus use our theoretical framework to analyze the effect of noise on the dynamics of feedback linear networks trained via LS regression.
Hemisensory paresthesia as the initial symptom of a SARS-Coronavirus-2 infection. A Case report.
Neurological symptoms might be associated with a COVID-19 infection; there have been frequent reports in recent weeks. The neurological symptoms range from harmless side effects of a viral infection to meningoencephalitis and acute haemorrhagic necrotizing encephalopathy. Our patient reported burning headache and paresthesia as the initial symptoms, largely without other signs of viral infection such as cough or fever. Such an initial neurological presentation seems to be rare. Most cases have neurological symptoms which can be expected after severe systemic viral infections, such as fever-associated headache. Many COVID-19 patients with mild disease are at home and the further course is unknown. Our case shows that neurological symptoms can be the first manifestation of a COVID-19 disease. While restricted paraesthesia has been reported in SARS-CoV-2 infections, hemisymptoms have not been described as initial symptoms.
Introduction
Neurological symptoms might be associated with a COVID-19 infection; there have been frequent reports in recent weeks. The neurological symptoms range from harmless side effects of a viral infection to meningoencephalitis [1] and acute haemorrhagic necrotizing encephalopathy [2]. The most important report comes from Wuhan [3]. Mao et al. retrospectively assessed 214 patients, and 34.6% of them had neurological symptoms. The most common symptoms were unspecific, like dizziness and headache, but taste and smell impairment were also reported. Neurological symptoms were more common in severe cases. A case report from Japan confirms the neurotropic potential of SARS-CoV-2: a 24-year-old man developed a meningoencephalitis [1]. On day 9 of the infection, the patient had a positive PCR in the cerebrospinal fluid but not in the nasopharyngeal swab. Initially he felt headache, fatigue and fever. In the later course there was a loss of consciousness and seizures, which led to hospitalisation. Helms et al. presented MRI imaging and CSF findings in severely affected COVID-19 patients [4].
Case Report
We report the case of a 31-year-old woman who presented with a neurological manifestation without fever, cough or feeling sick. No previous illnesses were known. The symptoms began with a holocephalic burning headache after awakening; she described its burning character as something she had never felt before. On day 2, paraesthesia of the left half of the face and the left arm began, and about 6 hours later the paraesthesia included the left leg. The headache persisted and was of moderate intensity (5/10 NRS). There was no response to acetaminophen, ibuprofen or dipyrone.
The patient was admitted to the hospital since the paraesthesia worsened. Neurological examination confirmed hypesthesia of the left face and left arm and no further symptoms. A cerebral MRI was performed immediately and showed normal brain morphology, no meningeal enhancement, and normal arterial and venous vessels. The routine laboratory blood tests, including renal and liver function, CRP, blood cell count and muscle enzymes, were normal. The CSF cell count was slightly increased (7 per microliter) with a lymphocytic cell pattern. Other CSF parameters, including oligoclonal bands and standard microbiological and virologic tests, were negative.
The patient is a general practitioner and therefore has an increased risk of infection, so a swab for SARS-CoV-2 and other respiratory viruses was taken. Only the PCR for SARS-CoV-2 was positive. Unfortunately, no PCR investigation of the CSF was performed. The infection source remained unknown.
On day 3 the patient was discharged home with stable clinical symptoms. Three days later she developed significant fatigue, still without fever, and a minor cough occurred. After 17 days there was a worsening with chest pain. Laboratory and ECG testing ruled out heart ischemia; a chest x-ray showed no abnormalities.
The suspected diagnosis was pleuritis. The blood cell count revealed a moderate leucocytosis and an increased lactate dehydrogenase level. Par- and hypoesthesia of the left face have now been present for 3 weeks (see Fig. 1).
Discussion
It is assumed that patients with neurological symptoms have more severe infections. Our patient reported burning headache and paresthesia as the initial symptoms, largely without other signs of viral infection such as cough or fever. Such an initial neurological presentation seems to be rare. Most cases have neurological symptoms that can be expected after severe systemic viral infections, such as fever-associated headache [5], smell loss and even postinfectious GBS [6]. Furthermore, strokes, reduced vigilance and seizures occur, which could be primary to the infection or due to complications like hypoxic brain damage [3,4,7].
The symptoms and CSF of our patient suggested encephalitis; the MRI, however, showed no abnormalities. One severely affected patient from Japan with symptoms fitting meningoencephalitis had 12 cells per microliter in the CSF, and the MRI findings indicated a ventriculitis and encephalitis [1]. The MRI findings in the report by Helms et al. were leptomeningeal enhancement in 8 patients and perfusion abnormalities in all 11 examined patients [4]. None of these patients had CSF pleocytosis.
Many COVID-19 patients with mild disease are at home, and their further course is unknown. Our case shows that neurological symptoms can be the first manifestation of a COVID-19 disease. While restricted paraesthesia has been reported in SARS-CoV-2 infections, hemisymptoms have not been described as initial symptoms.
The normal MRI ruled out stroke, haemorrhage or demyelination as a cause of the sensory loss on the hemibody. Although very small lesions can be overlooked, one might speculate that the symptoms are due to direct neurotropism of the virus. How the virus selectively reaches the somatosensory system of the brain is not known. Like other coronaviruses, SARS-CoV-2 probably reaches the nervous system via different routes [7,8], particularly via a retrograde axonal path, as suggested by taste and olfaction loss.
Figures

Figure 1
deltaTE: Detection of Translationally Regulated Genes by Integrative Analysis of Ribo‐seq and RNA‐seq Data
Abstract Ribosome profiling quantifies the genome‐wide ribosome occupancy of transcripts. With the integration of matched RNA sequencing data, the translation efficiency (TE) of genes can be calculated to reveal translational regulation. This layer of gene‐expression regulation is otherwise difficult to assess on a global scale and generally not well understood in the context of human disease. Current statistical methods to calculate differences in TE have low accuracy, cannot accommodate complex experimental designs or confounding factors, and do not categorize genes into buffered, intensified, or exclusively translationally regulated genes. This article outlines a method [referred to as deltaTE (ΔTE), standing for change in TE] to identify translationally regulated genes, which addresses the shortcomings of previous methods. In an extensive benchmarking analysis, ΔTE outperforms all methods tested. Furthermore, applying ΔTE on data from human primary cells allows detection of substantially more translationally regulated genes, providing a clearer understanding of translational regulation in pathogenic processes. In this article, we describe protocols for data preparation, normalization, analysis, and visualization, starting from raw sequencing files. © 2019 The Authors. Basic Protocol: One‐step detection and classification of differential translation efficiency genes using DTEG.R Alternate Protocol: Step‐wise detection and classification of differential translation efficiency genes using R Support Protocol: Workflow from raw data to read counts
INTRODUCTION
Next-generation sequencing methods have become commonplace tools in the life sciences, allowing researchers to understand the molecular mechanisms underpinning cellular processes, shaping phenotypic differences, and ultimately modifying disease susceptibility. While it is evident that mining every layer of gene expression would be required for a thorough understanding of gene regulation, expression profiling studies most commonly focus on the abundance of RNA molecules.
RNA sequencing (RNA-seq) is a methodology that quantifies fragments of RNA molecules to assess the level of gene transcription. To achieve this, sequencing reads are mapped to the genome and counted to quantify the expression of each gene. Significant changes in these counts between conditions identify genes undergoing transcriptional regulation. However, RNA-seq alone does not capture the full picture. While transcription serves to generate a broad collection of transcripts, the final expression of a gene is refined, and its fate determined, in the downstream stages of gene expression regulation, such as translational regulation, protein stability, protein degradation, and others.
Ribosome profiling (Ribo-seq) offers a quantitative approach to study translational regulation, a post-transcriptional process affecting protein levels. Transcriptome-wide translation is quantified via the capture of ribosome-protected RNA fragments (RPFs; Ingolia, Ghaemmaghami, Newman, & Weissman, 2009; also see Fig. 1A). Changes in the number of RPFs between conditions for a given gene can be used as a proxy for a change in the translation of the encoded protein. However, reliably identifying differences in translational regulation is complicated by the fact that the mRNA abundance of the transcript directly affects the probability of ribosome occupancy.
The number of ribosomes per transcript can be estimated by integrating RNA-seq and Ribo-seq to calculate translation efficiency (TE), the ratio of the RPFs over mRNA counts within a gene's coding sequence (CDS). TE is essentially the number of ribosomes per gene, normalized to transcript abundance. Genes with changes in TE between conditions are considered to undergo translational regulation [differential translation efficiency genes (DTEGs)]. Specifically, a gene is classified as DTEG if the changes in the number of RPFs cannot be explained by variation in mRNA read counts. A gene with a significant change in its mRNA counts and a concordant change in RPFs is transcriptionally, but not translationally, regulated [differentially transcribed gene (DTG); Fig. 1B]. Conversely, genes that have significant changes in RPFs independent of changes in mRNA counts are considered DTEGs (Fig. 1C).
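As a rough numeric illustration of these quantities (not part of the published protocol; the gene counts below are invented, and in practice the fold changes and significance calls come from DESeq2):

```python
# Illustrative only: translation efficiency (TE) as ribosome-protected
# fragments (RPFs) over mRNA counts within a gene's CDS, and its log2
# change between two conditions.
import math

def te(rpf, mrna):
    """TE = RPF counts / mRNA counts (both within the CDS)."""
    return rpf / mrna

def log2_delta_te(rpf_ctrl, mrna_ctrl, rpf_trt, mrna_trt):
    """log2 change in TE between treatment and control."""
    return math.log2(te(rpf_trt, mrna_trt) / te(rpf_ctrl, mrna_ctrl))

# RPFs double while mRNA stays flat: candidate DTEG.
print(log2_delta_te(100, 200, 200, 200))  # 1.0

# RPFs and mRNA double together: transcriptional regulation (DTG), TE unchanged.
print(log2_delta_te(100, 200, 200, 400))  # 0.0
```

The second gene would be called a DTG but not a DTEG: its RPF change is fully explained by its mRNA change.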
Figure 1
Transcriptional and translational regulation (A). Genome-wide quantification of mRNA counts and ribosome-protected mRNA fragments (RPFs) using RNA sequencing (RNA-seq) and ribosome profiling (Riboseq), respectively. Lines are not drawn to scale. In a hypothetical study with two conditions, control and treatment, (B) a gene with change in mRNA counts and RPFs at the same rate is a differentially transcribed gene (DTG) and, (C) a gene with change in RPFs independent of change in mRNA counts, which leads to a change in translation efficiency, is defined as a differential translation efficiency gene (DTEG). TE = translation efficiency = RPF/mRNA. (D-E) Classification of genes based on fold changes of RPF, mRNA, and TE. (D) A gene could be either/both DTG and/or DTEG, and based on the direction of change would fall into one of the eight gene-regulatory possibilities (sig: significant, n.s.: not significant). Translationally forwarded genes are DTGs that have a significant change in mRNA and RPF at the same rate, with no significant change in TE. Conversely, translationally exclusive genes are DTEGs that have a significant change in RPF, with no change in mRNA leading to a significant change in TE. Several genes are both DTGs and DTEGs, and their regulatory class is determined based on a combination of the relative direction of change between transcription and translation efficiency. Specifically, translationally buffered genes have a significant change in TE that counteracts the change in RNA; hence, buffering the effect of transcription. Translationally intensified genes have a significant change in TE that acts with the effect of transcription. In all cases, the change in RNA can be either positive or negative, and where buffering or intensifying takes place, the direction of change is taken into account. 
For example, a gene that exhibits an increase in transcription and an increase in translation efficiency is classified as intensified, while a gene that exhibits an increase in transcription but a decrease in translational efficiency is classified as buffered. (E) Simulated data showing fold changes for each gene in RNA-seq and Ribo-seq data. Translationally forwarded genes (in blue), exclusive genes (in red), buffered genes (in purple), and intensified genes (in purple) are highlighted.
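The decision logic of Figure 1D can be sketched as follows (illustrative Python, not part of the protocol; in practice the significance flags would come from DESeq2 FDRs and the fold changes from the fitted model):

```python
# Sketch of the regulatory classes in Figure 1D.
def classify(sig_rna, sig_te, fc_rna=0.0, fc_te=0.0):
    """Return the regulatory class of a gene.
    sig_rna / sig_te: significant change in mRNA / in TE (e.g., FDR < 0.05).
    fc_rna / fc_te: log fold changes, used to split buffered vs intensified.
    """
    if sig_rna and not sig_te:
        return "forwarded"    # DTG only: RPF follows mRNA, TE unchanged
    if sig_te and not sig_rna:
        return "exclusive"    # DTEG only: RPF changes, mRNA flat
    if sig_rna and sig_te:
        # TE acting with transcription intensifies it; against it, buffers it.
        return "intensified" if fc_rna * fc_te > 0 else "buffered"
    return "not regulated"

print(classify(True, False))                       # forwarded
print(classify(False, True))                       # exclusive
print(classify(True, True, fc_rna=1, fc_te=-0.5))  # buffered
print(classify(True, True, fc_rna=1, fc_te=0.5))   # intensified
```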
There are a number of existing approaches to detect DTEGs by combining Ribo-seq and RNA-seq data, with the earliest report based on differences in TE (Ingolia et al., 2009). However, this approach does not take into account the variance, low expression of RPFs or mRNA counts, or batch effects, severely compromising the accuracy of detection. Several other approaches to detect DTEGs by modeling changes in TE have been developed subsequently: Ribodiff (Zhong et al., 2017), Xtail (Xiao, Zou, Liu, & Yang, 2016), Riborex (Li, Wang, Uren, Penalva, & Smith, 2017), and Anota2Seq (Oertlin et al., 2019). At their core, all of these approaches either utilize existing differential expression programs [e.g., DEseq2 (Love, Huber, & Anders, 2014) or EdgeR (Robinson, McCarthy, & Smyth, 2010)], or apply similar statistical assumptions to model the data. Unfortunately, these methods mostly miss essential functionalities of the underlying tools, vastly reducing their effectiveness. For instance, none of these methods, with the exception of Anota2Seq, allow for complex experimental design (i.e., with more than two conditions) or the use of alternative statistical setups (such as likelihood ratio tests for comparisons across time). Crucially, they do not account for the widespread batch effects in next-generation sequencing datasets. Although stand-alone tools for batch correction of sequencing data exist (Leek et al., 2010), differential expression tools require raw read counts to accurately model sample-to-sample variation (Anders et al., 2013; also see Table 1).
This article outlines detection of DTEGs by introducing an interaction term into the statistical model of DESeq2, an approach that we refer to as ΔTE. We show that the fold change of the interaction term is equivalent to the change in TE, which detects DTEGs more accurately than all existing methods. When combining RNA-seq and Ribo-seq from two conditions, the interaction term can be used to model condition (untreated/treated) and sequencing methodology (Ribo-seq/RNA-seq). This allows the identification of significant differences between conditions that are discordant between sequencing methodologies. In order to do this, we design our generalized linear model with three components: the condition (c), the sequencing type (s), and an interaction term containing both (c:s); refer to the Commentary for details. The result is a ΔTE fold change and an associated false discovery rate (FDR) for significant changes of this fold change, which quantify the extent of translational regulation between conditions.
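The equivalence between the interaction coefficient and the change in TE can be checked with a small numeric sketch (toy counts for a single gene; the saturated log-linear model is solved exactly here, whereas DESeq2 fits a negative binomial GLM across replicates):

```python
# Toy counts for one gene, keyed by (condition c, seqtype s); s = 1 is Ribo-seq.
import math

counts = {(0, 0): 100, (0, 1): 50,    # condition 1: TE = 50/100 = 0.5
          (1, 0): 100, (1, 1): 200}   # condition 2: TE = 200/100 = 2.0

# Solve log2(count) = b0 + b1*c + b2*s + b3*c*s exactly (four cells, four betas).
b0 = math.log2(counts[(0, 0)])
b1 = math.log2(counts[(1, 0)]) - b0
b2 = math.log2(counts[(0, 1)]) - b0
b3 = math.log2(counts[(1, 1)]) - b0 - b1 - b2

# Direct log2 change in TE between the two conditions.
delta_log_te = (math.log2(counts[(1, 1)] / counts[(1, 0)])
                - math.log2(counts[(0, 1)] / counts[(0, 0)]))

print(b3, delta_log_te)  # both 2.0: the interaction term is the log change in TE
```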
The protocols require the installation of R and basic familiarity with R or a Unix-like environment. The workflow in the Basic Protocol includes a script, DTEG.R, which can be run in one step. This script implements two processes: (a) detection of DTEGs and (b) classification of genes into regulatory classes. An Alternate Protocol is included that carries out the same functions step-by-step in R, allowing flexibility in the case of complex experimental designs. Lastly, a Support Protocol is provided that outlines the workflow of obtaining count matrices from raw sequencing files, including a quality check of the data.
STRATEGIC PLANNING
Ribo-seq can be carried out as described in the Current Protocols article by Ingolia, Brar, Rouskin, McGeachy, & Weissman (2013). As with RNA-seq analysis, careful experimental design is crucial. At least three biological replicates per condition or group are recommended for robust analysis of differential transcription, translation, and translational efficiency. Sample processing and library preparation should be carried out together for the different conditions, and samples should be sequenced on the same lane of a sequencing machine, or in a randomized order across lanes, to avoid batch effects. It is not possible to account for batch effects that are completely confounded by any other covariate. For instance, if all control samples were prepared in one batch and the treatment samples in another, it would not be possible to distinguish differences due to treatment from differences arising from the separate preparation batches. Thus, it is recommended to prepare control and treatment samples together. Alternatively, with large sample sizes, it is important to split the samples in such a way that the conditions are randomized. Samples should be sequenced to sufficient depth in both RNA-seq and Ribo-seq. Despite the presence of an experimental step to remove ribosomal RNA (rRNA) fragments from the input RNA, sequenced Ribo-seq reads still include a fraction of rRNA sequences, which should be discarded before TE analysis. Thus, it is recommended to sequence at least 20 million reads per sample. Single-end 50-bp read sequencing is sufficient, since ribosome footprints are expected to be 29 bp in length. After sequencing and processing the data, the fastq and alignment files should be checked for several quality measures, as described in the Support Protocol.
ONE-STEP DETECTION AND CLASSIFICATION OF DIFFERENTIAL TRANSLATION EFFICIENCY GENES (DTEG) USING DTEG.R
The RNA-seq and Ribo-seq data should first be processed as described in the Support Protocol in order to determine translationally regulated genes. In the following steps, we quantify the change in TE of each gene, calculate an FDR value for this change, and categorize genes into regulation classes using the ΔTE approach. A DTEG is determined based on a significant change in TE (FDR < 0.05). This protocol describes a wrapper script, DTEG.R, to detect and classify DTEGs. It also includes a script to visualize the transcriptional, translational, and TE changes for a gene of interest. Alternatively, the protocol can also be carried out step-by-step in R, allowing flexibility for complex experimental designs (see Alternate Protocol).
Materials

Hardware
Computer running Unix, Linux, or Mac OS X
Administrative privileges and an internet connection to install packages

Software
DTEG.R and goi_viz.R scripts, which can be downloaded from our github page by typing the following command in the terminal window:
$ git clone https://github.com/SGDDNB/translational_regulation.git
R: https://cran.r-project.org/bin/windows/base/
These files contain raw read counts obtained from read-counting tools and should not be normalized or batch corrected. Each row represents a gene and each column represents a sample.
Visualizing changes in mRNA counts, RPFs, and TE for a gene of interest

4. Run goi_viz.R.

This step includes a one-step script to visualize the fold changes across the conditions given in the study for a gene of interest, as shown in Figure 2D-G.
STEP-WISE DETECTION AND CLASSIFICATION OF DIFFERENTIAL TRANSLATION EFFICIENCY GENES USING R
This protocol performs the same task as the Basic Protocol, but step-wise in R, describing each step and allowing users flexibility for complex experimental designs.
Current Protocols in Molecular Biology

Input files
ribo_counts.txt: RPF count matrix including genes as rows and samples as columns
rna_counts.txt: mRNA count matrix including genes as rows and samples as columns
sample_info.txt: sample-wise information on the sequencing methodology used, condition, and batch

1. Prepare input files as described in steps 1 and 2 of the Basic Protocol.
Additionally, using this protocol, the sample information file can have more columns for other covariates that can be included in the model design, as described in step 3.
2. Open RStudio and load the count matrices and the sample information file.

These commands assume that all required files are within your working directory. If they are not, provide the full path to each input file in the read.delim command.
3. Create DESeq2 object for the combined dataset of Ribo-seq and RNA-seq counts.
The interaction term should be included in the linear model design, i.e., ~Condition + SeqType + Condition:SeqType. The data can be tested for batch effects using principal component analysis (PCA). Similarly, name="Sequencing_Ribo_vs_RNA" quantifies the difference between Ribo-seq counts and RNA-seq counts using the reference level as condition 1. These can also be supplied using the contrast parameter instead of the name parameter, as follows: contrast=c("Condition","2","1") and contrast=c("SeqType","RIBO","RNA"), respectively. For the interaction term fold change we use name="Condition2.SeqTypeRIBO". This quantifies the change in TE in condition 2 versus the baseline condition 1. Refer to the Commentary for the mathematical proof that the interaction coefficient is equivalent to ΔTE.
Detecting differential translation efficiency genes

6. Store the list of DTEGs in a file.

7. Run DESeq2 for mRNA counts in order to obtain DTGs.

These data may also be tested for batch effects using PCA; if any batch effects are identified, they should be included in the sample_info.txt file and in the design as ~Condition + Batch.
Categorizing genes into different regulation groups

8. Run DESeq2 for RPFs (Ribo-seq counts).

In order to classify genes into different regulation classes, quantification of the change in the RPFs is required. As with the mRNA counts, these data should also be tested for batch effects, and, if any are identified, the batches should be included in the file sample_info.txt and the model design.
9. Obtain genes for each regulation class described in Figure 1D, E.

In order to further categorize these genes into intensified and buffered genes, the direction of the transcriptional change (ΔRNA) and the translational efficiency change (ΔTE) are compared.
This step carries out the same function as step 4 of the Basic Protocol. It requires a gene id for your gene of interest, which can be obtained from https://www.ensembl.org/index.html, or can be based on the genome annotation file used to obtain count matrices with the Support Protocol. The input id should be a row name in the count matrix file.
WORKFLOW FROM RAW DATA TO READ COUNTS
The raw sequencing data should be processed prior to the Basic Protocol or Alternate Protocol, as shown below. It is also strongly recommended to carry out a quality check of the raw and processed data, as described in the following steps.
Materials

Hardware
Computer running Unix, Linux or Mac OS X
abundant.fa: List of abundant sequences (rRNA, transfer RNA (tRNA), and mitochondrial RNA (mtRNA)) in fasta format organism.fa: Genome sequence in fasta format for the organism used in the study organism.gtf: Genome-wide transcript annotations in gene transfer format (GTF) for the organism used in the study
2. Remove reads mapping to abundant sequences.
This step first prepares a bowtie2 index for the known abundant sequences: rRNA, tRNA, and mtRNA. These sequences are considered contaminants of Ribo-seq data, since we want to capture only RPFs. Therefore, reads mapping to these contaminant sequences are removed prior to further analysis:
$ bowtie2-build abundant.fa index
Where: abundant.fa is the list of abundant sequences (rRNA, tRNA, and mtRNA) in fasta format; index is the prefix for the bowtie index output files. $ bowtie2 -L 20 -x index --un-gz outfile -U infile -S samfile Where: infile is the trimmed sequencing fastq.gz file, which was the outfile obtained in step 1; outfile is the output filename for unmapped reads in fastq.gz format; samfile is the output filename for mapped reads in SAM format; index is the prefix used for the bowtie index.
The arguments are based on Bowtie2 (V2.2.9), and other parameters can be explored as described in the manual. This function builds the index for abundant sequences, aligns the reads to the same, and saves a fastq.gz file, retaining only the unmapped reads. This output fastq.gz file comprises a cleaned set of reads that do not map to the abundant sequences and represent the RPFs. The reads in this file are further mapped to the genome in the next step.
3. Align reads to the genome file using the transcriptome index.
Before aligning the reads, it is required to generate a transcriptome index for the organism of interest. The required input files, the genome fasta and annotation files, can be downloaded from the Ensembl database at https://asia.ensembl.org/info/data/ftp/index.html. These files should be for the same organism and the same genome build. Run the following commands to generate the index, followed by alignment of the reads to the same: $ STAR --runMode genomeGenerate --genomeDir genomeDir --genomeFastaFiles organism.fa --sjdbGTFfile organism.gtf Where: organism.fa is the genome sequence in fasta format; organism.gtf is the genome-wide transcript information; genomeDir is the directory name for the output STAR index files. $ STAR --runThreadN 16 --alignSJDBoverhangMin 1 --alignSJoverhangMin 51 --outFilterMismatchNmax 2 --alignEndsType EndToEnd --genomeDir star2.5.2b_genome_index --readFilesIn infile --readFilesCommand gunzip -c --outFileNamePrefix outPrefix --quantMode GeneCounts --outSAMtype BAM SortedByCoordinate --limitBAMsortRAM 31532137230 --outSAMattributes All Where: genomeDir is the directory name for the STAR index files generated in the previous step; infile is the cleaned fastq.gz file, which was the outfile in step 2; outPrefix is the prefix for the output filenames.
The arguments are based on STAR version 2.5, and other parameters can be explored as described in the manual. This function builds a STAR index for a given fasta and GTF, aligns the reads to the same, and saves an alignment file in the BAM format.
Where:
organism.fa is the genome sequence in fasta format; organism.gtf is the genome-wide transcript information; outfile is the output file name for the count matrix; infile_path is the path to the directory containing all bam files obtained in step 3.
Figure 3
Quality check of Ribo-seq data using Ribo-TISH. The tool RiboTISH provides several visualizations to investigate the data quality of Ribo-seq. First, it includes the length distribution for the Ribo-seq reads as a histogram. As the length of ribosome-protected mRNA fragment (RPF) is expected to be around 29 base pairs, the length distribution of the sequenced reads is used as a quality measure. Second, the 3-nucleotide periodicity of the RPFs mapped on all known protein-coding genes is shown for each read length. As shown, in these data, we have a high (93%) percentage of reads in Frame 1 with the predominant read length (29 bp). This is shown using a histogram of read coverage in the three frames, a barplot of the number of RPFs in each position around the START codon and STOP codon, and lastly a density plot for read coverage on the coding sequence across all genes.
The first step creates an index for the alignment file (.bam) generated in step 3. The user should replace [bam_file_prefix] with the outfile prefix specified in step 3 for alignment files. The second step evaluates the quality of the alignment file. This step saves a .pdf that shows the read-length distribution and periodicity of the Ribo-seq data, as shown in Figure 3.
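The 3-nucleotide periodicity measure described in the Figure 3 caption can be sketched as follows (illustrative Python, not Ribo-TISH itself; the read offsets below are invented, and in practice they would be the 5' end positions of aligned RPFs relative to the CDS start):

```python
# Sketch of the 3-nt periodicity check: 5' ends of RPFs relative to the CDS
# start should pile up in a single reading frame in good-quality Ribo-seq data.
from collections import Counter

def frame_fractions(offsets):
    """Fraction of reads whose 5' end falls in frame 0, 1, or 2 of the CDS."""
    frames = Counter(o % 3 for o in offsets)
    total = sum(frames.values())
    return [frames.get(f, 0) / total for f in range(3)]

# Invented offsets: 9 of 10 reads in frame 0.
offsets = [12, 15, 18, 21, 24, 13, 27, 30, 33, 36]
fracs = frame_fractions(offsets)
print(fracs)  # [0.9, 0.1, 0.0]
```

A high fraction of reads in one frame (93% in the data shown in Figure 3) indicates that the library captures true ribosome footprints.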
COMMENTARY

Background Information
Several methods have been developed for read alignment and read counting since the advent of RNA-seq (see Current Protocols article Ji & Sadreyev, 2018). In the Support Protocol, we use STAR, bowtie2, and featureCounts for both Ribo-seq and RNA-seq datasets. These tools can be chosen based on user preferences. Due to the slightly different nature of Ribo-seq reads, it is important to modify parameters accordingly. For instance, since RPFs are expected to be around 29 bp, soft clipping of reads can be quite detrimental to alignment pipelines and is not recommended. Furthermore, RNA-seq pipelines typically allow six to eight mismatches, which can be quite large for a 29-bp read. We recommend allowing one to two mismatches for a robust downstream analysis.
In this protocol, we describe an interaction term-based ΔTE analysis using DESeq2, but a similar model can also be incorporated in other generalized linear model-based differential expression tools such as edgeR. Previously, several publications have used DESeq2 to identify DTEGs, but in a suboptimal manner. For instance, these tools are used to calculate ΔRPF and ΔRNA, following which changes in TE are calculated using the ratio ΔRPF/ΔRNA. Translationally regulated genes are then identified using |z-score| > 1.5 (Xu et al., 2017). This approach is referred to as the Ratio method in the benchmarking analyses. Another approach used previously also involves quantification of ΔRPF and ΔRNA using DESeq2. However, in this case, the translationally regulated genes are defined as genes with significant changes in either RPF or mRNA levels, but not both (Schafer et al., 2015). This approach falsely calls genes as translationally exclusive or buffered in cases where counts have a large variance across samples or are very low in either sequencing methodology. It is unable to differentiate between a case where a gene is translationally regulated and a case where a gene has low counts or high variation in one of the sequencing methodologies. This is referred to as the Overlap method in the benchmarking analyses.
In order to benchmark the performance of our approach, we use three independent simulation datasets: two derived from previous publications (Oertlin et al., 2019; Xiao et al., 2016) and a third that was newly generated to evaluate the performance of the tools in the presence of a batch effect. Despite DESeq2 being a key component of many existing approaches, it was either not included or not used correctly in previous benchmarks. Figure 4A-C shows accuracy curves for detection of DTEGs in each of these benchmarking datasets across typically used FDR thresholds. A full receiver operating characteristic (ROC) and area under the curve (AUC) analysis can be found in the associated web resource. Our benchmarking shows that ΔTE has superior accuracy in comparison to existing methods, especially in the presence of a batch effect. The only method that performs at a similar level to ΔTE is RiboDiff, in the case of the data from Oertlin et al. (2019) (Fig. 4A). However, in the presence of a batch effect or based on the data from Xiao et al. (2016), ΔTE is superior.
To further verify that this effect is not confined to simulated data, we analyzed RNA-seq and Ribo-seq data derived from our recent study on cardiac fibrosis (Chothani et al., 2019). This experiment contained cardiac fibroblasts from four different individuals and, as a result, has a pronounced patient-related batch effect accounting for roughly 25% of the variance within the data. While it is not possible to quantify the accuracy on these real data, the results are consistent with the benchmark. For instance, the overlap and ratio methods predict the highest number of DTEGs, but were shown to have high false-positive rates in the benchmarking. Conversely, the other existing tools, which detect very few genes, consistently showed the worst accuracy in the benchmark containing batch effects.
Taken together, the three benchmark studies and the real data analysis strongly suggest that the ΔTE method is the most suitable for any integrative analysis of Ribo-seq and RNA-seq data, being both accurate and robust regardless of the data being analyzed.
Critical Parameters and Troubleshooting
Experimental design is one of the most important factors for efficient detection of DTEGs. In the best case, designs should avoid batch effects. Unavoidable batch effects should not be completely confounded with the groups of interest, as this would lead to a non-full-rank design in DESeq2, which makes correction of the batch effect impossible within the model. It is recommended to evaluate samples for batch effects or outliers using PCA prior to analysis. Batch effects can be checked by visualizing PC1 and PC2, which account for most of the variance; the remaining PCs can also be explored to identify minor batch effects.
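This PCA check can be sketched outside of R as well (a minimal Python/NumPy sketch with invented counts; in the protocol itself this would typically be done in R, e.g., with DESeq2's plotPCA on transformed counts):

```python
# Sketch of a PCA check for batch effects on log-scaled counts. Toy data:
# two batches of samples differ by a constant offset, which dominates PC1.
import numpy as np

rng = np.random.default_rng(0)
genes, samples = 200, 6
logc = rng.normal(8.0, 1.0, size=(samples, genes))
logc[3:] += 2.0                      # samples 4-6 come from a second "batch"

centered = logc - logc.mean(axis=0)  # center each gene before PCA
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]               # sample coordinates on PC1

# The two batches separate cleanly on PC1, so the batch covariate should be
# added to the sample information file and the model design.
print(sorted(pc1[:3]), sorted(pc1[3:]))
```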
Installation of tools can be quite cumbersome due to different platforms and versions. Apart from the standard installation procedures provided in the protocols, the required tools can also be installed using the Anaconda software package (https://docs.anaconda.com/anaconda/install/). For instance, some of the tools used in the Support Protocol can be installed with the following commands: conda install -c bioconda trimmomatic conda install -c bioconda bowtie2 conda install -c bioconda subread conda install -c bioconda star
Statistical Analysis
DESeq2 utilizes the Wald test for differential expression analysis in pair-wise data (i.e., two conditions). If the experimental design includes a time series, each time point can be compared pair-wise using the Wald test. Alternatively, the likelihood ratio test within DESeq2 can be used, which is more suitable to identify differences across a time series.
Mathematical proof: Interaction term coefficient is equivalent to the changes in translation efficiency
The interaction term in a generalized linear model provides a coefficient that models the non-additive effects of two variables. The design described in the protocol corresponds to the following linear equation (Equation 1):

log(counts_{c,s}) = β0 + β1·c + β2·s + β3·(c × s)

Equation 1
where c = condition and s = sequencing methodology. When this is used to model changes in gene expression between conditions, it is possible to disentangle the transcriptional and translational contributions. For example, in an experimental setup with Ribo-seq (s = 1) and RNA-seq (s = 0) carried out over two conditions (c = 0 or 1), the gene-wise transcriptional and translational changes are calculated as follows.
First, the coefficients contributing towards the mRNA levels (s = 0) are identified for each condition (c = 0 or 1) separately. We then compute the difference of the identified coefficients to obtain the change in transcription.

mRNA levels for condition c = 0: log(counts_{c=0,s=0}) = β0
mRNA levels for condition c = 1: log(counts_{c=1,s=0}) = β0 + β1
Change in mRNA levels between the two conditions: (β0 + β1) − β0 = β1

Next, the coefficients contributing towards the RPFs (s = 1) are identified for each condition using Equation 1.

RPFs for condition c = 0: log(counts_{c=0,s=1}) = β0 + β2
RPFs for condition c = 1: log(counts_{c=1,s=1}) = β0 + β1 + β2 + β3
Change in RPFs between the two conditions: (β0 + β1 + β2 + β3) − (β0 + β2) = β1 + β3

In order to obtain the translational changes that are independent of transcriptional changes, we subtract the change in mRNA from the change in RPFs. This is equivalent to the interaction term coefficient β3:

Change in RPFs − change in mRNA levels = (β1 + β3) − β1 = β3

Thus, β3, the interaction term coefficient, is equal to translational changes that are independent of transcriptional changes. Importantly, it is also possible to show that this interaction term coefficient is equivalent to the fold change in TE: since TE (translation efficiency) is defined as the ratio of mean normalized Ribo-seq counts (RPFs) over RNA-seq counts, log(TE) = log(RPFs) − log(mRNA counts), and the change in log(TE) between conditions is therefore the change in RPFs minus the change in mRNA levels, which equals β3.

Figure 4: Benchmarking of published tools to detect differential translation efficiency genes (DTEGs). Simulation datasets (A) derived from Oertlin et al. (2019), (B) derived from Xiao et al. (2016), and (C) generated using the Polyester package to introduce batch effects were used. All three simulations show that TE outperforms all other published methods. Comparisons are made using all the DTEGs as the true set. Since Anota2Seq has two different functions for obtaining exclusive and buffered genes, the results are combined prior to comparison. Riborex is omitted in simulated datasets without batch effects (A, B), since it is equivalent to the TE approach in these cases. The ratio method is based on quantifying the ratio of DESeq2 fold changes for mRNA counts and RPFs. The overlap method identifies DTEGs as genes which have either significantly changing mRNA counts or RPFs but not both. (D) Analysis on published data showed the inability of previous tools to reliably identify DTEGs.
As a result, the fold change (and associated adjusted p-value) obtained using the interaction term coefficient β3 describes, for each gene, the change in TE. Genes with a significant adjusted p-value for TE are considered as DTEGs. Since this is a linear model, the design can also be extended to facilitate more complex experimental designs, such as batch effects or other covariates, making it a powerful tool for identifying DTEGs.
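The algebra above can be checked numerically. The following sketch (toy, hypothetical log-counts for a single gene; numpy only) solves the four-sample design for the coefficients and confirms that β3 recovers the change in log TE:

```python
import numpy as np

# Design matrix rows: [intercept, condition c, assay s, interaction c*s]
# Samples ordered: (c=0,s=0), (c=1,s=0), (c=0,s=1), (c=1,s=1)
X = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 1, 1, 1]], dtype=float)

# Hypothetical mean log-counts for one gene (RNA-seq: s=0, Ribo-seq: s=1)
y = np.array([2.0, 3.0, 2.5, 4.5])

beta = np.linalg.solve(X, y)  # exact solve: 4 equations, 4 unknowns

delta_rna = y[1] - y[0]            # change in mRNA levels = beta1
delta_rpf = y[3] - y[2]            # change in RPFs = beta1 + beta3
delta_te = delta_rpf - delta_rna   # change in log(TE)

print(beta)      # approximately [2.0, 1.0, 0.5, 1.0]
print(delta_te)  # equals beta[3]
```

In practice DESeq2 estimates these coefficients from all genes jointly with shrinkage and dispersion modeling, but the interpretation of the interaction coefficient is exactly this.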
In order to demonstrate the usage and output of DTEG.R, we utilized Ribo-seq and RNA-seq count data from our recent study (Chothani et al., 2019) on primary human fibroblasts stimulated with TGFB1. We obtained a subset of this dataset comprising four patients and two conditions (unstimulated, stimulated). The results directory generated after following the Basic Protocol on this dataset is also saved in the GitHub repository.
The subdirectory fold_changes/ contains three files, namely: deltaRibo.txt, deltaRNA.txt, and deltaTE.txt. These files store gene-wise expression changes across the given conditions in RPF, RNA, and TE, respectively. The results are obtained using DESeq2 and are saved in its standard output format. The two important columns, gene-wise log fold changes and the associated adjusted p-values, are used to determine gene expression changes between the two conditions. Generally, p adj < 0.05 is used as a threshold for determining genes that are changing significantly. A threshold on the absolute log fold change can also be applied to select only large effect sizes. The genes obtained using these thresholds are considered as significantly changing across the given condition or treatment. Genes passing these thresholds in deltaRNA.txt are those with a significant change in RNA and are considered DTGs, and genes passing these thresholds in deltaTE.txt are considered DTEGs.
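A minimal sketch of this thresholding, assuming the standard DESeq2 output columns log2FoldChange and padj (the gene IDs and values below are illustrative stand-ins for the file contents):

```python
import pandas as pd

# Illustrative stand-in for deltaTE.txt loaded as a DataFrame;
# in practice: pd.read_csv("deltaTE.txt", sep="\t", index_col=0)
res = pd.DataFrame(
    {"log2FoldChange": [2.1, -0.2, -1.5, 0.9],
     "padj": [0.001, 0.80, 0.01, 0.20]},
    index=["geneA", "geneB", "geneC", "geneD"],
)

# Significance threshold, plus an optional effect-size cutoff
sig = res[(res["padj"] < 0.05) & (res["log2FoldChange"].abs() > 1)]
print(sig.index.tolist())  # genes passing both thresholds
```

Applying the same filter to deltaRNA.txt would yield the DTG list; applying it to deltaTE.txt yields the DTEGs.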
Furthermore, the combination of changes in RPF, RNA, and TE is used to determine a gene's regulatory class, as shown in Figure 1D. A subdirectory, gene_lists/, contains files that list genes from each regulatory class. These include genes that have been identified as either DTG or DTEG and then further classified as translationally forwarded, buffered, exclusive, or intensified (see details in Table 2). Genes that are classified as forwarded are transcriptionally driven and exhibit no change in TE. On the contrary, translationally exclusive genes exhibit changes in TE but no change in transcription, which implies that these genes are regulated only translationally. Buffered and intensified genes have changes in TE as well as changes in RNA. If the changes in RNA counteract the change in TE, we consider them translationally buffered, while if the RNA changes act in the same direction as the changes in TE, we consider them intensified. In each case, these genes are under both transcriptional and translational regulation.
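This classification logic can be sketched as a small decision function (a simplified illustration; the significance flags and fold changes would come from the thresholded deltaRNA.txt and deltaTE.txt results, and the example calls use made-up values):

```python
def classify(rna_sig, te_sig, rna_lfc, te_lfc):
    """Assign a regulatory class from RNA/TE significance and fold changes."""
    if rna_sig and not te_sig:
        return "forwarded"        # transcriptionally driven, no TE change
    if te_sig and not rna_sig:
        return "exclusive"        # translational regulation only
    if rna_sig and te_sig:
        # same direction -> intensified; opposing directions -> buffered
        return "intensified" if rna_lfc * te_lfc > 0 else "buffered"
    return "not regulated"

print(classify(True, False, 1.2, 0.1))   # forwarded
print(classify(False, True, 0.0, -1.1))  # exclusive
print(classify(True, True, 1.0, 1.5))    # intensified
print(classify(True, True, 1.0, -1.5))   # buffered
```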
Beyond what is described in these protocols, to understand the potential functions of the different gene regulatory classes, a gene set enrichment analysis (GSEA) or gene ontology (GO) overrepresentation analysis is recommended. Furthermore, hierarchical clustering of the gene-wise fold changes can also be performed to identify subgroups of genes that have a similar regulatory profile.
The PCA is conducted for both the Ribo-seq and RNA-seq count data. A PCA transforms the data in such a way that each component captures a different source of variation within the data, with the first component (PC1) capturing the largest source of variance. Thus, a PCA can be used to detect any batch effect that is a source of variation in the data. In the example data, PC1 accounts for 42% of the variance in the Ribo-seq data and 46% of the variance in the RNA-seq data. Importantly, PC1 separates the individual patients in both datasets, indicating that the largest variance in these data is due to the differences between patients in the study. Since these datasets were generated to study the changes between conditions (unstimulated/stimulated), it is important to remove this patient effect (Fig. 2A, B). Therefore, in this case, the DTEG.R script should be run with the batch effect parameter (Argument 4) set to 1.
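As a rough illustration of such a check (synthetic matrix and numpy only; in practice one would typically use DESeq2's plotPCA on variance-stabilized counts), a PCA via SVD shows how a patient batch effect dominates the leading components:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic log-expression: 100 genes x 8 samples (4 patients x 2 conditions),
# with a strong per-gene, per-patient offset mimicking a patient batch effect
patients = np.repeat(np.arange(4), 2)  # sample -> patient mapping
data = rng.normal(size=(100, 8)) + 3.0 * rng.normal(size=(100, 4))[:, patients]

# Center each gene across samples, then PCA via SVD
centered = data - data.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
var_frac = S**2 / np.sum(S**2)
print(var_frac[:3])  # leading components carry most of the (patient) variance
```

Here the first few components absorb the patient effect; in a real analysis, seeing samples cluster by patient rather than by condition along PC1 is the cue to include the batch term in the design.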
The .pdf file also includes a visualization of the global fold changes, as shown in Figure 2C. This is drawn using a scatter plot of the fold changes in RNA and RPFs. The plot also highlights whether the gene is a DTG and/or a DTEG. This plot gives an overview of the overall impact of translational regulation in the system. As such, it can be used to determine the dominant mode of regulation in the dataset and visualize the overall effect sizes of the different regulation types. For instance, if there were very few DTEGs and many DTGs found, it would imply that there is very little translational regulation in the system, and most of the changes occur via transcriptional regulation.
In order to look at individual examples, the file further visualizes the gene-wise fold changes for the genes with the strongest effect in each category (Fig. 2D-G). A line plot is used for visualizing the changes from unstimulated to stimulated in this study. These line plots can be generated for any gene of interest using step 4 in the Basic Protocol or step 11 in the Alternate Protocol.
Time Considerations
The protocol takes a couple of minutes on a standard computer for the example dataset, which includes four samples and two conditions. This could vary based on the number of samples and conditions to be tested.
Superficial dosimetry imaging based on Čerenkov emission for external beam radiotherapy with megavoltage x-ray beam
Purpose: Čerenkov radiation emission occurs in all tissue when charged particles (either primary or secondary) travel at a velocity above the threshold for the Čerenkov effect (about 220 keV in tissue for electrons). This study presents the first examination of optical Čerenkov emission as a surrogate for the absorbed superficial dose for MV x-ray beams. Methods: In this study, Monte Carlo simulations of flat and curved surfaces were used to analyze the energy spectra of charged particles produced in different regions near the surfaces when irradiated by MV x-ray beams. Čerenkov emission intensity and radiation dose were directly simulated in voxelized flat and cylindrical phantoms. The sampling region of superficial dosimetry based on Čerenkov radiation was simulated in layered skin models. Angular distributions of optical emission from the surfaces were investigated. Tissue-mimicking phantoms with flat and curved surfaces were imaged with a time domain gating system. The beam field sizes (50 × 50–200 × 200 mm²), incident angles (0°–70°), and imaging regions were all varied. Results: The entrance and exit regions of the tissue have nearly homogeneous energy spectra across the beam, such that their Čerenkov emission is proportional to dose. Directly simulated local intensity of Čerenkov emission and radiation dose in voxelized flat and cylindrical phantoms further validate that this signal is proportional to radiation dose, with an absolute average discrepancy within 2% and the largest discrepancy within 5%, typically at the beam edges. The effective sampling depth could be tuned from near 0 up to 6 mm by spectral filtering. The angular profiles are near the theoretical Lambertian emission distribution for a perfect diffusive medium, suggesting that angular correction of Čerenkov images may not be required even for curved surfaces. The acquisition speed and signal to noise ratio of the time domain gating system were investigated for different acquisition procedures, and the results show there is good potential for real-time superficial dose monitoring. Dose imaging under normal ambient room lighting was validated using gated detection and a breast phantom. Conclusions: This study indicates that Čerenkov emission imaging might provide a valuable way to perform superficial dosimetry imaging in real time for external beam radiotherapy with megavoltage x-ray beams. © 2013 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License. [http://dx.doi.org/10.1118/1.4821543]
INTRODUCTION
In megavoltage (MV) external beam radiotherapy (EBRT), skin is either included in the intended treatment volume or may be a dose-limiting organ at risk, depending on the clinical plan. Skin dose measurements during radiotherapy would be a useful tool for treatment monitoring, skin reaction estimation, 1 and treatment plan design and modification. 2-7 However, superficial dose is generally deposited in the build-up region, being sensitive to many factors including beam energy, beam type, beam filter, 8,9 incident angle, 10-12 distance, 9,13 complex patient surface profiles, 7,14 internal heterogeneities, 15 patient movement, and deformation. These factors, especially irregular surface profiles, internal heterogeneities, and movement and deformation of the treatment region, decrease the accuracy of superficial dose prediction and may result in underdosing or overdosing in specified treatment plans. Several conventional dose measuring methods, such as radiochromic film, 16-20 ionization chamber, 21 MOSFETs, 22-24 and TLDs, 25-27 exist for superficial dose measurement; however, these techniques require clinical intervention to place detectors on the patient and additional personnel time for postprocessing. All are limited to small fixed-region measurements, and sensitivity is often a function of the angular orientation of the detector with respect to the incident beam. Film and TLDs have longer offline processing procedures, which prevent superficial dose monitoring in real time, and so are not done routinely. Thus, a simple method of superficial dose monitoring in real time, suitable for large fields of view, would be desirable. In this study, the feasibility of using Čerenkov emission is investigated for the first time for superficial dose imaging for megavoltage x-ray beams.
Čerenkov radiation emission occurs in a dielectric medium (such as water or biological tissue) when charged particles move with a phase speed greater than the speed of light in that medium. 28 The Čerenkov effect induces continuous wavelengths of optical emission from the ultraviolet down to the near-infrared. The Frank-Tamm formula shows that the spectral intensity emitted varies as the inverse square of the wavelength, and thus Čerenkov light is observed as highly weighted to the blue wavelength ranges. 29 Recently, Čerenkov radiation was measured from megavoltage external x-ray and electron beams during radiotherapy in both water and tissue, 30 showing potential applications such as oxygenation sensing and oxygen partial pressure tomography. 31-33 It has been shown that, above the threshold energy for Čerenkov radiation (approximately 220 keV in biological tissue), under the approximation of charged particle equilibrium, the dose deposited by megavoltage radiotherapy radiation and the number of Čerenkov photons released locally are directly proportional; therefore, beam profiling and superficial dosimetry imaging based on Čerenkov radiation are feasible. 34-36 Radiation dose is calculated by

D = ∫₀^Emax S(E) P(E) dE,

where D represents radiation dose, S(E) represents the mass stopping power (J m²/kg) of the medium, and P(E) represents the fluence spectrum of charged particles (m⁻²). Similarly, the local intensity of Čerenkov radiation can be calculated by

I = ∫_Ec^Emax C(E) P(E) dE,

where I represents the local intensity of Čerenkov radiation, Ec represents the threshold energy of Čerenkov radiation in the medium, C(E) represents the number of Čerenkov photons emitted per unit path length by a charged particle (such as an electron) with kinetic energy E, and P(E) represents the spectrum of charged particles. Typically, S(E) and C(E) have very different profiles.
However, as long as P(E) is spatially independent, i.e., the spectrum of charged particles is a constant distribution in the region of interest, radiation dose is directly proportional to the local intensity of Čerenkov radiation above the threshold energy. Reference data 37 show that the continuous slowing down approximation (CSDA) range of electrons with kinetic energy below the threshold energy in the medium (taking water as an example, Ec ≈ 0.263 MeV) is around 0.1 mm. Due to scattering of electrons, the absolute distance of travel below the threshold energy of Čerenkov radiation is actually smaller than the CSDA range. This means that, to a resolution of 0.1 mm, with the assumption that P(E) is spatially independent in the region of interest, the dose contributed by those charged particles below the threshold energy will be a constant offset of the dose contributed by charged particles above the threshold energy. As long as P(E) is spatially invariant (charged particle equilibrium) in the region of interest, to the resolution of the CSDA range of charged particles below the threshold energy, the local intensity of Čerenkov radiation will be directly proportional to radiation dose. This observation is the theoretical underpinning of why Čerenkov emission can be considered to be proportional to deposited dose in several situations.
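This proportionality argument can be illustrated numerically. In the toy sketch below (the shapes of S(E), C(E), and P(E) are made up for illustration, not physical data), scaling the charged-particle fluence P(E) without changing its shape scales D and I identically, so the ratio I/D stays constant:

```python
import numpy as np

E = np.linspace(0.263, 6.0, 500)   # MeV; above the ~0.263 MeV threshold in water
dE = E[1] - E[0]
S = 2.0 - 0.1 * E                                 # toy mass stopping power profile
C = 50.0 * np.clip(1.0 - 1.0 / E**2, 0.0, None)   # toy Cerenkov yield per unit path
P = np.exp(-E / 2.0)                              # toy charged-particle fluence

def dose(P):
    return np.sum(S * P) * dE       # D = integral of S(E) P(E) dE

def cerenkov(P):
    return np.sum(C * P) * dE       # I = integral of C(E) P(E) dE

r1 = cerenkov(P) / dose(P)
r2 = cerenkov(3.7 * P) / dose(3.7 * P)
print(np.isclose(r1, r2))  # if P(E) keeps its shape, I is proportional to D
```

If instead the shape of P(E) changed from point to point, r1 and r2 would differ, which is exactly why spatial homogeneity of the charged-particle spectra is checked in the simulations below.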
In this study, the spectra of charged particles [P(E)] were simulated at different regions (entrance and exit regions) across the whole beam near the surfaces while irradiating flat and curved phantoms with megavoltage x-ray beams, to validate the spatial homogeneity of P(E). Radiation beams were simulated in flat and curved phantoms, and radiation dose and local intensity of Čerenkov radiation near the surfaces were compared directly with each other, to demonstrate that the local intensity of Čerenkov radiation is proportional to superficial radiation dose. Sampling regions of superficial dosimetry based upon Čerenkov emission for x-ray beams were investigated by Monte Carlo simulations in layered skin models (flat and curved surfaces) with typical optical properties. Angular distributions of Čerenkov photons escaping the surfaces of flat and curved phantoms, which are potentially useful for intensity corrections due to the viewing angle and curvature of the surfaces, were simulated. Experimentally, while irradiating with megavoltage x-ray beams, the surfaces of flat and curved (breast-shaped) tissue-mimicking phantoms were imaged by a time domain gating system, 38 varying field sizes, incident angles, and imaging regions. The acquisition speed and signal to noise ratio (SNR) were investigated for different acquisition procedures. The ability to image with reasonable ambient light levels was validated with a gated intensified CCD camera. Taken together, this provides a comprehensive preclinical analysis of the feasibility of surface dose monitoring with optical imaging.
MATERIALS AND METHODS
This study utilized the GEANT4-based toolkit GAMOS for Monte Carlo modeling to stochastically simulate radiation transport, dose deposition, Čerenkov radiation emission, and transport. 39 The processes of radiation transport, dose deposition, generation of Čerenkov photons, and transport of optical photons have been explained in detail by previous studies. 31,40 Phase space files of 6 MV x-ray beams for the linear accelerator (LINAC) (Varian Clinic 2100CD) were generated in Ref. 41 and used in this study. The experiments were performed with a LINAC (Varian Clinic 2100CD, Varian Medical Systems, Palo Alto) at the Norris Cotton Cancer Center in the Dartmouth-Hitchcock Medical Center.
2.A.1. Flat surface
As shown in Fig. 1(a), detectors with a size of 5 × 5 × 5 mm³ were placed at positions in the central and edge regions on the surface of a flat phantom (water equivalent) of 100 mm thickness. Phase space files of 6 MV x-ray beams were used to irradiate the flat phantom at SSD = 1000 mm. Energies of charged particles in the detectors were logged. Entrance and exit planes were investigated for field sizes of 20 × 20, 40 × 40, 100 × 100, and 200 × 200 mm². For each field size, 100 × 10⁶ primary particles (x-ray photons from the phase space files) were launched for the simulation.
2.A.2. Curved (cylindrical) surface
As shown in Fig. 2(a), spherical detectors with a diameter of 3 mm were positioned along the central arc of a cylindrical phantom (water equivalent, diameter and height of 83 mm) every 15°. A phase space file of a 6 MV x-ray beam with a field size of 100 × 100 mm² was adopted to irradiate the cylindrical phantom centrally from the side at SSD = 1000 mm. One hundred million primary particles were launched and simulated, while energies of charged particles in these detectors were logged. Images of radiation dose and local intensity of Čerenkov radiation were compared directly for the corresponding field size.
2.B.2. Curved (cylindrical) surface
The cylindrical phantom defined in Sec. 2.A.2 was voxelized with a voxel size of 0.5 × 0.5 × 0.5 mm³. A 6 MV x-ray beam with a field size of 100 × 100 mm² was simulated to irradiate centrally from the side at SSD = 1000 mm. One hundred million primary particles were launched and simulated, and the radiation dose and local intensity of Čerenkov radiation were logged for each voxel. At the boundary, the number recorded by each voxel was weighted based on the fraction of the volume inside the cylinder. Six planes of voxels adjacent to the central transection, representing a thickness of 3 mm, were isolated and median filtered along the central axis of the cylinder to generate images of radiation dose and local intensity of Čerenkov radiation at the central transection. Images of the central transection were smoothed by bilateral filtering. From the images of the central transection, for a cylindrical layer along the arc of 3 mm thickness underneath the side surface [indicated by dashed lines in Figs. 2(c) and 2(d)], profiles of radiation dose and local intensity of Čerenkov radiation were compared.
2.C. Simulation: Sampling region in layered skin models
Thicknesses and optical properties of the layers of the skin have been reported in several papers, and here we used the well characterized model by Meglinski et al. 43 This layered skin model (flat phantom with a size of 1000 × 1000 × 100 mm³) was built in GAMOS with each layer having the corresponding thickness and optical properties (refractive index, absorption, and scattering coefficient) at the entrance and exit planes. Three types of skin [skin 1: lightly pigmented (∼1% melanin in epidermis), skin 2: moderately pigmented (∼12% melanin in epidermis), skin 3: darkly pigmented (∼30% melanin in epidermis)] were investigated. Pencil beams were generated by sampling the energy distribution of the phase space file (6 MV, 100 × 100 mm²). While irradiating (100 × 10⁶ primary particles) the surfaces of the phantom normally with the pencil beam, Čerenkov photons were generated and tracked through processes including Mie scattering, absorption, reflection, and refraction at the boundary. The generation of Čerenkov photons and transport of optical photons have been explained in detail by previous studies. 31,40 For any Čerenkov photon escaping the entrance surface of the phantom, the initial position and final energy were recorded. The depths of all the Čerenkov photons escaping the entrance surface were logged in a histogram and fitted by a single exponential decay. 44 The effective sampling depth (the depth at which the detection sensitivity drops to 1/e) was calculated. Sampling depth tuning based on spectral filtering can be discerned from the results of this simulation.
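The effective-depth extraction can be sketched as follows (synthetic escape depths drawn from an exponential for illustration; real depths would come from the logged Monte Carlo photon origins). Fitting log counts versus depth with a line gives the decay rate, and the 1/e sampling depth is its reciprocal:

```python
import numpy as np

rng = np.random.default_rng(1)
true_depth_mm = 2.0                                   # assumed 1/e depth for toy data
depths = rng.exponential(scale=true_depth_mm, size=200_000)

counts, edges = np.histogram(depths, bins=60, range=(0.0, 10.0))
centers = 0.5 * (edges[:-1] + edges[1:])

# Single-exponential fit: log(counts) = log(A) - depth / d_eff
mask = counts > 0
slope, intercept = np.polyfit(centers[mask], np.log(counts[mask]), 1)
d_eff = -1.0 / slope
print(round(d_eff, 2))  # close to 2.0 mm
```

Spectral filtering shifts this fitted depth because longer wavelengths survive deeper origins, which is the basis of the depth tuning discussed above.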
2.D.1. Flat homogeneous phantom
A slab (1000 × 1000 mm², varying thickness from 0.1 to 100 mm) of homogeneous water-equivalent phantom was defined in GAMOS [Fig. 5(a)]. Pencil beams were adopted to irradiate the slab phantom. The final directions of any Čerenkov photon escaping the surfaces (entrance and exit planes) were logged. Angular distributions with respect to the normal of the surfaces were calculated by histogramming the directions and compared with the Lambertian distribution for an ideal diffusive medium. Factors affecting the angular distribution, including incident angle (0°–85°), optical properties (1%–5% blood + 1%–3% intralipid), 31 refractive index (1.1–1.5), tissue thickness (0.1–100 mm), beam energy (sampled from the 6 MV x-ray phase space file and monoenergetic from 2 to 10 MV), and scattering model ((1 − α) × Rayleigh + α × Mie), were investigated. As indicated in bold in Table II, the default conditions are incident angle = 0°, optical properties = 1% blood + 1% intralipid, refractive index = 1.33, tissue thickness = 100 mm, beam energy = sampled from the 6 MV x-ray phase space file, and scattering model = 100% Mie scattering. While varying one of the conditions, the others were set to the default.
2.D.2. Flat surface of layered skin models
Similar to Sec. 2.C, a slab phantom of layered skin (1000 × 1000 × 100 mm³) was built in GAMOS [Fig. 5(a)]. Pencil beams sampled from the 6 MV x-ray phase space file were adopted to irradiate the slab phantom. The final directions of any Čerenkov photon escaping the surfaces (entrance and exit planes) were logged. Angular distributions with respect to the normal of the surfaces were calculated by histogramming the directions and compared with the Lambertian distribution for an ideal diffusive medium. The three types of skin mentioned before were investigated, with the incident angle varying from 0° to 70°.
2.D.3. Curved (cylindrical) surface of layered skin models
Similar to the setup described in Sec. 2.A.2, cylindrical phantoms of layered skin (41.5 mm radius, 41.5 mm height) were built in GAMOS, and detectors (3 mm width, every 5° along the central arc) were placed on the surfaces [Fig. 5(b)]. For any Čerenkov photon that escaped the surfaces and reached the detectors, the direction was logged. The angular distribution with respect to the normal of the detectors was calculated and compared with a Lambertian distribution to investigate how it is affected by the curvature of the surfaces.
2.E.1. Flat phantom
As shown in Fig. 6(a), the optical imaging system consisted of a time domain gating ICCD camera (PI-MAX3, Princeton Instruments) with a Canon EF (55–250 mm, f/4–5.6) lens. The LINAC works in pulsed mode, and each radiation burst lasts approximately 3 μs at a frequency near 180 Hz. By synchronizing the ICCD gate to the radiation burst, Čerenkov radiation emitted from the surfaces was imaged while the contribution of the signal from ambient light was significantly reduced. 38 A solid water-equivalent phantom (Plastic Water, CNMC) of 300 × 300 × 40 mm³ was irradiated by 6 MV x-ray beams (600 MU/min) at SSD = 1000 mm. The ICCD camera was mounted 2.5 m away and 1 m above the surface of the phantom, and was controlled remotely by a computer outside the radiotherapy room. Images were processed by background subtraction and median filtering over a stack of repeated frames, with each frame an accumulation of a certain number (50–1000) of radiation bursts, to remove stray radiation noise that results in saturated pixel values. 45 Affine transformation was implemented based on chosen points on the image to correct the perspective distortion. 44 Finally, each image was smoothed by bilateral filtering and self-normalized by its maximum to the range [0, 1]. Acquisition speed and signal to noise ratio for different acquisition procedures were investigated.
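The core of this processing chain can be sketched with numpy (synthetic frames stand in for the ICCD data; the affine correction and bilateral smoothing steps are omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, h, w = 10, 64, 64

# Synthetic stack: a smooth signal plus noise, with sparse saturated
# pixels mimicking stray-radiation hits on the ICCD
signal = np.full((h, w), 100.0)
stack = signal + rng.normal(0, 5, size=(n_frames, h, w))
hits = rng.random((n_frames, h, w)) < 0.01
stack[hits] = 4095.0                       # saturated pixel values
background = np.full((h, w), 20.0)

frame = np.median(stack, axis=0) - background   # median over stack rejects hits
frame = np.clip(frame, 0.0, None)
frame /= frame.max()                            # self-normalize to [0, 1]

roi = frame[27:37, 27:37]                       # central region for SNR
snr = roi.mean() / roi.std()                    # mean over standard deviation
print(snr > 5)
```

The median over the frame stack is what removes the isolated saturated pixels, since a hit rarely lands on the same pixel in a majority of frames.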
2.E.2. Breast shaped phantom
To simulate whole breast radiotherapy, an anthropomorphic phantom [Fig. 7(a)] was made of silicone and placed on a torso phantom in the correct clinical position while irradiating with a 120 × 80 mm², 6 MV x-ray beam at an incident angle of 80° (10° upward with respect to the horizontal plane). The ICCD camera was placed at the foot of the patient couch, at the same height as the breast phantom and about 3.25 m away. Three positions of the side surface (entrance, exit, and tangential) were imaged. Images were taken with ambient light on and off to validate that most of the ambient light can be rejected by the time domain gating technique.
3.A.1. Flat phantom
Figure 1 shows the results of the energy spectra of charged particles and validation of local Čerenkov emission as a surrogate of radiation dose for a 100 × 100 mm², 6 MV gamma beam. Figures 1(b) and 1(c) show the simulated energy spectra of charged particles at different regions [indicated in Fig. 1(a)] on the entrance and exit planes. The spatial homogeneity of these spectra suggests that local Čerenkov emission can be used as a surrogate of radiation dose within a small discrepancy. Comparing Fig. 1(b) with Fig. 1(c), the energy spectra of charged particles are more spatially homogeneous on the entrance plane than on the exit plane, suggesting that the discrepancy between local Čerenkov emission and radiation dose should be smaller on the entrance plane. To quantify the spatial homogeneity of the energy spectra of charged particles, the absolute average discrepancy (mean value of the absolute difference between two energy spectra, each self-normalized by its maximum to the range [0, 1]) of the energy spectra simulated at positions 1–9 [Fig. 1(a)] was calculated with respect to the spectrum simulated at position 1 (center of the beam field) and listed in Table I for field sizes of 20 × 20, 40 × 40, 100 × 100, and 200 × 200 mm², showing a trend of increasing with field size.
Figures 1(d) and 1(e) show the simulated images of superficial dose and local Čerenkov emission for the entrance plane. CP and IP profiles are shown in Fig. 1(f). Figure 1(g) shows the discrepancies between these profiles. Figures 1(h)–1(k) show the same results for the exit plane. The maximum and absolute average discrepancies between local Čerenkov emission and radiation dose were calculated and listed in Table I for CP and IP profiles on the entrance and exit planes for field sizes of 20 × 20, 40 × 40, 100 × 100, and 200 × 200 mm². With the largest discrepancy within 5% at the edges of the beam field and the absolute average discrepancy within 2%, the discrepancy shows a trend of increasing with field size. Agreeing with the results of the energy spectra of charged particles, the average discrepancy for the entrance plane is generally smaller than that of the exit plane.
3.A.2. Cylindrical phantom
As shown in Fig. 2(b), based on the similarity they share with each other, the energy spectra along the central arc on the surfaces of the cylindrical phantom can be divided into two groups [entrance plane (0°–90°) and exit plane (90°–180°)], suggesting that local Čerenkov emission could be taken as a surrogate of radiation dose for both the entrance and exit planes independently. The absolute average discrepancies of the self-normalized energy spectra of charged particles (with respect to the data measured at 0° for the entrance plane and 180° for the exit plane) are listed in Table I.
3.B. Sampling region in layered skin models
The sampling depth distribution of Čerenkov photons and the corresponding exponential fitting for the average emission depth of origin on the entrance plane are shown in Fig. 3(a) for the three types of increasing skin pigment. Figure 3(b) shows the spectra of Čerenkov emission on the entrance plane, with the predominant emissions in the red and infrared wavelengths, and increasing overall emission for decreasing skin pigment, as might be expected. Effective sampling depths for different wavelength ranges [400–900 nm (overall), 400–500, 500–600, 600–700, 700–800, and 800–900 nm] are listed in Fig. 3(c), illustrating that the sampling depth changes substantially with wavelength range. In fact, wavelength range changes affect the emission sensitivity depth by more than an order of magnitude, whereas skin pigment changes alter this value by less than a factor of 2. Similar results are shown in Fig. 4 for the exit plane. The build-up effect [Fig. 3(a)] is obvious for the entrance plane, which leads to a larger sampling depth than that of the exit plane.
3.C. Angular distributions ofČerenkov emission on the surfaces
Figure 5(c) shows the angular distribution of Čerenkov emission on the entrance surface for the three types of skin. All of the profiles look similar to the Lambertian distribution, while the discrepancy increases for increasing skin pigment. Figure 5(d) shows that the angular distribution is insensitive to incident angle, because of the high scattering of Čerenkov photons in the tissue. As shown in Fig. 5(f), if the layer of tissue is too thin (<1 mm), which means Čerenkov photons will not be scattered enough, the angular distribution has a large discrepancy (Table II) with respect to a Lambertian distribution. For a curved surface, Fig. 5(e) shows the angular distribution is close to a Lambertian distribution and not sensitive to the curvature. All the other conditions mentioned in Sec. 2.D were investigated for the entrance and exit planes, and the absolute average discrepancies with respect to a Lambertian distribution are listed in Table II. In most cases, the absolute average discrepancy varies around 5%, suggesting that a Lambertian distribution is usually a reasonable approximation for angular correction of Čerenkov images.
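The comparison against a Lambertian profile can be sketched as below (angles sampled from an ideal cosine-weighted distribution for illustration; real exit directions would come from the Monte Carlo logs). Both profiles are self-normalized before computing the absolute average discrepancy, mirroring the metric used above:

```python
import numpy as np

rng = np.random.default_rng(3)
# Cosine-weighted polar angles (ideal Lambertian emitter): theta = asin(sqrt(u))
theta = np.arcsin(np.sqrt(rng.random(500_000)))

counts, edges = np.histogram(theta, bins=18, range=(0.0, np.pi / 2))
centers = 0.5 * (edges[:-1] + edges[1:])

# Per-solid-angle intensity: divide out the sin(theta) band area, then normalize
intensity = counts / np.sin(centers)
intensity /= intensity.max()
lambertian = np.cos(centers) / np.cos(centers).max()

discrepancy = np.mean(np.abs(intensity - lambertian))
print(discrepancy < 0.05)  # small for an ideal diffusive emitter
```

For a real angular histogram, a discrepancy near the 5% level reported in Table II would justify using the analytic cos θ profile for angular correction.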
3.D.1. Flat phantom imaging
To image enough Čerenkov photons, each frame was acquired by accumulating the Čerenkov emission from many radiation bursts delivered by the LINAC. Figure 6(b) shows the acquisition speed of the time-domain gating system for one frame, with accumulations from 1 to 1000 radiation bursts. To increase the SNR of the Čerenkov image, several frames with the same accumulation were taken together as a stack and median filtered over it. After background subtraction, image transformation, and image smoothing with bilateral filtering, each image was 1024 × 1024 pixels. Irradiating with a 100 × 100 mm², 6 MV x-ray beam, different acquisition procedures were investigated. A square region (100 × 100 pixels) in the center of the images was chosen to calculate the SNR (mean pixel value over the standard deviation). As shown in Fig. 6(c), the SNR increases with the number of accumulations and the number of frames included in the median filtering. For example, from Fig. 6(b), an image of 50 accumulations takes about 0.21 s, and from Fig. 6(c), median filtering over 10 frames gives an SNR over 35, suggesting the possibility of real-time or semireal-time (depending on SNR) superficial dose monitoring. With the acquisition procedure set to 50 accumulations per frame and median filtering over the stack, Čerenkov images for different field sizes and incident angles (from 0° to 70° with a field size of 100 × 100 mm²) are shown in Figs. 6(d) and 6(e). Figure 7 shows the Čerenkov images of the breast-shaped phantom (described in Sec. 2.E.2) from different angles [entrance, tangential, and exit, as indicated in Fig. 7(a)] during EBRT. The acquisition procedure was set to 50 accumulations per frame with median filtering over a stack of 10 frames. Figures 7(c)-7(e) validated that Čerenkov emission can be imaged, and thus superficial dose estimated, for complex surface profiles during EBRT. As shown in Figs.
7(e) and 7(f), images of the exit plane measured with and without ambient light [Fig. 7(b)] were similar to each other, suggesting that imaging with a reasonable level of ambient light during EBRT is possible. A slight offset was observed between them, which is suitably smaller than the dynamic range and can be subtracted off as needed in postprocessing.
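The stack-median filtering and ROI-based SNR computation described above can be sketched as follows; the frame size, noise level, and ROI below are hypothetical stand-ins for the measured 1024 × 1024 frames and the 100 × 100 pixel central region.

```python
import numpy as np

def median_stack_snr(frames, roi):
    # Pixelwise temporal median over a stack of frames, then
    # SNR = mean / standard deviation over a central region of interest.
    img = np.median(np.asarray(frames, dtype=float), axis=0)
    r0, r1, c0, c1 = roi
    patch = img[r0:r1, c0:c1]
    return img, patch.mean() / patch.std()

rng = np.random.default_rng(0)
signal = 100.0  # hypothetical mean Cerenkov level
frames = [signal + rng.normal(0.0, 10.0, (64, 64)) for _ in range(10)]
roi = (16, 48, 16, 48)  # toy analogue of the central ROI

_, snr_1 = median_stack_snr(frames[:1], roi)   # single frame
_, snr_10 = median_stack_snr(frames, roi)      # median over 10 frames
```

As in Fig. 6(c), the SNR of the median over 10 frames is substantially higher than that of a single frame.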
DISCUSSION
Čerenkov radiation is intrinsically generated in tissue during irradiation. Unlike conventional superficial dose measurement techniques, superficial dosimetry imaging based on Čerenkov radiation does not require any detector to be placed on the patient or any clinical intervention during EBRT. Instead of a small-region measurement, this technique is able to image a large field of view or focus on a region of interest, providing global as well as detailed information about the superficial dose distribution. As shown in Figs. 6(b) and 6(c), the acquisition time of images with reasonably good quality and SNR (about 35) is approximately 2 s. Compared with the time scale of typical radiotherapy (about 10-20 s at a dose rate of 600 MU/min), real-time monitoring of patient movement, deformation, and the corresponding effects on superficial dose delivery is possible. Beyond correlating Čerenkov images to the superficial dose distribution, this technique could also be used for quality assurance of the radiotherapy beam for hot- or cold-spot detection.
Although Čerenkov imaging has shown clear advantages for superficial dose assessment, several important issues need to be clarified. First, the local intensity of Čerenkov radiation is proportional to radiation dose under the approximation that the energy spectra of charged particles are spatially independent. This approximation was validated with a maximum discrepancy within 5% and an average discrepancy within 2% (Fig. 1 and Table I) for a flat phantom with field sizes from 20 × 20 to 200 × 200 mm². For curved phantoms, as shown in Fig. 2 and Table I, the approximation holds for most of the entrance and exit planes and has the largest discrepancy (within 15%) near the tangential region, which means that Čerenkov images of entrance and exit regions should be interpreted independently. It is worth noting that the images could under- or overestimate radiation dose by several percent, especially near the edge of the beam field or in tangential regions. Calibrating for this issue requires detailed information about the energy spectra of charged particles in different regions, which is potentially possible but computationally intense. In practice, the easiest solution is to eliminate regions such as beam edges and surfaces tangential to the direction of the incident radiation beam from the image, or to interpret them with caution for superficial dose estimation.
Unlike conventional superficial dose measurement techniques, Čerenkov emission samples the superficial dose several millimeters (0-5 mm) beneath the surface, with the sampling region sensitive to the optical properties [Figs. 3(a) and 4(a)]. The detected Čerenkov intensity is also correlated with the optical properties [Figs. 3(b) and 4(b)]. One potential way to address this is to include noninvasive optical-property measurement techniques such as reflectance spectroscopy 46 to measure the optical properties of the skin accurately, and to use them for sampling-region simulation, sampling-depth tuning by spectral filtering [Figs. 3(c) and 4(c)], and calibration of Čerenkov intensity to absolute dose. This will be investigated in subsequent clinical studies focusing on whole-breast treatment, correlating the Čerenkov images to superficial dose and skin reactions for different skin types.
For complex surface profiles (breast or head-and-neck tumor treatment), the angular correction of Čerenkov images is important. In highly scattering media such as human tissue, optical photons are scattered sufficiently to lose their initial angular distributions. As summarized in Table II for flat and curved tissue-mimicking phantoms, the angular distribution of emission is close to the theoretical Lambertian distribution of a perfect scattering medium. A Lambertian distribution essentially simplifies the angular correction to a trivial monotonic function with a known analytic expression, which is especially important for complex curved surfaces, because the intensity changes due to surface curvature are exactly compensated by the corresponding changes in solid angle. The results in this study suggest that a Lambertian correction is a reasonably good normalization factor for the emission intensity. While complete angular correction remains a challenge, because the angular distribution is affected slightly by the surface profile combined with all the other potential factors, many of these issues can potentially be addressed by 3D surface capture techniques 47 and systems (e.g., AlignRT, Vision RT; Catalyst, C-Rad). In the following clinical trial of whole-breast radiotherapy, combining 3D surface profiles with the treatment plan of radiation delivery and measured optical properties of the patient's skin will be investigated. By coupling 3D surface profiles and optical-property measurements with Čerenkov images, factors such as surface curvature, incident angles, and optical properties can be determined for the treatment region and used to simulate the angular distribution, which can then be used for intensity corrections due to viewing angles and curvature. In practice, a reasonable level of ambient lighting is essential for the patient's comfort and for the radiation therapy technicians to do their job.
This ambient light could easily corrupt Čerenkov images acquired with standard cameras. To solve this problem, time-gating of the image acquisition was demonstrated in this study. By synchronizing the camera to the short radiation bursts (3 μs at a repetition frequency of about 180 Hz), ambient light, which is generally continuous in time, is reduced to less than 5% of the signal. As shown in Figs. 7(e) and 7(f), at the ambient light level shown in Fig. 7(b), the Čerenkov images are not significantly affected. A further improvement will be to code the camera to take background images between gates, while the radiation burst is off, and to subtract the background image from the Čerenkov image automatically.
CONCLUSION
We have shown that local Čerenkov emission can be used to estimate radiation dose for flat and curved surfaces. Simulation of the sampling region of Čerenkov emission in layered skin models suggested the possibility of sampling-depth tuning based on spectral filtering. The angular distributions of Čerenkov photons escaping the surfaces are close to the well-known Lambertian distribution because of the tissue's high optical scattering, simplifying the angular correction of Čerenkov images for flat and curved surfaces. The concept of superficial dose imaging based on Čerenkov emission for MV EBRT x-ray beams was demonstrated in breast phantoms by time-domain gating, suggesting real-time superficial dose monitoring at reasonable ambient light levels. While this work focuses on Monte Carlo simulations and phantom studies, this signal is emitted from all tissue, and in vivo superficial dosimetry via quantitative imaging will be investigated with further development in a whole-breast radiotherapy clinical trial.
Reversible Privacy Protection with the Capability of Antiforensics
In this paper, we propose a privacy protection scheme using image dual-inpainting and data hiding. In the proposed scheme, the privacy contents in the original image are concealed reversibly, such that the privacy content can be perfectly recovered. We use an interactive approach to select the areas to be protected, that is, the protection data. To address the disadvantage that single-image inpainting is susceptible to forensic localization, we propose a dual-inpainting algorithm to implement the object removal task. The protection data is embedded into the object-removed image using a popular data hiding method. We further use pattern noise forensic detection and objective metrics to assess the proposed method. The results on different scenarios show that the proposed scheme achieves better visual quality and antiforensic capability than state-of-the-art works.
Introduction
Photo sharing has become a widespread user activity with the advent of intelligent mobile devices and online social networks (OSN). Image distribution causes privacy concerns and the need to manage permissions, since the shared content may contain sensitive user data. By granting specific rights to selected communicating parties in an OSN, users' security and privacy can be strengthened. A well-established form of privacy protection is to obscure part of an image, which can be achieved by various image processing techniques, for example, blurring, mosaic, masking, and object removal, as shown in Figure 1. Among these methods, the first three necessarily introduce a significant amount of distortion to hide the underlying content, whereas object removal provides more natural viewing conditions while still protecting the content.
This process is reversible, such that the original data can be accessed with permission [1].
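As a toy illustration of the mosaic technique mentioned above, each tile of the selected region is replaced by its mean; the block size and region here are arbitrary.

```python
import numpy as np

def mosaic(img, region, block=8):
    # Pixelate a rectangular region by averaging over block x block tiles.
    r0, r1, c0, c1 = region
    out = img.astype(float).copy()
    for y in range(r0, r1, block):
        for x in range(c0, c1, block):
            tile = out[y:min(y + block, r1), x:min(x + block, c1)]
            tile[...] = tile.mean()  # in-place: tile is a view into out
    return out

img = np.arange(256.0).reshape(16, 16)
out = mosaic(img, (0, 16, 0, 16), block=8)
```

Unlike the reversible scheme proposed in this paper, mosaicking irreversibly destroys the underlying content.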
After object removal in an image, the broken parts can be inpainted using the surrounding content. Generally, image inpainting algorithms can be divided into four groups: the statistical-based, the diffusion-based, the patch-based, and the deep generative model-based methods [2,3]. Statistical methods use parametric models to describe textures but fail when additional intensity gradients are present [4]. Diffusion-based methods propagate pixels from the known areas of the image [5][6][7] using smoothness priors; however, blurring occurs when large, high-frequency regions need to be inpainted. Patch-based and deep generative models are the most widely used: the former fills the holes in the image using patches from local or global search regions [8][9][10][11][12], and the latter exploits semantics learned from large-scale datasets [13][14][15]. None of these inpainting algorithms considers the secrecy of the inpainted areas from a security perspective; the inpainted images are easily detected and located by forensic algorithms.
In this paper, we propose a new privacy protection scheme using image inpainting and data hiding, which realizes antiforensic capability. Considering the undetectability of edge inpainting, we use the algorithm of the DFNet network [16]. The regions around the broken edge are inpainted twice, and the inpainting results are fused to achieve antiforensic capability. By combining image dual-inpainting and data hiding, a privacy protection scheme with antiforensic capability is realized. We combine local variation within and between channels and use the popular data hiding algorithm HILL [17] to embed the protection data. The rest of this paper is organized as follows: we introduce the related works in Section 2. The proposed method is described in Section 3. Experimental results and analysis are provided in Section 4. Section 5 concludes the paper.
Related Works
In this section, we introduce the works related to the proposed method, including image inpainting, data hiding, and image forensics.
Image Inpainting.
Image inpainting is a method to fill in missing information in an image and is quite important in the field of image processing. Nowadays, deep generative model-based methods are widely used for image inpainting [14,[18][19][20][21][22][23]. These numerous methods can be divided into two categories [24]. One approach uses an effective loss function or constructs an attention model to fill in the missing regions, trying to make the content more realistic. These methods fill using the background content; a better way is to fix the unknown region by partial convolution [18]. The other approach focuses on structural consistency. To ensure the continuity of the image structure, these approaches usually adopt edge-based contextual priors. For example, the authors of [19] designed an edge-linking strategy that effectively solves the image semantic-structure inconsistency problem.
Regardless of the inpainting method, there is a discontinuous transition zone at the edge of the inpainted region. This area becomes a forensic target, making it easy for an interested party to locate the inpainted area, which is quite unsafe. To achieve not only a good visual effect but also security, a smooth transition needs to be achieved in advance. An iterative method to optimize the pixel gradients in the edge transition regions is proposed in [25]. The quality of fusion depends on whether the incorporated content is consistent with the original content in terms of gradient changes. Thus, Hong et al. [16] designed a learnable fusion block to implement pixel-level fusion in the transition region, named the deep fusion network for image completion (DFNet). The results show that DFNet has superior performance, especially in terms of harmonious texture transition, texture detail, and semantic structural consistency.
Data Hiding.
To further optimize the data embedding problem in information hiding, adaptive embedding algorithms have been widely proposed. Among them, STC (Syndrome Trellis Coding) [26] based adaptive architectures are the most preferred by researchers. This method uses a predefined distortion function to minimize the additive distortion between stego and cover. Given the multiscale characteristics of image space, the design of the distortion function has attracted increasing attention. For instance, Li et al. [17] proposed a new distortion function for image information hiding. The cost function is composed of a high-pass filter and two low-pass filters: the high-pass filter locates the difficult-to-predict parts of an image, and the low-pass filters are then employed to make the low-cost values more clustered. Furthermore, the methods of MiPOD (Minimizing the Power of Optimal Detector) [27] and ASO (Adaptive Steganography by Oracle) [28] were subsequently proposed. In addition, a number of distortion functions have been proposed for JPEG steganography as well, such as IUERD (Improved UERD) [29], UED (Uniform Embedding Distortion) [30], and RBV (Residual Blocks Value) [31].
In addition, some work uses machine learning algorithms to design steganalysis tools to detect steganography. Most of these approaches learn a general steganography model through a supervised strategy and then use it to distinguish suspicious images [32][33][34][35]. With the rapid development of deep learning, the performance of steganalysis has been greatly improved [36][37][38]. However, deep features still have limitations in steganalysis [39]. For example, the truncation and quantization operations in the feature extraction process are difficult for existing networks to learn. Therefore, feature extraction is still a challenge in steganalysis, and many rich feature sets have been used for JPEG steganalysis. The main available feature sets include the JPEG rich model [40], GFR (Gabor filter residuals) [41], and DCTR (Discrete Cosine Transform Residual) [42]. In the classification process, the ensemble classifier is considered effective for evaluating the feature set [43,44].
Image Forensics.
Currently, there are two forensic methods for detecting image inpainting [45,46]. In [45], the authors find that the Laplacian operations along the isophote direction in the inpainted regions differ from those in the other regions. Accordingly, the inpainted regions can be identified by exploring the changes of local variances within and between channels. In [46], noise pattern analysis is used to locate the inpainted regions. For images captured by one camera, the noise patterns in each image are approximately the same, and vice versa. Therefore, the noise pattern can be used as a fingerprint for a camera, which is widely adopted in image forensics. The noise pattern analysis algorithm in [46] is popular. In this model, the pixel values are constructed from the ideal pixel values, multiplicative noise, and various additive noises, which can be expressed by

I = f(O + K · O + a), (1)

where I and O are the actual pixel value and the ideal pixel value of the natural scene, a is the sum of the various additive noises, f(·) is the camera processing such as CFA interpolation, and K is the coefficient of the noise pattern. In equation (1), the multiplicative noise K · O is the theoretical expression of the noise pattern, a multiplicative noise in the high frequencies related to the image content. Generally, a low-pass filter can be used to remove the additive noises. The residual noise is then used to estimate the noise pattern [47], as shown in the following equation:

p = I − F(I), (2)

where F(·) is the low-pass filter and p is the estimated noise pattern. The noise pattern can be used to distinguish content from different images. Therefore, the inpainted region can be detected after extracting the noise pattern from each part of the image.
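Assuming the standard residual form p = I − F(I), the estimate can be sketched with a box blur standing in for the low-pass filter F(·) (purely for illustration; practical PRNU estimators use stronger denoisers such as wavelet filtering):

```python
import numpy as np

def box_low_pass(img, r=2):
    # Mean filter over a (2r+1) x (2r+1) window with edge replication,
    # acting as the low-pass F(.) that removes additive noise.
    h, w = img.shape
    p = np.pad(img.astype(float), r, mode="edge")
    out = np.zeros((h, w))
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def noise_residual(img):
    # p = I - F(I): high-frequency residual used to estimate pattern noise.
    return img.astype(float) - box_low_pass(img)
```

Comparing residual correlations between image regions is what allows an inpainted patch, whose residual no longer matches the camera fingerprint, to be localized.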
During inpainting, since there are limited pixels around the damaged regions, each diffusion step is smoothed based on the surrounding pixels. Therefore, the pixels located in the inpainted region satisfy I_t^n(i, j) = 0 (the diffusion update vanishes at convergence), which means that the result of the Laplacian operation at this position remains unchanged along the isophote direction after diffusion-based inpainting. The Laplacian variation along the isophote direction can be calculated by

δ(i, j) = |ΔI(i, j) − ΔI(i_v, j_v)|, (3)

where ΔI(i, j) is the (i, j)-th Laplacian value and ΔI(i_v, j_v) is the result of the Laplacian operation on a virtual pixel at (i_v, j_v). The virtual pixel is located in the direction of ∇I⊥(i, j), and its distance to the pixel I(i, j) is equal to 1.
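The Laplacian values entering this check can be computed with the standard 4-neighbour stencil; this is an illustrative sketch only, and the sub-pixel virtual-pixel interpolation of [45] is omitted:

```python
import numpy as np

def laplacian(img):
    # Discrete 4-neighbour Laplacian with replicated borders:
    # dI(i,j) = I(i-1,j) + I(i+1,j) + I(i,j-1) + I(i,j+1) - 4*I(i,j)
    p = np.pad(img.astype(float), 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

ramp = np.tile(np.arange(6.0), (6, 1))  # linear gradient image
lap = laplacian(ramp)
```

A linear intensity ramp has zero Laplacian in the interior, which is why diffusion-inpainted (i.e., locally smoothed) regions stand out under this statistic.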
Proposed Method
In this section, we present an antiforensic framework to perform object removal in images using dual-inpainting and data hiding. As shown in Figure 2, the proposed framework contains four parts. We first select the protected area interactively and calculate its percentage of the whole image. Then, the background with the protected area missing is inpainted. To achieve a satisfactory visual effect while being as resistant to forensics as possible, an image dual-inpainting algorithm is proposed, as shown in Figure 3 and described in Sections 3.1-3.3. For the inpainted image, region segmentation is performed based on the changes of local variances within and between channels. Meanwhile, the protected region is converted into a bitstream and embedded into the background using the HILL embedding algorithm, taking the segmentation into account. On the recipient side, we can extract the embedded data, fuse it with the background image, and recover the original image.
Protection Region Selection.
We interactively specify the area in an image to be protected, which also determines the hidden area. After that, we calculate the number of pixels to be hidden, including the values and coordinates of these RGB pixels. The pixels are converted into a bit stream for embedding. We define the bits of each pixel as 5 × 9, in which "5" stands for the pixel values in the three channels plus the horizontal and vertical coordinate values, and "9" means that each decimal value is converted to 9 bits. In a color image, information can be embedded in all three channels at each position; thus, the maximum amount of embeddable information is three times the image size. The maximum embedding ratio T is therefore calculated to be 6.66% per image. Let t be the proportion of the selected protection region; this proportion should be smaller than the predefined threshold T. An example of the interactive region selection is shown in Figure 4.
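The 6.66% threshold follows directly from the 45 bits required per protected pixel against a capacity of 3 bits per cover pixel; a quick check:

```python
BITS_PER_VALUE = 9      # each decimal value is stored as 9 bits
VALUES_PER_PIXEL = 5    # R, G, B, row coordinate, column coordinate

def max_protection_ratio(capacity_bits_per_pixel=3):
    # Capacity: 1 bit per channel per pixel -> 3 bits per cover pixel.
    # Each protected pixel consumes 5 * 9 = 45 payload bits.
    cost_bits = BITS_PER_VALUE * VALUES_PER_PIXEL
    return capacity_bits_per_pixel / cost_bits

T = max_protection_ratio()  # 3 / 45 = 1/15, i.e. about 6.66%
```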
Background Processing.
After specifying the protection area, we remove the contents in this area and inpaint the image. When inpainting large areas, it is often not possible to blend the inpainted area perfectly with the existing content, especially at the edges [16]. To fill this gap, the DFNet network [23] introduces a fusion block, which combines the structural and texture data and blends them smoothly during the inpainting process. As shown in Figure 5, I is the input image, F_k is the feature map from the k-th layer, and I_k is the resized version of I. The learnable function M is designed to extract the raw completion C_k from the feature maps F_k, as follows:

C_k = M(F_k), (4)

where M denotes the channel conversion operation, which converts the n-channel feature maps into a 3-channel image at the same resolution. In addition, another learnable function A is used to generate the alpha composition map a_k:

a_k = A(F_k). (5)

The map a_k can be obtained as a single channel or as 3 channels for image-wise alpha composition; previous experience has demonstrated that channelwise alpha composition performs better. A is a convolutional module consisting of 3 convolutional layers with kernel sizes of 1, 3, and 1, respectively. The final result I'_k is achieved by

I'_k = a_k ⊙ C_k + (1 − a_k) ⊙ I_k. (6)

The fusion block makes the image inpainted by the DFNet network almost visually free of edge discontinuity. Although the DFNet network achieves good visual results, it is not suitable for privacy protection on its own, since the result can easily be localized by forensics. For example, pattern noise detection of the image reveals clear artifacts in the restored edge area. To conceal these traces and achieve privacy preservation, further manipulation of the inpainted image is required. The detected area is mostly found in the edge area of the restoration, so we consider secondary processing of the edge area to eliminate the traces left during the restoration process.
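DFNet's channelwise alpha composition, where the final result I'_k blends the raw completion C_k with the resized input I_k, amounts to an elementwise operation. A minimal sketch with random stand-in tensors (shapes and values are hypothetical):

```python
import numpy as np

def fuse(raw_completion, resized_input, alpha):
    # Channelwise alpha composition: I'_k = a_k * C_k + (1 - a_k) * I_k,
    # applied elementwise over an H x W x C tensor.
    return alpha * raw_completion + (1.0 - alpha) * resized_input

rng = np.random.default_rng(1)
C = rng.random((8, 8, 3))  # stand-in for the raw completion C_k
I = rng.random((8, 8, 3))  # stand-in for the resized input I_k
a = rng.random((8, 8, 3))  # stand-in for the learned alpha map a_k
out = fuse(C, I, a)
```

In DFNet the alpha map is learned, so the network decides per pixel and per channel how much of the synthesized content to trust near the transition region.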
In this process, we use the mathematical morphology operations of dilation and erosion. In the dilation operation, the structural element B acts as an external window to expand the overall boundary of the target image; in the erosion operation, the structural element acts as an internal window to shrink the boundary of the image. The dilation operation is expressed by equation (7) and the erosion operation by equation (8):

A ⊕ B = {z | (B̂)_z ∩ A ≠ ∅}, (7)

A ⊖ B = {z | (B)_z ⊆ A}. (8)

The specific dual-inpainting process is shown in Figure 3. Firstly, the background image is inpainted using the DFNet network. Then, we apply a mathematical morphological dilation operation to the edges of the broken-region mask map. Based on this mask map, secondary inpainting of the primary inpainted image is performed in that region. Mathematical morphological erosion is then applied to the secondary inpainted region, leaving only the portion of the region close to the edge. Note that the dilation operation uses a larger structural element than the erosion operation, to ensure that the results of the secondary inpainting near the edge are preserved. The results of the secondary inpainting of the edge region are fused with the primary inpainting map to obtain the anti-edge-detection result.
Area Segmentation and Data Hiding.
To hide the secret data of the protection region, we employ the popular data hiding framework achieved by STC [17], and we improve the popular HILL cost function for STC to fit the requirements of our method.
In the STC framework, the theoretical minimum steganographic distortion D for a marked image with an embedding payload of c (bits) can be defined as

D = Σ_{i,j} ρ_{i,j} (p+_{i,j} + p−_{i,j}), (9)

where p+_{i,j} and p−_{i,j} are the probabilities of adding 1 or subtracting 1 to c_{i,j}, with 0 < p+_{i,j} + p−_{i,j} < 1, and ρ_{i,j} stands for the distortion value used to measure the effect of the modification. The parameter λ (λ > 0) is used to make the ternary entropy of the modification probabilities identical to the capacity c, as shown in the following equation:

c = Σ_{i,j} [ −p+_{i,j} log₂ p+_{i,j} − p−_{i,j} log₂ p−_{i,j} − (1 − p+_{i,j} − p−_{i,j}) log₂ (1 − p+_{i,j} − p−_{i,j}) ]. (10)
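In practice, the capacity constraint on λ is typically met by a binary search, since the total entropy decreases monotonically with λ. A simplified sketch assuming symmetric probabilities (p+ = p− = p) and the standard Gibbs form p = exp(−λρ)/(1 + 2 exp(−λρ)); this is an illustration of the constraint-solving step, not the authors' implementation:

```python
import numpy as np

def mod_probs(rho, lam):
    # Optimal symmetric +-1 modification probability for costs rho.
    e = np.exp(-lam * rho)
    return e / (1.0 + 2.0 * e)

def ternary_entropy(p):
    # Per-pixel ternary entropy H3(p) in bits, with p+ = p- = p.
    p = np.clip(p, 1e-300, 1.0 / 3.0)
    q = np.clip(1.0 - 2.0 * p, 1e-300, 1.0)
    return -2.0 * p * np.log2(p) - q * np.log2(q)

def solve_lambda(rho, payload_bits, lo=1e-6, hi=1e3, iters=60):
    # Total entropy is monotonically decreasing in lambda, so bisect until
    # the expected payload matches the requested number of bits.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ternary_entropy(mod_probs(rho, mid)).sum() > payload_bits:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rho = np.ones(100)                 # toy uniform costs
lam = solve_lambda(rho, 50.0)      # embed 50 bits into 100 pixels
total = ternary_entropy(mod_probs(rho, lam)).sum()
```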
Security and Communication Networks
To achieve the minimum distortion D, STC encoding is performed as

y_l = arg min_{z ∈ C(m)} D(x_l, z), (11)

where x_l and y_l ∈ {0, 1}^{MN} are the least significant bits of the cover and stego images, C(m) = {z ∈ {0, 1}^{MN} | Hz = m} is the coset of m, and H ∈ {0, 1}^{c×MN} is a predefined low-density parity-check matrix related to embedding speed and embedding efficiency. The embedded bits m can be extracted simply by a matrix multiplication operation:

m = H y_l. (12)

To fit the requirements of our method, we improve the popular cost function HILL for STC by combining variations within and between adjacent pixel channels. Specifically, we divide the cover image into four regions (marked green, blue, black, and red in Figure 6) using the HILL cost values and edge connectivity. The pixel complexity of the four regions decreases in the order green, blue, black, red; in other words, the green region contains the most complex pixels and is the best embedding region of the whole image. Therefore, secret bits are embedded into the green region preferentially.
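Extraction really is a single parity-check multiplication modulo 2. A toy example with a random H (the matrix shape and seed are arbitrary stand-ins; real STC uses a structured band matrix):

```python
import numpy as np

rng = np.random.default_rng(42)
c, n = 4, 12
H = rng.integers(0, 2, size=(c, n))  # toy parity-check matrix (c x n)
y = rng.integers(0, 2, size=n)       # stego LSB vector

m = H @ y % 2                        # extracted message: m = H y (mod 2)

# Flipping one stego bit shifts m by the corresponding column of H,
# which is exactly what the embedder exploits when searching the coset C(m).
y2 = y.copy()
y2[5] ^= 1
m2 = H @ y2 % 2
```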
Experimental Results
This section presents the experimental evaluation results. Firstly, we introduce the database employed and the corresponding parameters. Then, the experiments for each part are presented in turn and their validity is demonstrated.
Performance for Antiforensics.
To evaluate the performance of antiforensics, we randomly select images from the database for validation and interactively select the areas to be protected, as mentioned in Section 2.
In each image, the selected protected area generally has an irregular shape. To allow later embedding of the data, we strictly controlled the ratio of the protected area to the image to less than 6.66%. We use two separate forensic approaches to analyze our results: one is pattern forensics based on pattern noise, and the other is based on changes between and within adjacent pixel channels.
Firstly, we select 50 landscape images sized 512 × 512 from Today's Headlines. As shown in Figure 7, we selected four of them, I1, I2, I3, and I4. Table 1 lists the space proportion t and the number of pixels to be embedded for the protection area of each of the four images in Figure 7. Comparing with Figure 7(b), we find that Figure 7(d) has obvious traces at the inpainting edges, which makes the inpainted region easy to locate forensically. Our method overcomes this drawback well: it is difficult to locate our tampered region from pattern noise forensics alone, showing that our approach has a good anti-pattern-noise forensic effect.
In Figure 8, we show the experimental results for five images (M1, M2, M3, M4, and M5) from the UCID database, sized 384 × 512. Table 2 lists the space proportion t and the number of pixels to be embedded for the protection area of each of the five images in Figure 8. Two traditional methods and a deep learning method are used for comparison; the traditional methods are the edge-oriented and Delaunay-oriented ones provided by G'MIC [48], a full-featured open-source framework for image processing.
The deep learning-based one is the DFNet method mentioned in [16].
In terms of subjective visual quality, both our experimental results and the deep learning method outperform the traditional methods and achieve good visual connectivity at the edges. In particular, in row 7 of Figure 8, the region at the red petal achieves a good visual effect after our secondary processing of the restored edges is blended with the primary restored image.
In addition, we localized the inpainted images for forensics using the forensic algorithm proposed in [46], as shown in the even rows of Figure 7. The traditional restoration-based algorithms are easily detected and located, while the DFNet-based restoration also achieved good antiforensic results. However, the images obtained by our method are more suitable for hiding the area to be protected; in particular, the results are better when the protected area accounts for less than 4% of the whole image.
In Table 3, we show the F1 values for the five images in Figure 8, where a smaller F1 value indicates a weaker ability of the forensic algorithm to correctly locate the inpainted region, and thus a better antiforensic effect. As can be seen from Table 3, our method is superior in terms of objective indicators.
Experiment Setup.
In our experiments, we use the free user-shared image dataset provided by Today's Headlines, which contains a large number of people, landscapes, and various everyday images. We also use the UCID database. Based on the maximum amount of data that can be embedded in an image, the size of the protected area must not exceed 6.66% of the whole image (T = 6.66%), regardless of the image size. For the structural elements of the mathematical morphology in the background processing, a circular structure is employed since it yields a smoother edge; the structure size is 10 for the dilation operation and 5 for the erosion operation.
To evaluate the performance of image dual-inpainting against detection and localization, we adopt the F1-score, peak signal-to-noise ratio (PSNR), and mean square error (MSE) objective indicators to evaluate the inpainting results:

F1 = 2TP / (2TP + FP + FN),

where TP (true positive), FN (false negative), and FP (false positive) stand for the number of detected inpainted pixels, undetected inpainted pixels, and wrongly detected untouched pixels, respectively, and

MSE = (1/(MN)) Σ_{i,j} [A(i, j) − B(i, j)]², PSNR = 10 log₁₀ (255² / MSE),

where A(i, j) and B(i, j) are the original image and the inpainted image, respectively.
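The three metrics take only a few lines to implement; a peak value of 255 is assumed for 8-bit images:

```python
import numpy as np

def f1_score(tp, fp, fn):
    # F1 = 2 * precision * recall / (precision + recall)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2.0 * precision * recall / (precision + recall)

def mse(a, b):
    # Mean square error between two images of equal shape.
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    # PSNR in dB; identical images give infinite PSNR.
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak * peak / m)
```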
Reversibility Analysis.
In this section, we show that our privacy protection method is effective during communication or sharing. Meanwhile, our method is fully reversible, which enables data to be extracted when it reaches the recipient side.
In Figure 9, we show five sets of comparisons between the recovered images and the original images. The first two are from the Today's Headlines database and the last three from the UCID database. In the pre-recovery and embedding operations, there is no damage or tampering to the regions other than the region to be protected. Therefore, given the pixel values and coordinates of the region to be protected, the original images can be recovered.
Table 1: The percentage of protected areas in the whole image (t) and the total number of pixels in the protected area (p); I1, I2, I3, and I4 represent the four pictures in Figure 7, respectively.
Figure 8: Examples from the UCID database. Rows 1, 3, 5, 7, and 9: from left to right, the first image is the original image, and the second to fifth images represent the images inpainted by references [16, 48] and our method, respectively. Rows 2, 4, 6, 8, and 10: from left to right, the first image is the ground truth, and the second to fifth images represent the localization results calculated by forensic algorithm 2.
Conclusion
Currently, most of the privacy protection methods only focus on visual quality, while the real protection needs to be considered from the perspective of image security analysis. We propose a reversible privacy protection scheme using image dual-inpainting and data hiding, in which the original image can be perfectly recovered.
Experimental results show that after removing the area to be protected and inpainting the image with the dual-inpainting algorithm, antiforensics against the two current target-removal forensic methods can be achieved. The subsequent embedding and extraction of the protected region also achieve an effective combination of the two research directions of antiforensics and steganography. In addition, reversible privacy protection not only effectively stops snooping but also guarantees that the original image can be recovered when needed.
Data Availability
In our experiments, we use the free user-shared image dataset provided by Today's Headlines, which contains a large number of people, landscapes, and various everyday-life images. We also use the UCID database.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Table 2: The percentage of protected areas in the whole image (t) and the total number of pixels in the protected area (p); M1, M2, M3, M4, and M5 represent the five pictures in Figure 8, respectively.
Nanophase separation in side chain polymers: new evidence from structure and dynamics
New evidence for a nanophase separation of incompatible main and side chain parts in amorphous poly(n-alkyl methacrylates) with long alkyl groups are presented. Independent indications for the existence of alkyl nanodomains with a typical dimension in the 1 nm range from studies on dynamics and structure are reported. Results from nuclear magnetic resonance (NMR) experiments are compared with data from different relaxation spectroscopy methods on poly(n-decyl methacrylate). The NMR results in combination with relaxation spectroscopy data support the existence of an independent polyethylene-like glass transition, αPE, within the alkyl nanodomains in addition to the conventional glass transition a at higher temperatures. X-ray scattering data show that the situation in homopolymers is similar to that for random poly(n-alkyl methacrylate) copolymers with the same average length of the alkyl group in the side chains. Scattering data for a series of n-butyl methacrylate samples with polymerization degrees reaching from P=1 to 405 indicate that nanophase separation is chain-length independent above P=25, while the nanophase separation tends to disappear below P=6. Insensitivity of structural aspects in nanophase-separated poly(n-alkyl methacrylates) to changes in the molecular microstructure and consistency of NMR results with independent conclusions from relaxation spectroscopy underline the general importance of nanophase separation effects in a broad class of side chain polymers.
Introduction
In a recent paper [1], we have shown that a nanophase separation of incompatible main and side chain parts on a length scale of about 1 nm is a common feature of several series of side chain polymers with long alkyl groups. Structural data from x-ray scattering indicate that alkyl nanodomains with a typical size of 0.5-2 nm are formed in the amorphous melt by aggregation of alkyl groups from different monomeric units, which can belong to one and the same or different polymer chains. Within these alkyl nanodomains an independent relaxation process with the typical features of a dynamic glass transition-called polyethylene-like glass transition α PE -has been observed. In calorimetric data, this second glass transition can be seen at low temperatures, in addition to the conventional glass transition at higher temperatures related to the softening of the entire system [2]- [4]. As in the case of microphase-separated block copolymers, the co-existence of two glass transitions is a strong indication for a demixing of incompatible components.
A comparison of data for different series of polymers with long alkyl groups shows that the main features of nanophase-separated side chain polymers do not depend significantly on the microstructure, flexibility and glass temperature of the main chain: (i) two glass transitions have been observed for poly(n-alkyl methacrylates) [2], poly(di-n-alkyl itaconates) [3], hairy rod polyimides [4] and poly(n-alkyl acrylates) [1]. For all these series, the glass temperature for the polyethylene-like glass transition T g (α PE ) increases with increasing side chain length and depends mainly on the number of alkyl carbons per side chain C or equivalently on the alkyl nanodomain size. For a given C number, T g (α PE ) is basically main-chain-independent [1], although the difference in the conventional T g is up to 200 K. (ii) A prepeak in x-ray scattering data-being the structural reflection of the nanophase separation in side chain polymers-shifts systematically with the number of alkyl carbons per side chain C. Qualitatively, the scattering curves of poly(n-alkyl methacrylates), poly(n-alkyl acrylates) [1], hairy rod polyimides [4] and poly(di-n-alkyl itaconates) [5] show the same behaviour. Quantitatively, the prepeak shift for different polymer series is at least similar [1]: clear indications for a prepeak are observed, beginning with the butyl members (C = 4) where the prepeak maximum corresponds to an equivalent Bragg spacing of about 1.2 nm. For the decyl members (C = 10), equivalent Bragg spacings of about 1.8-2.0 nm are reported. Different pictures for an understanding of the scattering behaviour can be considered. One can think about a more local, quasi-one-dimensional picture starting from two single-polymer chains or, on a more abstract level, in three dimensions consider approximately the average distance of alkyl nanodomains in the melt (figure 1). Both pictures do not exclude each other and certain aspects of both views might be relevant. 
Although details of the morphology of nanophase-separated side chain polymers are not yet clear, the common aspect of both views is the aggregation of alkyl groups in alkyl nanodomains.
The results described for different series of side chain polymers support the idea [6] that nanophase separation is a general phenomenon in materials that consist of molecules with incompatible parts. Nanophase-separation effects have been reported for small molecule liquids [6], metallic glasses as well as semi-crystalline, liquid-crystalline [7]- [9] and amorphous [2,3] polymers.
An interesting point is the interrelation between structure and dynamics in nanophase-separated polymers. The polyethylene-like glass transition α PE in these polymers can be understood as a hindered glass transition in self-assembled confinements [1,10]. The small dimensions of these domains in the 1 nm range are attractive against the background of the ongoing discussion regarding the existence and size of dynamic heterogeneities in glass-forming materials [11,12]. The dependence of the polyethylene-like glass transition α PE on the alkyl nanodomain size gives interesting information about the influence of spatial restrictions on the co-operative dynamics, similar to experiments on liquids constrained in external confinements such as nanoporous glasses [13,14] or zeolites [15].
The aim of this paper is to support, based on NMR experiments that can selectively detect the mobility of the different main and side chain carbons in poly(n-decyl methacrylate), the interpretation that the dynamics of nanophase-separated poly(n-alkyl methacrylates) is characterized by two co-existing glass transitions a and α PE. Random poly(n-alkyl methacrylate) copolymers and a series of n-butyl methacrylates, reaching from the monomer to long polymer chains, are studied by different x-ray scattering techniques to learn more about the influence of other molecular parameters on the nanophase separation in side chain polymers.
Scattering methods.
Wide-angle x-ray scattering (WAXS) measurements in the range 3.5 nm⁻¹ < q < 32 nm⁻¹ for the scattering vector q = 4π/λ · sin(θ/2) were performed using the Guinier method with a film camera (Huber diffraction equipment). X-ray scattering data in the range 2 nm⁻¹ < q < 16 nm⁻¹ were obtained from measurements with a two-dimensional (2D) detector (SIEMENS HI star) installed on a rotating anode. An Anton Paar Kratky camera was used for SAXS experiments (0.06 nm⁻¹ < q < 7.44 nm⁻¹) in combination with a Seiffert ID 3000 x-ray generator with a sealed x-ray tube operating at 40 kV and 30 mA. Cu Kα radiation with a wavelength of 0.154 nm was selected with a Ni filter (20 µm) combined with the pulse-height-discrimination method. The intensity was recorded with a proportional counter in the step-scanning mode. The beam intensity was controlled by measuring a LUPOLEN standard. The variation in transmission factor was 20%. The entrance slit had a width of 100 µm and the height-determining slit length was 16 mm. Peak maxima and peak widths were taken from curves after correcting the error due to the slit-like cross-section of the primary beam in the Kratky system using a home-made desmearing program. All x-ray measurements were carried out at 298 ± 1 K. Samples are polymer films with a thickness of about 1.5 mm prepared using a hydraulic press at T ≈ T g + 50 K. Monomer, dimer and oligomers with very low viscosity were measured in thin-walled glass capillaries (Hilgenberg glass no. 140, wall thickness 0.01 mm, diameter 1 mm).
NMR spectroscopy.
Cross-polarization (CP) was used at low temperatures, whereas for T > 300 K, single-pulse excitation rather than CP was applied. The MAS spin rates were set at 2 kHz and controlled by a standard DOTY controller. The temperature was carefully calibrated for each temperature and spinning rate by a standard procedure [17]. The uncertainties in temperature and spin rate are 1 K and 2 Hz, respectively. At temperatures lower than 220 K, the temperature uncertainty is about 5 K.
Broad signals between 100 and 200 ppm in the low-temperature MAS spectra are artifact signals from the low-temperature rotor caps. The magic-angle spinning (MAS) spectrum of an I = 1/2 nucleus (e.g. 13 C) in a rigid solid, spinning at a MAS rate ν ROT, consists of spinning sidebands (ssb) centred around the isotropic chemical shift of the nucleus and separated by ν ROT. The periodicity of the time-domain MAS-NMR signal, imposed by the macroscopic rotation of the sample, is transformed into ssb in the spectral dimension. In the time-domain signal, so-called rotational echoes are formed at multiples of T R = 1/ν ROT, which transform into the narrow ssb of the MAS-NMR spectrum. Ssb of observable intensity appear in a spectral range that is approximately equal to the span of the powder pattern owing to the chemical shift anisotropy (CSA) Δσ in a non-spinning sample. The relative intensities of the ssb carry information about the elements of the CSA tensor, which can be obtained by standard procedures [18].
The occurrence of a dynamic process with a characteristic time τ c of the order of T R disturbs the MAS periodicity of the NMR signal, leading to the so-called dynamic broadening of the ssb [19]-[21]. Starting from low temperatures, where the correlation time of the relevant motions τ c is much larger than the MAS period T R, the line widths of the individual ssb start to increase and reach a maximum value for τ c ≈ T R, followed by a decrease in the line widths at shorter τ c (high temperatures). Thus, the information about the relaxation dynamics can be taken from the temperature of the maximum dynamic broadening and the relation between the MAS rate and the correlation time of the dynamic process [21]. NMR results are compared with relaxation spectroscopy data at ω max = 2πν ROT. Since a MAS rate of ν ROT = 2 kHz was used, the temperatures at which the maximum dynamic broadening occurs correspond to log(ω/rad s⁻¹) = 4.1.
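The last conversion can be verified directly; a one-line numerical check (Python):

```python
import math

# Maximum dynamic broadening occurs when tau_c ~ T_R = 1/nu_ROT, so the
# NMR points are placed at omega = 2*pi*nu_ROT in the Arrhenius plot.
nu_rot = 2000.0                      # MAS rate, Hz
omega = 2 * math.pi * nu_rot         # angular frequency, rad/s
print(round(math.log10(omega), 1))   # 4.1
```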
Relaxation spectroscopy.
A commercial Novocontrol setup based on a Schlumberger SI1260 response analyser was used to measure the complex dielectric function ε*(ω) = ε′ − iε″ in the frequency range from 0.1 Hz to 1 MHz. Relaxation frequencies are taken from Havriliak-Negami fits. The complex shear modulus G*(ω) = G′ + iG″ of PnDMA was measured in the frequency range from 1 to 100 rad s⁻¹ with a Rheometrics Scientific RDAII instrument. The experiments were performed in stripe geometry (1.5 × 10 × 25 mm³). The uncertainty of the absolute values of G* is large in this case (∼ ± 30%) owing to uncertainties concerning the sample geometry. T g for PnDMA is far below room temperature. Undercooled samples were quickly mounted at room temperature. Afterwards, the sample was cooled down step by step (ΔT = −3 K) and equilibrated at each temperature for 100 s before the isothermal frequency sweep was started.
Heat capacity measurements were performed using a non-commercial setup of the 3ω method. Two different heater sizes were used: in the low-frequency range (0.2-20 Hz), large heaters (5 × 10 mm²) were used to obey the boundary conditions (heater size > thermal wavelength); in the high-frequency range, small heaters (1.5 × 5 mm²) were used to obtain sufficient signals. TMDSC data were used to calculate dynamic heat capacities c p *(ω, T) from the originally measured effusivities √(ρκc p *)(ω, T), where ρ is the mass density and κ the heat conductivity. Details of the experimental setup and data evaluation are described elsewhere [22].
Dynamic aspects
Data from dielectric, shear and heat capacity spectroscopy for poly(n-decyl methacrylate) (C = 10) are shown in figure 2 as a representative example of the relaxation behaviour of nanophase-separated homopolymers. Temperature-dependent shear data G′(T) measured at a frequency of 10 rad s⁻¹ indicate the occurrence of two relaxation processes in the relaxation spectrum (figure 2(a)). The dielectric loss ε″(T) at the same frequency indicates that only the conventional glass transition a at higher temperatures is observed with significant intensity (figure 2(a)). Measurements with the 3ω method of heat capacity spectroscopy (at slightly higher frequencies) show that there are two co-existing processes in the calorimetric data c p *(ω, T). There is a very broad step in the real part and two well-separated peaks in the imaginary part (figure 2(b)). The imaginary part can be fitted by a sum of two Gaussian functions; this function usually approximates the dynamic glass transition peak in c p ″(T) quite well. The dispersion width of both peaks is about δT = 20 K, similar to the values observed for other strong glasses. The relaxation strengths Δc p (a) and Δc p (α PE) are similar. Although the uncertainty of the peak temperatures for both processes is relatively large, it is clear from an Arrhenius plot (figure 2(c)) containing data from shear, dielectric and heat capacity spectroscopy that the two peaks in c p ″(T) correspond to the processes observed in the shear modulus G′. The high-temperature process is related to the conventional main transition a, where the material softens dramatically and undergoes a glass-to-rubber transition. It is calorimetrically active and can also be observed in dielectric data, since the complete monomeric unit, including the carboxyl group where the main dipole moment is located, will move.
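The two-Gaussian decomposition of the calorimetric loss peak can be sketched numerically. The peak temperatures, amplitudes and noise level below are hypothetical placeholders, not the measured PnDMA values; only the functional form and the δT ≈ 20 K width follow the text.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sum of two Gaussians, the model used to fit the imaginary part of c_p.
def two_gauss(T, A1, T1, w1, A2, T2, w2):
    return (A1 * np.exp(-((T - T1) / w1) ** 2)
            + A2 * np.exp(-((T - T2) / w2) ** 2))

# Synthetic "data": two peaks of width ~20 K (hypothetical positions).
T = np.linspace(150.0, 350.0, 400)
data = two_gauss(T, 1.0, 200.0, 20.0, 1.0, 280.0, 20.0)
data += 0.01 * np.random.default_rng(0).standard_normal(T.size)

# Fit recovers the alpha_PE-like and alpha-like peak temperatures.
popt, _ = curve_fit(two_gauss, T, data, p0=(0.8, 190, 15, 0.8, 290, 15))
print(round(popt[1]), round(popt[4]))  # ~200 and ~280 (K)
```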
The second process at lower temperatures is obviously also a dynamic glass transition, since it can be detected in calorimetric data, which is a typical feature of glass transitions, in contrast to the behaviour of secondary relaxations in glasses, which show only a tiny response in calorimetric signals [24]. This view is supported by the temperature dependence of the relaxation frequencies in the Arrhenius plot. In an Arrhenius approximation ω = Ω · exp(−E A /RT), one obtains an unrealistic prefactor log(Ω/rad s⁻¹) ≈ 38, which is much larger than the log(Ω/rad s⁻¹) ≈ 14 expected for secondary relaxations in glasses. The temperature dependence of the relaxation frequency of the α PE process indicates non-Arrhenius behaviour and corresponds to the situation in a strong glass with a fragility (steepness index) of m ≈ 37 [10]. The missing signal in the dielectric response shows that the carboxyl group, which carries the main dipole moment of the monomeric units, is not involved in the motions relevant for the α PE process. All findings are consistent with the idea that the α PE process is a 'polyethylene-like glass transition' in alkyl nanodomains formed by self-assembly in the melt. To prove this interpretation of the dynamics of side chain polymers with long alkyl groups, selective NMR measurements detecting the mobility of the different carbons in the monomeric units were performed. Figure 3(a) shows MAS spectra of PnDMA for different temperatures. The line assignment is given at the T = 223 K spectrum; asterisks mark the ssb. The non-resolved lines between 20 and 40 ppm are resonances belonging to the side chain aliphatic carbons. Dynamic broadening of the carboxyl ssb at high temperatures is most obvious; however, a careful inspection of the resonances C5-C12 reveals a temperature-dependent line width for all carbons, but at different temperatures.
Figure 3(b) shows the temperature dependence of the full width at half-maximum height (fwhmh) for the main-chain CH 2 carbon (C1) as well as for the carboxyl group (COO) and the alkyl carbons (C5-C12). Since the individual side chain aliphatic lines cannot be resolved, the total width of the overlapping resonances is plotted for the latter. The rather different temperature dependences of the line widths of the main chain carbons and carboxyl group on the one hand and of the alkyl carbons on the other are obvious: while the former broaden at temperatures higher than 300 K, the latter exhibit a maximum line width at T ≈ 230 K. The dynamic broadening of the side chain resonances is close to the limit of detection and can hardly be discerned in the spectra of figure 3(a). This is probably owing to the unfavourable ν ROT /Δσ ratio and/or small amplitudes of the molecular reorientations that constitute the dynamic process. Lowering the MAS rate would probably increase the dynamic broadening effect but would in turn result in line overlap, making the interpretation of the spectra rather difficult. However, the effect is outside the margins of error and is thus an important experimental finding supporting and complementing the relaxation spectroscopy data. Using ω = 2πν ROT and inserting the NMR data into the Arrhenius plot (figure 2(c)) reveals that the broadening of the main chain and carboxyl resonances is due to the conventional glass transition a, whereas that of the alkyl carbons must be due to the polyethylene-like glass transition α PE.
It is worth noting that the maximum line widths for the main chain and carboxyl carbons seem to appear at slightly different temperatures; a carboxyl group somewhat more mobile than the main chain may therefore be anticipated. Unfortunately, the low signal-to-noise ratio and the merging of the resonance of carbon C1 with that of the O-CH 2 carbon C4 of the side chain do not permit a closer evaluation (see spectra for T > 300 K in figure 3(a)).
Structural aspects
X-ray scattering curves for representative poly(n-alkyl methacrylate) homopolymers (PnBMA, PnHexMA, PnHepMA, PnDMA) are shown in figure 4. The scattering intensity shows two significant peaks. The peak at large scattering vectors q I ≈ 13 nm⁻¹ corresponds to the van der Waals peak (I), which is related to the average distance of non-bonded atoms in the melt. In a Bragg approximation (d = 2π/q max), one obtains for the homopolymers repeating distances in the range d I = 0.48-0.495 nm, i.e. d I values which are nearly independent of the side chain length. The prepeak (II) at smaller scattering vectors shifts systematically with the side chain length (figure 4) and is interpreted as a structural indication of a nanophase separation of incompatible main and side chain parts. As expected, the samples are fully isotropic, and continuous rings in the 2D intensity patterns (insets in figure 4) are observed. Representative curves for several random PnAMA copolymers are shown in figure 5(a). The shift of the prepeak (II) with the number of alkyl carbons per side chain C is clearly visible. Seemingly, random copolymers made from comonomers with not too different side chain lengths behave like homopolymers if the average side chain length is the same. For example, the scattering curve for the P(nBMA-nHexMA) copolymer with an average side chain length C = 4.9 and the curve for PnPenMA (C = 5) are nearly identical. In general, the shift of the peak maximum obtained from desmeared scattering curves is related to the average number of alkyl carbons per side chain, and d II (C) corresponds to the behaviour of the homopolymers (figure 5(b)). The width of the prepeak Δq for the copolymers also seems to be similar to that for the homopolymers. The general trend in the amorphous PnAMAs is obviously a broadening of the prepeak with increasing C number (figure 5(c)).
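The Bragg approximation used here can be checked numerically; a small sketch (the Cu Kα wavelength value is a standard reference number, not taken from the paper's software):

```python
import math

# Converting between scattering angle, scattering vector
# q = 4*pi/lambda * sin(theta/2), and the Bragg spacing d = 2*pi/q.
CU_K_ALPHA_NM = 0.154  # Cu K-alpha wavelength, nm

def q_from_angle(theta_deg, wavelength_nm=CU_K_ALPHA_NM):
    """Scattering vector (nm^-1) for full scattering angle theta."""
    return 4 * math.pi / wavelength_nm * math.sin(math.radians(theta_deg) / 2)

def bragg_spacing(q_nm_inv):
    """Equivalent Bragg spacing d = 2*pi/q (nm)."""
    return 2 * math.pi / q_nm_inv

# The van der Waals peak at q ~ 13 nm^-1 corresponds to d ~ 0.48 nm:
print(round(bragg_spacing(13.0), 3))  # 0.483
```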
A detailed analysis of the scattering data for amorphous PnAMAs below decyl (C < 10) indicates that the d II values have a slightly non-linear dependence on the number of alkyl carbons per side chain (figure 5(b)). Obviously, the slope in the d II versus C dependence is significantly different from the behaviour expected for alkyl groups in an all-trans configuration, which is indicated by the dashed line in figure 5(b). A simple linear approximation [5], d II = d 0 lin + C · d lin, gives d 0 lin = 0.8 nm for the 'main chain diameter' and d lin = 0.105 nm for the slope (average length per CH 2 unit). An alternative equation [23], d II = d 0 Gauss + C 1/2 · d Gauss, implying that the alkyl groups behave like Gaussian chains, yields d 0 Gauss ≈ 0.2 nm and d Gauss ≈ 0.5 nm. The 'main chain diameter' d 0 Gauss is unrealistically small in this case, although the d Gauss value is in reasonable agreement with Ferry's structure length for various vinyl polymers of about a F ≈ 0.5-0.8 nm. Generally, it is questionable whether either of these simple models can be used to approximate the C number dependence of d II over a wide range.
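With the fitted parameters quoted above, the two empirical models give similar spacings for short side chains and diverge slowly at larger C, as a quick evaluation shows (a sketch using the parameter values from the text):

```python
import math

# Two simple models for the prepeak spacing d_II(C), with the fitted
# parameters quoted in the text (all lengths in nm).
def d_linear(C, d0=0.8, slope=0.105):
    """Linear model: d_II = d_0 + C * slope."""
    return d0 + C * slope

def d_gauss(C, d0=0.2, dg=0.5):
    """Gaussian-chain-like model: d_II = d_0 + sqrt(C) * dg."""
    return d0 + math.sqrt(C) * dg

for C in (4, 10):
    print(C, round(d_linear(C), 2), round(d_gauss(C), 2))
# C = 4:  linear 1.22 nm, Gaussian 1.2 nm   (reported prepeak ~1.2 nm)
# C = 10: linear 1.85 nm, Gaussian 1.78 nm  (reported ~1.8-2.0 nm)
```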
Another interesting aspect is the C number dependence of the scattering intensity in the small-angle range 0.3 nm⁻¹ ≲ q ≲ 1 nm⁻¹. The experimental finding is that the intensity in this plateau region increases systematically with the side chain length for the amorphous PnAMAs with C ≤ 10 (figure 5(d)). Note that the scattering intensity in the region q < 0.3 nm⁻¹ increases significantly for all the investigated polymers (figure 5(a)). A similar behaviour is reported for several other non-crystalline materials and discussed in the context of long-range density fluctuations [25] in glasses.
X-ray scattering curves for poly(n-butyl methacrylates) with different degrees of polymerization P, including the butyl methacrylate monomer and the dibutyl ester of 2,4-dimethyl glutaric acid as dimer analogues, are compared in figure 6. Wide-angle x-ray scattering measurements on polymers and oligomers with the 2D detector (figure 6(a)) show that the general features of the scattering curves are very similar down to PnBMA chains with only six monomeric units. In particular, the prepeak (II) indicating the nanophase separation is observed for all these samples. Its maximum position is nearly independent of the chain length in the range P ≥ 25. Looking in some more detail at the prepeak using SAXS, one observes that the intensity of the prepeak starts to decrease significantly below P ≈ 25. This tendency is accompanied by a slight shift of the peak maximum to larger q values, related to smaller repeating units d II (figure 6(b)). While d II ≈ 1.25 nm is observed for all high-molecular-weight samples (P ≥ 25), the scattering data for the hexamer (PnBMA 6) indicate a repeating unit of only d II ≈ 0.9 nm. For the dimer and the monomer, no pronounced peak maximum could be observed in the SAXS data and the prepeak tends to disappear (figure 6(b)). The maximum position of the van der Waals peak (I) is nearly independent of the chain length for all samples with 6 < P < 405, indicating a nearly constant average distance of the non-bonded neighbouring atoms in the melt of about d I = 0.495 nm. Preliminary WAXS data for the dimer indicate that the prepeak intensity is indeed small and that the van der Waals peak shifts significantly to larger q values, i.e. the average distance d I decreases in the small-molecule liquids.
Discussion
The results reported here provide additional evidence for the demixing of incompatible main and side chain parts in polymers with long alkyl groups, and they support the existence of alkyl nanodomains in the amorphous melt with an independent dynamics and a typical dimension of 0.5-2 nm. Temperature-dependent NMR line widths for different carbons in PnDMA strongly support the concept, deduced previously from a comparison of dielectric data with shear and calorimetric data, that the backbone and carboxyl group are involved only in the conventional dynamic glass transition a, whereas the dynamics of the side chain carbons C5-C12 is characterized by an independent relaxation process within the alkyl nanodomains at low temperatures, which is basically decoupled from the immobile main chain. In combination with the calorimetric activity and the non-Arrhenius behaviour, these results support the interpretation of this low-temperature process as a polyethylene-like glass transition α PE in PnDMA. In this sense, the NMR results are also a confirmation of the nanophase separation picture for side chain polymers with alkyl groups. Co-operative motions within isolated alkyl groups are impossible, and aggregated alkyl groups seem to be a natural assumption considering the lack of mobile main chain carbons and moving dipoles in the frequency-temperature range of the α PE process.
In an oversimplified picture, the situation in nanophase-separated side chain polymers at low temperatures (T g (α PE ) < T < T g (a)) can be described as follows. There are already very mobile alkyl nanodomains interrupted by immobile main chains. Co-operative motions within these alkyl nanodomains are possible and give rise to the low-temperature glass transition α PE. The continuous, relatively rigid 'main chain phase' causes the still relatively high modulus of the material in this state (G′ ≈ 10⁸ Pa). The final glass-to-rubber transition occurs after the main chains become mobile and complete monomeric units can move. This happens in the region of the conventional dynamic glass transition a. The influence of the alkyl volume fraction on the main chains is reflected by a shift of the conventional T g to low temperatures, usually explained by internal plasticization [27]. However, several points remain open, since details of the morphology are not yet understood. It is clear, however, that there should also be changes in the morphology with side chain length or volume fraction of the alkyl groups. For short side chains, one would expect isolated alkyl nanodomains, whereas for really long alkyl groups, two continuous phases should occur. In comparison with block copolymers, the situation in nanophase-separated side chain polymers with a high degree of polymerization is a priori asymmetric with respect to the volume fraction φ, since the existence of long main chains causes the occurrence of at least one continuous phase for all φ values. Aggregates as discussed in the quasi-one-dimensional picture (figure 1, l.h.s.) appear to be a realistic approach on very short length scales, but extended layer-like structures with a large lateral dimension should not exist. This can be concluded from the absence of higher-order maxima belonging to the prepeak in the scattering curves of all side chain polymers investigated.
This indicates less regular structures compared with the high degree of order observed in microphase-separated block copolymers [28]. The finding that neither a linear dependence nor the Gauss-like approximation describes the d II (C) values for amorphous PnAMAs is understandable, since for short alkyl groups, Gauss behaviour is unexpected, whereas for longer alkyl groups in the amorphous state, an all-trans configuration is unlikely.
In any case, the existence of rigid main chains will be a restriction for the co-operative dynamics in small alkyl nanodomains at low temperatures. Since typical features of a dynamic glass transition are observed, one can discuss (at least for C > 6) the α PE process as a hindered glass transition in a self-assembled confinement [1]. The possibility to control the size of small nanodomains opens new perspectives and might allow more detailed answers to important questions about co-operativity and dynamic heterogeneities in glasses. The first, more traditional question might be: at which domain size does the dynamic glass transition of the confined liquid begin to differ from the behaviour of the bulk liquid? For the investigated systems, the question is: at which alkyl domain size does the α PE process become similar (or identical) to the glass transition in amorphous polyethylene? It was shown recently that there is at least a tendency in this direction: with increasing alkyl nanodomain size, T g (α PE ) and the steepness index m(α PE ) approach the values reported for amorphous polyethylene [1]. However, one should note that there are possibly additional complications because of orientation and, finally, crystallization effects in polyethylene-like materials. The second important question one can address concerning the confined dynamics is: at which domain size does the relaxation process inside the domains really become a dynamic glass transition? It was shown that the α PE process in the butyl member, where clear indications for an additional α PE peak in shear data were first observed, is Arrhenius-like. With increasing side chain length, a strong-to-weak transition was observed [1], i.e. the non-Arrhenius character of the α PE process increases. An important experimental question is then whether the α PE process shows a significant calorimetric response from the very beginning or becomes calorimetrically active only at a certain alkyl nanodomain size.
This would add new experimental information to the discussion about early stages of cooperative motions and their relation to more localized secondary relaxations. Such relations are discussed in different approaches to understand the nature of the dynamic glass transition [29,30]. Unfortunately, calorimetric experiments of this type are complicated, since the small volume fraction of small alkyl nanodomains causes less pronounced signals in the relevant experiments. The hope is that the increasing sensitivity of newly developed calorimetric methods will help to solve this experimental problem.
The x-ray scattering data presented here suggest that nanophase separation effects are stable against significant variations in the molecular microstructure. Random PnAMA copolymers whose side chain lengths are not too different behave in x-ray scattering experiments like homopolymers with the same average C number. All trends concerning the prepeak are similar to those observed for the homopolymers. There is also no dramatic peak broadening, indicating that short and long alkyl groups are mixed in most of the alkyl nanodomains. The number of alkyl groups and the intermixing of different lengths in each nanodomain are, at least for small differences in C number, sufficient to produce mainly one nanodomain size. This does not mean that there is no distribution of alkyl nanodomain sizes caused by copolymerization, but this effect is obviously not dominant for the investigated copolymers. The fact that the scattering curves are not dominated by one of the comonomers seems understandable for amorphous systems having a strong tendency to maximize the density. A large degree of similarity between random PnAMA copolymers and homopolymers in the amorphous state is also observed for other physical properties such as relaxation effects [31] and the typical low-temperature anomalies in the range below 1 K [32]. On the other hand, it is known that block copolymers made of two different poly(n-alkyl methacrylates) show, in many cases, microphase separation [33]. This suggests that there could also be a tendency in random copolymers with significantly different side chain lengths favouring the aggregation of only one type of alkyl group in each nanodomain. First experiments on random butyl-lauryl methacrylate copolymers seem to support this idea [34].
A systematic increase in the scattering intensity with increasing side chain length in the plateau region 0.3 nm⁻¹ < q < 1 nm⁻¹ might be an additional indication of nanophase separation in amorphous PnAMAs. Although we have no absolute values for the scattering intensity so far, there seem to be strong indications for excess scattering in addition to the contributions from density fluctuations [35] in this q range. A possible explanation would be additional contributions due to concentration fluctuations [36] in nanophase-separated systems. A decrease in the scattering intensity in the same q range observed for PnAMAs with longer alkyl groups (C ≥ 12) seems to be related to side chain crystallization within the alkyl nanodomains, i.e. a lack of contributions from fluctuation scattering if the alkyl nanodomains are partly crystalline. Further details of the scattering intensity at low q values also seem interesting in connection with the discussion about long-range density fluctuations [25] and dynamic heterogeneities [30] in glasses. Thus, more detailed information about the different contributions to the scattering intensity, and absolute scattering intensities, would be important.
Scattering data for nBMA systems, ranging from the nBMA monomer to long PnBMA chains with P = 405 monomeric units, underline the robustness of the nanophase separation phenomenon. Obviously, the main features of the nanophase structure are nearly identical for all polymers and oligomers with >25 monomeric units. Note that these PnBMAs have glass temperatures between 303 K (close to the x-ray measurement temperature of 298 K) and 281 K, i.e. the structures in the melt and in the glassy state are comparable. The structure is significantly different for the shortest oligomers (P ≤ 10), especially for the monomer and the dimer. Typical indications of nanophase separation in x-ray scattering curves, namely the prepeak, disappear systematically with decreasing polymerization degree of the system. This shows that the strong tendency of different parts of the molecule to demix is a common feature of all polymeric systems, whereas for very short oligomers and small molecules this tendency is reduced, or at least very irregular structures are formed. For nBMA systems, the chain structure is obviously an important condition for well-pronounced nanophase-separation effects. Further details of the transition from polymer chains to small-molecule liquids seem to be an interesting topic for further investigations.
Conclusions
In summary, we have shown in this paper that nanophase separation is a general feature of poly(n-alkyl methacrylates), independent of the details of their molecular microstructure. Alkyl nanodomains formed in homopolymers are similar to those in random PnAMA copolymers with the same average number of alkyl carbons per side chain. The alkyl nanodomains in all poly(n-butyl methacrylates) with >25 monomeric units are practically identical; only at much lower degrees of polymerization do nanophase-separation effects tend to disappear. NMR data for poly(n-decyl methacrylate) support the existence of an independent polyethylene-like glass transition α_PE within alkyl nanodomains of size ∼1 nm. The insensitivity of the main findings to significant changes in the molecular microstructure, and the consistency of the results from different dynamic methods, strengthen our opinion that nanophase separation effects are important for an understanding of complex materials in nanoscience and nature.
|
v3-fos-license
|
2015-09-18T23:22:04.000Z
|
2015-04-29T00:00:00.000
|
16823678
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/16/5/9635/pdf",
"pdf_hash": "518ceff86314077727687c847a27cc07aad552af",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1208",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "518ceff86314077727687c847a27cc07aad552af",
"year": 2015
}
|
pes2o/s2orc
|
The miRNA Transcriptome Directly Reflects the Physiological and Biochemical Differences between Red, White, and Intermediate Muscle Fiber Types
MicroRNAs (miRNAs) are small non-coding RNAs that can regulate their target genes at the post-transcriptional level. Skeletal muscle comprises different fiber types that can be broadly classified as red, intermediate, and white. Recently, a set of miRNAs was found expressed in a fiber type-specific manner in red and white fiber types. However, an in-depth analysis of the miRNA transcriptome differences between all three fiber types has not been undertaken. Herein, we collected 15 porcine skeletal muscles from different anatomical locations, which were then clearly divided into red, white, and intermediate fiber type based on the ratios of myosin heavy chain isoforms. We further illustrated that three muscles, which typically represented each muscle fiber type (i.e., red: peroneal longus (PL), intermediate: psoas major muscle (PMM), white: longissimus dorsi muscle (LDM)), have distinct metabolic patterns of mitochondrial and glycolytic enzyme levels. Furthermore, we constructed small RNA libraries for PL, PMM, and LDM using a deep sequencing approach. Results showed that the differentially expressed miRNAs were mainly enriched in PL and played a vital role in myogenesis and energy metabolism. Overall, this comprehensive analysis will contribute to a better understanding of the miRNA regulatory mechanism that achieves the phenotypic diversity of skeletal muscles.
Introduction
Skeletal muscle is the major organ, by weight, in the body, accounting for approximately 40% of the body's mass. It plays an important role in exercise and energy metabolism [1], and is a heterogeneous tissue comprising fibers that can be broadly classified as red (oxidative), intermediate (oxidative-glycolytic), and white (glycolytic) fiber types. Each fiber type is characterized by increased levels of different types of myosin heavy chain (MHC): red fibers, Myh7 and Myh2; intermediate fibers, Myh1; and white fibers, Myh4 [2–4]. Red fibers contain higher levels of mitochondria, capillaries, myoglobin, and lipids than white fibers. White fibers have higher levels of glycolytic enzymes than red fibers; for example, lactate dehydrogenase A (LDHA), which is one of the key metabolic enzymes for glycolysis in skeletal muscle [3]. Intermediate fibers have characteristics intermediate between red and white fibers, and display both oxidative and glycolytic capacities. The diversity of muscle fibers plays important roles in metabolic health and disease. Whole-body insulin sensitivity and insulin-stimulated glucose transport are positively correlated with the proportion of oxidative fibers [5], while glycolytic muscle fibers show greater atrophy than oxidative fibers in response to food deprivation [6].
miRNAs are small non-coding RNAs of ~22 nucleotides (nt) in length that regulate gene expression by specifically binding target mRNAs and mediating mRNA degradation and/or translational inhibition. Emerging evidence has demonstrated that miRNAs play a critical role in skeletal muscle differentiation and metabolism [7,8]. Recently, several studies have found that, reflecting the different metabolic needs of oxidative and glycolytic skeletal muscles, the two fiber types share most muscle-specific miRNAs but express them at distinct levels [9,10]. Interestingly, miR-499 and miR-208b are positively associated with oxidative red fibers, as they repress transcriptional repressors of slow-twitch contractile protein genes, such as Sox6 [11].
Pigs (Sus scrofa) have considerable agricultural significance in meat production. Skeletal muscle is a highly heterogeneous tissue; nonetheless, previous studies of miRNA transcriptome differences focused on only two fiber types. To better understand and elucidate the major determinants of the phenotypic properties of various muscle types at the miRNA level, we screened and selected three muscles that typically represented each muscle fiber type (i.e., red, intermediate, and white) from 15 candidates, based on differences in their muscle fiber composition and metabolic capacity, and then investigated the differences in their miRNA transcriptomes using a deep sequencing approach. Illuminating the miRNA-based post-transcriptional regulatory mechanism in different fiber types will enrich our knowledge of the roles of miRNA in muscle biology, and help us to further understand the characteristics of distinct muscle fiber types.
The Characteristics of Skeletal Muscle Fiber Types
To determine the muscles that typically represent each muscle fiber type, we collected 15 porcine skeletal muscles from different anatomical locations. qRT-PCR was performed to quantify the content of four MHC isoforms (Myh1, Myh2, Myh4, and Myh7 genes) in the 15 skeletal muscles. Although the mRNA sequences of these four genes show high identity (>75%) with each other, Sanger sequencing of the PCR products of the MHC isoforms confirmed the specificity and reliability of our qRT-PCR primers (data not shown). As shown in Figure 1a, hierarchical clustering analysis showed that the distinct muscle types are divided into three clusters: red fiber (Myh7 and Myh2), intermediate fiber (Myh1), and white fiber (Myh4), based on the ratios of MHC isoforms, consistent with the previous classification of muscle fibers [12–14]. Meanwhile, mitochondrial contents and relative expression levels of LDHA were measured to distinguish the differences in metabolic capacity between the distinct muscle fibers. Results showed that red fibers have the highest mtDNA copy number (Figure 1b), while white fibers have the highest LDHA expression levels (Figure 1c). Intermediate fibers exhibited intermediate levels for both measures.
Among these muscles, the peroneal longus (PL) contained the highest number of copies of mtDNA per cell and lower LDHA expression (Figure 1b), suggesting a higher oxidative capacity compared with the other skeletal muscles. In contrast, the longissimus dorsi muscle (LDM) exhibited the highest abundance of LDHA expression and a relatively low mtDNA copy number, suggesting it to be more proficient in anaerobic glycolytic metabolism (Figure 1c). Intriguingly, the psoas major muscle (PMM), previously regarded as a typical red muscle fiber type [4,15–17], was found to be an intermediate muscle fiber type (Figure 1a) in this study, with moderate mtDNA copy number and LDHA expression level. Moreover, the color of PMM was intermediate between red (PL) and white (LDM), which further confirmed the intermediate phenotype of PMM (Figure 1d). Therefore, PL, PMM, and LDM were selected as the most representative muscles for the red, intermediate, and white fiber types, respectively, in the subsequent analyses.
Summary of Deep Sequencing Data
To further identify the miRNA transcriptome differences between the three fiber types (i.e., PL, PMM, and LDM), a deep sequencing approach was applied [18]. As a result, we obtained 9.17 million (M), 18.46 M, and 16.62 M raw reads for PL, PMM, and LDM, respectively. More than 98.58% (98.83% ± 0.26%, n = 3) of the raw reads in each library passed the quality filters (see Experimental Section) and were considered "mappable reads". Length distribution analysis showed that the majority of reads ranged from 21 to 23 nt in length; the 22 nt small RNAs were the most abundant (61.50%), followed by 23 nt (18.33%) and 21 nt (13.37%) (Figure S1a). These results indicate the reliability of our small RNA sequencing; thus, the mappable reads were selected as reliable miRNA candidates for subsequent analysis.
The vast majority (81.14%) of the mappable sequences were mapped to known precursor miRNAs (pre-miRNAs) (miRBase 19.0) (Table S1). The identified precursor and mature miRNAs were then divided into three groups using the following alignment criteria: (1) Known porcine miRNAs: 424 miRNAs mapped to 342 known porcine pre-miRNAs (Table S2.1); (2) Conserved miRNAs: 152 miRNAs mapped to 135 other known mammalian pre-miRNAs, and these pre-miRNAs then mapped to the pig genome (Table S2.2); (3) Candidate miRNAs: 397 miRNAs (longer than 18 nt and unmapped to any known mammalian pre-miRNAs) encompassing 329 candidate pre-miRNAs that were predicted RNA hairpins derived from the pig genome (Table S2.3). Notably, distinct pre-miRNAs can encode identical mature miRNAs, so the 973 miRNAs (i.e., reference sequences) correspond to 912 unique miRNA sequences. Known porcine miRNAs represented by three or more sequence reads (n = 365) were used for the following analyses to ensure the high reliability of the reported results (Table S3).
Notably, 79.35% (292 out of 365) of the known unique porcine miRNAs were expressed in all three libraries, while only 4, 26, and 6 of the unique miRNAs were specifically expressed in PL, PMM, and LDM, respectively, and the vast majority of these tissue-specific miRNAs were at low abundance (3-27 reads). Therefore, known porcine miRNAs with high abundance and shared between all three libraries were used for the following analysis. In addition, the small RNA sequencing data showed a significant positive correlation with qRT-PCR results (Pearson's r = 0.780, p < 10⁻⁶), highlighting the reliability of the small RNA-sequencing approach (Figure S1b).
Universally Abundant miRNAs across the Three Muscle Types Are Associated with the Metabolic Pathways of Myogenesis and Angiogenesis
A small number of miRNAs dominated the total miRNA pool [19], thus we first analyzed the most abundantly expressed (top 10) unique miRNAs in each library. The top 10 unique miRNAs with high abundance accounted for more than 70% of the total unique miRNA reads (Figure 2a), indicating that they might play important regulatory roles in the functional maintenance of skeletal muscle (e.g., proliferation and differentiation). Notably, four miRNAs (miR-133a, miR-143, miR-27b, and miR-10b) were in the top 10 most abundant miRNAs in all three libraries (Figure 2b-d).
miR-133a, a muscle-specific miRNA involved in myogenesis, showed little difference (<1.5-fold) among the three libraries [20], indicating it may play a housekeeping role in the three muscle tissues [21,22]. In contrast, miR-143 was differentially expressed (>1.5-fold) among the three libraries, suggesting it might be a dominant miRNA contributing to the physiological and functional differences between the fiber types. Through analysis of its target genes (Table S4), we found that miR-143 was primarily involved in metabolic pathways (e.g., mitochondrial, pyruvate metabolism, glycolysis/gluconeogenesis) [23]. In addition, mtDNA copy number (Figure 1b) and the pyruvate metabolism marker gene LDHA (Figure 1c) were differentially expressed among the three muscles. Interestingly, compared with PMM and LDM, both miR-27b and miR-10b were upregulated (>1.5-fold) in PL. miR-27b was found to be upregulated during myogenic differentiation and directly targets Pax3 and MSTN [29,30]. miR-10b was also found to be a regulator of myogenesis [25]. Taken together, we propose that there might exist certain differences in myogenesis among the three muscles. Furthermore, the myogenesis marker genes (Myf4, MyoD, BMP4, Myf5, and SRF) [31–34] were significantly more highly expressed in PL, which confirmed that PL had a higher myogenesis capacity (Figure 2g). The activation of myogenic progenitors (e.g., satellite cells) contributes to the myogenesis of adult skeletal muscle tissues [35]; thus, we propose that PL may contain more active myogenic progenitors than PMM and LDM. Moreover, miR-10b is required for angiogenesis, indirectly inducing extracellular matrix remodeling and cell migration [36,37]. PGC-1α and VEGFA have been shown to play an important role in angiogenesis [38–42], and in this study these two genes showed significantly higher expression in PL (Figure 2h), suggesting that PL possesses a higher capillary load.
In summary, analysis of the four miRNAs abundantly expressed in all three muscles (miR-133a, miR-143, miR-27b, and miR-10b) not only suggests common characteristics of skeletal muscle, but also points to differences between the different fiber types with regard to metabolic pathways (e.g., mitochondrial, pyruvate metabolism, glycolysis/gluconeogenesis), myogenesis, and angiogenesis (Figures 1c and S2). Further investigation is encouraged to better understand the influence of miRNAs on the phenotypes of fiber types.
Identification and Functional Analysis of Differentially Expressed miRNAs among Three Muscle Fibers
To further compare the miRNA expression patterns among the three muscles, we analyzed, using IDEG6, the 292 known porcine miRNAs that were expressed in all three muscles. Of these 292, 155 were found to be differentially expressed (DE) (p < 0.001) among the three libraries (Table S5). It is well known that miRNAs function in a dose-dependent manner [4]; thus, the higher-abundance miRNAs (reads ≥ 10,000) were considered more important. Therefore, in addition to the four "top 10" miRNAs shared by all three muscles (miR-133a, miR-143, miR-27b, and miR-10b), 44 miRNAs were identified as both high abundance and upregulated in one of the three tissues (>1.5-fold relative to both of the other two libraries simultaneously) and used for subsequent functional analysis (Figure 3). There were 37 DE miRNAs enriched in PL, but few miRNAs enriched in PMM or LDM. This may be explained by the high similarity of the expression patterns of PMM and LDM (Figure S3). For functional enrichment analysis, we gathered target information for the upregulated DE miRNAs from previous reports that had experimentally validated these targets in muscle tissues and/or cells (Table S6). The target genes of the remaining miRNAs, whose functions had not been previously reported, were predicted using the specific algorithms of the MiRanda and TargetScan software based on our in-house dataset of porcine skeletal muscle 3' untranslated regions (UTRs) and a previous report [23] (Table S7).
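The selection rule described above (high abundance, i.e. reads ≥ 10,000, plus >1.5-fold over both other libraries simultaneously) can be sketched as a simple filter. The function name and the counts in the usage example are illustrative, not values from Table S5:

```python
def enriched_in(tissue, counts, fold=1.5, min_reads=10_000):
    """Selection rule sketched from the text: a miRNA counts as
    'enriched' in one tissue if it is high-abundance there
    (reads >= 10,000) and >1.5-fold above BOTH other tissues
    simultaneously.  Counts are assumed already normalized
    per library."""
    c = counts[tissue]
    others = [counts[t] for t in counts if t != tissue]
    return c >= min_reads and all(c > fold * o for o in others)

# Illustrative normalized counts for a single miRNA:
counts = {"PL": 30_000, "PMM": 12_000, "LDM": 9_000}
enriched_in("PL", counts)    # -> True
enriched_in("PMM", counts)   # -> False (not >1.5-fold above PL)
```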
Numerous studies indicate that the insulin-like growth factor (IGF) pathways act as positive regulators of myogenesis [32,47,48]. We found six PL-enriched miRNAs (miR-125b, miR-126, miR-128, miR-486, and miR-99a/b) involved in the insulin signaling pathway (Figure 4b). miR-486 promoted the insulin signaling pathway [49], while the others, especially miR-128 (>14-fold), miR-99a (>25-fold), and miR-99b (>15-fold) repressed the insulin signaling pathway. To further study downstream effects on this pathway, we measured the relative expression levels of four insulin signaling pathway marker genes (IGF1, PI3K, Akt1, and mTOR) ( Figure S4). IGF1 and PI3K exhibited higher expression levels in PL, which indicated that PL muscle is better able to stimulate glucose transport [5,50]. In contrast, the expression levels of Akt1 and mTOR, which promote skeletal muscle hypertrophy, especially in fast muscle [51,52], were significantly higher in PMM. Combining the miRNA and mRNA expression data, these conflicting results indicate that regulation of insulin signaling is a complicated process in these three tissues. Nonetheless, we identified numerous DE miRNAs that were involved in myogenesis, and remarkably, most of them were especially enriched in PL and directly targeted slow muscle repressors or fast muscle genes, indicating their vital roles in the development of oxidative red muscle. The different expression levels of myogenesis marker genes confirmed that PL had higher muscle hypertrophy and differentiation capacity.
Energy Metabolism-Related miRNAs Enriched in PL
Interestingly, we found that two angiogenesis-related miRNAs (miR-26a and miR-126), shown to be involved in angiogenesis by targeting SMAD1/4 [53] and Spred-1 [54], respectively, were highly expressed in PL (Figure 5a), suggesting that PL contains abundant capillaries. Additionally, we found that two miRNAs (miR-100 and miR-199a, >10-fold), known to be involved in reducing hypoxic damage by targeting Hif-1α/Sirt1 and FGFR3 [55], respectively (Figure 5b), were significantly enriched in PL, suggesting that PL might have a relatively higher oxygen content than the other muscles. Combined with our above results, nine miRNAs mainly associated with angiogenesis (miR-10b, miR-26a, and miR-126), reducing hypoxic damage (miR-100 and miR-199a), and slow muscle formation (miR-499, miR-208b, miR-30a, and miR-23a) were found to be highly expressed in PL (Figure 5c). Among them, two miRNAs implicated in linking muscle fiber type to energy metabolism [11,46] were highly expressed in PL: miR-208b, encoded within an intron of the Myh7 gene (Figure S5a), and miR-499, encoded within an intron of Myh7b (i.e., another gene encoding the slow-tonic MHC [56]) (Figure S5b). Collectively, these miRNAs play a critical role in energy metabolism in red fibers through an enhanced capillary load. This could result in the transport of more nutrients (e.g., glucose) and oxygen and, coupled with higher levels of mitochondrial content, could result in improved glucose use for mitochondrial oxidative metabolism. It is worth noting that most of the above hypotheses/conclusions are based on previous reports from multiple model organisms, and there are still some conflicts between our hypotheses and reports from other organisms. For example, the Prdm1 gene, which is regarded as an activator of the fast muscle program in our current study, was shown to promote slow muscle formation in zebrafish [57].
Notably, a recent study in mouse indicated that the function of the evolutionarily conserved Prdm1 in the control of the slow-twitch myogenic program is not conserved between teleosts and mammals [58]. It is therefore reasonable to assume that these conflicting results may be due to species-specific molecular regulatory networks. Further studies focusing on species-specific regulation of miRNAs are needed to elucidate the complicated epigenetic mechanisms underlying the variations in formation and function among distinct muscle types.
Animal Ethics Statement
All research involving animals was conducted according to the Regulations for the Administration of Affairs Concerning Experimental Animals (Ministry of Science and Technology, China, revised in June 2004) and approved by the Institutional Animal Care and Use Committee in the College of Animal Science and Technology, Sichuan Agricultural University, Sichuan, China under permit No. DKY-B20110807. Animals were allowed access to food and water ad libitum under normal conditions, and were humanely sacrificed as necessary, to ameliorate suffering.
Animals and Tissue Collection
Given the plasticity and maturation processes of porcine myofibers [59], 210-day-old female Landrace pigs, which are in the young adult stage of their lifespan [60,61] and have a stable myofiber composition in skeletal muscles, were selected to investigate the miRNA transcriptome variations underlying the physiological and biochemical differences between distinct porcine skeletal muscle types. Fifteen muscles were obtained from three female Landrace pigs (210 days old), immediately frozen in liquid nitrogen, and stored at −80 °C until RNA extraction.
Quantitative Real-Time PCR: mRNA, miRNA Expression, and mtDNA Copy Number
Total RNA was isolated from the muscle tissues using TRIzol reagent (Takara, Dalian, China) according to the manufacturer's protocol. For mRNA, cDNA was synthesized using the PrimeScript RT Master Mix kit (Takara) following the manufacturer's recommendation. qRT-PCR was performed using the SYBR Premix Ex Taq kit (Takara) on a CFX96 Real-Time PCR detection system (Bio-Rad Laboratories, Richmond, CA, USA). Porcine PPLA, RPL4, and YWHAZ were simultaneously used as mRNA endogenous control genes [62]. The mRNA primers are shown in Table S8. For miRNA, the expression levels were validated using the SYBR PrimeScript miRNA RT-PCR Kit (Takara) on the CFX96 Real-Time PCR Detection System. Three miRNA endogenous control genes (U6 snRNA, 18S rRNA, and 5S rRNA) were used in this assay [63]. The forward primer of each miRNA was identical in sequence and length to the miRNA itself (i.e., the most abundant isomiR) based on our sequencing results. The 2^−ΔΔCt method was used to calculate the relative expression levels of mRNAs and miRNAs.
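A minimal sketch of the 2^−ΔΔCt calculation, assuming one target/reference Ct pair per condition; the function name and the Ct values in the usage example are illustrative, not measurements from the study:

```python
def delta_delta_ct(ct_target_sample, ct_ref_sample,
                   ct_target_calibrator, ct_ref_calibrator):
    """Relative expression by the 2^-ddCt method:
    dCt  = Ct(target) - Ct(reference) within each condition,
    ddCt = dCt(sample) - dCt(calibrator),
    fold change = 2^-ddCt."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Illustrative Ct values: the target crosses threshold 2 cycles
# earlier (relative to the reference gene) in the sample than in the
# calibrator, i.e. ~4-fold higher expression.
delta_delta_ct(22.0, 18.0, 24.0, 18.0)   # -> 4.0
```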
The ratio of MHC isoforms was calculated as described previously [64] (formula (1)). Total DNA was isolated from muscle tissues using the TIANamp Genomic DNA Kit (Tiangen, Beijing, China). The number of mtDNA copies per cell was quantified using qRT-PCR as previously described [65]. We selected three mitochondrial DNA-specific genes (ATP6, COX1, and ND1) and a single-copy nuclear DNA gene (glucagon gene, GCG) [66] to calculate the number of mtDNA copies per cell using the following formula:

mtDNA copies per cell = (No. of copies of the mtDNA gene)/(No. of copies of GCG) (2)
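Formula (2), averaged over the three assayed mitochondrial genes, can be sketched as follows; the function name and the copy-number inputs in the usage example are illustrative:

```python
def mtdna_copies_per_cell(mito_gene_copies, gcg_copies):
    """Formula (2): number of mtDNA copies per cell as the ratio of
    mitochondrial gene copies to copies of the single-copy nuclear
    gene GCG, averaged here over the assayed mitochondrial genes
    (ATP6, COX1, ND1)."""
    return sum(m / gcg_copies for m in mito_gene_copies) / len(mito_gene_copies)

# Illustrative copy-number estimates (e.g. from qPCR standard curves):
mtdna_copies_per_cell([1.2e6, 1.0e6, 0.8e6], 2.0e3)   # -> 500.0
```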
Validation of the Specificity of Myosin Heavy Chain Gene Primers
For the four MHC genes (Myh1, Myh2, Myh4, and Myh7), the presence of PCR products was confirmed by 2% agarose gel electrophoresis. Subsequently, the PCR products were cloned into the pMD19-T Vector (Takara), and randomly selected clones (n = 3) were sequenced using the Sanger sequencing approach to validate the specificity of the PCR products (Huada Company, Beijing, China).
Small RNA Library Construction and Sequencing
Total RNA was extracted from PL using the mirVana™ miRNA isolation kit (Ambion, Austin, TX, USA) following the manufacturer's procedure. The quantity and purity of total RNA were monitored by NanoDrop ND-1000 spectrophotometer (Nano-Drop Technologies, Wilmington, DE, USA) at 260/280 nm (ratio > 2.0). The integrity of total RNA was monitored by the Bioanalyzer 2100 and RNA 6000 Nano LabChip Kit (Agilent Technologies, Palo Alto, CA, USA) with RIN number > 6.0. Equal quantities (5 µg) of small RNA isolated from three female Landrace pigs were pooled. Briefly, approximately 15 µg of small RNA was used for library construction and sequencing. Small RNA fragments (between 10 and 40 nt) were isolated by polyacrylamide gel electrophoresis (PAGE) and ligated with proprietary adaptors (Illumina, San Diego, CA, USA). The small RNA fractions were then converted to cDNA by RT-PCR and the cDNA was sequenced on the Genome Analyzer GA-II (Illumina) following the recommended manufacturer's protocol. The small RNA sequence data have been uploaded to NCBI's Gene Expression Omnibus (GEO) (Accession Number GSE64523).
In Silico Analysis of Small RNA-Sequencing Data
The small RNA-sequencing data of LDM and PMM were obtained from the same pigs in our previous study [9]. The raw reads of all these three tissues (PL, PMM, and LDM) were then subjected to a series of additional strict filters (i.e., the following reads were removed: 3' adapter not found; length less than 16 bases or more than 29 bases; junk reads). Then the high-quality reads were mapped to the pig genome (Sscrofa10.2) using NCBI Local BLAST following three steps in order: (1) map the high-quality reads to the precursor miRNAs of pig and 24 other mammals in miRBase 19.0; (2) map the mapped high-quality reads to pig genome (Sscrofa10.2) to obtain their genomic locations and annotations using NCBI Local BLAST; (3) cluster the unmapped sequences in step 1 that mapped to the pig genome as putative novel miRNAs, and predict their hairpin RNA structures from the adjacent 60 nt sequences in either direction from the pig genome using UNAFold [67].
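The strict read filters described above can be sketched roughly as follows. The function name, adapter, and reads are illustrative, and "junk reads" are approximated here as inserts containing non-ACGT characters, since the pipeline's exact definition is not given:

```python
def clean_reads(reads, adapter, min_len=16, max_len=29):
    """Apply the filters described above: discard reads without the
    3' adapter, trim the adapter, and keep only 16-29 nt inserts.
    'Junk' reads are approximated as inserts with non-ACGT
    characters."""
    kept = []
    for read in reads:
        pos = read.find(adapter)
        if pos == -1:                       # 3' adapter not found
            continue
        insert = read[:pos]
        if not (min_len <= len(insert) <= max_len):
            continue                        # insert too short or too long
        if set(insert) - set("ACGT"):       # junk read
            continue
        kept.append(insert)
    return kept

# Illustrative 3' adapter and reads (not the study's actual sequences):
ADAPTER = "TGGAATTC"
reads = ["ACGTACGTACGTACGTACGT" + ADAPTER,  # 20-nt insert: kept
         "ACGTACGTACGTACGTACGT",            # no adapter: dropped
         "ACGTA" + ADAPTER]                 # 5-nt insert: dropped
clean = clean_reads(reads, ADAPTER)         # -> ["ACGTACGTACGTACGTACGT"]
```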
Differentially Expressed (DE) miRNAs
The expression of miRNAs in the three samples was normalized by total mappable reads, and the program IDEG6 was employed to detect DE miRNAs among the three libraries (http://telethon.bio.unipd.it/bioinfo/IDEG6_form/). A unique miRNA was considered differentially expressed when it simultaneously obtained p < 0.001 under three statistical tests (Audic-Claverie test, Fisher exact test, and Chi-squared 2 × 2 test, with Bonferroni correction by pairwise comparison).
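As a partial sketch of this testing scheme, the following implements only the chi-squared 2 × 2 test of the three (the Audic-Claverie and Fisher exact tests are omitted), comparing one miRNA's count x in a library of n1 total mappable reads against count y in a library of n2. A Bonferroni correction over the pairwise comparisons would tighten the threshold further; the counts in the usage example are illustrative:

```python
# Critical value of the chi-squared distribution (df = 1) at p = 0.001
CHI2_CRIT_P001_DF1 = 10.828

def chi2_2x2(x, n1, y, n2):
    """Pearson chi-squared statistic for the 2x2 table
    [[x, n1 - x], [y, n2 - y]], where x and y are one miRNA's read
    counts and n1, n2 the total mappable reads of the two libraries.
    Only one of the three IDEG6 tests is sketched here."""
    a, b, c, d = x, n1 - x, y, n2 - y
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Equal relative abundance in both libraries gives a statistic of 0:
chi2_2x2(100, 1_000_000, 100, 1_000_000)   # -> 0.0
# A 3-fold difference at high counts far exceeds the p = 0.001 cutoff:
chi2_2x2(300, 10_000, 100, 10_000) > CHI2_CRIT_P001_DF1   # -> True
```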
Functional Analysis
Three prediction programs (PicTar [68], TargetScan human 6.2 [69], and MicroCosm Targets Version 5.0 [70]) were used to predict target genes of miRNAs, and the intersection of the results from the three programs comprised the final predicted targets. The predictions were made based on human mRNA-miRNA interactions, as porcine miRNAs were not available in the current versions of the above-mentioned algorithms. The Gene Ontology (GO) terms (biological process (BP), molecular function (MF), and cellular component (CC)) and KEGG pathways enriched among the predicted target genes were determined using the DAVID bioinformatics resources [56].
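The intersection step can be sketched in one line; the function name and the gene symbols in the usage example are illustrative, not actual predictions from the study:

```python
def consensus_targets(pictar, targetscan, microcosm):
    """Final predicted targets: only genes predicted by all three
    programs are retained, as described above."""
    return set(pictar) & set(targetscan) & set(microcosm)

# Illustrative gene sets:
consensus_targets({"SOX6", "PAX3", "LDHA"},
                  {"PAX3", "LDHA", "MSTN"},
                  {"LDHA", "SP1"})   # -> {"LDHA"}
```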
Conclusions
In this study, we determined the muscle fiber composition of 15 types of porcine muscle tissues derived from distinct anatomical locations, and classified them into red, intermediate, and white muscle types. The peroneal longus muscle (PL), psoas major muscle (PMM), and longissimus dorsi muscle (LDM) were then selected as the typical tissues for the red, intermediate, and white muscle types, respectively, and subjected to miRNA transcriptome investigation. As a result, muscle type-specific enriched miRNAs were identified and implicated in promoting the specific formation of distinct muscle fibers. DE and functional enrichment analyses showed that the DE miRNAs among distinct muscle types were mainly related to slow-oxidative myofiber formation, angiogenesis, energy metabolism, and reduced hypoxic damage, which reflects the intrinsic physiological and metabolic characteristics of the different muscle types. In addition, the expression pattern of a set of miRNAs (miR-10b, miR-26a, miR-126, miR-199a, miR-208b, and miR-499) linked the capacity for myogenesis and energy metabolism levels with distinct fiber types. This study will aid further understanding of miRNAs and their biological functions in different muscle fiber types.
|
v3-fos-license
|
2019-04-10T13:03:28.505Z
|
2019-04-03T00:00:00.000
|
102348048
|
{
"extfieldsofstudy": [
"Medicine",
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41377-019-0146-x.pdf",
"pdf_hash": "505b9bd5b52170365cf038fa73ca7fa6a0a7f5ba",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1211",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "72dfe24a590a1a3024529d7ee36079d09390c586",
"year": 2019
}
|
pes2o/s2orc
|
Quo vadis, plasmonic optical tweezers?
Conventional optical tweezers based on traditional optical microscopes are subject to the diffraction limit, making the precise trapping and manipulation of very small particles challenging. Plasmonic optical tweezers can surpass this constraint, but many potential applications would benefit from further enhanced performance and/or expanded functionalities. In this Perspective, we discuss trends in plasmonic tweezers and describe important opportunities presented by its interdisciplinary combination with other techniques in nanoscience. We furthermore highlight several open questions concerning fundamentals that are likely to be important for many potential applications.
Introduction
One half of the Nobel Prize in Physics for 2018 was awarded to Arthur Ashkin, "for the optical tweezers and their application to biological systems." This was truly well-deserved, as optical tweezers (Fig. 1a) have been an important scientific tool in many fields 1 , especially for precise force measurements in biophysics. In this Perspective article, we discuss the use of surface plasmon nanostructures to surpass the limits of conventional optical tweezers, an approach termed "plasmonic tweezers." Plasmonic tweezers concentrate light into deeply sub-wavelength scales and thus produce narrower and deeper potential wells than conventional tweezers. This capability permits the trapping of nanoparticles at relatively low optical powers with a precision (in position) in keeping with their size. Traditional optical tweezers struggle to achieve this. A small particle near the focused beam of traditional optical tweezers (Fig. 1a) will experience scattering forces (radiation pressure and spin curl force) and the gradient force. The latter is proportional to the gradient of the intensity and is the source of the trapping potential that draws the particle to the laser beam focus. However, the gradient force and trapping potential also vary with the cube of the particle diameter (for a nanosphere), so small particles require high laser powers for stable trapping. Furthermore, the diffraction limit means that the trapping potential well of traditional tweezers can be no narrower than roughly half the wavelength.
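The cubic size dependence of the gradient force noted above can be made concrete in the Rayleigh regime. The sketch below uses the standard Clausius-Mossotti polarizability; the polystyrene/water refractive indices and the order-one prefactor conventions are our illustrative assumptions, not values from this Perspective.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity [F/m]
C = 2.998e8        # speed of light [m/s]

def polarizability(diameter, n_particle=1.59, n_medium=1.33):
    """Clausius-Mossotti polarizability of a small dielectric sphere [SI].
    Scales with the cube of the radius -- the key point in the text."""
    a = diameter / 2.0
    eps_p, eps_m = n_particle ** 2, n_medium ** 2
    return (4.0 * math.pi * EPS0 * eps_m * a ** 3
            * (eps_p - eps_m) / (eps_p + 2.0 * eps_m))

def trap_depth(diameter, intensity, n_medium=1.33):
    """Estimate of the potential well depth U ~ (1/2) * alpha * |E|^2 [J],
    with |E|^2 evaluated at the peak intensity of the focus."""
    e_sq = 2.0 * intensity / (C * EPS0 * n_medium)
    return 0.5 * polarizability(diameter, n_medium=n_medium) * e_sq

# A 10x smaller particle sees a 1000x shallower trap at the same power:
ratio = trap_depth(100e-9, 1e10) / trap_depth(10e-9, 1e10)
```

Since the polarizability, and hence the well depth, scales as a³, keeping the trap deeper than a few k_BT for a 10× smaller particle requires roughly 1000× more power — exactly the scaling problem plasmonic field confinement is meant to sidestep.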
An early demonstration of plasmonic tweezers was by Kwak et al., who milled nanoholes in a gold film on a glass substrate and trapped fluorescent latex nanoparticles in water via the enhanced gradient forces produced by the spatially confined fields in the nanoholes (Fig. 1b, ref. 2 ).
In other examples of early work, plasmonic tweezers were demonstrated that consisted of pairs of gold particles on glass substrates 3 and gold nanopillars protruding from a gold film 4 . For the latter, the substrate (silicon) acted as a heat sink, thereby reducing the temperature rise resulting from ohmic losses associated with plasmon excitation. We argue that these and other early works have laid the groundwork for several exciting opportunities in nanoscience for plasmonic tweezers. The structure of this paper is as follows. We begin by discussing these opportunities in the context of the latest advances in plasmonic tweezers. We contend that there are several further challenges that need to be overcome, even in the fundamentals of the trapping mechanism, to realize these possibilities. We conclude by describing our vision of the future potential of plasmonic tweezers.
Opportunities
Early works on plasmonic tweezers emphasized the demonstration that small particles could be trapped (e.g., refs. 2-5). It was shown that plasmonic tweezers could also sense the presence of the trapped particle (e.g., ref. 5). Recent work has expanded the repertoire of sensing modalities of plasmonic nanotweezers to enable them to characterize the trapped object rather than just detect its presence. This broadens the opportunities that plasmonic nanotweezers afford for examining the nanoworld. Wheaton et al., for example, trapped single nanoparticles with a double nanohole (DNH) plasmonic tweezer illuminated by a pair of lasers with slightly different wavelengths. This enabled the determination of the Raman-active acoustic modes of the nanoparticle by measuring fluctuations in the light transmitted through the DNH as a function of the frequency separation of the lasers (Fig. 2a, ref. 6). This represents a powerful means to identify unknown nanoparticles (e.g., quantum dots, proteins, and viruses) via their acoustic vibrations. Another example in the theme of examining the nanoworld relates to the analysis of chiral molecules, i.e., those that cannot be superimposed with their mirror images. These forms ("enantiomers") can interact very differently with other molecules. Enantiomeric purification is thus very important in drug manufacturing 7.

[Fig. 2b: Circularly polarized light (CPL) illuminates a coaxial gold nanoaperture plasmonic tweezer, which in turn exerts a force on a chiral atomic force microscope tip that depends on the handedness of the CPL. Reprinted with permission from Nature Nanotechnology: ref. 8, Copyright 2017. Fig. 2c: Simulated trapping potentials for particles (enantiomers, with particle chirality κ = ±0.6) at 20 nm above a plasmonic coaxial aperture illuminated with circularly polarized light (wavelength 751 nm, transmitted power 100 mW). Reprinted with permission from ref. 9. Copyright 2016 American Chemical Society.]

The trapped object in the demonstration of ref. 8 was a chiral atomic force microscope tip, but one could envision the concept being applied to other nanomaterials, e.g., to sort them by chirality. Such an application was theoretically explored in ref. 9, which showed that opposite enantiomers experience different trapping potentials, with one trapped in a deep potential well and the other repelled with a potential barrier (Fig. 2c). We next provide glimpses into two more opportunities afforded by plasmonic tweezers. The next opportunity is in the field of laboratory-on-a-chip. Flow cytometers have applications ranging from basic research to the diagnosis of health disorders such as blood cancers. In such systems, cells (or other materials) are suspended in liquid and passed through a detection system to enable measurements such as integrated fluorescence or brightfield/darkfield imaging. These measurements are made on the cells one at a time but at very high speed, thereby enabling rich information to be gleaned about heterogeneous populations. Flow cytometers are thus found in many modern biological laboratories. A current challenge for flow cytometry is the analysis of nanoscale biological materials (e.g., exosomes and viruses). Nanoscience laboratories of the future might contain plasmonic nanotweezer flow cytometers. These would combine plasmonic nanotweezers with lab-on-a-chip microfluidics. The unique sensing capabilities of plasmonic nanotweezers (e.g., refs. 6, 8, 9) are not readily available with traditional optical approaches, and thus nanotweezer flow cytometers could present new opportunities for analyzing heterogeneous populations of nanomaterials. Another potential role in lab-on-a-chip is in addressing a challenge faced by sensor devices based on microfluidic chips: the analytes to be sensed need to diffuse from the center of the channel to its surface (on which the sensors are formed).
Mobile plasmonic tweezers that sweep out the three-dimensional volume of the channel could trap nanomaterials and deliver them to sensors (e.g., plasmonic nanotweezers) formed on the surfaces of the channel for precise analysis. Recent work demonstrates this principle. Ghosh and Ghosh demonstrated mobile plasmonic nanotweezers 10 comprising helical ferromagnetic nanostructures with surfaces decorated by silver nanoparticles. Colloidal beads were trapped by the Ag nanoparticles and then transported to a new location by applying a rotating magnetic field to move the entire helical nanostructure (Fig. 3a).
The third opportunity we suggest for plasmonic tweezers is that of integrated structures for cold atom trapping. There is currently much interest in quantum information networks based on trapped ultracold atoms coupled to nanoscale optical cavities. In some of these demonstrations, the atoms were trapped with optical tweezers in a free-space configuration (i.e., with light focused by a lens) 12 . The use of plasmonic tweezers represents an interesting alternative that offers the possibility of higher levels of integration. Stehle et al. made an important first step in this direction by observing the interaction between Bose-Einstein condensates and the optical near-fields above plasmonic structures (Fig. 3b).
Challenges
A first challenge facing plasmonic tweezers is heating. In early work 16 , it was noted that heating could produce convection streams that "…may play a significant role in the trapping process…". In 2010, Ploschner et al. 17 performed a computational study of trapping with a plasmonic nanoantenna and suggested that the particle localization reported in ref. 18 "…may have been due to means other than optical forces…", such as heating. As discussed in the introduction, Wang et al. demonstrated that integrating a heat sink into a plasmonic tweezer drastically reduces the issue of heating 4 . It was also shown that illumination of a gold disk on glass (i.e., without the heat sink) at an intensity representative of some plasmonic tweezers experiments (8 mW/μm²) could result in boiling of water 4 . The situation is less problematic for plasmonic nanotweezers based on nanoapertures (in metal films), as the metal film itself facilitates heat dissipation 6 . Xu et al. performed simulations that predicted that a temperature rise of ~6 K would result from illumination of a nanoaperture in a gold film at an intensity of 6.67 mW/μm² (at λ0 = 1064 nm) 19 . This would represent a modest value for many applications but might be too much for some experiments.
In such cases, the challenge of heating that accompanies plasmonic tweezers remains, and non-plasmonic approaches can be considered. Xu et al. recently demonstrated an optical nanotweezer based on a dielectric nanoantenna 20 . While the optical forces were smaller than those of many plasmonic designs, heating was substantially less 20 . Lastly, we note that, rather than representing a problem that needs to be overcome, heating can be favorable in some applications. Ndukaife et al., for example, demonstrated that the combination of plasmonic heating and an applied electric field can result in fluid motion that can be employed for particle transport 21 . While considerable progress has been made in addressing the challenge of heating since the early days of plasmonic tweezers, understanding its influence and how to control it remain important questions. This is true both when it is desired and when it is not. A second challenge facing plasmonic tweezers is the fundamental understanding of the trapping process. One of the reasons that conventional optical tweezers have proved useful for many applications (e.g., ref. 1 ) is the availability of models that can accurately predict the behaviors of particles near a focused beam. Rohrbach, for example, demonstrated very good quantitative agreement between theory and experiment on the trapping behavior of spheres of sub-wavelength diameter 24 . However, such agreement has eluded plasmonic tweezers. This can be understood by considering the Langevin equation for the motion of a particle in an optical trap (e.g., refs. 19, 25 ):

m_p (d²r/dt²) = F_D + F_g + F_B + F_opt,    (1)

where m_p and r are the mass and position vector of the particle, respectively. The terms F_D, F_g, F_B, and F_opt are the drag force, gravity force with buoyancy, Brownian force, and optical force, respectively. We note that Eq. (1) does not explicitly incorporate thermophoresis, except for the case where heating results in fluid flow and thus an additional force on the particle due to Stokes drag.
In a conventional optical tweezer, the optical force can be predicted a priori because the fields of the focused beam are known from vector diffraction theory and their interaction with a sphere can be understood by Mie scattering. As the trapping is performed in an (approximately) unbounded medium, the standard expression for the Stokes drag force (proportional to the sphere diameter, sphere velocity, and water viscosity) is applicable 26 . The Brownian force is a random Gaussian process. In conventional optical tweezers, one can furthermore drop the inertial term (left-hand side of Eq. (1)) and the gravity with buoyancy term 26 . All terms of Eq. (1) can thus be predicted a priori for conventional tweezers that trap spherical particles in homogeneous media, provided that the laser power, microscope lens numerical aperture, particle properties (diameter and refractive index), and refractive index of the medium are known. Indeed, Eq. (1) can be Fourier transformed so that the power spectrum of the trapped particle can also be predicted a priori 26 . Why this cannot be readily performed for plasmonic tweezers can be understood by re-examining the force terms of Eq. (1). In plasmonic tweezers, particles are not in an unbounded medium (such as water) but are instead trapped near a surface with a complex morphology for which no compact analytical solution for the drag force F_D exists. An additional term also needs to be included in Eq. (1) to describe the force experienced by the particle when it encounters the surface. The electromagnetic fields have a complex distribution that is associated with the plasmonic nanostructure. The optical force F_opt as a function of particle position is thus similarly complex, unlike in conventional optical tweezers, for which it can be simply represented by the product of trapping stiffness and particle position. In addition, the presence of the particle will inevitably modify the field distribution.
This further complicates the situation. It has been argued that this effect can be beneficial 5 , although this is again complicated by the nature of the particle (e.g., dielectric vs metallic, ref. 27 ). This uncertainty leads to the question of what approach can be taken to predict the trapping process and thus the design of new approaches to plasmonic tweezers that will, for example, allow the opportunities described above to be realized. One is to model Eq. (1) numerically 25 . Xu et al. made a first step in this direction by simulating the Brownian motion of nanoparticles in the vicinity of a DNH (Fig. 4a, ref. 19 ). Trajectories with durations of 100 μs were modeled in three dimensions (e.g., upper panel of Fig. 4a). In Fig. 4a (lower panel), the vertical position (i.e., normal to the substrate) is shown as a function of time for three nanoparticle trajectories, showing that the nanoparticles are mostly within the DNH (i.e., −100 nm < z < 0 nm). While this approach shows promise, there exists considerable scope for further development. This includes predicting particle trajectories over much longer time intervals (to allow comparison to experiment), modeling the drag force accurately, and including other forces such as the particle-surface interaction.
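A first step toward modeling Eq. (1) numerically can be sketched in the overdamped limit, dropping the inertial term and using the unbounded-medium Stokes drag — i.e., deliberately ignoring the surface complications described above. All parameter values below are illustrative assumptions, not taken from the cited simulations.

```python
import math
import random

KB = 1.380649e-23  # Boltzmann constant [J/K]

def simulate_overdamped_trap(k=1e-6, radius=100e-9, temp=300.0,
                             dt=1e-5, steps=200_000, seed=1):
    """Euler-Maruyama integration of the 1-D overdamped Langevin equation
    gamma * dx/dt = -k*x + F_B for a sphere in water (inertia dropped)."""
    eta = 1.0e-3                          # water viscosity [Pa s]
    gamma = 6.0 * math.pi * eta * radius  # Stokes drag coefficient
    noise = math.sqrt(2.0 * KB * temp / gamma * dt)
    rng = random.Random(seed)
    x, traj = 0.0, []
    for _ in range(steps):
        x += -(k / gamma) * x * dt + noise * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

# Equipartition check: the position variance should approach k_B*T / k.
traj = simulate_overdamped_trap()
var = sum(x * x for x in traj) / len(traj)
```

With a harmonic optical force this reproduces the equipartition variance k_BT/k; replacing the `-k*x` term with a position-dependent force map (and the drag with a surface-corrected one) is exactly the further development the text calls for.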
As discussed above, a priori prediction of the behavior of a particle in an optical trap relies on knowledge of the particle's properties (i.e., diameter and refractive index for a spherical particle). One might expect this to be a trivial matter, but recent work 22,23 suggests otherwise. We therefore contend that a third challenge facing plasmonic tweezers is how to characterize/model the particle being trapped. This is crucial for future progress, as without it there can be no systematic way of predicting the performance of (and thus evaluating) new types of plasmonic tweezers. One key difficulty is as follows. As noted by Rodríguez-Sevilla et al. 22 , optical tweezers models thus far generally consider the nanoparticle to have a sharp and well-defined interface with the surrounding medium (top panel, Fig. 4b). A more realistic model (lower panel, Fig. 4b) would take the coating layer into consideration. This could comprise coating molecules intentionally added during nanoparticle synthesis or the charge cloud induced in the nanoparticle surroundings, which can be described by the electric double-layer approximation 22 . The need for a realistic model can be understood from the work of Jauffred et al. 23 , who measured the spring constants of (conventional) optical tweezers for the manipulation of colloidal quantum dots. Little connection between the spring constant and the nanoparticle diameter was observed (Fig. 4c), even though one would expect an approximately cubic dependence in a conventional nanoparticle model (upper panel, Fig. 4b). Rodríguez-Sevilla et al. 22 recently made an important step toward resolving this apparent contradiction. They measured the trapping efficiency (Q, ref. 28 ) as a function of nanoparticle zeta potential. The latter is indicative of the net charge on a nanoparticle 29 . The measured trapping efficiency vs zeta potential follows a clear trend (Fig. 4d) that is consistent with a model (red line of Fig.
4d) that assumes that the trapping efficiency is proportional to the net charge. One might expect a nanoparticle with a greater net charge to have a larger effective polarizability (for trapping), although (as noted in ref. 22 ) the assumption of proportionality is simplistic, albeit reasonable as a first approximation. We anticipate that future studies testing this model (or a more sophisticated version of it) for plasmonic tweezers (rather than conventional optical tweezers) could be a fruitful contribution to completing our understanding of the physics of the trapping process.

[Fig. 4 caption: a Simulated nanoparticle trajectories near a DNH, with initial position (x, y, z) = (0, 0, −110 nm), i.e., centered over the DNH and at 10 nm from the gold surface. Reprinted with permission from ref. 19 . Copyright 2018 American Chemical Society. b The optical force on a nanoparticle will be different if the particle is ligand-free (top) or if it contains ligands (bottom). Reprinted with permission from ref. 22 . Copyright 2018 American Chemical Society. c Measured trapping spring constants of quantum dots vs total diameter d based on values for the diameter given by the manufacturer (black circles) or by transmission electron microscopy (gray triangles). Reprinted with permission from ref. 23 . Copyright 2010 American Chemical Society. d Measured trapping efficiencies of nanoparticles vs zeta potential. Red line: model. Reprinted with permission from ref. 22 . Copyright 2018 American Chemical Society.]
Future potential
In the (roughly) decade and a half since their introduction, research on plasmonic tweezers has advanced from basic demonstrations to new interdisciplinary applications. Despite this progress, many exciting applications are yet to be fully implemented. We have described some that mainly relate to the field of nanoscience, namely, examining the nanoworld, laboratory-on-a-chip, and atom optics. Realizing these opportunities will require various challenges to be overcome, such as heating and a fundamental understanding of the physics of the trapping process, including how to accurately model the nanoparticle being trapped. In our opinion, the opportunities (and challenges) presented by plasmonic tweezers described in this Perspective article only scratch the surface of what could be possible. It is likely that other possibilities could result from rethinking the common approach to plasmonic tweezers. One example is Brownian motion. Most plasmonic tweezers aim to generate trapping potentials that are as deep as possible to counter the effects of Brownian motion. However, rather than countering it, perhaps Brownian motion could be harnessed by "rectifying" it with a plasmonic nanostructure so that it preferentially occurs in one direction? This would be interesting not only as a fundamental study but also as an alternative means for nanoparticle transport in lab-on-a-chip devices. A demonstration of microparticle transport via this concept using silicon photonic crystals was recently reported 30 . However, no experimental demonstration has been made using plasmonic structures. With many avenues open for investigation and driven by both curiosity and real-world applications, we anticipate that plasmonic tweezers will continue to be actively pursued for some years to come.
|
v3-fos-license
|
2021-10-14T06:24:04.602Z
|
2021-09-28T00:00:00.000
|
238746037
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1420-3049/26/19/5875/pdf",
"pdf_hash": "3179f3d7f8d3628be5362839a16ccb91e0e03f6f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1214",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "266a8a848fe0aa8ab75fe4d4449c6f80fc01d4ae",
"year": 2021
}
|
pes2o/s2orc
|
Crystal Structure and Solid-State Packing of 4-Chloro-5H-1,2,3-dithiazol-5-one and 4-Chloro-5H-1,2,3-dithiazole-5-thione
The crystal structure and solid-state packing of 4-chloro-5H-1,2,3-dithiazol-5-one and two polymorphs of 4-chloro-5H-1,2,3-dithiazole-5-thione were analyzed and compared to structural data of similar systems. These five-membered S,N-rich heterocycles are planar with considerable bond localization. All three structures demonstrate tight solid-state packing without voids which is attributed to a rich network of short intermolecular electrostatic contacts. These include Sδ+…Nδ−, Sδ+…Oδ−, Sδ+…Clδ− and Sδ+…Sδ− interactions that are well within the sum of their van der Waals radii (∑VDW). B3LYP, BLYP, M06, mPW1PW, PBE and MP2 were employed to calculate their intramolecular geometrical parameters, the Fukui condensed functions to probe their reactivity, the bond order, Bird Index and NICS(1) to establish their aromaticity.
Intramolecular Geometry
Crystals of the dithiazolone 2b were grown by sublimation under a static vacuum (1.6 Pa) at 30 °C. Dithiazolethione 2c demonstrated polymorphism: polymorphs 2c-α and 2c-β were obtained by slow evaporation of concentrated solutions in pentane and benzene, respectively. Suitable single crystals of dithiazoles 2b, 2c-α and 2c-β were then loaded on a goniometer and their crystal structures and solid-state packing were determined at 100 K by single-crystal X-ray diffractometry (Table S1 in Supplementary Information). Below, IUPAC numbering (not crystallographic numbering) is used to assist the comparison between the new 5H-1,2,3-dithiazoles reported herein and those reported in the literature.
The C5-O bond length of 1.208(2) Å in dithiazolone 2b is similar to that reported for dithiazolone 3 [37] and is typical of a C=O double bond (1.21 Å) [41]. The angles around C5 (Table 1) support an sp²-hybridized C of a carbonyl group [41]. The endocyclic C4-C5-S1 angle of 108.7(1)° is narrower and accounts for the five-membered ring strain. The other two angles, C4-C5-O 126.6(2)° and S1-C5-O 124.7(1)°, are wider, with the one next to the Cl being slightly larger, possibly due to the steric interactions between the lone pairs of the Cl and O atoms.
Crystal Packing and Short Contacts
1,2,3-Dithiazol-5-ones/thiones 2b and 2c-α pack in the highly symmetrical Pbca space group with eight symmetry operators in operation, primarily a series of 2-fold screw axes and glide planes (Table S3 in Supplementary Information). The second polymorph of dithiazolethione 2c-β is of lower symmetry (P-1) with only two symmetry operators in effect (identity and inversion).
There is a rich network of structure-directing intermolecular interactions in the crystal packing of dithiazoles 2b, 2c-α and 2c-β (Figures 1-3). These mainly electrostatic interactions optimize contacts between electronegative and electropositive regions in neighboring molecules. Inside these five-membered rings the S-N and C-N bonds are polar owing to the difference in electronegativity of their atoms (2.58 for S, 3.04 for N and 2.55 for C). The S-N and C-N bonds should therefore be considered polarized in the sense of S δ+ . . . N δ− and C δ+ . . . N δ− . It is expected that the location of two electropositive S atoms next to each other will create a strong electropositive region near the S atoms (Table 2). The presence of lone pairs on the N and Cl atoms, and on the O and S atoms of the C=O carbonyl and C=S thione groups, creates pockets of electronegative regions. To better understand the electrostatic contribution to bonding, we calculated the molecular electrostatic potential maps (MEP) for dithiazol-5-ones/thiones 2b and 2c at the B3LYP/def2-TZVPD level of theory (Table 2); red corresponds to a maximum negative charge value of −3.0 × 10 −2 esu, i.e., electronegative character, while blue corresponds to a maximum positive charge value of 3.0 × 10 −2 esu, i.e., electropositive character.
The MEPs for 2b and 2c are, as expected, blue near the endocyclic electropositive S atoms and red in the vicinity of N, Cl, O and S, where the lone pairs of these atoms are located and a build-up of partial negative charge is expected. Consequently, close intermolecular contacts between the endocyclic S and the rest of the electronegative atoms (N, Cl, O and exocyclic S) should be electrostatically favorable. It should be noted that the exocyclic S atom in 2c has areas that are red, i.e., negatively charged, where the lone pairs are expected to reside, and an area in the center of the atom along the C=S bond axis that is green. Our calculations on Fukui functions (Section 2.3.2) predict an ambivalent chemical behavior, which shows the thione S atom to be a site for both nucleophilic and electrophilic chemistry. Each carbon atom of the C-C bond has a significant partial positive charge stemming from the polarization of the C-Cl and C=O bonds due to the difference in electronegativity (C δ+ -Cl δ− , C δ+ =O δ− ). The location of the oxygen atom is, therefore, ideal to create a bifurcated set of C δ+ … O δ− contacts (Figure S1a in Supplementary Information). These C … O contacts are within the ∑VDW [3.25 Å (minor), 3.67 Å (major)] [44].
The network of intermolecular contacts for dithiazolone 2b is concluded with an S … S contact of 3.4771(7) Å [∠C-S … S, 149.75(6)°] between two endocyclic S atoms from neighboring molecules. This represents a short contact due to the proximity of the S atoms. Both atoms are expected to have a partial positive charge (S δ+ ), but since the electron cloud around S is highly polarizable, this could also be an electrostatic interaction.
Molecules of the dithiazolone 2b stack along the c-axis (Figure S1a in Supplementary Information). The crystal packing of S,N-rich heterocycles is dominated by the presence of short S … N intermolecular contacts. S-N bonds are strongly polar (S δ+ … N δ− ) and such contacts are therefore electrostatically favored.
The planar geometry of this molecule results in a short intramolecular S … N contact. The crystal packing of 2c-β is dominated by S … Cl, S … N and S … S contacts. Inside the supramolecular triangles, molecules of 2c-β are arranged around a non-crystallographic three-fold axis and are connected by a series of in-plane contacts (Figure 3). The supramolecular triangles pack next to each other to form a planar infinite 2D sheet (Figure S3a in Supplementary Information). The contacts inside and between the triangles are not of equal length due to the low symmetry of the crystal: while each triangle has the same number and type of contacts, their lengths vary. While this type of interaction is also seen in 2c-α, the highly polarizable nature of sulfur allows for the formation of interactions between the exocyclic thione S atom and the more electronegative N and Cl atoms. This type of interaction originates from the positive σ hole of the thione S atom (the electropositive region along the C=S bond axis seen in the MEP). Intermolecular interactions are usually considered to be contacts considerably shorter than the sum of the van der Waals radii (∑ VDW ) of the atoms participating in these contacts; they are usually 8-20% shorter than the sum of the equilibrium radii [42]. For some atoms van der Waals radii exhibit significant anisotropy: N and O atoms are almost spherical, but for S and Cl the ellipticity, and therefore the anisotropy, increases [43]. For these anisotropic atoms the minor radii (minor axis ca. 0° or 180°) correspond to contacts close to the plane of the molecule and the major radii (major axis ca. 90°) to contacts perpendicular to the molecular plane [43]. The sums of the minor and major van der Waals radii (∑ VDW ) for the intermolecular contacts present in the crystal packing of 2b, 2c-α and 2c-β are 3.20, 3.14, 3.18 and 3.20 Å (minor) and 3.63, 3.57, 3.81 and 4.06 Å (major) for S δ+ . . . N δ− , S δ+ . . . O δ− , S δ+ . . . Cl δ− and S δ+ . . . S δ− (endocyclic S to exocyclic thione S), respectively.
For the discussion below, we provide both the sums of the minor and major van der Waals radii (∑ VDW ), since the angle of the intermolecular atom approach lies somewhere in the 0-180° range.
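The comparison against ∑VDW described above is mechanical enough to script. A sketch using the minor/major sums quoted in the text follows; the function and dictionary names are ours, and note that the (S, S) entry is the endocyclic-S-to-thione-S sum, so applying it to other S…S contacts is an approximation.

```python
# Minor (in-plane) and major (perpendicular) van der Waals sums, in Å,
# for the S...X contacts discussed in the text.
VDW_SUMS = {
    ("S", "N"):  (3.20, 3.63),
    ("S", "O"):  (3.14, 3.57),
    ("S", "Cl"): (3.18, 3.81),
    ("S", "S"):  (3.20, 4.06),
}

def classify_contact(atom_a, atom_b, distance, in_plane=True):
    """Return (is_short, percent_shortening) for an intermolecular contact,
    judged against the minor or major sum depending on the approach angle."""
    minor, major = VDW_SUMS[(atom_a, atom_b)]
    limit = minor if in_plane else major
    return distance < limit, 100.0 * (limit - distance) / limit
```

For example, the 3.4771 Å S…S contact of 2b is ~14% inside the major sum, in line with the 8-20% shortening quoted from ref. [42].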
Dithiazolone 2b crystallized in the Pbca space group with one molecule in the asymmetric unit cell. Figure 1 shows all the intermolecular interactions expanded for the central molecule in the asymmetric unit cell. This crystal packing is dominated by S … O and S … Cl contacts.

Table 2. B3LYP/def2-TZVPD molecular electrostatic potential maps (MEP) of the dithiazolone 2b and of the thione 2c; the isovalue for both surfaces is 0.001. [Image table: for each of 2b and 2c, four perspectives are shown — above the plane of the ring (translucent); above the plane of the ring; in plane, across the N-Cl-X atom axis (X = O and S for 2b and 2c, respectively); and in plane, into the S-S bond.]
Perspective 2b 2c
The network of intermolecular contacts for dithiazolone 2b is concluded with a S . . . S contact of 3.4771(7) Å [∠C-S . . . S, 149.75(6)°] between two endocyclic S atoms from neighboring molecules. This represents a short contact due to the proximity of the S atoms. Both atoms are expected to carry a partial positive charge (S δ+), but since the electron cloud around S is highly polarizable this could also be an electrostatic interaction.
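Whether a given intermolecular distance qualifies as a short contact can be checked against the sum of tabulated van der Waals radii, as done throughout the discussion above. A minimal sketch in Python; the Bondi radii values used here are an illustrative assumption, not taken from the paper:

```python
# Bondi van der Waals radii in Å (illustrative values; check against the
# compilation actually used in the paper before relying on them).
BONDI_VDW = {"S": 1.80, "O": 1.52, "N": 1.55, "Cl": 1.75}

def is_short_contact(distance, atom_a, atom_b, radii=BONDI_VDW):
    """Return True if the contact distance (Å) is below the sum of the
    van der Waals radii of the two atoms (the ∑VDW criterion)."""
    return distance < radii[atom_a] + radii[atom_b]
```

For example, the S . . . S contact of 3.4771 Å reported for 2b is below ∑VDW = 3.60 Å and therefore counts as a short contact under this criterion.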
Molecules of the dithiazolone 2b stack along the c-axis (Figure S1a in Supplementary Information). This is a near-triangular interaction of the same covalently bound Cl to two S atoms of neighboring dithiazoles (Figure 2). The distances of the S . . . Cl contacts in 2c-α are similar to those in dithiazolone 2b. It should be noted that while dithiazolethione 4 has a terminal C-Cl bond, no S . . . Cl contacts appear in its crystal packing [38]. Instead, there is a highly symmetrical bifurcated S . . . S contact of 3.3894(7) and 3.3143(8) Å between the exocyclic C=S sulfur atom and the endocyclic S-S atoms [38]. The thione C=S bond is weakly polar and easily polarizable, as sulfur is slightly more electronegative than carbon (2.58 for S vs. 2.55 for C), and is therefore expected to bear a partial negative charge.
4-Chloro-5H-1,2,3-dithiazole-5-thione (2c-β)
The β-phase of dithiazolethione 2c-β crystallized in the P-1 space group with six molecules in the asymmetric unit cell and three crystallographically independent trimers. Figure 3 shows the formation of supramolecular triangles and all the intermolecular interactions expanded for the molecules in the asymmetric unit cell. This crystal packing is dominated by S . . . Cl, S . . . N and S . . . S contacts. Inside the supramolecular triangles, molecules of 2c-β are arranged around a non-crystallographic three-fold axis and are connected by a series of in-plane contacts (Figure 3). The supramolecular triangles pack next to each other to form a planar infinite 2D sheet ( Figure S3a in Supplementary Information). The contacts inside and between the triangles are not of equal length due to the low symmetry of the crystal. While each triangle has the same number and type of contacts, their length varies.
The thione S atoms in 2c-β form bifurcated contacts with the S-S atoms of the dithiazole ring (Figure 3). These S . . . S contacts are in the range of 3.071(4)-3.370(4) Å [∠C-S . . . S, 96.7(4)-136.6(4)°] and are well within the ∑VDW [3.20 Å (minor), 4.06 Å (major)]. While this type of interaction is also seen in 2c-α, the highly polarizable nature of sulfur allows for the formation of interactions between the exocyclic thione S atom and the more electronegative N and Cl atoms. This type of interaction originates from the positive σ hole of the S atom. In the C=S bond, some of the electronic charge of the S atom is polarized toward the bond region, leading to a redistribution of electronic density from its outer region (along the extension of the bond) to its equatorial sides [45]. Therefore, negative electrostatic potential develops around the equatorial sites of the S atom, while its outer portion along the C=S bond becomes more positive (σ hole). This is evident from the MEP of dithiazolethione 2c (Table 2).

Computational Bond Length Analysis

Table 3 provides experimental, B3LYP, BLYP, M06, mPW1PW, PBE and MP2 calculated bond lengths for the dithiazolone 2b. Referring to the bond length data, both the DFT and MP2 bond lengths compare well with the experimental values. For example, the B3LYP and BLYP C5-S1 bond length shows the greatest difference from the experimental value, where B3LYP is short by 0.038 Å, or 2.1%, and BLYP is long by 0.079 Å, or 4.5% (Table S4 in Supplementary Information). M06 shows a mild improvement, as its C5-S1 bond length is longer than the experimental one by 0.036 Å, or 2.0% overall. The results are similar for mPW1PW, PBE and MP2. The S1-S2, N3-C4 and C5-O bond lengths are accurately predicted by PBE, M06 and BLYP, respectively, as these methods provide calculated bond lengths within the experimental range.
In addition to a basic comparison of the computed bond lengths and bond angles to the measured X-ray crystallography values, we also provide a comparison in terms of the estimated standard deviation. In this approach we determine whether two bond values are significantly different. The convention is that measured values are said to be 'significantly different' if the difference in their lengths is greater than three times the weighted standard deviation (WSD). Table S5 in Supplementary Information assesses the difference between experiment and each computational approach in terms of a multiple of the WSD. A value >3 or <−3 indicates a significant difference between the computed bond length and the experimental one. Table S5 shows that C5-S1 is consistently the most challenging bond for all the computational methods, as these values range from 39.5 times the WSD using BLYP down to 9.5 times the WSD for PBE and MP2. The calculated S1-S2 bond has a large range, where the worst case is 75.3 times the WSD using BLYP but only −0.4 times the WSD for PBE. Table 4 provides a comparison between the experimental bond angles and the DFT and MP2 bond angles of the dithiazolone 2b. The differences between the experimental and computed bond angles are mostly <1°, except for the S2-N3-C4 bond angle. This latter bond angle ranges from a −1.6% error for B3LYP and M06 to −0.2% for MP2. On average, the differences of either the DFT or MP2 angles are <1% (Table S6 in Supplementary Information). Table S7 in Supplementary Information provides the same assessment of the bond angles as Table S5 provided for the bond lengths.
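The three-times-deviation convention described above reduces to a one-line test. A minimal illustrative sketch (function names are ours; the exact weighting scheme behind the paper's WSD values is detailed in its Supplementary Information):

```python
def wsd_multiple(calc, exp, esd):
    """Difference between a computed and an experimental value,
    expressed as a multiple of the (weighted) standard deviation."""
    return (calc - exp) / esd

def is_significant(calc, exp, esd, threshold=3.0):
    """Apply the convention that a difference is 'significant' when it
    exceeds three times the standard deviation in magnitude."""
    return abs(wsd_multiple(calc, exp, esd)) > threshold
```

With this criterion, a computed bond length of 1.75 Å against an experimental 1.70(1) Å would be flagged (a multiple of 5), while 1.702 Å would not.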
As with the bond lengths, a value >3 or <−3 indicates a significant difference between the computed bond angle and the experimental value. The selected computational methods perform well for many of the bond angles except S1-C5-O, where the multiple of the WSD ranges from −15.0 for BLYP to −7.9 for MP2. The C5-S1-S2 bond angle also has a large range of WSD, from −19.2 for BLYP to 3.3 for MP2. The biggest discrepancy for all methods is observed for the S2-N3-C4 bond angle, whereas the calculated C4-C5-S1 bond angles are all within 3 times the WSD for every computational method used.

Table 5 provides experimental and computed bond lengths of the dithiazolethione 2c, and Table S8 in Supplementary Information provides the percent differences. Replacement of the O in the dithiazolone 2b by S in the thione 2c does little to the relative differences between the crystallographic and computed bond lengths. Once again, B3LYP and BLYP obtain a C5-S1 bond length only slightly longer than the experimental value, by 0.009 and 0.032 Å, respectively. The largest relative difference for B3LYP and BLYP is the S1-S2 bond, where B3LYP is 0.029 Å shorter, or 1.4%, and BLYP is 0.064 Å shorter, or 3.1%. This is not true for the other functionals or for MP2. Rather, the largest difference between experiment and the calculated mPW1PW and PBE values is in the S2-N3 bond length, but these are only about 1% shorter. MP2 shows its most significant deviation from the experimental bond lengths in the N3-C4 bond, where the computed bond is longer by 0.026 Å, or 2.0%. Arguably, there are more calculated bond lengths within the experimental range for 2c than for 2b. In terms of the WSD, the S1-S2 multiple is well above 3 for B3LYP, BLYP and M06, below −3 for MP2, and is 0.1 for mPW1PW and −1.2 for PBE (Table S9 in Supplementary Information).
Arguably, mPW1PW and PBE do quite well at determining the bond lengths: only the WSD multiple of S2-N3 is below −3 (−4.3 for mPW1PW and −4.4 for PBE), and that of C4-Cl is −3.7 for mPW1PW and −4.1 for PBE. Table 6 gives the experimental and the DFT and MP2 calculated bond angles of the dithiazolethione 2c. The differences between experimental and computed bond angles are, with a few exceptions, within about 1° of experiment (Table S10 in Supplementary Information). The notable exception is the S1-C5-S angle, where the calculated value is smaller than experiment by 2.1°, 2.8°, 2.1°, 1.7°, 1.7° and 1.6° for B3LYP, BLYP, M06, mPW1PW, PBE and MP2, respectively. On average, the differences between the experimental and computed bond angles for each method are <0.1° (Table S10). Table S11 in Supplementary Information provides the WSD multiples of the computed bond angles of the dithiazolethione 2c. MP2 performs closest to the 3 × WSD criterion, with multiples ranging from −7.9 for the S1-C5-S angle up to 4.1 for S1-S2-N3. Of the DFT methods, PBE compares favorably with MP2 in that it produces angles with WSD multiples ranging from −8.6 for S1-C5-S to 5.1 for N3-C4-Cl.
While crystal packing can influence the ring geometries, our gas phase calculations reproduce the experimental bond lengths and angles within a few percent. Thus, the gas phase computations provide a useful analysis of the electronic properties of the rings.
Fukui Condensed Functions
We calculated the condensed electrophilic (f +), nucleophilic (f −) and radical attack (f 0) Fukui functions based on NBO population analysis [46] with the B3LYP, BLYP, M06, mPW1PW and PBE1PBE functionals for both the dithiazolone 2b (Tables S12-S14 in Supplementary Information) and the dithiazolethione 2c (Tables S15-S17 in Supplementary Information). Fukui functions act as reactivity indices that indicate which atoms in a molecule can either lose or accept an electron [47]. This information allows the determination of the atoms most prone to undergo nucleophilic or electrophilic attack; e.g., the atom with the largest f + value indicates where the onset of a nucleophilic reaction will take place. The calculated Fukui functions of each atom in the ring are consistent and of similar value among the various methods we used (Tables S12-S17). Each condensed Fukui function clearly identified a preferred site of nucleophilic or electrophilic attack.
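Condensed Fukui functions of the kind discussed above are commonly obtained by finite differences of atomic charges between the N-, (N+1)- and (N-1)-electron systems. A minimal sketch of that standard recipe (not the UCA-FUKUI implementation itself, and the numbers below are illustrative):

```python
def condensed_fukui(q_N, q_Nplus1, q_Nminus1):
    """Condensed Fukui functions per atom from atomic charges (e.g. NBO
    charges) of the N-, (N+1)- and (N-1)-electron systems:

        f+ = q(N)   - q(N+1)   electrophilicity / site of nucleophilic attack
        f- = q(N-1) - q(N)     nucleophilicity / site of electrophilic attack
        f0 = (f+ + f-) / 2     susceptibility to radical attack
    """
    f_plus = [qn - qp for qn, qp in zip(q_N, q_Nplus1)]
    f_minus = [qm - qn for qm, qn in zip(q_Nminus1, q_N)]
    f_zero = [(p + m) / 2 for p, m in zip(f_plus, f_minus)]
    return f_plus, f_minus, f_zero
```

The atom with the largest f+ is then read off as the preferred site of nucleophilic attack, mirroring the analysis in the text.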
For the dithiazolone 2b, the site of highest electrophilicity is the S2 atom (f + 0.305-0.353, Table S12), followed by the S1 atom (f + 0.209-0.223). This is not surprising, as sulfur is easily polarizable and S2 is involved in the weakest bonds (S-S and S-N; 430.03 and 467 kJ mol −1, respectively [48]) of the heterocycle. Interestingly, the S2 atom is also the most prone to radical attack (f 0 0.249-0.268, Table S14). The LUMO of dithiazolone 2b has significant orbital density on both the S2 and S1 atoms (Figure 4). These two S atoms show considerable blue coloration (positive electrostatic potential) in the MEP (Table 2), further supporting their electrophilic nature. Reactions of dithiazolone 2b with nucleophiles are expected to proceed via attack at either S2 or S1, as these atoms have similar f + values. These reactions will proceed with probable fragmentation of the S-S or S-N bonds, i.e., cleavage of the heterocycle. The high electrophilic Fukui functions (f +) on S2 and S1 support the observed intermolecular interactions. As discussed above (Section 2.2.1), the crystal packing of 2b is dominated by S . . . O and S . . . Cl intermolecular contacts. A closer look at Figure 1 and Figure S1 in the Supplementary Information shows that both S atoms act as acceptors of electron density from the lone pairs of O and Cl. In particular, the S2 atom participates in both S . . . O and S . . . Cl contacts, while S1 participates only in S . . . O. In reactions with electrophiles, dithiazolone 2b will have the possibility to react from either S atom (f − 0.181-0.192 for S1 and f − 0.182-0.193 for S2) or from N3 (f − 0.160-0.172), since these three atoms have nearly equal nucleophilicities.
In contrast to dithiazolone 2b, the dithiazolethione 2c appears to be most prone to either nucleophilic, electrophilic and radical attack at the exocyclic thione S atom with f + , f − and f 0 of~0.3-0.4 (Tables S15-S17). In support, both the HOMO and LUMO of 2c have significant orbital density on the thione S atom (Figure 4). The MEP ( Table 2) shows both red and green coloration for the thione moiety, a consequence of a potential σ hole formation, indicating the ambivalent behavior of this exocyclic sulfur atom [49]. This is further evident in the intermolecular contacts of polymorphs 2c-α and 2c-β. In 2c-α the exocyclic thione S atom behaves as a nucleophile donating its lone pairs to form contacts with the endocyclic electron poor S-S atoms. While this type of interaction is also present in the crystal packing of 2c-β, the ambivalent character of the thione S atom is evident by the formation of S . . . Cl and S . . . N contacts wherein the thione S atom behaves as an electrophile through its σ hole, accepting contacts from the lone pairs of Cl and N.
We also calculated the condensed Fukui functions for Appel's salt 1 (Tables S18-S20 in Supplementary Information) and found that S2 is the site of highest electrophilicity. This agrees with the prominent positive charge for S2 recently reported by Bartashevich and coworkers [50]. Our Fukui calculations indicate that S1 is the site of highest activity towards radical attack and that the two Cl atoms are the sites of high nucleophilicity.
Bond Order, Bird Index and NICS(1)
The Wiberg bond orders [51] are calculated within the NBO analysis. This approach to computing bond orders results in values usually close to formal bond order for most chemical systems. Table 7 provides the B3LYP/def2-TZVPD bond orders for the Appel's salt 1, dithiazolone 2b and the dithiazolethione 2c. The bond orders of the Appel's salt 1 reflect the delocalization around the ring where the S1-S2 has a bond order of 1.08 and is the least delocalized of all of the bonds. The bond orders of the dithiazolone 2b and the dithiazolethione 2c show significantly less delocalization. In dithiazolone 2b the N3-C4 bond order is 1.68 and in dithiazolethione 2c the analogous bond order is 1.64. In dithiazolone 2b the C5-O bond order is 1.77 and in dithiazolethione 2c the analogous bond order albeit the C5-S is 1.66. These bond orders suggest significant double bond character. By contrast, the analogous bonds in the Appel's salt 1 have bond orders of 1.48 for the N3-C4 and 1.22 for the C5-Cl2, i.e., less double bond character. The S2-N3 of the dithiazolone 2b has a bond order of 1.12 and the same bond order in the dithiazolethione 2c is 1.13 suggesting a single bond. The same is true for the C5-S1 bond orders in both the dithiazolone 2b and dithiazolethione 2c where the values are 1.06 and 1.15, respectively. The latter bond orders are larger in the Appel's salt 1 being 1.29 and 1.36, respectively. According to the bond orders, Appel's salt 1 is more delocalized than either the dithiazolone 2b or dithiazolethione 2c. The experimental and calculated bond lengths and bond angles of Appel's salt 1 can be found in Tables S21 and S22 (Supplementary Information). Our calculated NBO bond orders for Appel's salt 1 compared favorably with the recently reported QTAIM-based bond orders [50]. The latter were calculated using experimental electron density. The NBO and QTAIM-based bond orders for the endocyclic bonds C5-S1 (1. 
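The Wiberg bond index behind these values is, in essence, a sum of squared density-matrix elements between the basis functions of two atoms. A minimal sketch, assuming the density matrix is expressed in an orthonormalized basis (as in the natural atomic orbital basis used by NBO):

```python
def wiberg_bond_order(P, basis_on_A, basis_on_B):
    """Wiberg bond index between atoms A and B: the sum of squared
    density-matrix elements P[mu][nu] over basis functions mu centered
    on A and nu centered on B.  P must be given in an orthonormalized
    (e.g. natural atomic orbital) basis."""
    return sum(P[mu][nu] ** 2 for mu in basis_on_A for nu in basis_on_B)
```

As a sanity check, for H2 in a minimal basis with one orbital per atom the RHF density matrix is [[1, 1], [1, 1]], and the formula returns the formal single-bond order of 1.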
To further explore the delocalization, we computed the Bird Index (I n) [52] along with the nucleus-independent chemical shifts (NICS) [53,54]. The Bird Index, I n, evaluates the aromaticity of a ring in terms of the statistical deviation of its peripheral bond orders. NICS(1) evaluates the aromaticity of cyclic systems and is defined as the negative value of the absolute shielding computed at 1 Å above the ring centroid. From the experimental bond lengths, our computed I n is 36.1 for dithiazolone 2b, 39.9 for the dithiazolethione 2c and 62.2 for the Appel's salt 1 (I 5 = 38 for oxazole and I 5 = 62 for 1,2-dithiolium [52]). Using the B3LYP bond orders shown in Table 7, the resulting Bird Indices are only slightly larger: 37.5 for dithiazolone 2b, 44.3 for dithiazolethione 2c and 63.6 for Appel's salt 1. Table 8 provides the NICS(1) for the Appel's salt 1, dithiazolone 2b and dithiazolethione 2c. Comparison of the Appel's salt 1 with dithiazolone 2b and dithiazolethione 2c suggests the Appel's salt cation to be more aromatic than either dithiazolone 2b or dithiazolethione 2c. Using NICS(1) as an index of aromaticity, the relative ordering of the compounds in terms of the aromaticity of the π electrons is Appel's salt 1 > dithiazolone 2b > dithiazolethione 2c. Table 8. Nucleus-independent chemical shifts (NICS) taken 1 Å over the center of the rings of Appel's salt 1, dithiazolone 2b and dithiazolethione 2c.
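The Bird Index computation described above can be reproduced from a set of ring bond orders. A small sketch of Bird's formula, with V_K = 35 as the reference for five-membered rings, under the assumption that Wiberg-type bond orders are fed in (as the text does with the B3LYP bond orders):

```python
import math

def bird_index(bond_orders, V_K=35.0):
    """Bird aromaticity index of a ring from its peripheral bond orders:
    I = 100 * (1 - V / V_K), where V is the coefficient of variation
    (in percent) of the bond orders and V_K = 35 for five-membered rings."""
    n = len(bond_orders)
    mean = sum(bond_orders) / n
    V = (100.0 / mean) * math.sqrt(sum((N - mean) ** 2 for N in bond_orders) / n)
    return 100.0 * (1.0 - V / V_K)
```

A fully delocalized ring (all bond orders equal) gives I = 100, and the index falls as the spread of bond orders grows, consistent with the ordering Appel's salt 1 > 2c > 2b reported in the text.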
Conclusions
We examined the intramolecular crystal structure and solid-state packing of two neutral monocyclic 4-chloro-5H-1,2,3-dithiazoles: the 5-one 2b and the two polymorphs of the 5-thione 2c (2c-α and 2c-β). Both molecules are planar, and the intramolecular geometrical parameters support localization of bonds inside the ring. Their crystal packing is dominated by short S . . . N, S . . . O, S . . . Cl and S . . . S intermolecular interactions of primarily electrostatic nature. Condensed Fukui functions indicate electrophilic sites at S2 and, to a lesser degree, at S1 for the dithiazolone 2b, and at the exocyclic thione S for the dithiazolethione 2c. The nucleophilic sites for dithiazolone 2b are S1, S2 and N3, all on par, and the exocyclic thione S for 2c. The ambivalent character of the thione S in 2c is evident from the rich intermolecular contacts it forms with both positive (endocyclic S-S) atoms in polymorph 2c-α and negative (N, Cl) atoms in 2c-β. This is best explained by the formation of a σ hole, which polarizes the S atom into regions of negative and positive electrostatic potential. Bond orders, Bird Indices and NICS(1) calculations showed that dithiazolone 2b and dithiazolethione 2c do not have extensive delocalization and are, therefore, less aromatic than Appel's salt 1.
Computational Methodology
Our crystallographic data provided the initial geometry for the pair of structures. Each structure was energy-optimized by density functional theory. The combinations of exchange and correlation functionals used in this work include B3LYP [58][59][60][61][62], BLYP [59,63], mPW1PW [64], PBE1PBE [65] and M06 [66], as implemented in Gaussian09 [67]. The methods used in this work have been shown to describe similar heterocyclic compounds well. B3LYP is a DFT standard for gas-phase molecules and a well-proven compromise between computational cost and accuracy [68,69]. BLYP and PBE are also common functionals applied to gas-phase molecules and yield comparable results; the difference is that BLYP and PBE are gradient-corrected functionals while B3LYP is a hybrid functional. These approaches have been shown to provide good results for magnetic, vibrational, and electronic properties of molecules as compared to DFT functionals that include extensive parameterization [70]. The mPW1PW functional fits the exact exchange energy of isolated atoms and the differential exchange of ideal gas dimers, significantly improving the long-range behavior of the exchange functional [71]. The M06 functional introduces empirically optimized parameters into the exchange-correlation functional and provides good geometry, energy and property data [66]. MP2 is a standard ab initio computational method that includes correlation, is free from the spurious self-interaction of electrons, and naturally includes dispersion. The basis set utilized in this work was def2-TZVPD, which is optimized for properties and computational cost specifically for density functional theory [72]. The calculated nucleophilic (f −), electrophilic (f +) and radical attack (f 0) Fukui functions characterize the electron reorganization that results in site electrophilic or nucleophilic activation/deactivation [47].
Calculation of the Fukui reactivity indices was performed for each DFT functional from the NBO [46] populations using the UCA-FUKUI software [73].
|
v3-fos-license
|
2024-03-15T15:58:05.636Z
|
2024-03-12T00:00:00.000
|
268409316
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://dl.acm.org/doi/pdf/10.1145/3626184.3635284",
"pdf_hash": "3e815407d2687a61430cba29eed97a628f127416",
"pdf_src": "ACM",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1217",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"sha1": "c524abfe26ff831d12d088e36b82ae3f324942fa",
"year": 2024
}
|
pes2o/s2orc
|
Warpage Study by Employing an Advanced Simulation Methodology for Assessing Chip Package Interaction Effects
A physics-based multi-scale simulation methodology that analyses die stress variations generated by package fabrication is employed for a warpage study. The methodology combines a coordinate-dependent anisotropic effective-properties extractor with a finite element analysis (FEA) engine, and computes mechanical stress globally on a package scale as well as locally on a feature scale. For the purpose of mechanical failure analysis in the early stage of a package design, warpage measurements were used for the tool's calibration. Warpage measurements on printed circuit board (PCB), interposer and chiplet samples, during heating and subsequent cooling, were employed for calibrating the model parameters. The warpage simulation results on the full package, represented by a PCB-interposer-chiplet stack, demonstrate overall good agreement with the measured profile. The performed study demonstrates that the developed electronic design automation (EDA) tool and methodology can be used for accurate warpage prediction in different types of IC stacks at an early stage of package design.
INTRODUCTION
Latest advancements in technology have created demand for higher-performance and compact electronic devices, which has led to growing demand for increased functionality and density. For these needs, the technology of multi-stacking of IC chips, namely 3D stacking, emerges as a solution and is driving the need for thinned substrates, as well as new interconnects such as copper pillars, TSVs (through-silicon vias) and hybrid bonding. These structures may cause thermomechanical stress that originates from the high-temperature die packaging step due to the mismatch in thermomechanical properties between die and package materials, which is termed chip package interaction (CPI). Copper pillars, TSVs and die edges may induce mechanical stress locally. On a global scale, thinned dies and substrate packages can increase thermomechanical stress. CPI-induced stress may generate unexpected variations in device performance and reliability problems. The stress effect on device performance is referred to here as electrical CPI (eCPI). Here, CPI-induced stress can shift carrier mobility, which changes device parameters and results in parametric failure of circuits. We have previously reported the development of a physics-based multi-scale EDA tool that determines across-die stress variations caused by 3D package fabrication, employing a multi-scale simulation methodology that resolves down to the order of a layout feature size [1][2][3]. The tool's eCPI analysis capabilities have been demonstrated in [3], including a two-step multi-scale stress analysis and a SPICE netlist back-annotation of the obtained stress components for accurate circuit simulation. It was also demonstrated that the tool can be calibrated against electrical measurements. Another known effect of stress on chip reliability is related to fracture in interconnects; we call it mechanical CPI (mCPI). As an example, stress can cause cracking in ultra-low-k (ULK) or extreme-low-k (ELK) dielectrics that are adopted
for reducing interconnect delay but have deteriorated mechanical properties due to the incorporation of porosity [4]. The other issue is out-of-plane displacement, or warpage, which is a growing concern as dies and package substrates become thinner in order to improve electrical performance. In addition, the in-plane size tends to be wider, particularly for high-performance computing applications, which is a clear risk in terms of warpage [5]. A study shows that severe warpage may cause problems in the manufacturability of IC packages and can degrade the reliability of devices and circuits [6]. For predicting and analyzing warpage, FEA has been widely used.
To improve the accuracy of warpage simulation on a substrate, many studies have tried to include the layout effects of metal patterns, whose thermomechanical properties are quite different from those of insulators. The layout-induced effects include the metal's non-uniform distribution as well as anisotropy, which may influence warpage behavior differently [6][7]. However, these warpage studies were mainly focused on the package substrate block alone and did not extend to a die-substrate stack structure, in which CPI-induced stress effects on a die can be analyzed. To expand the usage of our tool to mCPI issues, we now apply our CPI stress analysis tool to the study of warpage observed in a package stack. One of the obstacles is the tool calibration procedure. Unlike the tool's earlier practice, which has been successfully applied to eCPI analysis, tool calibration on electrical measurements may not be available, as calibration is performed at an early, or pre-design, stage during process development. Therefore, the tool calibration must be done with an alternative measurement. In the present study, the possibility of performing the tool calibration against the warpage of a package and a die is demonstrated. To optimize the model parameters, the simulation results were compared with experimental measurements on the INTACT 3D package, which consists of six chiplets on top of an active silicon interposer [8]. In the next sections, the warpage measurement samples and the measurement procedure will be briefly described, followed by the tool calibration and warpage simulation procedures: altitude measurements are collected on individual package blocks, such as the PCB and a chiplet, while heating and cooling, and are employed for the tool calibration. Once the calibration is complete, the tool's warpage prediction on the fully stacked sample is made and compared with the altitude measurements.
TEST SAMPLE DESCRIPTION
The analyzed multi-die stack, INTACT, is designed with a chiplet-based 3D technology for high-performance computing. The detailed description can be found in [8]. The package consists of the following three main layers:
- six identical chiplets in 28 nm technology,
- an active interposer with TSVs, based on 65 nm technology, and
- a PCB.
There are bump layers between each chiplet and interposer, and interposer and PCB.The image of the test sample package and the die stack are shown in Figure 1.
WARPAGE MEASUREMENTS
The warpage is measured by an Altisurf © 520 (Altimet) with a hot plate as an add-on feature. For altitude (height) measurements, a free-standing sample is put on the hot plate. Then the height of the top surface is measured at pre-selected temperatures during heating and subsequent cooling. For post-processing of the measurements, Gaussian filtering is employed. The measurement procedure was repeated on (1) PCB without ball grids and bumps,
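The Gaussian post-processing step mentioned above amounts to low-pass filtering of the measured height map. A minimal pure-NumPy sketch of a separable Gaussian filter; the filter width `sigma_px` is an assumed, instrument-specific choice not given in the paper:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D normalized Gaussian kernel; radius defaults to 3*sigma."""
    radius = int(radius if radius is not None else 3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_height_map(z, sigma_px=5.0):
    """Separable Gaussian low-pass filter of a 2-D height map, mimicking
    the Gaussian post-processing applied to the altitude scans."""
    z = np.asarray(z, dtype=float)
    k = gaussian_kernel(sigma_px)
    # Convolve each row, then each column, with the 1-D kernel.
    z = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, z)
    z = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, z)
    return z
```

The same effect can be obtained with `scipy.ndimage.gaussian_filter`; the pure-NumPy form is shown only to keep the sketch self-contained.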
SIMULATION MODEL
The developed FEA tool has the capability to simulate strain/stress fields everywhere in the stack, generated by:
- the high-temperature package assembly process,
- non-uniform temperature distribution during the chip's operating condition, with inputs from a chip power management tool, and
- externally applied force under the setting of a four-point package bending test, a popular method for device model calibration and validation by measurements of deterministic mechanical strain-induced variations in device characteristics [3], and for package reliability experiments and simulations [9][10].
Prior to running FEA for stress calculation, the tool extracts the material properties of composite-like blocks included in the package (BEoL interconnect, bump layer). The anisotropic effective material properties (EMP) extractor employs the rules of mixtures from the theory of anisotropic composite materials [11] and adopts a bin-based approach. Each layer of a die or a substrate is divided into square bins. The bin size is user-defined and needs to be as small as the feature size to be analyzed. The layout processor identifies the metal objects within each bin and calculates the area density. Then the density-dependent effective properties are calculated for each bin. Anisotropy is also considered by taking metal routing directions into account [3]. The EMP extractor eliminates the detailed geometry building in FEA, reduces memory consumption, and greatly enhances performance [2]. In one of the FEA runs on a chiplet, the virtual memory size was 3 GB.
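The basic idea of the density-dependent rule of mixtures can be illustrated with the classic Voigt and Reuss bounds for a metal/dielectric bin; the tool's actual extractor is anisotropic and direction-aware, so this is only a scalar sketch (values below are illustrative, not from the paper):

```python
def effective_modulus(rho_metal, E_metal, E_dielectric):
    """Voigt (upper) and Reuss (lower) rule-of-mixtures bounds for the
    effective Young's modulus of a two-phase bin, given the metal area
    density rho_metal in [0, 1].  Voigt assumes equal strain (parallel
    loading); Reuss assumes equal stress (series loading)."""
    E_voigt = rho_metal * E_metal + (1 - rho_metal) * E_dielectric
    E_reuss = 1.0 / (rho_metal / E_metal + (1 - rho_metal) / E_dielectric)
    return E_voigt, E_reuss
```

For a bin that is half copper-like (E about 110 GPa) and half oxide-like (10 GPa, an illustrative low value), the bounds bracket the true effective stiffness, with Voigt always at or above Reuss.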
In the present study, the tool was employed to simulate the warpage phenomenon caused by thermo-mechanical stress due to the gap between the temperature to be analyzed and the highest uniform temperature the sample has undergone during the package assembly process, such as solder reflow. At this highest temperature, the sample is considered to reach a stress-free state. The tool flow is shown in figure 4, in which the simulation is performed on a package scale. The schematic of the model package structure for simulation is displayed in figure 5. PCB is represented by a three-layer block: a thick core layer consisting of fiber-polymer composite separates top and bottom layers in which multi-level copper lines exist [6]. Both interposer and chiplets consist of two layers, silicon (or Si/TSV) and BEoL. For each of these layers, EMP extracts uniform smeared properties. Table 1 summarizes the extraction results of the averaged, or smeared, properties, which will be refined during calibration. Table 1: Layer initial properties employed for simulation. E is Young's modulus, α is the coefficient of thermal expansion, and ν is the Poisson ratio. Anisotropic properties are represented by three (x, y, z) components.
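To illustrate the physics the FEA resolves, a strongly simplified stand-in is Timoshenko's classic bimetal-strip curvature for two bonded layers cooling from a stress-free temperature. The layer thicknesses, moduli, and CTE mismatch below are illustrative assumptions, not the calibrated values of Table 1.

```python
# Minimal sketch: warpage from CTE mismatch, reducing the package to two
# bonded layers (e.g., silicon on BEoL). All material values are assumed.

def bilayer_curvature(d_alpha, d_temp, t1, t2, e1, e2):
    """Timoshenko curvature (1/m) of a two-layer strip after a uniform
    temperature change d_temp from the stress-free state."""
    m, n = t1 / t2, e1 / e2
    h = t1 + t2
    return (6.0 * d_alpha * d_temp * (1.0 + m) ** 2 /
            (h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))))

def center_to_edge_warpage(kappa, span):
    """Small-deflection height difference Z over a span (circular-arc approx)."""
    return kappa * span ** 2 / 8.0

# Cooling from a 248 C stress-free state to 25 C (chiplet-like numbers, assumed)
kappa = bilayer_curvature(d_alpha=14e-6, d_temp=-223.0,
                          t1=600e-6, t2=10e-6, e1=169e9, e2=20e9)
z = center_to_edge_warpage(kappa, span=6e-3)  # 6 mm die edge
```

The full tool replaces this closed-form estimate with FEA over the smeared per-bin properties, which is what captures the layout-dependent local surface profiles.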
PCB Sample
The properties of the PCB are further refined by calibrating against warpage measurements made on the PCB-alone structure. Figure 6(a) summarizes the measured warpage value, Z, defined as the height difference between the center and edge locations, along the two diagonal directions, during heating up to 150 °C and subsequent cooling. For both directions, the Z profiles are similar during heating and cooling, which allows us to pool all data points for each temperature and obtain a linear regression curve fit of the measurements. The resulting linear curve is shown in (b), together with error bars. The figure also demonstrates the good fit between the measurements and the simulation results after adjusting parameters.
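The data-reduction step above (pooling heating and cooling points and fitting a straight line) might look like the following; the warpage values are invented for illustration.

```python
# Sketch of the calibration-data reduction: warpage samples from heating and
# cooling runs are pooled per temperature and fitted with a line. Fake data.
import numpy as np

temps   = np.array([25, 75, 125, 150, 125, 75, 25], dtype=float)  # heat then cool
warpage = np.array([12.0, 8.1, 4.2, 2.0, 4.0, 8.0, 11.8])         # Z in um (invented)

slope, intercept = np.polyfit(temps, warpage, deg=1)
residuals = warpage - (slope * temps + intercept)  # spread -> error bars
```

The fitted line is then the calibration target: material parameters in the model are adjusted until the simulated Z(T) falls within the residual spread.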
Chiplet Sample
For the chiplet-alone sample without µ-bumps, the measurements were made on the BEoL top surface. The sample's planar dimension is 6 × 4 mm² with a ~600 µm thick silicon substrate. The calibration procedure performed for the PCB is repeated here to obtain the linear regression curve fit for the measured Z. The simulated warpage values are then compared against the obtained curve. As shown in figure 7, good agreement between simulated and measured warpage values was obtained. Here, the baseline warpage shape is convex, as opposed to the concave shape at lower temperature shown in figure 8. For the parameter calibration of the interposer, the warpage data available in an earlier report has been used [8]. In order to further reduce the difference between the simulated and measured profiles along the horizontal cutline, the following additional sets of simulations were performed.
(1) Initial strain: Each FEA stress calculation takes as input a temperature gap, the difference between the highest package assembly process temperature and room temperature (or the temperature to be stress-analyzed). Each component in the package is considered stress-free at the highest package assembly process temperature, unless initial, or pre-existing, strain is taken into account. One source of initial strain can be the thermal history of a package component prior to assembly, so that this particular component may not be stress-free at the highest package assembly process temperature. Such initial strain must be accurately estimated and taken into account in order to improve the simulation accuracy. When the initial strain for each of the package components is supplied as an input, FEA adds the initial strain to the strain coming from the temperature gap and obtains the final solution. In our additional simulations, the initial strain was given to each component by providing different highest processing temperatures: 100–230 °C for the PCB, 452 °C for the interposer, and 248 °C for the chiplet. In all of these simulation results, the curvature of the chiplet's top surface along the horizontal cutline did not reverse sign.
(2) Coupled transient thermal and stress simulations: In these simulations, effective thermal properties were employed to perform coupled transient thermal and stress simulations. The temperature-dependent bump properties [12] were used in order to observe whether the time-dependent surface profile could be close to the measured surface curvature.
(3) Plasticity of the solder joints: The bump/underfill layers were assumed to deform plastically, in order to investigate whether such plastic deformation could change the top surface profile of the chiplets. Here, deformation with perfect plasticity (no hardening) and with linear hardening was assumed. It was found that none of these additional implementations had a significant effect on further improving the simulated profiles.
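The initial-strain bookkeeping of item (1) can be sketched as referencing each component's free thermal strain to its own highest processing temperature instead of a single package-wide stress-free temperature. The processing temperatures follow the text; the CTE values are illustrative assumptions.

```python
# Sketch of per-component initial-strain accounting. CTEs are assumed values.

def thermal_strain(alpha, t_stress_free, t_analysis):
    """Free thermal strain accumulated on cooling from the stress-free state."""
    return alpha * (t_analysis - t_stress_free)

T_ROOM = 25.0
components = {                      # (CTE in 1/K, highest process temperature in C)
    "pcb":        (14e-6, 230.0),   # upper end of the 100-230 C range in the text
    "interposer": (3.0e-6, 452.0),
    "chiplet":    (2.8e-6, 248.0),
}

strains = {name: thermal_strain(a, t_hi, T_ROOM)
           for name, (a, t_hi) in components.items()}
```

Each component's strain would then be superposed on the strain from the global temperature gap before the FEA solve, as described in the text.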
SUMMARY
The warpage measurements on PCB and chiplet samples, during heating and subsequent cooling, were employed to calibrate the tool's model parameters. After calibration was completed for the individual PCB and chiplet blocks, the tool predicts well both the global temperature-dependent warpage profile and the local layout-dependent surface profile. The warpage simulation results on the full stack package demonstrate overall good agreement with the measured profile when measurement error is taken into account. Additional simulations were performed to investigate the effects of the prior thermal history of individual package blocks, transient thermal effects, and plasticity of the solder joints on the warpage behavior of the full stack package. The simulation results reveal that these additional factors do not significantly change the height profile of the full stack package. The study demonstrates that warpage measurements performed on the individual components of the stack can be employed for tool calibration for mCPI applications, and for the prediction of warpage in different types of stacked IC packages at early steps of development.
The measurement procedure was repeated on (1) PCB without ball grids and bumps, (2) chiplet without bumps, and (3) the full stack package with face-down chiplets. The employed measurement grid sizes are (dx, dy) = (1, 50) µm for the chiplet sample, (1, 200) µm for the PCB, and (1, 1000) µm for the full stack package. At room temperature, the measurement uncertainty can be up to ±2 µm. The uncertainty can increase to ±5 µm when the temperature increases up to 200 °C. Figure 2 displays the 3-dimensional height profile measured on a full stack sample at room temperature, where the peaks for the 2 × 3 array of chiplets and the passive devices are shown. In a separate measurement, Figure 3 shows the height profiles on a chiplet BEoL (back-end of line) surface at room temperature. The 2-dimensional map pattern in (a) represents the copper pillar layer, as these pillars, of individual diameter ~10 µm, are exposed to air without underfill. The 1-dimensional height profiles across the two diagonal directions are displayed in (b). The height profiles for the two curves match well, even though the local pattern may differ in some regions.
Figure 2: Measured surface height on a full stack structure at room temperature.
Figure 3: Measured height on a chiplet BEoL surface at room temperature: (a) 2-dimensional height map, (b) 1-dimensional profiles along two diagonal directions.
Figure 5: Package structure employed in the simulation.
Figure 6: (a) Measured warpage, Z = Zcenter - Zedge, across two diagonal directions during heating and cooling. See Figure 3 for the two different directions, "diag" and "diag2". For each direction, two samples are measured, numbered "_1" and "_2". (b) Linear regression curve showing the average measured Z as a function of temperature. After parameter adjustment, the simulated warpage values are in good agreement with the average measured values. Here, the measurements are represented by the error bars at each temperature.
Figure 7: Linear regression employing the measured warpage across two diagonal directions during heating and cooling on the chiplet's BEoL surface. After parameter adjustment, the simulated warpage values are in good agreement with the average measured values.
Figure 10: 1D profile along (a) the horizontal cutline and (b) the vertical cutline on a full stack package at room temperature. Simulation results are compared with measurements. The measurement uncertainty of ±2 µm is indicated by the error bars.
Cytotoxicity and antimicrobial action of selected phytochemicals against planktonic and sessile Streptococcus mutans
Background Dental caries remains the most prevalent and costly oral infectious disease worldwide, encouraging the search for new and more effective antimicrobials. Therefore, the aim of this work was to study the antimicrobial action of selected phytochemicals (eugenol, citronellol, sabinene hydrate, trans-cinnamaldehyde, terpineol and cinnamic acid) against Streptococcus mutans in planktonic and biofilm states as well as the cytotoxicity of these compounds. Methods The antibacterial activity of the selected compounds was evaluated by the determination of the minimal bactericidal concentration. The resazurin assay was used to assess the metabolic activity of sessile S. mutans. The cytotoxicity was determined using a fibroblast cell line. Results Among the tested phytochemicals, citronellol, cinnamic acid and trans-cinnamaldehyde were the most effective against both planktonic and sessile S. mutans, an effect apparently related to their hydrophobic character. Additionally, these three compounds did not compromise fibroblasts cell viability. Discussion Citronellol, cinnamic acid and trans-cinnamaldehyde demonstrated significant antimicrobial activity and low cytotoxicity proposing their potential as a novel group of therapeutic compounds to control oral infectious diseases. Moreover, their effects are particularly relevant when benchmarked against eugenol, a phytochemical commonly used for prosthodontic applications in dentistry.
INTRODUCTION
Oral diseases continue to be a major health problem worldwide. Dental biofilm formation can lead to the development of oral infectious diseases, such as caries, gingivitis and periodontitis (Chinsembu, 2016; Hwang, Klein & Koo, 2014). Dental caries is one of the most important global oral health problems, and is mainly associated with oral pathogens, particularly cariogenic Streptococcus mutans (Chinsembu, 2016; Gross et al., 2012; Hwang, Klein & Koo, 2014). S. mutans has the ability to metabolize several carbohydrates into organic acids that reduce the pH of the dental plaque biofilm, causing the demineralization of tooth enamel and, consequently, leading to the initiation of dental caries. This bacterium is also a crucial contributor to the formation of a matrix of extracellular polymeric substances (EPS) in dental biofilms. Moreover, S. mutans-derived exopolysaccharides, mostly glucans, provide binding sites that promote the accumulation of other microorganisms on the tooth surface and the further establishment of cariogenic biofilms. The potential of S. mutans to survive under environmental stresses, such as acid conditions, high temperature and osmotic pressure, is another major virulence factor of this microorganism (Kawarai et al., 2016; Kwon et al., 2016; Liu & Yu, 2017; Zhang et al., 2016). Therefore, S. mutans should be a prime target for any therapeutic agent aimed at preventing dental caries.
Currently, the antibacterial agents used in the mouth, as mouth rinses or toothpastes, may inhibit the growth of S. mutans, preventing the development of dental caries. It is well known that antibacterial mouth rinses are effective in decreasing tooth-surface plaque (Quirynen et al., 2001). These mouth rinses may contain fluorides, alcohols, detergents, and other synthetic antimicrobial compounds, including povidone-iodine products, chlorhexidine and cetylpyridinium chloride. Some toothpastes also contain fluorides and other antimicrobials, including triclosan and zinc citrate (Baehni & Takeuchi, 2003; Quirynen et al., 2001; Sheen, Eisenburger & Addy, 2003). However, there is increasing pressure to substitute synthetic antimicrobials, which have already given rise to concerns regarding their toxicological and ecotoxicological properties (Dann & Hontela, 2011). In parallel, microorganisms will continue acquiring resistance to synthetic antimicrobial agents, which has encouraged the search for alternative products (Allaker & Douglas, 2009; Chuanchuen et al., 2001; Upadhyay et al., 2014).
Nowadays, natural antibacterial compounds, in particular plant-derived compounds, are attracting attention for the development of novel therapeutics against oral infectious diseases (Allaker & Douglas, 2009; Borges et al., 2016). Eugenol is a good example of a natural compound widely used in dental care as an antimicrobial, analgesic and anti-inflammatory agent, shown to be active against caries-related oral bacteria (Jadhav et al., 2004; Li et al., 2012; Xu et al., 2013). These plant-derived natural compounds, also referred to as phytochemicals, are responsible for plant interactions with the environment. They are an attractive source of eco-friendly, relatively inexpensive and widely available new broad-spectrum antimicrobials with low levels of cutaneous cytotoxicity and environmental toxicity (Abreu et al., 2016; Borges et al., 2014a; Borges et al., 2016; Borges et al., 2017b; Dahiya & Purkayastha, 2012; Upadhyay et al., 2014). Furthermore, the multiple antimicrobial mechanisms of action of phytochemicals can prevent the emergence of resistant bacteria (Dahiya & Purkayastha, 2012; Upadhyay et al., 2014).
Essential oils (EOs) have been thoroughly explored in several studies, which have shown their broad-spectrum antimicrobial properties against both Gram-positive and Gram-negative bacteria (Bazargani & Rohloff, 2016; Prabuseenivasan, Jayakumar & Ignacimuthu, 2006; Sieniawska et al., 2013; Szczepanski & Lipski, 2014). These phytochemicals have also been reported to possess significant anti-inflammatory, antioxidant, anticancer, immunomodulatory and regenerative activities (Bayala et al., 2014; Burt, 2004; De Cassia da Silveira e Sa, Andrade & De Sousa, 2013; Sadlon & Lamson, 2010; Woollard, Tatham & Barker, 2007). Despite the research progress on the antimicrobial activity of some EO components against oral bacteria, such as eugenol, many others remain largely unknown in the field of dentistry. In the present study, in order to provide further evidence on the antimicrobial potential of selected EO components, the antibacterial activity of citronellol, sabinene hydrate, trans-cinnamaldehyde and terpineol was evaluated against S. mutans in both planktonic and sessile states, using eugenol as a reference. The selection of phytochemicals was based on their promising effects in microbial growth control (Borges et al., 2017a; Lopez-Romero et al., 2015; Mith et al., 2014; Sharma et al., 2016; Szweda & Kot, 2017). Additionally, and based on a previous study (Malheiro et al., 2016), cinnamic acid, a phenolic acid, was also included given its efficacy in the control of sessile bacteria, with activity similar to benchmark disinfectants, including peracetic acid, sodium hypochlorite and hydrogen peroxide. Furthermore, given that the antimicrobial action of these compounds is known to be strictly correlated with their structure (Borges et al., 2017a), a drug-likeness evaluation based on the chemical and molecular properties of the compounds was also carried out. Finally, the phytochemicals were evaluated for their cytotoxicity against a fibroblast cell line.
Bacterial strain and culture conditions
S. mutans DSM 20523 was used in all experiments. The bacterium was preserved at −80 °C in Tryptic Soy Broth (TSB, Oxoid, Basingstoke, UK) containing 30% (v/v) glycerol (Panreac, Barcelona, Spain). Before the experiments, the bacterial cultures were grown overnight in TSB at 37 °C under 160 rpm of agitation.
Phytochemicals
Trans-cinnamaldehyde, sabinene hydrate, eugenol and terpineol (Table 1) were obtained from Sigma-Aldrich (Lisbon, Portugal); cinnamic acid was obtained from Merck (Lisbon, Portugal); citronellol was obtained from Acros Organics (Morris, NJ, USA). The structural and molecular properties of selected phytochemicals were determined with Molinspiration Calculation Software and Chemdraw (Malheiro et al., 2016). The phytochemicals were dissolved in dimethyl sulfoxide (DMSO, Sigma-Aldrich, St. Louis, MO, USA). Each compound was tested at various concentrations in the range of 1-25 mM in DMSO.
Determination of the minimal bactericidal concentration (MBC)

The tested concentration ranges of the phytochemicals were selected according to Liao et al. (2012), Song et al. (2013), Sova (2012) and Zhang et al. (2014). The bacterial suspensions were exposed to the phytochemicals and incubated at 37 °C for 1 h. Bacterial suspensions with DMSO (5%, v/v) and bacterial suspensions without phytochemicals were used as negative controls. Eugenol was used as positive control. Afterwards, 180 µL of the content of the wells was removed and 180 µL of an antimicrobial neutralizer, composed of lecithin (3 g/L), polysorbate 80 (30 g/L), sodium thiosulfate (5 g/L), L-histidine (1 g/L) and saponin (30 g/L) in 0.25 mol/L phosphate buffer at 1% (EN-1276, 1997), was added and allowed to act for 15 min. After that, 10 µL of each well was dropped on TSA plates. Finally, after 24 h of incubation at 37 °C, the plates were analyzed and the MBC of each phytochemical corresponded to the minimum concentration causing no growth on the TSA plates. The experiments were performed in triplicate and repeated three times.
Biofilm formation and control using phytochemicals
Biofilm formation and control were performed according to Borges et al. (2017a). The cell density of the overnight-grown bacteria was adjusted to approximately 10⁷ cells/mL in TSB. Then, 200 µL of the bacterial suspension was added to a 96-well polystyrene microtiter plate and incubated at 37 °C for 24 h under 160 rpm of agitation. After biofilm development, the medium was removed and the wells were washed twice with NaCl solution (8.5 g/L) in order to remove loosely attached bacteria. Then, 190 µL of NaCl solution (8.5 g/L) was added to each well with 10 µL of the phytochemicals at the MBC. Sessile bacteria with DMSO (5%, v/v) and sessile bacteria without phytochemical were used as negative controls. Eugenol was used as positive control. The microtiter plate was incubated at 37 °C and 160 rpm for 1 h. After that, the remaining attached bacteria were analyzed in terms of metabolic activity by the resazurin assay.
Biofilm analysis by the resazurin assay
The metabolic activity of sessile bacteria was evaluated by the resazurin assay (Borges et al., 2014b; Ribeiro et al., 2017). This is a simple and non-destructive assay, in which a non-fluorescent blue compound is reduced by living cells to a pink fluorescent compound. After 1 h of incubation with the phytochemicals, the content of the wells was removed and the wells were washed with NaCl solution (8.5 g/L). Then, 180 µL of fresh TSB was added to the wells, followed by 20 µL of resazurin in each well (10%, Sigma-Aldrich, Portugal). Subsequently, the plate was incubated at 37 °C and 160 rpm for 3 h, and the fluorescence intensity was measured in a microplate reader (FLUOstar Omega, BMG Labtech, Ortenberg, Germany) at 530 nm excitation and 590 nm emission wavelengths. Control experiments were performed on the growth-inhibitory effects of DMSO, and no inhibitory effects were found with DMSO at 5% (v/v) (available in the raw data file). The data reported are the average of four samples.
Cytotoxicity of phytochemicals
The fibroblast cell line L929 was cultured in alpha minimum essential medium (α-MEM; Gibco, Invitrogen, Carlsbad, CA, USA) supplemented with 10% (v/v) fetal bovine serum, 100 IU/mL penicillin, 100 µg/mL streptomycin and 2.5 µg/mL amphotericin B (all from Gibco, Invitrogen, Carlsbad, CA, USA), at 37 °C in a humidified atmosphere of 95% air and 5% CO2. At 70-80% confluence, the adherent cells were washed and detached with a trypsin solution (0.05% in 0.25% EDTA; both from Sigma-Aldrich, St. Louis, MO, USA) for 5 min at 37 °C. Cells were seeded on 48-well culture plates (Corning Incorporated, Corning, NY, USA) at a density of 3 × 10⁴ cells/cm² and incubated for 24 h. Cells were then exposed to the different phytochemicals at the MBC for 24 h. Afterwards, cell metabolic activity was evaluated using the resazurin assay (Ribeiro et al., 2017). Briefly, fresh complete medium containing 10% resazurin (0.1 mg/mL; Sigma-Aldrich, St. Louis, MO, USA) was added to each condition and the plates were incubated for 3 h. The fluorescence intensity was then measured (530 nm excitation; 590 nm emission) using a microplate reader (FLUOstar Omega, BMG Labtech, Ortenberg, Germany). The data reported are the average of four samples. The results of the cell metabolic activity (MA) were expressed as a percentage of the control group (DMSO; Sigma-Aldrich, St. Louis, MO, USA) using the following Eq. (1): MA (%) = (MAp/MAc) × 100, where MAp and MAc are the metabolic activities of the phytochemical-exposed and control cells, respectively.
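Eq. (1) reduces to a one-line computation; the fluorescence readings below are invented for illustration.

```python
# Sketch of Eq. (1): metabolic activity as a percentage of the DMSO control.
# The fluorescence readings are invented example values.

def metabolic_activity_pct(ma_phyto, ma_control):
    """MA (%) = (MAp / MAc) * 100."""
    return 100.0 * ma_phyto / ma_control

readings = {"citronellol": 5220.0, "cinnamic acid": 5700.0, "control": 6000.0}
viability = {name: metabolic_activity_pct(value, readings["control"])
             for name, value in readings.items() if name != "control"}
```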
Statistical analysis
The results were expressed as the average ± standard deviation. The statistical analysis of the results was done using the one-way analysis of variance (One-way ANOVA) followed by post hoc Tukey HSD multiple comparison test. Levels of P < 0.05 were considered to be statistically significant.
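For reference, the one-way ANOVA F statistic underlying these comparisons can be written out by hand; the three groups of readings below are invented for illustration (in practice a library routine such as SciPy's `f_oneway` plus a Tukey HSD post hoc test would be used).

```python
# Hand-written one-way ANOVA F statistic for a list of sample groups.

def f_statistic(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three invented groups of fluorescence readings
f = f_statistic([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom corresponds to P < 0.05, after which Tukey's HSD identifies which pairs of groups differ.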
Drug-likeness evaluation
A drug-likeness evaluation was carried out on the selected natural compounds; for that, the chemical structure and molecular properties of the phytochemicals were assessed. As shown in Fig. 1, all the compounds presented an octanol-water partition coefficient (logP) ≤ 5, a molecular weight ≤ 500 Da (g/mol), a number of hydrogen bond acceptors ≤ 10 and a number of hydrogen bond donors ≤ 5. According to Lipinski's rule of five, these are the requisites for molecules to be considered ''drug-like compounds'' (Lipinski, 2004; Lipinski et al., 2001). Additionally, all the phytochemicals presented a number of rotatable bonds (n-ROTB) ≤ 5 and a topological polar surface area (TPSA) < 40 Å².
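A screen against these cut-offs is straightforward to code; the citronellol-like property values below are approximate illustrations, not values computed here.

```python
# Sketch of the drug-likeness screen: Lipinski's rule of five plus the
# n-ROTB and TPSA cut-offs used in the text. Property values are assumed.

def passes_screen(logp, mw, hba, hbd, n_rotb, tpsa):
    """True if the compound meets RO5 plus the n-ROTB/TPSA criteria."""
    lipinski = logp <= 5 and mw <= 500 and hba <= 10 and hbd <= 5
    extra = n_rotb <= 5 and tpsa < 40.0
    return lipinski and extra

# Citronellol-like values (approximate, for illustration)
citronellol_ok = passes_screen(logp=3.3, mw=156.3, hba=1, hbd=1,
                               n_rotb=5, tpsa=20.2)  # -> True
```

In practice these descriptors would come from a cheminformatics tool such as the Molinspiration software cited in the Methods.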
Antibacterial activity of phytochemicals on planktonic S. mutans
The antibacterial activity of the selected phytochemicals was evaluated by MBC determination. As shown in Table 2, citronellol, cinnamic acid, sabinene hydrate, eugenol, trans-cinnamaldehyde and terpineol presented antibacterial activity against planktonic S. mutans. Moreover, the phytochemical that showed the lowest MBC was citronellol, which was consequently the most effective.
Antibacterial activity of phytochemicals on S. mutans biofilms
The effects of the selected phytochemicals against pre-established 24 h-old S. mutans biofilms were evaluated in terms of metabolic activity. The selection of the suitable concentration of each compound for these anti-biofilm assays was based on the determination of its MBC against planktonic cells. As shown in Fig. 2, citronellol (46%), cinnamic acid (60%) and trans-cinnamaldehyde (50%) caused a statistically significant reduction of biofilm metabolic activity (P < 0.05). On the contrary, eugenol, sabinene hydrate and terpineol were not effective in inhibiting the biofilm (P > 0.05).
Cytotoxicity of phytochemicals
The effect of citronellol, cinnamic acid and trans-cinnamaldehyde was evaluated on the fibroblast cell line L929. Although eugenol was not effective in inhibiting S. mutans biofilms, it was also included as a reference compound. The cells were exposed to the phytochemicals at the MBC for a 24 h period. As shown in Fig. 3, the metabolic activity after exposure to the phytochemicals was statistically lower than the control (5% DMSO, v/v), except for cinnamic acid, which did not present any statistically significant difference in cell viability (P > 0.05). Despite the slight decrease in metabolic activity for citronellol and eugenol, cell viabilities of 87 and 89% were obtained, respectively. The percentage of viable cells with trans-cinnamaldehyde was 72%, making it the compound causing the greatest loss of cell viability.
DISCUSSION
The present work was undertaken to evaluate the antimicrobial potential of selected phytochemicals on both planktonic bacterial growth and biofilm formation of S. mutans, in order to search for new therapeutic antimicrobials to treat and prevent oral infectious diseases, particularly dental caries. Biofilm cells are known to be physiologically distinct from their planktonic counterparts, being surrounded by extracellular polymeric substances (EPS), which have a major role in both biofilm formation and maintenance through nutritive and protective functions. This peculiar form of biofilm development confers on the associated bacteria great resistance to conventional antimicrobial compounds (Davies, 2003; Del Pozo & Patel, 2007; Flemming & Wingender, 2010; Jagani, Chelikani & Kim, 2009; Nithya, Devi & Karutha Pandian, 2011). Therefore, the impact of antimicrobials on planktonic S. mutans cannot be compared to the effects on biofilm cells. In fact, biofilm resistance is multi-factorial and several mechanisms have been described, namely limited diffusion of antimicrobials through the biofilm matrix, enzyme-mediated resistance, distinct levels of metabolic activity inside the biofilm (from active to dormant states), genetic adaptation, efflux pumps and the presence of persister cells (Borges et al., 2016; Singh et al., 2017). It is clear that an accurate characterization of the antimicrobial action of a compound should address cells in both planktonic and sessile states. The drug-likeness of the selected compounds was evaluated using Lipinski's rule of five (RO5) (Lipinski, 2004; Lipinski et al., 2001). This approach is based on several molecular properties. First, drug-like molecules should have a logP ≤ 5. LogP is equivalent to the ratio of concentrations of a compound in a mixture of octanol and water, two immiscible phases at equilibrium. It is used as a measure of hydrophobicity and therefore affects, among others, the drug's bioavailability and mode of action.
According to the RO5, drug-like compounds should also have a molecular weight ≤ 500 g/mol to facilitate intestinal and blood-brain barrier permeability. Furthermore, the compounds should present a number of hydrogen bond acceptors ≤ 10 and a number of hydrogen bond donors ≤ 5.
If a compound fails the RO5, there is a high probability that oral activity problems, such as poor absorption or metabolism, will be encountered (Lipinski, 2004; Lipinski et al., 2001). According to this rule, the selected phytochemicals assessed in this work have drug-like properties and, consequently, are potential drug leads.
The hydrophobic character of all the selected phytochemicals allows their interaction with the cell membrane of S. mutans, a Gram-positive bacterium. Contrary to Gram-negative bacteria, Gram-positive bacteria lack an outer membrane but have a very thick cell wall, composed of approximately 90% peptidoglycan and carbohydrates such as teichoic acid (Tommasi et al., 2015). Moreover, the TPSA < 40 Å² observed for all the compounds allows the conclusion that they could have a good ability to penetrate cell membranes, as only compounds with TPSA > 140 Å² tend to have poor permeability (Veber et al., 2002). The most hydrophobic compounds are generally reported to be more toxic, and the cytoplasmic membrane is often the primary site of antimicrobial action. Indeed, lipophilic compounds possess a high affinity for cell membranes, inducing changes in the membrane's physicochemical properties. This effect is particularly reported for compounds with a logP > 3 (Ultee, Bennik & Moezelaar, 2002). In fact, citronellol (logP > 3) was found to be the most efficient antimicrobial phytochemical against planktonic S. mutans. Additionally, this compound was also effective on sessile bacteria at a concentration of just 3 mM. Previous studies have reported inhibitory effects of citronellol against biofilms of Staphylococcus aureus and Escherichia coli (Lopez-Romero et al., 2015). Another interesting finding of the present study was that this phytochemical presented higher antimicrobial activity than eugenol, a natural compound that has been widely used in dental care (Jadhav et al., 2004; Li et al., 2012; Xu et al., 2013). Citronellol is the only compound used in this study with a linear chemical structure, possessing a highly hydrophobic tail and a hydrophilic head, which makes it more prone to interact with the lipid bilayer of the cell membrane, disturbing the structure and rendering it more permeable (Bakkali et al., 2008).
Citronellol is also the most flexible compound, as its structure does not include any ring, and its n-ROTB of 5 confirms this higher molecular flexibility compared to all the other compounds, which further helps to explain its higher antibacterial action.
Eugenol showed a lower antimicrobial effect than citronellol, with an MBC of 10 mM. This weaker antimicrobial activity could be attributed to its hydrophobicity being lower than 3 (logP = 2.10). Concerning the effects against sessile S. mutans, eugenol was not effective in causing inhibition. However, the antibiofilm potential of this compound has been reported by other authors against Staphylococcus aureus (Yadav et al., 2015), Pseudomonas species (Niu & Gilbert, 2004) and even against S. mutans (Xu et al., 2013). Nevertheless, the concentrations of eugenol tested by these authors against adhered S. mutans were higher than the concentration range used in this work. In a study performed by Malheiro et al. (2016), eugenol was also tested at a concentration of 10 mM against S. aureus, a Gram-positive bacterium, and no biofilm inhibition was observed.
Cinnamic acid, a phenolic acid, was the compound with the second highest impact on S. mutans, presenting an MBC of 5 mM. Furthermore, this compound also promoted a significant inactivation of sessile S. mutans at 5 mM. However, this phytochemical had a logP of 1.91 and was the compound with the lowest hydrophobicity. These results indicate that factors other than hydrophobicity must be involved. Phenolic acids are organic acids, and their antimicrobial action is thought to be dependent on the concentration of undissociated acid. These small lipophilic molecules can cross the cell membrane by passive diffusion in their undissociated form, disturbing or even disrupting the cell membrane structure, acidifying the cytoplasm and causing protein denaturation (Malheiro et al., 2016). Malheiro et al. (2016) also observed that cinnamic acid exhibited significant antibiofilm activity against S. aureus.
Trans-cinnamaldehyde was also an effective compound against both planktonic and sessile S. mutans. This result corroborates previous studies that showed the significant inhibitory effect of trans-cinnamaldehyde against diverse bacterial pathogens (Mith et al., 2014; Sharma et al., 2016). These authors also observed that trans-cinnamaldehyde showed higher antibacterial activity than eugenol. Its hydrophobicity value of less than 3 (logP of 2.10) can help to explain the relatively high MBC observed in this study.
The MBCs found for sabinene hydrate and terpineol were 10 mM and 15 mM, respectively. The lower antimicrobial activity of both compounds against planktonic bacteria could be attributed to their hydrophobicity (logP < 3). Moreover, sabinene hydrate and terpineol were not effective in the control of sessile bacteria, although other authors have observed the antibiofilm potential of these phytochemicals against Escherichia coli and Staphylococcus aureus (Borges et al., 2017a; Szweda & Kot, 2017). These results suggest that the antimicrobial efficacy of natural compounds in controlling sessile bacteria depends on the bacterial species.
In addition to the antibacterial action of these natural compounds, it is important to understand their cytotoxicity before they are used in humans. No obvious cytotoxic effects were detected for eugenol, citronellol or cinnamic acid. Similarly, Babich & Visioli (2003), using 5 mM of cinnamic acid, observed only reduced effects on the viability of human gingival GN61 fibroblasts, human gingival S-G epithelial cells and human carcinoma HSG1 cells. In the present work, fibroblast cells were more sensitive to trans-cinnamaldehyde than to the other phytochemicals, with a cell viability of around 72%. Brari & Thakur (2015) also showed that cinnamaldehyde reduced the viability of the BV2 (microglia) cell line to a greater extent than citronellol and eugenol. According to ISO 10993-5 (2009), the differences observed here were not significant in terms of toxicity, as cytotoxicity is only considered when viability falls below 70%. Therefore, these results showed that citronellol, cinnamic acid and trans-cinnamaldehyde had antibacterial effects against planktonic and sessile S. mutans without compromising the viability of the L929 fibroblast cell line.
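The ISO 10993-5 criterion applied above is a simple threshold rule; a sketch in which the trans-cinnamaldehyde value comes from the text, while the citronellol value is purely illustrative:

```python
def is_cytotoxic(viability_percent: float, threshold: float = 70.0) -> bool:
    """Per ISO 10993-5, a treatment is considered cytotoxic when cell
    viability falls below 70% of the untreated control."""
    return viability_percent < threshold

# trans-cinnamaldehyde (~72%) is from the text; citronellol's 95% is
# an invented placeholder ("no obvious cytotoxic effects" in the study).
for compound, viability in {"trans-cinnamaldehyde": 72.0,
                            "citronellol": 95.0}.items():
    print(compound, "cytotoxic" if is_cytotoxic(viability) else "not cytotoxic")
```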
CONCLUSIONS
Plant-derived molecules may offer a groundbreaking green approach to the discovery of broad-spectrum antimicrobials. The present work studied the antimicrobial effect of selected phytochemicals on planktonic growth and biofilm inhibition of S. mutans, as well as their toxic effects on a fibroblast cell line. The phytochemicals citronellol, cinnamic acid and trans-cinnamaldehyde were the most effective in both inhibiting the growth of planktonic S. mutans and causing significant biofilm inactivation. Moreover, these three compounds did not compromise fibroblast cell viability, suggesting that they may be new candidates for controlling oral infectious diseases. The chemical properties of the selected phytochemicals suggest that molecular hydrophobicity largely accounts for a higher antimicrobial effect.
ADDITIONAL INFORMATION AND DECLARATIONS Funding
This work was supported by POCI-01-0145-FEDER-030219; POCI-01-0145-FEDER-006939 (Laboratory for Process Engineering, Environment, Biotechnology and Energy-UID/EQU/00511/2013) funded by the European Regional Development Fund (ERDF), through COMPETE2020 -Programa Operacional Competitividade e Internacionalização (POCI) and by national funds, through FCT-Fundação para a Ciência e a Tecnologia and NORTE-01-0145-FEDER-000005 -LEPABE-2-ECO-INNOVATION, supported by North Portugal Regional Operational Programme (NORTE 2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (ERDF). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Impact of High Solar UV Radiant Exposures in Spring 2020 on SARS‐CoV‐2 Viral Inactivation in the UK
Abstract Potential for SARS‐CoV‐2 viral inactivation by solar UV radiation in outdoor spaces in the UK has been assessed. Average erythema effective and UV‐A daily radiant exposures per month were higher (statistically significant, P < 0.05) in spring 2020 in comparison with spring 2015–2019 across most of the UK, while irradiance generally appeared to be in the normal expected range of 2015–2019. It was found that these higher radiant exposures may have increased the potential for SARS‐CoV‐2 viral inactivation outdoors in April and May 2020. Assessment of the 6‐year period 2015–2020 in the UK found that for 50–60% of the year, that is most of October to March, solar UV is unlikely to have a significant (at least 90% inactivation) impact on viral inactivation outdoors. Minimum times to reach 90% and 99% inactivation in the UK are of the order of tens of minutes and of the order of hours, respectively. However, these times are best case scenarios and should be treated with caution.
INTRODUCTION
Spring 2020 was exceptional in the UK. In the context of the coronavirus disease (COVID-19) pandemic, new severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections in the UK increased rapidly from early March reaching a peak in April and then slowly decreased in May (1). At the same time, spring 2020 was the sunniest on record (2), which may have reduced outdoor viral load since solar ultraviolet (UV) radiation, in particular the shortest wavelengths, is the primary virucidal agent in the environment (3)(4)(5)(6). However, sunshine hours are defined by all incident terrestrial solar radiation wavelengths (7) of which only a small proportion is in the UV wavelength range, so increases in sunshine hours do not provide quantitative information on the increases in the UV region.
Analysis using satellite data (6) shows that solar UV has potential to inactivate viruses from the coronaviridae family and that the level of inactivation varies widely depending on location and season. However, detailed analysis using ground-based data relevant for the UK has not yet been published.
Public Health England (PHE) has a network of ground-based solar monitoring sites across the UK and overseas and additional spectral solar monitoring capabilities at Chilton, UK. These are used for health research and the development of advice regarding sun exposure (8)(9)(10)(11)(12).
In this paper, ground-based UK solar UV data are analyzed to determine whether significant increases in solar UV were observed in spring 2020 and if so whether this would be likely to have increased viral inactivation outdoors in comparison with spring 2015-2019. Data from the full six-year period 2015-2020 is then analyzed in order to determine the periods when solar UV is likely to contribute to viral inactivation. In addition, the diurnal variation in viral inactivation is considered.
MATERIALS AND METHODS
Spring 2020 in the UK was the sunniest on record, with 626 sunshine hours, 71 h greater than the previous record in 1948 (2) and 30% more than the average of the preceding 5 years (13) (see Table 1).
Sunshine hours are defined by incident direct solar radiation of > 120 W m⁻² (7). Since only a small proportion of terrestrial solar radiation is in the UV wavelength range (280-400 nm), these increases in sunshine hours cannot provide quantitative information on the variation in the UV region.
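The sunshine-hour definition above (direct irradiance > 120 W m⁻²) can be sketched as a simple accumulation over fixed-interval samples; the readings below are invented, and the 5-minute interval matches the PHE network's recording rate:

```python
def sunshine_hours(direct_irradiance_w_m2, sample_minutes=5, threshold=120.0):
    """Sunshine duration per the definition used in the text: time during
    which direct solar irradiance exceeds 120 W m^-2, assuming evenly
    spaced samples (here: 5-minute intervals)."""
    sunny_samples = sum(1 for w in direct_irradiance_w_m2 if w > threshold)
    return sunny_samples * sample_minutes / 60.0

# Illustrative 6 h of readings: half above the threshold, half below.
readings = [300.0] * 36 + [80.0] * 36   # 72 five-minute samples
print(sunshine_hours(readings))          # 3.0 hours
```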
The PHE solar network sites (14-16) measure UV-A (315-400 nm) and erythema effective irradiances (17) and record these data at five minute intervals. This study utilizes data from eight of the solar network sites in the UK, with latitudes ranging from 50.22°N to 60.14°N, see Fig. 1.
In order to determine whether there were significant increases in solar UV in spring 2020 in the UK, erythema effective and UV-A daily radiant exposures (doses) were calculated for March-May 2015-2020 for each of the PHE solar monitoring stations. The average daily radiant exposure for each month in spring 2020 was compared to the 2015-2019 average daily radiant exposure for the corresponding month. Statistical significance was calculated using the t-test with a significance level of 0.05. In order to determine whether increases in solar UV levels in spring 2020 may have increased viral inactivation outdoors in comparison with spring 2015-2019, and to determine the periods of the year when solar UV is likely to contribute to viral inactivation, the erythema effective irradiance from the PHE solar network needs to be converted to viral inactivation weighted data.
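The monthly comparison described above can be sketched as follows. The text does not state which t-test variant was used, so Welch's unequal-variance form is assumed here, and the exposure values are invented for illustration:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic: one option for comparing the 2020
    daily radiant exposures for a month against the 2015-2019 values for
    the corresponding month."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)   # sample variances
    return (mean(sample_a) - mean(sample_b)) / ((va / na + vb / nb) ** 0.5)

# Illustrative daily erythema-effective exposures (SED), not measured data.
april_2020 = [18.0, 21.0, 19.5, 22.0, 20.0]
april_2015_2019 = [13.0, 14.5, 12.0, 15.0, 13.5]
print(round(welch_t(april_2020, april_2015_2019), 2))
```

A large |t| relative to the critical value at the 0.05 level would flag the month as significantly different; in practice the degrees of freedom and p-value would come from a statistics package.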
The viral inactivation action spectrum was determined from (3). Spectral irradiance measured with a Bentham DTMc300 double grating monochromator (Bentham Instruments, Reading, UK) over a wide range of solar elevation angles was used to calculate a conversion from erythema effective irradiance to viral inactivation weighted values, by applying the erythema (17) and viral inactivation (3) weighting functions to the spectral irradiance data. This allowed erythema effective irradiance from the PHE solar network to be converted to viral inactivation weighted irradiance. From these data, times to reach thresholds of inactivation could be determined. The standard inactivation threshold used is D90, the fluence threshold for 90% viral inactivation. The D90 threshold used in this study is 6.9 J m⁻², from (18), the 254 nm equivalent UV required for 90% inactivation of SARS-CoV-2. This fluence (spherical surface) threshold has been assumed to be applicable to radiant exposure (flat surface), since particles with potential for person-to-person transmission will normally be on surfaces or airborne but near the ground with low albedo.
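The conversion described above amounts to forming the ratio of two weighted integrals of the same measured spectrum. A toy sketch with a three-wavelength "spectrum" and placeholder weighting values (the real calculation uses the full erythema and published viral inactivation action spectra applied to Bentham spectroradiometer data; none of the numbers below are from the study):

```python
def weighted_irradiance(spectrum, weights):
    """Integrate spectral irradiance (W m^-2 nm^-1 at fixed wavelength
    steps) under an action spectrum by rectangle-rule summation."""
    return sum(spectrum[wl] * weights.get(wl, 0.0) for wl in spectrum)

# Toy values only: three wavelengths (nm) and invented weighting functions.
spectrum = {300: 0.01, 320: 0.20, 360: 0.50}
erythema_w = {300: 0.65, 320: 0.02, 360: 0.0001}
viral_w = {300: 0.30, 320: 0.005, 360: 0.00005}

e_eff = weighted_irradiance(spectrum, erythema_w)
v_eff = weighted_irradiance(spectrum, viral_w)
conversion = v_eff / e_eff   # multiply erythema-effective data by this
print(round(conversion, 3))
```

In the paper this ratio is derived across many solar elevation angles, since the spectral shape (and hence the conversion) changes with the sun's height.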
Since 90% viral inactivation may not be sufficient to reduce the risk of infection, 99% inactivation has also been considered. In this study, a D99 threshold of 27.6 J m⁻² has been used, four times the D90 fluence given in (18). The time to reach viral inactivation weighted radiant exposures of 6.9 and 27.6 J m⁻², that is D90 and D99, has been calculated for each month of the year for 2015-2020 under all weather conditions. Time of day is given in UTC.
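Given a time series of viral-inactivation-weighted irradiance, the time to reach D90 or D99 is a running dose accumulation; a minimal sketch with an invented constant irradiance:

```python
D90 = 6.9    # J m^-2, 254 nm equivalent, from ref. (18)
D99 = 27.6   # J m^-2, four times D90

def minutes_to_dose(irradiances_w_m2, target_j_m2, step_minutes=5):
    """Accumulate weighted radiant exposure over fixed-interval irradiance
    samples; return minutes elapsed when the target dose is reached,
    or None if it is never reached."""
    dose = 0.0
    for i, irr in enumerate(irradiances_w_m2, start=1):
        dose += irr * step_minutes * 60.0   # W m^-2 times s gives J m^-2
        if dose >= target_j_m2:
            return i * step_minutes
    return None

# Illustrative: a constant weighted irradiance of 0.005 W m^-2 delivers
# 1.5 J m^-2 per 5-min step, so D90 falls in the 5th step and D99 in the 19th.
samples = [0.005] * 288   # a full day of 5-min samples
print(minutes_to_dose(samples, D90), minutes_to_dose(samples, D99))
```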
The proportion of days when D90 and D99 could be reached in a single whole day and in 1 h has been calculated for all sites for 2015-2020. Any years or months with more than 10% or 20% incomplete data, respectively, have been removed from the analysis. In addition, further detailed analysis of the diurnal variation in the time to reach D90 and D99 has been carried out on five days at Chilton. See Tables 2 and 3 for mean values, ranges and the percentage differences between 2020 and 2015-2019 monthly average daily radiant exposures.
RESULTS AND DISCUSSION
Erythema effective radiant exposures are expressed in SEDs (1 SED = 100 J m⁻²) (17).
1. In March, 4 of 8 sites had statistically significant increases in erythema effective daily radiant exposure of 20% or more compared to the 2015-2019 average, and 4 of 8 sites had statistically significant increases in UV-A daily radiant exposure, three of which were greater than 20%.
2. In April, all sites had statistically significant increases in erythema effective daily radiant exposure of at least 30% compared to the 2015-2019 average, and 7 of 9 sites had statistically significant increases in UV-A daily radiant exposure, five of which were 20% or more.
3. In May, all sites except Lerwick had statistically significant increases in erythema effective daily radiant exposure of 20% or more compared to the 2015-2019 average, and five sites had statistically significant increases in UV-A daily radiant exposure, four of which were 20% or more.
4. The sites with the greatest increases in erythema effective daily radiant exposure were Belfast (+57% in April), Camborne (+56% in May) and Inverness (+54% in April). The sites with the greatest increases in UV-A daily radiant exposure were Belfast (+35% in April) and London and Camborne (both +32% in May).
It is notable that the biggest increases were seen at the shortest UV wavelengths, as indicated by the large increases in erythema effective values and that these are greater than the increases seen in UV-A. The shortest UV wavelengths are also significantly more effective in inactivation of viruses (3).
While UV daily radiant exposures, particularly in April and May 2020, were significantly higher than the 2015-2019 average, monthly maximum erythema effective and UV-A irradiances in spring 2020 were generally in the normal expected range of 2015-2019. For example, the three sites and months with the greatest increase in erythema effective UV daily radiant exposure had either small or negative changes in maximum erythema effective irradiance per month of +3.2%, −9.6% and −4.9% for Belfast (April), Camborne (May) and Inverness (April), respectively. Similarly, the three sites and months with the greatest increase in UV-A daily radiant exposure had small or negative changes in maximum UV-A irradiance per month of +1.6%, −5.9% and −3.3% for Belfast (April), Camborne (May) and London (May), respectively. One exception is Lerwick in March which, although its change in maximum UV-A irradiance was negligible (−1.1%), saw a change in maximum erythema effective irradiance of +45.8% in March 2020. This occurred on the last day of March, a month in which erythema effective irradiance increases rapidly. Specific cloud conditions causing a brief spike in erythema effective irradiance (14) could have contributed to this outlier. The full range of changes in maximum erythema effective irradiance is from −13.5% (London, March) to +18% (Leeds, May), and for UV-A from −11.9% (Glasgow, May) to +8.2% (Leeds, May).
It is evident that there were significant increases in solar UV in spring 2020 in comparison with spring 2015-2019; however, these were increases in radiant exposure and not irradiance. There was a higher frequency of periods of high irradiance (i.e. more periods of clear, sunny weather) rather than exceptional increases in peak irradiance.
Spring 2020: viral inactivation

For a threshold of reaching D90 within a day, the significant increases in solar UV radiant exposure in spring 2020 in comparison with spring 2015-2019 appear to have had the greatest effect in April (Fig. 2A, Table 4). In March, the proportion of days on which D90 could be reached within a day was generally < 5% for 2015-2019 and was still < 5% for all sites in 2020 (Table 4), and the number of days when D90 could be reached in 1 h and D99 could be reached in 1 day was negligible (mean and range 0.1% (0.0-2.2%) and 0.1% (0.0-1.4%), respectively). In contrast, by May the proportion of days reaching D90 within a day was high in all years.

Viral inactivation during the 6-year period 2015-2020

Figure 3A shows that, across a whole year, D90 could be reached within a day on around 40% of days at the highest UK latitudes and around 50% of days at the lowest UK latitudes. There was larger variation across the latitudes for D90 being reached within an hour, ranging from 10-20% of days per year at the highest latitudes to 30-40% of days at the lowest latitudes (Fig. 3B). This distribution is very similar for the percentage of days when D99 could be reached within a day (Fig. 3C). D99 could not be reached within an hour on any day at any site or in any year in the UK. These findings also show that a viral inactivation threshold of D90 or greater could not be reached, even with a full day of exposure to solar UV, for 50-60% of the year (lowest-highest UK latitudes). Across the months of the year, D90 could be reached within one day from April to September across the UK, with approaching 100% of days reaching this threshold in May, June and July. Conversely, for most of October-March, and all of November-January, D90 could not be reached within a day (Fig. 4A). The thresholds of reaching D90 within 1 h and of reaching D99 within a single day show a similar pattern to each other: at the lower latitudes of the UK they could be reached from April to September on up to around 75% of days, while at higher latitudes they could generally only be reached from May to August on up to around 50% of days (Fig. 4B and C).

Diurnal variation in time to reach D90 and D99

Figure 5 shows the diurnal variation in the time to reach D90 at Chilton in the south of the UK (see Fig. 1) on one clear day in each of March, April and May 2020, on one variably cloudy day in May 2020, and on the relatively clear day of 22 June 2020 (very near the summer solstice), which reached the highest UV Index generally expected in the UK. The time to reach D90 is highly dependent on the time of day and time of year, with the shortest times occurring around or just before noon and on days nearer the summer solstice. Figure 5 also shows that clear, sunny days are more effective for viral inactivation than cloudy days: the time to reach D90 on the variably cloudy day with sunny spells of 16 May 2020 was comparable to that on the clear day of 23 April 2020, 23 days earlier.

The shortest times to reach D90 and D99 for each of these days, together with the cutoff time (the time of day after which the threshold can no longer be reached before the end of the day), are shown in Table 5.
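The proportion-of-days statistics reported in this section reduce to counting days whose total weighted dose crosses a threshold; a minimal sketch with invented daily totals:

```python
def fraction_of_days_reaching(daily_doses, threshold):
    """Proportion of days whose total viral-inactivation-weighted radiant
    exposure (J m^-2) meets or exceeds a threshold such as D90 = 6.9."""
    days = len(daily_doses)
    return sum(d >= threshold for d in daily_doses) / days if days else 0.0

# Illustrative annual series (not real data): 150 days above D90, 215 below.
doses = [10.0] * 150 + [2.0] * 215
print(round(fraction_of_days_reaching(doses, 6.9), 3))
```

The same counting, restricted to the dose accumulated in any single hour, would give the "D90 within 1 h" proportions.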
The times to reach D90 in the south of the UK are at least of the order of tens of minutes. The shortest time to reach D99 in the south of the UK is 1 h 15 min, and at the end of March nearly 5 h of exposure are required to reach D99. The shortest times required to reach D99 are therefore of the order of hours rather than tens of minutes.
Limitations and Caveats
The timescales for reaching D90 and D99 should be applicable to contaminated outdoor spaces with airborne viral load near the ground and to surfaces (fomites) that are horizontal and in full view of the sky. However, consideration should be given to certain caveats. For airborne viral load, even over the shortest timescales of tens of minutes, considerable dilution and dispersion would have occurred in the outdoor environment, which is likely to be a significant contributing factor in reducing onward transmission (19). The times in Table 5 should therefore be treated with caution, since they are best case scenarios.

CONCLUSION

Solar UV daily radiant exposures, in particular at the shortest UV wavelengths, were significantly higher in spring 2020 in comparison with the preceding 5 years across most of the UK, while irradiance generally appeared to be in the normal expected range of 2015-2019. The month with the greatest increases at the most locations was April, with increases in erythema effective daily radiant exposure of > 30% at all eight UK solar monitoring sites and increases in UV-A daily radiant exposure of > 20% at five sites. The significant increase in UV daily radiant exposures in spring 2020 can be explained by a higher frequency of clear days rather than by increases in irradiance.
There is evidence to suggest that the higher solar UV radiant exposures in spring 2020 increased the potential for viral inactivation outdoors in April (for reaching D90 in one day and in 1 h, and D99 in one day) and in May (for reaching D90 in 1 h and D99 in one day) when compared to the 2015-2019 values.
Assessment of the 6-year period 2015-2020 showed that a viral inactivation threshold of D90 or greater could not be reached, even with a full day of exposure to solar UV, for 50-60% of the year (lowest-highest UK latitudes). Reaching D90 in a single day was possible from April to September across the UK, and not possible for most of October to March and all of November to January. Reaching D90 in 1 h and D99 in one day was possible at lower UK latitudes from April to September on up to around 75% of days. At higher UK latitudes, these levels of viral inactivation were possible from May to August on up to approximately 50% of days.
On days when D90 can be reached, the timescales to achieve it are highly dependent on the time of year and time of day. The shortest times to reach D90 in the UK are of the order of tens of minutes, with the latest cutoff time for reaching D90 before the end of the day being around 15:30 UTC. The shortest times to reach D99 in the UK are of the order of hours, with the latest cutoff time for reaching D99 before the end of the day being around 14:00 UTC. However, these times assume an ideal scenario of, for example, exposure of a horizontal surface in full view of the sky for the whole day, and they should therefore be treated with caution.
Overall, these findings show that for generally at least half the year in the UK, solar UV is unlikely to have a significant (at least 90% inactivation) impact on viral inactivation outdoors. These results suggest that sunlight alone cannot always be relied upon to inactivate the virus outdoors in the UK.
|
Perfusion Imaging in Pusher Syndrome to Investigate the Neural Substrates Involved in Controlling Upright Body Position
Brain damage may induce a dysfunction of upright body position termed “pusher syndrome”. Patients with such a disorder suffer from an alteration of their sense of body verticality. They experience their body as oriented upright when actually tilted nearly 20 degrees to the ipsilesional side. Pusher syndrome typically is associated with posterior thalamic stroke; less frequently with extra-thalamic lesions. This argued for a fundamental role of these structures in our control of upright body posture. Here we investigated whether such patients may show additional functional or metabolic abnormalities outside the areas of brain lesion. We investigated 19 stroke patients with thalamic or with extra-thalamic lesions showing versus not showing misperception of body orientation. We measured fluid-attenuated inversion-recovery (FLAIR) imaging, diffusion-weighted imaging (DWI), and perfusion-weighted imaging (PWI). This allowed us to determine the structural damage as well as to identify the malperfused but structurally intact tissue. Pusher patients with thalamic lesions did not show dysfunctional brain areas in addition to the ones found to be structurally damaged. In the pusher patients with extra-thalamic lesions, the thalamus was neither structurally damaged nor malperfused. Rather, these patients showed small regions of abnormal perfusion in the structurally intact inferior frontal gyrus, middle temporal gyrus, inferior parietal lobule, and parietal white matter. The results indicate that these extra-thalamic brain areas contribute to the network controlling upright body posture. The data also suggest that damage of the neural tissue in the posterior thalamus itself rather than additional malperfusion in distant cortical areas is associated with pusher syndrome. Hence, it seems as if the normal functioning of both extra-thalamic as well as posterior thalamic structures is integral to perceiving gravity and controlling upright body orientation in humans.
Introduction
The human species is the only obligate biped among primates. Our brain has thus become remarkably efficient at stabilizing the upright body position in space. The perception of our body orientation is achieved by the convergence of inputs from multiple sources, including vestibular, visual, and somatosensory information [1]. When these sensory channels work properly, their inputs and their integration indicate verticality in a congruent manner. Damage to this system causes diverse disorders of posture and balance control [2][3][4][5][6][7]. Among them, a very intriguing and severe disorder of upright body position is the ''pusher syndrome'' (for review see ref. [8]).
Patients with pusher syndrome suffer from an alteration of their sense of body verticality [9]. They experience their body as oriented upright (subjective postural vertical, SPV) when actually tilted in the coronal (roll) plane nearly 20 degrees towards the side of the lesion [9]. The patients resist any attempt to passively correct the tilted body posture towards earth-vertical upright orientation and use the non-paretic arm and/or leg to actively push towards the paralyzed side [10]. In contrast to their disturbed perception of upright body posture, their perception of the visual vertical (subjective visual vertical, SVV), mediated by visual and vestibular input, is largely preserved [9,11]. This dissociation supported the assumption of a neural pathway in humans for sensing the orientation of gravity and controlling upright body posture, separate from the well-known visual-vestibular system for perceiving the orientation of the visual world [9,[12][13][14].
Pusher syndrome is typically associated with unilateral lesions of the posterior thalamus [13,15], while cortical strokes sparing the thalamus [16] or non-stroke neurological aetiologies [17] are rather infrequent. These findings argued for a fundamental role of the posterior thalamus in our control of upright body posture. However, it is not yet well understood whether this disorder of upright body posture associated with thalamic strokes might also be explained by the dysfunction of cortical areas rather than by the neuronal loss in the thalamus itself. In fact, by using positron emission tomography (PET), thalamic infarctions [18][19][20] and thalamotomy [19] have been shown to induce depressed levels of metabolic activity in the cerebral hemispheres. Thus, it is possible that the thalamic lesions of pusher patients indeed cause functional or metabolic abnormalities in cortical areas via diaschisis [19,21] or through vascular dysfunction, and that these (distant) functional abnormalities cause the patients' misperception of body orientation.
Vice versa it is possible that the rather few extra-thalamic strokes that induce pusher syndrome, i.e. the small areas within the posterior insula, superior temporal gyrus, postcentral gyrus, and inferior parietal lobule [16], might induce dysfunction through malperfusion in distant thalamic or other structurally intact neural structures.
Our aim thus was to investigate, by means of perfusion-weighted imaging (PWI), the functioning of the structurally intact cortical tissue in patients with thalamic and with extra-thalamic strokes showing versus not showing pusher syndrome. While diffusion-weighted (DWI) and fluid-attenuated inversion-recovery (FLAIR) imaging reveal information about irreversibly damaged neural tissue, PWI measures the amount and latency of blood flow reaching different regions of the brain. PWI thus allows the identification of structurally intact but abnormally perfused brain tissue; i.e. zones that are receiving enough blood supply to remain structurally intact but not enough to function normally.
Subjects
Nineteen patients with first-ever stroke centering either on the thalamus (n = 11) or sparing the thalamus (n = 8), consecutively admitted to the Centre of Neurology in Tübingen, were included. Four patients from the latter group without thalamic involvement had also been subjects in a previous study [16] that investigated the structural aspects of brain lesions. Since stenoses are known to produce false-positive depictions of perfusion deficits, especially in time-to-peak perfusion images [22], we excluded patients with a haemodynamically relevant extracranial stenosis of the internal carotid arteries, i.e. ≥ 70%, demonstrated by Doppler sonography. The number of potential participants further had to be limited with respect to adequate kidney function, due to the use of contrast agent. The patients were divided into two groups, with and without pusher syndrome (cf. Table 1), according to standardised testing for pusher syndrome (see below). All patients gave their written informed consent to participate in the study, which was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki.
Clinical investigation
Pusher syndrome was diagnosed using the standardised Scale for Contraversive Pushing (SCP) [9,23] on the same day as MR acquisition. The SCP assesses 1) symmetry of spontaneous posture, 2) the use of the non-paretic arm or leg to increase pushing force by abduction and extension of the extremities, and 3) resistance to passive correction of posture. These variables are determined both while patients are sitting (feet with ground contact) and standing. In patients with pusher behaviour, all three criteria had to be present: the patients had to show at least a total score of 1 (max. = 2, sitting plus standing) with respect to their spontaneous posture, at least a score of 1 (max. = 2, sitting plus standing) concerning the use of the non-paretic arm and/or leg to increase pushing force by abduction and extension, and had to show resistance to passive correction of posture. Details of the SCP assessment are presented in Table 1. The degree of paresis of the upper and lower limbs was scored with the usual clinical ordinal scale, where '0' stands for no trace of movement and '5' for normal movement. Spatial neglect was diagnosed when the patient exhibited the typical clinical behaviour, such as spontaneous eye and head orientation towards the right [24]. In addition, all patients were further assessed with the following three clinical tests: the 'Letter cancellation' task [25], the 'Bells test' [26], and a copying task [27]. Neglect patients had to fulfil the criterion for spatial neglect in at least two of these tests. Full details about the test procedure and criteria are described elsewhere [24]. Aphasia was assessed by conducting a bedside examination that evaluated spontaneous speech, auditory and reading comprehension, picture naming, reading, and oral repetition. Visual field defects were assessed using standardized neurological examination.
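The three-part SCP rule above can be written as a small predicate; a sketch assuming the posture and pushing scores have already been summed over sitting and standing:

```python
def meets_pusher_criteria(posture_score, pushing_score, resists_correction):
    """Diagnostic rule described for the Scale for Contraversive Pushing:
    all three criteria must be present -- spontaneous posture total >= 1
    (max 2, sitting plus standing), use of the non-paretic limb(s) to push
    total >= 1 (max 2), and resistance to passive correction of posture."""
    return posture_score >= 1 and pushing_score >= 1 and resists_correction

print(meets_pusher_criteria(1.5, 1.0, True))    # pusher behaviour present
print(meets_pusher_criteria(0.5, 2.0, True))    # posture criterion not met
```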
MR imaging and analysis
For the depiction of structurally lesioned brain tissue we used diffusion-weighted imaging (DWI) and T2-weighted fluid-attenuated inversion-recovery (FLAIR) imaging. DWI is very sensitive to infarcts, especially very early after stroke onset, where it proves superior to conventional MR and CT imaging [28]. FLAIR imaging represents a T2-weighted imaging protocol in which the signal from the cerebrospinal fluid is suppressed. FLAIR images provide high sensitivity for acute and subacute infarcts [29][30][31]. For lesion delineation, we used DWI within the first 48 h post-stroke and FLAIR sequences when imaging was conducted 48 h or later after stroke onset [29][30][31][32]. The mean time between stroke and imaging and clinical investigation for the thalamic stroke patients was 9.6 (SD 6.1, range 4-18) days in the group with pusher syndrome and 7.2 (SD 7.9, range 2-23) days in the group without the disorder (t = 0.56, p = 0.591, two-tailed). For the patients with extra-thalamic lesions, the mean time was 3.5 (SD 4.7, range 0-10) days in the group of pusher patients and 3.0 (SD 4.1, range 0-9) days in the group without pusher syndrome (t = 0.16, p = 0.878, two-tailed). Scans were obtained on a 1.5-T echoplanar imaging (EPI) capable system (Magnetom Sonata, Siemens, Erlangen, Germany). The FLAIR sequence was acquired with 72 axial slices (thickness 1 mm, interslice gap 1 mm), a field of view (FOV) of 192 × 256 mm², a matrix of 192 × 256 pixels, a repetition time (TR) of 9310 ms and an echo time (TE) of 122 ms. DWI was performed with a single-shot EPI spin echo sequence (TR 3200 ms; TE 87 ms; FOV 230 × 230 mm²; matrix 128 × 128 pixels; slice thickness 5 mm; gap 1 mm; b-values of 0, 500 and 1000 s/mm²). The boundary of the lesion was delineated directly on the individual MRI image for every single transverse slice using MRIcron software [33] (http://www.mricro.com/mricron).
In order to illustrate the common region of structurally lesioned brain tissue per group, both the scan and the lesion shape were then transferred into stereotaxic space using the spatial normalization algorithm provided by SPM2 (http://www.fil.ion.ucl.ac.uk/spm/). For determination of the transformation parameters, cost-function masking was employed [34]. In patients with thalamic strokes, left and right lesions had been found to affect homologous structures [13,15]. In the present analysis, we thus switched the left-sided thalamic lesions and relative perfusion maps to the right side in order to obtain a larger data basis for the subtraction analysis [35]. Hypoperfused brain tissue was visualized using perfusion-weighted imaging (PWI) [36]. Fifty repetitions of perfusion-weighted EPI sequences (TR 1440 ms; TE 47 ms; FOV 230 × 230 mm²; matrix 128 × 128; 12 axial slices; slice thickness 5 mm; gap 1 mm) were obtained with a 20 ml gadolinium diethylenetriamine pentaacetic acid (Gd-DTPA) bolus, power-injected at a rate of 3-5 ml/s. The amount of bolus used depended on the body weight of the subject. Time-to-peak (TTP) maps were calculated to characterize malperfusion. TTP represents the time at which the largest signal drop occurs in the signal intensity curve with respect to the first image. It is generated directly from the signal intensity curve and does not rely on deconvolution algorithms or the choice of adequate input functions [37,38]. In order to identify common regions of perfusion abnormality, the PWI volumes were spatially realigned and then transferred into stereotaxic space using the spatial normalization algorithm provided by SPM2. The normalized TTP maps were spatially smoothed with a Gaussian filter of 2 mm. For SPM normalization, we used a template featuring symmetrical left-right hemispheres [39]. Subsequently, voxel-wise inter-hemispheric comparisons were performed for each individual before extracting perfusion deficit volumes.
This method takes regional biases for perfusion parameters into account, as each region is compared voxel-by-voxel to its mirrored region, thereby comparing homologous regions and avoiding a region-specific bias [40]. For the normalized TTP maps, we subtracted from each voxel of the affected hemisphere its mirrored voxel in the unaffected hemisphere. For the determination of volumes with perfusion abnormalities we defined the threshold for TTP delays as ≥ 3.0 s. The TTP delay threshold was based on previous observations that TTP delays > 2.5 s in Wernicke's area were associated with language dysfunction [41], and that the general functional impairment of stroke patients correlated best with the volume of PWI abnormality for TTP delays ≥ 4 s [42]. The area of mismatch between DWI/FLAIR and PWI abnormalities, i.e. the zones of structurally intact but dysfunctional neural tissue, was determined by subtracting for each subject the normalized DWI/FLAIR map from the normalized TTP delay map. Finally, we compared perfusion abnormalities in the patient groups with and without pusher syndrome. For this purpose, the superimposed mismatch images of the groups without pusher syndrome were subtracted from the overlap mismatch images of the groups with pusher syndrome (details concerning the subtraction technique are given in ref. [35]).

Figure 1a presents the overlay plots of the normalized DWI/FLAIR data for the group of patients with thalamic lesions showing versus not showing pusher syndrome. In both groups, the maximum of lesion overlap centered on the thalamus. In order to identify those areas that were structurally intact but hypoperfused, i.e. the zones showing a mismatch between DWI/FLAIR and PWI abnormalities, we subtracted the normalized DWI/FLAIR map from the normalized TTP delay map for each subject.
The zones of perfusion abnormalities then were superimposed, creating a normalized overlap image showing the common regions of structurally intact (no DWI/FLAIR abnormalities) but abnormally perfused tissue in each group (Fig. 2). In the groups of patients with and without pusher syndrome, we found only a few voxels in single patients (dark blue colour in Fig. 2 indicates malperfusion in n = 1 subject) to be malperfused though structurally intact. We analysed a further marker for abnormal perfusion, namely the maximal signal reduction (MSR). While TTP is a parameter that depicts the arrival time of blood in the brain tissue, MSR measures the amount of blood flow reaching the different regions of the brain and is closely related to relative cerebral blood flow (rCBF) in stroke patients [43,44]. However, the analysis of malperfused tissue as depicted by the normalized MSR maps also did not reveal significant perfusion changes outside the area of structural damage. Thus, we conclude that the patients with pusher syndrome following thalamic lesions did not show a systematic involvement of dysfunctional brain areas in addition to the ones found to be structurally damaged.
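The mismatch definition (TTP delay ≥ 3.0 s outside the DWI/FLAIR lesion) and the group subtraction overlay can be sketched as follows; again an illustrative NumPy sketch with hypothetical array conventions, not the MRIcron/SPM workflow itself:

```python
import numpy as np

def mismatch_mask(ttp_delay_s: np.ndarray, lesion_mask: np.ndarray,
                  threshold_s: float = 3.0) -> np.ndarray:
    """Structurally intact but hypoperfused tissue: voxels with a TTP
    delay of at least `threshold_s` seconds (the >= 3.0 s criterion in
    the text) that lie outside the DWI/FLAIR lesion."""
    return (ttp_delay_s >= threshold_s) & ~lesion_mask

def group_subtraction(masks_pusher, masks_control) -> np.ndarray:
    """Voxel-wise overlap percentage in the pusher group minus the group
    without the disorder; positive values mark tissue malperfused more
    frequently in pusher patients."""
    pct_pusher = 100.0 * np.mean(np.stack(masks_pusher), axis=0)
    pct_control = 100.0 * np.mean(np.stack(masks_control), axis=0)
    return pct_pusher - pct_control
```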
Extra-thalamic brain lesions

Figure 1b presents the overlay plots of the normalized DWI/FLAIR data for the group of patients with extra-thalamic lesions showing versus not showing pusher syndrome. By using the anatomical parcellation of the MNI single-subject brain by Tzourio-Mazoyer et al. [45] implemented in MRIcron software [33] (http://www.mricro.com/mricron) and the Jülich probabilistic cytoarchitectonic atlas for the white matter fiber tracts [46,47], we found the center of lesion overlap for the patients with pusher syndrome affecting the insula, frontal and rolandic operculum, inferior frontal gyrus, pre- and postcentral gyri, as well as part of the corticospinal tract, inferior occipitofrontal and uncinate fasciculi. The lesions of the extra-thalamic group without pusher syndrome centered on the insula, rolandic operculum, superior temporal gyrus as well as part of the corticospinal tract.
In order to identify those areas that were structurally intact but hypoperfused, i.e. the zones showing a mismatch between DWI/FLAIR and PWI abnormalities, we subtracted the normalized DWI/FLAIR map from the normalized TTP delay map for each subject. The mismatch images of all individuals in the group of patients with pusher syndrome and in the group without the disorder were superimposed (Fig. 3a). To illustrate the common area of malperfusion in the patients with pusher syndrome in direct contrast to those malperfused areas that were present in the patients without the disorder, we subtracted the overlay mismatch images of the latter group from the overlap mismatch images of the pusher patient group. The resulting subtraction images specifically highlight structurally intact regions that were both typically hypoperfused in patients with pusher syndrome and typically spared in patients without the disorder (Fig. 3b). In the patients with pusher syndrome, we found the maximum of perfusion deficits in the structurally intact inferior frontal gyrus. In addition, the temporal white matter (x, 30; y, 1; z, 16) and the superior longitudinal fasciculus from (x, 31; y, −37; z, 32) to (x, 26; y, −40; z, 40) were affected.

Figure 3. Overlay plot of the subtracted superimposed mismatch images of the pusher group minus the mismatch images of the group without pusher syndrome. The percentage of overlapping areas of structurally intact but abnormally perfused tissue in the pusher group after subtraction is illustrated by five different colours, coding increasing frequencies from dark red (difference = 1-20%) to white (difference = 81-100%). Each colour represents 20% increments. The different colours from dark blue (difference = −1% to −20%) to light blue (difference = −81% to −100%) indicate regions abnormally perfused more frequently in patients without pusher syndrome than in the pusher group. Regions where there is an identical percentage of abnormal perfusion in both groups (= 0%) are not depicted in the figure. MNI z-coordinates of the transverse sections are given. IFG, inferior frontal gyrus; PreCG, precentral gyrus; SLF, superior longitudinal fasciculus; MTG, middle temporal gyrus; CB, callosal body; Wh.mat., white matter; IPL, inferior parietal lobule. doi:10.1371/journal.pone.0005737.g003
Discussion
We examined the functioning of the structurally intact cortical tissue in patients with thalamic and with extra-thalamic strokes showing versus not showing pusher syndrome in a continuous series of stroke patients admitted to the Center of Neurology. In the patients with pusher syndrome following thalamic lesions, we found no systematic involvement of dysfunctional brain areas in addition to the ones observed to be structurally damaged. Evidently, additional cortical malperfusion is not an indispensable prerequisite for thalamic patients to exhibit pusher syndrome. However, due to the limited number of 11 patients with thalamic lesions that could be investigated in the present study, we cannot exclude the possibility that a thalamic lesion combined with a perfusion deficit extending beyond the borders of the lesion territory may also be observed in association with pusher syndrome. Our results only demonstrate that this is evidently not a physiological necessity when patients with structural damage of the thalamus exhibit pusher syndrome.
In the group of patients with extra-thalamic lesions and pusher syndrome, the thalamus was neither structurally damaged nor malperfused. Rather, these patients showed small regions of abnormal perfusion in the structurally intact inferior frontal gyrus (IFG), middle temporal gyrus (MTG), precentral gyrus, inferior parietal lobule (IPL), and parietal white matter. Further, small parts of the callosal body, of the temporal white matter, and of the superior longitudinal fasciculus (SLF) were more frequently involved. These anatomically intact but malperfused structures thus appear to contribute to the appraisal of disturbed postural control in pusher syndrome following extra-thalamic lesions.
While we must, of course, consider the general limitations of the present methodology using perfusion-weighted MR imaging, our results may indicate on the one hand that the neural tissue in the posterior thalamus itself, rather than additional malperfusion in distant cortical areas, is integral to perceiving gravity and controlling upright body orientation. However, the analysis also showed that thalamic damage is not a conditio sine qua non for the manifestation of pusher syndrome. In patients showing this disorder after extra-thalamic lesions, the thalamus was neither structurally damaged nor malperfused. This indicates that the malperfused areas uncovered in the present study, as well as the structural damage in extra-thalamic areas identified in an earlier study, i.e. the insula, superior temporal gyrus, postcentral gyrus, and inferior parietal lobule [16], contribute to the network controlling upright body posture.
Interestingly, invasive studies in non-human primates [48][49][50] and in vivo studies in humans [51,52] showed that some of the cortical structures lesioned in pusher syndrome, namely the insular cortex and the postcentral gyrus [16], have direct connections with the ventroposterior and the lateral posterior nuclei of the posterior thalamus, i.e. with those thalamic structures typically found affected when patients exhibit pushing behaviour [13,15]. In detail, the axons arising in the ventroposterolateral and the ventroposteromedial nuclei project to the primary somatosensory cortex in the postcentral gyrus (Brodmann areas 3a, 3b, 1, and 2), to the secondary somatosensory cortex in the parietal operculum, and to the insula [49,53]. Hence, these thalamic and cortical structures that cause pusher syndrome when lesioned might represent those areas in which the afferent sensory graviceptive signals, required to control upright body position, are processed. This conclusion is strengthened by functional imaging data that argued for functional connectivity between these areas. For example, stimulation of the vagus nerve influenced neural activity in the thalamus, the insular cortex, and the postcentral gyrus, among other brain areas [54][55][56].
The insula, the inferior parietal lobule, the superior temporal gyrus, and the postcentral gyrus have also been recognized to be the substrate of visuo-vestibular processing [8,20,[57][58][59][60]. Unilateral lesions of the superior temporal and of the insular cortices (including the parieto-insular vestibular cortex [PIVC]) cause deviations of the perceived subjective visual vertical (SVV) and lateral imbalance of stance and gait [4,20,59,61]. The question thus arises whether those brain areas representing the visual-vestibular system might also be related to the control of upright body orientation studied in the present experiment.
Our study does not allow us to answer this question. At the behavioural level, these two processes clearly dissociate. While patients with vestibular disorders have abnormal tilts of the SVV but preserved perception of the postural vertical [3], pusher patients show the opposite pattern, namely a preserved perception of the SVV with a markedly tilted perception of their own body posture [9,11]. It remains for future studies to investigate whether these two behavioural processes are mediated by anatomically identical or closely related cortical structures in the IPL, STG, and postcentral gyrus.
To conclude, the present research supports the assumption that malfunction or lesion of cortical areas not involving the thalamus can indeed be associated with pushing behaviour. Vice versa, the data suggest that if the posterior thalamus is lesioned in pusher patients, it is the damage of the neural tissue in the posterior thalamus itself, and not necessarily malperfusion in distant cortical areas, that provokes the behavioural disorder. Thus, it seems as if the normal functioning of both extra-thalamic and posterior thalamic structures is integral to perceiving gravity and controlling upright body orientation in humans.
[18F]FDG-PET accurately identifies pathological response early upon neoadjuvant immune checkpoint blockade in head and neck squamous cell carcinoma
Purpose To investigate the utility of [18F]FDG-PET as an imaging biomarker for pathological response early upon neoadjuvant immune checkpoint blockade (ICB) in patients with head and neck squamous cell carcinoma (HNSCC) before surgery. Methods In the IMCISION trial (NCT03003637), 32 patients with stage II‒IVb HNSCC were treated with neoadjuvant nivolumab with (n = 26) or without (n = 6) ipilimumab (weeks 1 and 3) before surgery (week 5). [18F]FDG-PET/CT scans were acquired at baseline and shortly before surgery in 21 patients. Images were analysed for SUVmax, SUVmean, metabolic tumour volume (MTV), and total lesion glycolysis (TLG). Major and partial pathological responses (MPR and PPR, respectively) to immunotherapy were identified based on the residual viable tumour in the resected primary tumour specimen (≤ 10% and 11–50%, respectively). Pathological response in lymph node metastases was assessed separately. Response for the 2 [18F]FDG-PET-analysable patients who did not undergo surgery was determined clinically and per MR-RECIST v.1.1. A patient with a primary tumour MPR, PPR, or primary tumour MR-RECIST-based response upon immunotherapy was called a responder. Results Median ΔSUVmax, ΔSUVmean, ΔMTV, and ΔTLG decreased in the 8 responders and were significantly lower compared to the 13 non-responders (P = 0.05, P = 0.002, P < 0.001, and P < 0.001). A ΔMTV or ΔTLG of at least − 12.5% detected a primary tumour response with 95% accuracy, compared to 86% for the EORTC criteria. None of the patients with a ΔTLG of − 12.5% or more at the primary tumour site developed a relapse (median FU 23.0 months since surgery). Lymph node metastases with a PPR or MPR (5 metastases in 3 patients) showed a significant decrease in SUVmax (median − 3.1, P = 0.04). However, a SUVmax increase (median + 2.1) was observed in 27 lymph nodes (in 11 patients), while only 13 lymph nodes (48%) contained metastases in the corresponding neck dissection specimen. 
Conclusions Primary tumour response assessment using [18F]FDG-PET-based ΔMTV and ΔTLG accurately identifies pathological responses early upon neoadjuvant ICB in HNSCC, outperforming the EORTC criteria, although pseudoprogression is seen in neck lymph nodes. [18F]FDG-PET could, upon validation, select HNSCC patients for response-driven treatment adaptation in future trials. Trial registration https://www.clinicaltrials.gov/, NCT03003637, December 28, 2016. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05610-x.
Introduction
Immune checkpoint blockade (ICB) of programmed cell death protein 1 (PD-1) leads to objective responses in 13-17% of patients with recurrent or metastatic head and neck squamous cell carcinoma (HNSCC) and significantly improves their overall survival compared to chemotherapy [1,2]. Recent trials have shown that dual ICB of PD-1 and cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) can be safely administered prior to definitive surgery and leads to pathologically confirmed responses in patients with various solid tumours [3][4][5][6][7]. In HNSCC, neoadjuvant combined anti-PD-1 and anti-CTLA-4 ICB leads to a major pathological response (MPR) in 20-35% of patients [6,7]. Importantly, our group has recently demonstrated that none of the HNSCC patients with an MPR after neoadjuvant dual ICB has developed a tumour relapse, an outcome significantly superior to that of patients without an MPR [7]. While these results warrant validation, they could challenge the necessity of mutilating and functionally impairing surgery [8] and adjuvant (chemo)radiotherapy, and provide a rationale to investigate the feasibility of withholding or de-escalating standard-of-care treatment in patients with a deep pathological response early upon neoadjuvant ICB.
Such a response-driven treatment adaptation requires a reliable biomarker to identify individual patients with a pathological response in the neoadjuvant time frame. With its widespread availability and established position in the clinic, imaging-based response evaluation is an attractive option. However, evaluation of CT and MR images according to the response evaluation criteria in solid tumours (RECIST [9]) has been shown to underestimate the frequency and depth of pathological response after neoadjuvant ICB in various tumour types, including HNSCC [3,4,7,10]. [ 18 F]fluorodeoxyglucose (FDG)-PET-based metabolic response evaluation [11], on the other hand, has been shown to accurately identify pathological responses after two cycles of neoadjuvant ICB in patients with non-small cell lung cancer 3 to 5 weeks after the start of treatment [12]. In addition, our group has recently demonstrated that early primary tumour pathological responses to neoadjuvant ICB in HNSCC patients are accompanied by a decrease in primary tumour total lesion glycolysis (TLG) assessed per [ 18 F]FDG-PET in a 4-week timeframe [7]. Still, the exact value of [ 18 F]FDG-PET as an imaging biomarker for early pathological response to neoadjuvant ICB in HNSCC remains unclear, as does its susceptibility to false-positivity (i.e. pseudoprogression) caused by ICB-induced immune activity in the primary tumour or in involved or reactive lymph nodes [13]. Here, we report in detail on the [ 18 F]FDG-PET scans acquired in the context of the IMCISION trial, wherein patients with locoregionally advanced HNSCC were treated with two cycles of nivolumab (anti-PD-1) monotherapy or nivolumab plus ipilimumab (anti-CTLA-4) before definitive surgery [7]. We aim to describe the manifestations of metabolic response, metabolic progression, and metabolic pseudoprogression after neoadjuvant ICB and explore [ 18 F]FDG-PET scanning's ability to predict pathological response early upon immunotherapy in patients with resectable HNSCC.
Patients and trial interventions

IMCISION (NCT03003637) was an investigator-initiated, non-randomized, open-label phase Ib/IIa trial carried out at the Netherlands Cancer Institute (NKI), of which the methods and main results have been reported previously [7]. Briefly, adult patients with human papillomavirus (HPV)-related or HPV-unrelated, T2-T4, N0-N3b, resectable HNSCC of the oral cavity or oropharynx were eligible for inclusion. Patients with hypopharyngeal or laryngeal SCC were eligible too, but only 1 laryngeal HNSCC patient was accrued and had no scans available; this patient is not included in the current investigation. Patients with recurrent HNSCC were eligible if they were scheduled for curative surgery. All patients had a World Health Organization performance score of 0 or 1 and adequate bone marrow, liver, and kidney function. Critical exclusion criteria were distant metastases, a medical history of autoimmune disease, the use of immunosuppressive medication, or prior treatment with ICB targeting PD-1, PD-L1, or CTLA-4.

Patients underwent staging investigations at baseline (week 0), including tumour biopsy, laboratory investigations, MR imaging of the head and neck, ultrasound of the neck with fine-needle aspiration cytology, and total body [ 18 F]FDG-PET. Staging was performed according to the 8th edition of the American Joint Committee on Cancer (AJCC) staging manual. Enrolled patients received 2 cycles of neoadjuvant ICB. Figure 1 details trial treatments and timelines. The first 6 patients were treated with nivolumab (240 mg flat dose) in weeks 1 and 3; the subsequent 26 patients received nivolumab (240 mg flat dose) and ipilimumab (1 mg/kg) in week 1, followed by nivolumab (240 mg flat dose) in week 3. On-treatment MR imaging and, if additional consent was given, [ 18 F]FDG-PET were obtained at the end of week 4. Standard-of-care surgery was performed in week 5, or ultimately in week 6. Adjuvant (chemo)radiotherapy was performed if indicated according to institutional and national guidelines.
Defining response to immunotherapy: pathological and MR-RECIST evaluation
The pathological response of the primary tumour was determined by a head and neck pathologist (LS) on H&E-stained sections of the resected specimen. The proportion of viable tumour cells within the histologically identifiable tumour bed was quantified as a percentage and compared to the percentage of viable tumour cells within the baseline biopsy to compensate for a low baseline viable tumour cell count. The degree of pathological tumour regression was determined by calculating the percentage change in the primary tumour viable tumour cell percentage from the baseline biopsy to the on-treatment resected specimen. Patients with ≤ 10% residual viable tumour cells and 90-100% tumour regression at the primary tumour site at the time of surgery had a major pathological response (MPR). Patients with ≤ 50% residual viable tumour cells and 50-89% regression had a partial pathological response (PPR), and patients with any percentage of residual viable tumour cells but < 50% regression had no pathological response (NPR) [14]. Two patients did not undergo curative surgery and were thus unevaluable for pathological efficacy. For the present analyses, these patients were classified according to their MR-RECIST v.1.1 response on the on-treatment scan compared to the baseline scan.
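The cut-offs above can be expressed as a simple classifier; a sketch with hypothetical function and argument names:

```python
def pathological_response(residual_viable_pct: float,
                          regression_pct: float) -> str:
    """Classify the primary-tumour response using the cut-offs in the text:
    MPR -- <= 10% residual viable tumour and 90-100% regression;
    PPR -- <= 50% residual viable tumour and 50-89% regression;
    NPR -- otherwise (notably < 50% regression)."""
    if residual_viable_pct <= 10 and regression_pct >= 90:
        return "MPR"
    if residual_viable_pct <= 50 and 50 <= regression_pct < 90:
        return "PPR"
    return "NPR"
```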
To facilitate pathological correlation of [ 18 F]FDG-PETidentifiable lymph nodes after ICB, the head and neck surgeon marked the different cervical nodal levels during surgery using beads of different colours, according to our institutional neck dissection protocol. Lymph nodes were evaluated by LS. If a metastasis was present, ICB response was determined using the same cut-offs for pathological response at the primary tumour site (MPR, PPR, or NPR).
Overall, a patient with a primary tumour MPR, PPR, or, in the absence of pathological response evaluation, MR-RECIST-based response upon immunotherapy was called a responder. Having a (pathological) response in one or more lymph nodes in the absence of at least a partial response at the primary tumour site was not sufficient to be classified as a responder.
[ 18 F]FDG-PET image acquisition
PET scans were obtained at baseline and, if the patient consented, in the week prior to surgery, using a Gemini TF, TF Big bore, or Vereos PET/CT scanner (Philips, Cleveland). One patient underwent baseline scanning in the referring hospital on a Biograph M20 (Siemens, Munich). Patients were instructed to fast for at least 6 h before the scan. If the blood glucose level did not exceed 12 mmol/L, the patient received 190-280 MBq [ 18 F]FDG (according to BMI) intravenously. Sixty minutes later, 3D PET images were obtained with 3 min per bed position for the head-neck area and 2 min per bed position for the neck-thighs. For anatomical correlation, low-dose CT was acquired with parameters including 120 kV, 40 mAs with dose optimization, and a slice interval and thickness of 2 mm. All image sets from all scanners used were acquired and reconstructed according to EARL specifications to allow standardized quantification.

Fig. 1 Flow chart of trial treatments, timelines, and patients included in IMCISION and their [ 18 F]FDG-PET-based metabolic treatment response. Patients were treated with neoadjuvant nivolumab or nivolumab and ipilimumab (week 1), followed by nivolumab (week 3). Surgery was performed in week 5 or, ultimately, week 6. An evaluable baseline and on-treatment scan were obtained in 21 patients, of whom 6 had a major pathological response (MPR), 1 a partial pathological response (PPR), and 12 no pathological response (NPR) at their primary tumour site. Two patients did not undergo complete resection of their primary tumour and were classified according to MR-RECIST v.1.1, which was in agreement with physical examination in both cases: 1 RECIST-PR (responder) and 1 RECIST-PD (non-responder)
PET image analysis
The [ 18 F]FDG-PET images were evaluated jointly by two researchers (JV and WV), one of whom is a head and neck nuclear physician (WV), using OsiriX software v11.0.1 (Pixmeo, Switzerland). A spherical volume of interest containing the whole area of [ 18 F]FDG-activity was manually grown around the primary tumour, in which SUV max was determined. Next, SUV mean (the mean SUV of voxels within the volume of interest) was calculated in the subvolume with an intensity ≥ 50% of SUV max , as is the clinical standard in our institute. This volume also defined the metabolic tumour volume (MTV). Total lesion glycolysis (TLG) was calculated by multiplying MTV with SUV mean . For the on-treatment scan, 50% of the SUV max of the baseline scan was used to calculate MTV and TLG. SUV mean , MTV, and TLG could not be reliably calculated if the primary tumour could not be clearly visualized or accurately distinguished from background [ 18 F]FDG uptake. As determined at the immunotherapy symposium of the European Association of Nuclear Medicine 2017 annual meeting [13], either the PERCIST or the European Organization for Research and Treatment of Cancer (EORTC) PET study group's response criteria may be used in the assessment of immunotherapy response. According to routine clinical practice at our institute, we assessed response using the EORTC recommendations [15]: complete metabolic response (CMR) was defined as a complete resolution of FDG uptake in the primary tumour from baseline to on-treatment, partial metabolic response (PMR) as a > 25% decrease in primary tumour SUV max from baseline to on-treatment, and progressive metabolic disease (PMD) as a > 25% increase in primary tumour SUV max from baseline to on-treatment. As we determined pathological response in the primary tumour separately from the response in the lymph nodes, the appearance of new [ 18 F]FDG-avid lesions was not classified as PMD.
All patients not meeting the criteria for CMR, PMR, or PMD were classified as having stable metabolic disease (SMD).
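The SUV-derived metrics and the EORTC classification described above can be sketched as follows. This is a simplified illustration operating on a flat list of VOI voxel SUVs (function and argument names are ours); the actual analysis was performed in OsiriX:

```python
def pet_metrics(voi_suv, voxel_volume_ml, baseline_suvmax=None):
    """SUVmax, SUVmean, MTV and TLG from the SUVs of all voxels in a VOI.

    SUVmean and MTV are computed in the subvolume with intensity >= 50%
    of SUVmax; for an on-treatment scan, pass the baseline SUVmax as
    `baseline_suvmax`, mirroring the rule in the text. TLG = MTV * SUVmean.
    """
    suv_max = max(voi_suv)
    threshold = 0.5 * (baseline_suvmax if baseline_suvmax is not None else suv_max)
    sub = [v for v in voi_suv if v >= threshold]
    suv_mean = sum(sub) / len(sub) if sub else 0.0
    mtv = len(sub) * voxel_volume_ml  # in ml
    return {"SUVmax": suv_max, "SUVmean": suv_mean,
            "MTV": mtv, "TLG": mtv * suv_mean}

def eortc_response(suvmax_baseline, suvmax_on_treatment):
    """EORTC primary-tumour classification as applied in this study:
    CMR for complete resolution of uptake, PMR for a > 25% SUVmax
    decrease, PMD for a > 25% increase, SMD otherwise. New lesions are
    not scored here, since nodes were assessed separately."""
    if suvmax_on_treatment == 0:
        return "CMR"
    change_pct = 100.0 * (suvmax_on_treatment - suvmax_baseline) / suvmax_baseline
    if change_pct < -25:
        return "PMR"
    if change_pct > 25:
        return "PMD"
    return "SMD"
```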
In case a lymph node, with or without metastasis, showed notable metabolic activity on the baseline or on-treatment scan (or both), SUV max , SUV mean , MTV, and TLG were determined. In case of metabolic activity on only the baseline or on-treatment scan, background metabolism in the same node on the other scan was measured for reference. All lymph nodes detected by [ 18 F]FDG-PET prior to treatment were clinically diagnosed by ultrasound and, if needed, fine needle aspiration cytology. If an [ 18 F]FDG-PET-identifiable lymph node resided in a level where at least one node was pathologically tumour-positive, that avid lymph node was assumed to be tumour-positive. A lymph node detected by [ 18 F]FDG-PET was considered tumour-negative only if all dissected nodes in that particular level were pathologically tumour-negative. We defined a lymph node as pseudoprogressive if there was an increase in SUV max from baseline to on-treatment in the absence of HNSCC metastasis upon pathological examination.
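The pseudoprogression definition lends itself to a one-line rule; a hypothetical sketch:

```python
def is_pseudoprogressive(suvmax_baseline: float,
                         suvmax_on_treatment: float,
                         metastasis_on_pathology: bool) -> bool:
    """Pseudoprogression as defined in the text: an increase in SUVmax
    from baseline to on-treatment in the absence of HNSCC metastasis
    upon pathological examination."""
    return suvmax_on_treatment > suvmax_baseline and not metastasis_on_pathology
```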
Statistical considerations
All statistics were descriptive. SUV max , SUV mean , MTV, and TLG values at baseline and on-treatment and the (percent) change between the two time points are reported as medians with their interquartile range (IQR). Median values are compared between responders and non-responders using a Wilcoxon rank-sum test. Within the same patient, baseline and ontreatment values were compared using a Wilcoxon signed-rank test. Time to progression (TTP) was defined as the time from surgery to the first local, regional, or distant HNSCC relapse. The 2 patients that did not undergo surgery were thus excluded from TTP analysis. Overall survival (OS) was defined as the time between the first ICB dose and death from any cause (i.e. including the 2 patients that did not undergo surgery). Survival estimates were made using the Kaplan-Meier method; responders and non-responders are compared using a log-rank test. Median follow-up time was calculated using the inverse Kaplan-Meier method. The performance of a PET parameter as a diagnostic test for detecting response was assessed per receiver operating characteristic (ROC) and the area under the ROC curve. All tests were two-sided, and a P-value < 0.05 was considered statistically significant. All statistical analyses were performed in SPSS Statistics version 25.0 (IBM Corp, Armonk, NY, USA) and GraphPad Prism version 9.0.0 (GraphPad Software, San Diego, CA, USA).
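As an aside, the area under the ROC curve used to assess a PET parameter as a diagnostic test can be computed from the rank-sum (Mann-Whitney) identity; a self-contained sketch (the study itself used SPSS and GraphPad Prism):

```python
def roc_auc(scores_responders, scores_non_responders):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a randomly chosen responder scores higher than
    a randomly chosen non-responder, counting ties as one half."""
    wins = 0.0
    for r in scores_responders:
        for n in scores_non_responders:
            wins += 1.0 if r > n else (0.5 if r == n else 0.0)
    return wins / (len(scores_responders) * len(scores_non_responders))
```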
Patient characteristics, pathologic and metabolic treatment response
[ 18 F]FDG-PET scans were obtained at baseline and a median of 24 (IQR 3) days after the start of ICB in 21 of 32 IMCISION patients. Thirteen patients underwent imaging on the same scanner at both time points. Different scanners from the same manufacturer were used in 7 patients, while 1 patient was scanned on an EARL-calibrated machine from another manufacturer in the referring hospital. Definitive surgery was performed in 19 of these 21 patients, a median of 3 days (IQR 0) after the on-treatment scan; 2 patients were ineligible for surgery due to progressive disease or a synchronous incurable oesophageal carcinoma.
Twenty of 21 PET-analysable patients had HPV-unrelated HNSCC, and 18 had an oral cavity carcinoma. Six patients (of whom 5 non-responders) had recurrent disease after previous concurrent cisplatin-or cetuximab-radiotherapy (3 patients), surgery with postoperative radiotherapy (1 patient), or surgery only (2 patients). Detailed baseline and neoadjuvant treatment characteristics are shown in Table 1.
Seven of 21 patients had a pathological response at their primary tumour site, including 6 patients with an MPR and one with PPR. Twelve patients had no pathological response (Fig. 1). Of the two patients who did not undergo surgery, one had apparent clinical primary tumour regression and a partial response based on MR-RECIST v.1.1 (grouped with the pathological responders), and one had biopsyproven MR-RECIST progressive disease (grouped with the pathological non-responders). In all, 8 patients (6 MPR, 1 PPR, 1 MR-RECIST-based response) were responders, and 13 patients (12 NPR, 1 clinical non-response) were nonresponders (Fig. 1).
EORTC metabolic response assessment underestimates incidence and depth of primary tumour pathological response early upon neoadjuvant immunotherapy
Five of the 8 responders had a CMR (2) or PMR (3) after neoadjuvant ICB according to EORTC criteria. Two responding patients had SMD and 1 had PMD (Fig. 1). A waterfall plot illustrating individual patients' ΔSUV max is shown in Fig. 2a.
The responder marked with b in Fig. 2a had a SUV max increase of 117%, yet the surgical specimen revealed an MPR with 94% cancer cell regression surrounded by a dense population of infiltrating immune cells (Fig. 2b). The two responders with SMD, one of whom had a PPR and is marked with c in Fig. 2a, demonstrated a decrease in the volume of metabolic activity at the primary tumour site in the absence of a SUV max decrease (Fig. 2c).
Volumetric [ 18 F]FDG-PET metabolic primary tumour response assessment accurately identifies patients responsive to neoadjuvant ICB and favourable survival
The median baseline and on-treatment primary tumour SUV max , SUV mean , MTV, and TLG and their percentage change from baseline to on-treatment are shown in Table 2.
The medians of all parameters decreased from baseline to on-treatment in the responders' group, whereas they increased in non-responding patients. The most profound change was observed in MTV and TLG, with a median of − 74% and − 77% for responding patients, and + 85% and + 108% for non-responding patients, respectively. In a paired analysis of baseline and on-treatment scans of individual patients, SUV max did not change significantly in the responders' group (P = 0.2). In contrast, SUV mean (P = 0.04) and MTV and TLG (both P = 0.02) decreased significantly in all patients in the responders' group (Fig. 3). Patients in the non-responding group had a significant increase in SUV max (P = 0.04), MTV (P = 0.002), and TLG (P = 0.002, Fig. 3). One patient without a primary tumour pathological response, however, showed a decrease in SUV max (− 22%), SUV mean (− 7%), MTV (− 47%), and TLG (− 51%, Supplementary Fig. 1). While not meeting pathological response criteria, this patient did have 22% pathological primary tumour regression and a major pathological response in the largest lymph node metastasis. The percent change in MTV and TLG as diagnostic tests for a pathological or MR-RECIST-based response early upon immunotherapy outperformed the EORTC criteria in terms of accuracy (Table 3). Using a ΔTLG of − 12.5% as a threshold, patients with a TLG-based metabolic response at the primary tumour site who underwent surgery had a superior TTP compared to patients without a TLG-based metabolic response (Fig. 4a) at a median follow-up of 23.0 months since surgery (log-rank P = 0.06). OS since the start of ICB did not differ between the ΔTLG groups (log-rank P = 0.3, Fig. 4b). Of note, SUV mean , MTV, and TLG of the tumour of the patient with an MPR and a 117% increase in SUV max (shown in Fig. 2b) could not be accurately calculated from the baseline scan due to poor distinction from surrounding normal tissue avidity, and were thus excluded.
These data indicate that [ 18 F]FDG-PET and particularly MTV and TLG may be accurate and early surrogate biomarkers for primary tumour ICB response and favourable TTP upon neoadjuvant immunotherapy prior to extensive surgery in HNSCC.
Cervical lymph node metabolic response assessment is troubled by pseudoprogression
Pathological assessment of the cervical lymph nodes was performed in the 19 patients undergoing surgery (7 primary tumour responders, 12 non-responders). As reported previously [7], response to neoadjuvant ICB at the lymph node metastatic sites was not always congruent with the ICB response at the primary tumour site. In all, only 5 of the 33 (15%) pathologically tumour-positive lymph nodes, shared among 3 patients (1 with a primary tumour MPR and 2 with a primary tumour NPR), showed evidence of a PPR or MPR.
Twenty-one of 33 pathologically confirmed lymph node metastases could be reliably identified on [ 18 F]FDG-PET (Fig. 5a). The 5 metastases with a PPR or MPR (example in Fig. 5b) showed a significant decrease in SUV max (median 3.1, Wilcoxon signed rank P = 0.04). Three other lymph node metastases had a decrease in SUV max (Fig. 5c).
Fig. 5 legend: b A patient with cT4aN2c HNSCC of the left alveolar process of the mandible showed PMD at the primary tumour site (SUV max +35%, SUV mean +35%, MTV +144%, and TLG +230%). Two ipsilateral level 2 lymph nodes (arrows) showed a SUV max decrease from 8.8 to 5.7 (−35%) and from 6.6 to 5.6 (−15%). Correlative keratin 14-stained pathology slides revealed one node with disturbed architecture but little viable tumour (12× image), corresponding to an MPR; the other level 2 lymph node showed a PPR (not shown). c Waterfall plot showing the absolute change in SUV max from baseline to on-treatment for pathologically proven tumour-negative lymph nodes; bars marked with d and e ii are further detailed under d and e ii , respectively. d A patient with a SUV max increase from 3.4 to 5.3 (+56%) in a left (contralateral) level 1b lymph node after neoadjuvant ICB (arrows). The correlative H&E slide of the left level 1b neck dissection specimen revealed no lymph node metastases; this patient's primary tumour showed a partial pathological response (shown in Fig. 2c). e Level 3 transversal [ 18 F]FDG-PET and keratin 14-stained pathology images of the same patient shown under b. Two level 3 nodes are detected: one left (ipsilateral, marked e i ) with an SUV max increase from 4.0 to 8.5, and one right (contralateral, marked e ii ) with an SUV max increase from 4.1 to 8.8. Correlative keratin 14-stained pathology slides showed a metastasis in level 3 left without evidence of ICB response (e i ), while none of the resected right level 3 nodes contained tumour (e ii ).
Fourteen tumour-negative lymph nodes showed an increase in SUV max from baseline to on-treatment (median +2.1, Wilcoxon signed rank P = 0.002) and were considered pseudoprogressive (example in Fig. 5d). In one patient, a 1.9 increase in SUV max in a contralateral lymph node led to escalation of surgery to include a bilateral neck dissection: histopathology revealed no contralateral metastases (Fig. 5d). Median baseline and on-treatment SUV max and their difference (absolute and per cent) for all 27 lymph nodes with a SUV max increase after neoadjuvant ICB (13 tumour-positive and 14 tumour-negative) are shown in Table 4. The on-treatment SUV max of the 13 tumour-positive nodes (7.2, IQR 4.1) was significantly higher than that of the 14 tumour-negative nodes (4.8, IQR 1.1, P = 0.02). Still, distinguishing between true- and pseudoprogression in the cervical lymph nodes on [ 18 F]FDG-PET is problematic, not least because these phenomena may be present simultaneously within the same patient and irrespective of the ICB responses in other lymph node metastases and the primary tumour, as illustrated in Fig. 5e.
Due to insufficiently avid and bulky disease at the metastatic sites, MTV and TLG could only be determined at baseline and on-treatment in 2 lymph node metastases with a response (of which one is shown in Fig. 3f) and greatly decreased in both nodes (MTV: − 99 and − 85%, TLG: − 99 and − 88%). MTV and TLG increased from baseline to on-treatment in the 2 non-responsive metastases for which they could be calculated (MTV: + 167 and + 533%, TLG: + 178 and + 748%).
Discussion
Immune checkpoint blockade has become standard of care for patients with recurrent or metastatic HNSCC, and recent trial data demonstrate that ICB may be safely and effectively integrated into curative treatment as neoadjuvant therapy [6,7,16]. The relatively low major pathological response rate of 20-35% after neoadjuvant dual ICB with anti-PD-1 and anti-CTLA-4 in HNSCC, however, underlines the need to select patients likely to respond [6,7]. While several pre-treatment biomarkers, reviewed in [17], have been proposed, only the tumour PD-L1 combined positive score has entered clinical practice in the R/M HNSCC setting [2]. In the absence of reliable predictive biomarkers prior to treatment, on-treatment biomarkers identifying individual patients with a clinically relevant response early upon ICB may guide decision-making in future clinical trials investigating response-driven treatment adaptation. Our research suggests that MTV and TLG based on [ 18 F]FDG-PET are promising surrogate biomarkers for primary tumour pathological response and favourable disease-specific clinical outcome after neoadjuvant ICB in HNSCC and could, upon validation in an independent series, select patients for response-driven treatment adaptation in future trials.
Treatment response assessment per RECIST criteria [9] based on CT or MR imaging has long been the gold standard for objectifying ICB response in a palliative setting and is a widely reported endpoint in clinical trials. For [ 18 F]FDG-PET-based response evaluation, the EORTC [15] and PERCIST [11] criteria have been formulated, where metabolic response is determined based on a decrease in SUV max or SUL peak , respectively. ICB's mechanism of action, recruiting host immune cells to infiltrate and clear a tumour, may cause a lesion to remain metabolically stable or even progress while it is, in fact, responding to treatment. To overcome pseudoprogression, the iRECIST criteria [18] for CT or MRI and the iPERCIST criteria [19] for [ 18 F]FDG-PET were developed, where an additional scan at a later time point is required to confirm or refute actual progressive disease. However, the neoadjuvant time frame does not offer the months needed to perform a reliable first RECIST or EORTC/PERCIST-based response assessment, let alone the additional confirmatory scan required per iRECIST or iPERCIST. Consequently, objective response rates assessed per RECIST have been shown to underestimate both the depth and incidence of pathological responses to neoadjuvant ICB in melanoma, colon cancer, non-small cell lung cancer, and HNSCC [3,4,6,7,10]. Therefore, unidirectional RECIST tumour measurements performed on CT or MR imaging seem unsuitable for accurately predicting an early pathological response to ICB treatment.
We have herein shown that while [ 18 F]FDG-PET response assessment per EORTC criteria identifies some responders, it still yields an underestimation of the pathological response and results in pseudostable or pseudoprogressive disease at the primary tumour site in 3 of 8 HNSCC patients (38%) in IMCISION. In a trial investigating neoadjuvant sintilimab (anti-PD-1) in patients with resectable non-small cell lung cancer, Tao et al. noted that one patient with a PPR was classified as having PMD per PERCIST, while MTV and TLG did decrease, by 60 and 50%, respectively [12]. Similarly, two other reports on metabolic ICB response assessment in patients with metastatic non-small cell lung cancer treated with nivolumab have shown that a decrease in TLG outperforms SUV max when used as an early (2-4 weeks after therapy initiation) biomarker for efficacy and progression-free survival [20,21]. We herein propose that a decrease in primary tumour MTV and TLG accurately predicts primary tumour pathological response 4 weeks after the start of neoadjuvant ICB in HNSCC patients. Importantly, we further show that none of the patients with a decrease in primary tumour MTV or TLG has developed a tumour relapse after 23 months of post-surgical follow-up, superior to HNSCC patients without an MTV- or TLG-based metabolic response, and irrespective of the presence or absence of a pathological response in these patients' lymph node metastases. Using MTV and TLG as biomarkers for ICB response early on-treatment has limitations. First, accurate computation of MTV and TLG requires a tumour bulk that can be accurately demarcated from [ 18 F]FDG-avidity in the surrounding tissue. While this is in general not a problem in the locally advanced HNSCC setting, one patient in the present trial had a barely avid T2 tumour of the cheek mucosa, for which MTV and TLG could not be calculated at baseline.
Similarly, metastatic HNSCC in cervical lymph nodes is often not sufficiently bulky and avid. Second, while MTV and TLG are more accurate than SUV max , they too are most likely not free from false-negativity through immune-induced pseudostable or -progressive disease, as has been shown in non-small cell lung cancer [21]. While we were unable to provide quantitative evidence in this research, the patient in whom baseline MTV and TLG were incalculable showed visually evident metabolic progression after treatment yet had a major pathological response at the primary tumour site. Finally, from a practical point of view, adherence to an intensive protocol encompassing ICB and repeated metabolic response assessment in the short neoadjuvant time frame may be challenging for some patients with advanced HNSCC, a patient population characterized by alcohol and tobacco abuse and a low socio-economic status [22,23].
Metabolic cervical lymph nodal pseudoprogression, herein defined as an increase in nodal avidity after neoadjuvant ICB in the absence of tumour, was seen in 14 of the 27 evaluable nodes (52%) in the present trial. Schoenfeld et al. reported that cervical lymph node dissection after neoadjuvant ICB (nivolumab or nivolumab + ipilimumab) in HNSCC showed no tumour in 7 of 15 patients (47%) with an increase in lymph nodal SUV max of 6 or more, and in as many as 14 of 15 (93%) with a nodal SUV max increase of 3 or more. However, they performed the second [ 18 F]FDG-PET scan at a relatively early time point: a median of 14 days after ICB initiation, compared to 24 days in our study [6]. Cervical lymph nodal pseudoprogression puts patients at risk of unjustified expansion of the cervical dissection, as was the case in one patient. Using a more tumour-specific radiotracer like 3′-deoxy-3′-[ 18 F]fluorothymidine (FLT, a proliferation tracer) may help distinguish between true- and pseudoprogression and has previously been proven an early indicator of a favourable outcome after (chemo)radiotherapy in HNSCC [24,25]. A small pilot study in stage IV melanoma patients treated with pembrolizumab (anti-PD-1) suggests that FLT-PET-based response assessment in week 6 accurately predicts RECIST-based response in week 12, but its utility as a biomarker to separate pseudo- from truly progressive disease in ICB for HNSCC is unknown.
In conclusion, our data suggest that [ 18 F]FDG-PET-based, primary tumour volumetric metabolic response assessment may be an early and accurate surrogate biomarker to identify individual HNSCC patients with a clinically relevant pathological response to neoadjuvant nivolumab or nivolumab + ipilimumab. In addition, an MTV or TLG decrease seems a promising tool to identify individual patients who are very unlikely to develop a tumour relapse, irrespective of mixed responses or pseudoprogression in the cervical lymph nodes, and may therefore serve as an on-treatment surrogate biomarker to guide response-driven treatment adaptation in future trials.
Hyperoside promotes pollen tube growth by regulating the depolymerization effect of actin-depolymerizing factor 1 on microfilaments in okra
Mature pollen germinates rapidly on the stigma, extending its pollen tube to deliver sperm cells to the ovule for fertilization. The success of this process is an important yield-limiting factor. The flavonoid content increases significantly during pollen germination and pollen tube growth, which suggests it may play an important role in these processes. However, the specific mechanism of this involvement has been little researched. Our previous research found that hyperoside can prolong the flowering period of Abelmoschus esculentus (okra), but its specific mechanism is still unclear. Therefore, in this study, we focused on the effect of hyperoside in regulating the actin-depolymerizing factor (ADF), which further affects the germination and growth of pollen. We found that hyperoside can prolong the effective pollination period of okra by 2–3-fold and promote the growth of pollen tubes in the style. Then, we used Nicotiana benthamiana cells as a research system and found that hyperoside accelerates the depolymerization of intracellular microfilaments. Hyperoside can promote pollen germination and pollen tube elongation in vitro. Moreover, AeADF1 was identified out of all AeADF genes as being highly expressed in pollen tubes in response to hyperoside. In addition, hyperoside promoted AeADF1-mediated microfilament dissipation according to microfilament severing experiments in vitro. In the pollen tube, the gene expression of AeADF1 was reduced to 1/5 by oligonucleotide transfection. The decrease in the expression level of AeADF1 partially reduced the promoting effect of hyperoside on pollen germination and pollen tube growth. This research provides new research directions for flavonoids in reproductive development.
Introduction
In a suitable environment, the pollen on the stigma germinates and grows pollen tubes; then, the pollen tube extends toward the ovule. As pollen tubes pass rapidly through the style, sperm cells are transferred to the ovule for fertilization 1 . Flavonoids play a significant role in the growth of pollen tubes 2 . Chalcone synthase (CHS) is a key enzyme in the flavonoid synthesis pathway and plays an important role in the synthesis of flavonoids. A study on CHS mutants of maize and petunia showed that pollen deficient in flavonoids failed to produce functioning pollen tubes. By applying specific flavonols to pollen or the stigma during pollination, the defect could be overcome and fertility restored 3,4 . The Arabidopsis CHS mutant (tt4) also showed reduced seed setting and reduced pollen germination in vitro 5 . These reports indicate that flavonoids may be involved in pollen germination and pollen tube growth. However, the specific mechanism by which flavonoids affect pollen germination and pollen tube growth is unclear. Hyperoside, also known as quercetin-3-O-β-D-galactopyranoside, is a flavonol glycoside compound 6 . Our previous research showed that hyperoside accumulates to high levels in Abelmoschus esculentus (okra) during flowering, acting as a signalling substance that affects the length of the flowering period 6,7 .
For some plants, such as Epiphyllum oxypetalum, Opuntia stricta, and A. esculentus, the flowering period can be maintained for only a few hours. Such flowers often possess high ornamental and medicinal value. The petals and seeds of A. esculentus, a medicinal herb of the Malvaceae (mallow) family, have edible and medicinal compounds 8 . Medicinal ingredients are abundant in A. esculentus, especially in the petals 9 , and the concentration of active compounds in the petals peaks before they begin to wilt. Although most studies have focused on separating and extracting medicinal ingredients from A. esculentus flowers, it would be valuable to prolong the flowering period and improve the efficiency of fertilization during a short period of time [10][11][12] . Hyperoside is a major pharmacologically active component in A. esculentus. Many studies have already shown that hyperoside has pharmacological effects, such as relieving oxidative stress injury in cells and anticancer properties [13][14][15] .
Pollen tube growth is an important part of the fertilization process, and the actin cytoskeleton plays a critical role in pollen tube growth by supporting organelle movement 16 . The cytoskeleton is a protein fiber network composed of microtubules, microfilaments, and intermediate filaments. Microfilaments, also called filamentous actin (F-actin), are polymers of globular actin (G-actin) monomers 17 . In addition to actin fibers, many microfilament-binding proteins are involved in the microfilament system. These microfilament-binding proteins participate in the formation of higher-order microfilament fibers, regulate the dynamic assembly of actin fibers, and perform specific functions [18][19][20] . Actin dynamics (i.e., assembly and disassembly) exhibit circadian regulation during pollen tube development 21,22 . Actin-depolymerizing factor (ADF) is one of the most widely studied of these proteins and can combine with monomeric or fibrous actin to accelerate the dissociation of actin subunits. There are 11 ADF genes in Arabidopsis, and different classes exhibit opposing biochemical properties. AtADF1 belongs to the class I ADFs and has an F-actin depolymerization function, with depolymerization ability increasing at higher ADF concentrations. In contrast, AtADF5 belongs to the class III ADFs and has an F-actin-binding function [23][24][25] .
In this research, we started from the phenomenon that hyperoside has a positive regulatory effect on pollen tube growth and found that hyperoside promotes the depolymerization of microfilaments in a Nicotiana benthamiana cell system. Furthermore, we found that AeADF1 is highly expressed in pollen in response to hyperoside and plays a significant role in pollen germination and pollen tube growth by severing actin. This research revealed that hyperoside promotes the severing efficiency of the AeADF1 protein on microfilaments, thereby promoting pollen germination and pollen tube growth. This research provides new directions for exploring the mechanism of flavonoids in other plants during flower development.
Results
Hyperoside increased the cleavage rate of microfilaments in plant cells

We constructed a fusion expression vector of Lifeact and eGFP and expressed it in N. benthamiana cells by transient transfection. The morphology of microfilaments in N. benthamiana cells sprayed with buffer solution only, sprayed with buffer solution containing 0.1 mM hyperoside, or not sprayed with any solution was examined under a laser confocal microscope. The results showed that the morphology of the microfilaments in the N. benthamiana cells sprayed with buffer solution was consistent with that of the control, and the microfilaments were not depolymerized within 10 min. In the N. benthamiana cells sprayed with hyperoside, an obvious process of microfilament depolymerization was observed, already evident at 4 min 17 s post hyperoside spraying (Fig. 1A, Supplemental Movies 1-3). To further quantify this process, we measured the relative fluorescence intensity in a similarly sized area in the control and both treatments and calculated the depolymerization time of the microfilaments per micrometer to indicate the depolymerization speed (Fig. 1B, C). These results are consistent with our observation that hyperoside accelerates the depolymerization of cell filaments.
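The quantification used here reduces to a simple rate: the time needed to depolymerize one micrometer of filament, where a smaller value means faster depolymerization. A minimal sketch (the function name and the worked values are hypothetical, not the measured data from Fig. 1):

```python
def depolymerization_time_per_um(filament_length_um: float, elapsed_s: float) -> float:
    """Seconds required to depolymerize one micrometer of filament.

    Smaller values indicate faster depolymerization.
    """
    return elapsed_s / filament_length_um


# Hypothetical example: a 10 um stretch of filament disappearing over 257 s
# (4 min 17 s) corresponds to 25.7 s per micrometer.
print(depolymerization_time_per_um(10.0, 257.0))  # 25.7
```

Comparing this quantity between the buffer and hyperoside treatments then expresses the acceleration of depolymerization as a single number per cell.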
Hyperoside prolongs the effective pollination period of okra
Since the depolymerization of filaments in cells is closely related to flower development 26 , we further explored whether hyperoside can affect flower development. As the effects of hyperoside on different plants may differ, we did not verify them on N. benthamiana. We chose okra, which has only a one-day flowering period, as our experimental object to facilitate observation. In this study, we found that the exogenous application of hyperoside can prolong the effective pollination period of A. esculentus, with a flowering period from 9:00 to 16:00, or approximately 7 h ( Fig. 2A-C). To describe the prolongation of the effective pollination period in detail, the average opening angle and the average interior diameter of flowers during the flowering period were measured. Compared with the control and buffer groups, the average opening angle and the average interior diameter of flowers increased continuously for ~3 h after spraying hyperoside (Fig. 2D, E). The length of the effective pollination period is an important factor affecting plant reproduction, as is the elongation of pollen tubes. Therefore, we further tested whether hyperoside has an effect on the elongation of pollen tubes. A. esculentus is a typical self-pollinated plant in which self-pollen is used to pollinate the pistils. We observed the length of the pollen tube in the style with aniline blue staining. The pollen tube length in the control only reached 70% of the tube length in the hyperoside treatment group. The pollen tube growth of the pollen treated with the control and buffer solutions was significantly slower than that treated with hyperoside (Fig. 2F). These results indicated that hyperoside treatment can positively affect both the length of the effective pollination period and pollen tube growth.
Among the genes encoding actin-binding proteins, AeADF1 and AeADF5 respond most strongly to hyperoside

Exogenous application of hyperoside can prolong the effective pollination period of okra, and the germination and elongation of the pollen tube during the effective pollination period are the most important factors affecting pollination. Therefore, we further examined whether hyperoside affects the elongation of pollen tubes in vitro. To determine whether hyperoside affects pollen germination and pollen tube growth, we measured the pollen germination rate and the average length of pollen tubes in vitro. In the pollen germination test, we found that the germination rate of untreated and buffer-treated pollen grains was significantly lower than that of hyperoside-treated pollen grains ( Fig. 3A-C).
To complete fertilization, the pollen tube must enter the ovule. Whether actin filaments and actin-binding proteins play a significant role in the polar growth of pollen tubes is a popular research topic 27,28 . Since the A. esculentus genome has not been sequenced, we designed degenerate primers to clone the full-length sequences of the 6 ADF genes and then designed specific primers to detect their transcript levels. To determine which AeADF gene responded most strongly, we cloned all six AeADF genes from flowers. In total, 6 full-length AeADF genes were obtained, and the transcript levels of AeADF1 and AeADF5 were the most upregulated, up to tenfold, after spraying with hyperoside ( Fig. 3D).
Isolation and bioinformatic analysis of AeADF1
By comparing the protein sequences of AeADF1, AeADF5, and AtADF1, it was found that the alpha helix motifs present in AtADF1 that play a significant role in the microfilament-severing function also exist in the AeADF1 protein, while the structures of the AeADF5 and AtADF1 proteins are quite different. At the same time, studies by Dong et al. showed that among the homologous genes in Arabidopsis, AtADF5 has a microfilament-polymerizing function, and AtADF1 has a microfilament-depolymerizing function 25,29,30 , so it is speculated that AeADF1 may have a microfilament-severing function (Fig. S1). To gain insight into the function of the identified plant ADF proteins, we constructed homology models using the SWISS-MODEL server (https://swissmodel.expasy.org/interactive). The intensive mode of SWISS-MODEL uses multi-template modeling for higher accuracy. The AeADF1 model with the highest level of similarity was selected (Fig. 4A). Multiple sequence alignment of AeADF1 with ADFs from other plant species (Arabidopsis thaliana, Malus domestica, Camelina sativa, Nicotiana tomentosiformis, Rosa chinensis and Capsicum annuum) indicated the presence of a conserved ADF domain in the AeADF1 protein, which is a common characteristic of the ADF family (Fig. 4B). Preliminary studies found that the expression of AeADF1 in okra is increased under hyperoside treatment. Many literature reports have proposed that the ADF protein plays an important role in the process of microfilament depolymerization [31][32][33][34] , suggesting that AeADF1 has the same function. To explore the role of the AeADF1 protein in pollen development, the gene expression of AeADF1 was determined using semiquantitative RT-PCR and quantitative RT-PCR in different flower organs. We used whole flower cDNA as a template to clone AeADF1 by RACE-PCR. The analysis revealed that AeADF1 was specifically expressed in the pollen tube.
When flowers were sprayed with hyperoside, the transcript level of AeADF1 increased more in pollen than in petals (Fig. 4C, D).
AeADF1 colocalizes with F-actin filaments
To examine the ability of the AeADF1 protein to act on actin filaments, we first determined whether it colocalizes with F-actin. We used Agrobacterium transient transfection to coexpress AeADF1-GFP and Lifeact-mCherry in N. benthamiana leaves. Particle bombardment was used to transfect the recombinant plasmids of AeADF1-eGFP and Lifeact-mCherry into onion (Allium cepa) epidermal cells and okra pollen. The colocalization of these proteins was observed under a laser confocal microscope. When AeADF1-GFP was expressed together with Lifeact-mCherry in okra pollen, onion epidermal cells, and N. benthamiana leaf cells, the signals colocalized with F-actin ( Fig. 5A-C). AeADF1-GFP colocalized with free Lifeact-mCherry, indicating that they likely interact with F-actin. Pearson's coefficient indicated the degree of colocalization in the cell (Fig. 5D). These results suggest that AeADF1 and F-actin colocalize in pollen and other cells.
Fig. 3 legend (Screening of ADF genes in pollen that respond to hyperoside): A Pollen tube germination in the control, buffer, and hyperoside (0.1 mM) treatment groups. B The length of the pollen tube and C the germination rate when treated with buffer or hyperoside for 2 h or not treated. Each treatment had three biological replicates, and error bars display the standard error of the sample. *P < 0.05, **P < 0.01 (Student's t test). D The gene expression of AeADF1-6 after treatment with buffer or hyperoside (0.1 mM) for 2 h or after no treatment.
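The Pearson's coefficient reported for Fig. 5D is the ordinary correlation between paired pixel intensities of the two fluorescence channels; values near 1 indicate strong colocalization. A minimal sketch (the pixel values below are hypothetical, not the imaging data):

```python
import math


def pearson(xs, ys):
    """Pearson correlation between paired pixel intensities of two channels."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Perfectly co-varying channels give a coefficient of 1.0:
green = [10, 40, 80, 120]   # hypothetical GFP channel intensities
red = [5, 20, 40, 60]       # hypothetical mCherry channel intensities
print(round(pearson(green, red), 3))  # 1.0
```

In practice the coefficient is computed over all pixels of a region of interest, often after background subtraction, rather than over a handful of values as here.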
AeADF1 F-actin-severing activity depends on hyperoside

Based on the above results, we speculated that AeADF1 might play a role in pollen tube growth by regulating F-actin cleavage. To further elucidate the function of AeADF1, we used total internal reflection fluorescence microscopy (TIRFM) to observe the effect of AeADF1 on the cleavage of F-actin. We labeled F-actin filaments with rhodamine to facilitate observation. The experimental results showed that with AeADF1 or hyperoside alone, the F-actin filaments were not significantly broken. However, when 50 μM hyperoside and 0.1 μM AeADF1 were added together, the F-actin filaments were broken (Fig. 6A). To show this effect more clearly, we quantified the average frequency of F-actin filament breakage when AeADF1 and hyperoside were added together. The quantitative results showed that compared with adding hyperoside alone, adding hyperoside and AeADF1 together increased the average frequency of F-actin filament breakage by approximately 50-fold (Fig. 6B). These results indicate that hyperoside promotes the severing of F-actin by AeADF1.
To test whether AeADF1 is capable of severing F-actin and whether this process is hyperoside-dependent, recombinant AeADF1 protein and polymerized F-actin were incubated for 30 min in the presence of different concentrations of hyperoside. After incubation of the purified actin filaments with free hyperoside, breaks were detected along the F-actin filaments. In the presence of 5.0 μM hyperoside, the average length of F-actin filaments was 11.9 ± 0.12 μm, significantly shorter than that of the filaments formed in the presence of lower concentrations of hyperoside (Fig. 7A, B). At 50 μM hyperoside, we observed a dramatic reduction in the length of the F-actin filaments (Fig. 7A, B). These data showed that in the presence of 0.1 μM recombinant AeADF1, filament length decreased as the concentration of hyperoside increased, indicating that the number of breaks substantially increased with increasing hyperoside concentration.
The inhibitory expression of AeADF1 in pollen tubes reduces the promotion of pollen tube growth by hyperoside
To further prove the role of AeADF1 in pollen germination and pollen tube growth, we used oligonucleotide technology to inhibit the expression of AeADF1 in pollen and observed its pollen germination rate and pollen tube growth in the control and hyperoside treatments for 40 min. Compared with the s-ODN-ADF1 and the control groups, the expression of AeADF1 was significantly reduced in the as-ODN-ADF1 group (Fig. S2). In the control, the inhibitory expression of AeADF1 reduced the pollen germination rate and pollen tube length. The application of exogenous hyperoside can partially recover the pollen germination rate and pollen tube length. The above results showed that hyperoside can promote the expression of AeADF1 to have a positive effect on pollen germination and pollen tube growth (Fig. 8).
Discussion
Flavonoids are secondary metabolites that accumulate widely in plants and are present in various plant tissues 35 . Previously, most studies on the role of flavonoids in flower development focused on anthocyanins [36][37][38][39] . Anthocyanins increase the attractiveness of plants to pollinators, resist ultraviolet rays, and defend against pathogens 40 . For example, the three proteins MYB, bHLH, and WDR can interact to form the MBW complex, which promotes the accumulation of anthocyanins in plants; this phenomenon has been reported in petunia, grape, and poplar 41,42 . Our previous research showed that flavonoids are involved in the flower development of A. esculentus and play an important role as signalling substances. Exogenous application of hyperoside promotes the synthesis of hyperoside in okra, thereby promoting okra's fruit set rate 43 . This study demonstrates that the exogenous application of hyperoside prolongs the effective pollination period of okra, promotes the expression of the AeADF1 gene, and enhances the depolymerization of microfilaments by AeADF1. The exogenous application of hyperoside can promote pollen germination and pollen tube growth in okra. The oligonucleotide transfection experiment for the AeADF1 gene showed that inhibiting the expression of AeADF1 reduced the rate of pollen germination and inhibited the growth of pollen tubes, and that exogenous application of hyperoside to pollen with inhibited AeADF1 expression could partially alleviate this inhibition. This shows that AeADF1 plays an important role in pollen germination and pollen tube growth in A. esculentus, and that hyperoside has a promoting effect on the AeADF1 protein. This research lays a molecular foundation for analyzing the effect of flavonoid signal transmission on protein activity and protein modification in other plants (Fig. 9).
The role of the actin cytoskeleton in pollen germination and pollen tube growth is essential. As a kind of actin-binding protein, ADF proteins are abundant in eukaryotic cells and play a significant role in maintaining the dynamics of actin. In this process, the homologous region (a long alpha helix) of ADF can depolymerize F-actin by enhancing the ability of ADF to bind to actin. An important function of ADF-mediated depolymerization of microfilaments is the formation of new open ends, which is usually considered the basis of microfilament network formation [44][45][46] . Studies have reported that in A. thaliana, the shortening of actin filament bundles is due to the overexpression of actin depolymerization factor 1 (ADF1). AtADF1 can bind to actin and promote the depolymerization of microfilaments in vitro, and this depolymerization ability increases with increasing AtADF1 concentration. The decreased expression of AtADF1 in the adf1 mutant leads to an increase in actin bundles, which in turn reduces flowering time. In this study, we speculate that AeADF1 has a function similar to that of AtADF1 in Arabidopsis: both proteins can directly sever F-actin, and high concentrations of ADF protein have a higher severing ability [47][48][49] . At the same time, the depolymerization of microfilaments can promote pollen germination and pollen tube growth. Our previous research suggested that AeADF5 may also play a significant role in pollen tube growth 43 . Based on the functions of ADF5 homologs in other plants, we speculate that this protein has an actin-polymerizing function. The function of AeADF5 in A. esculentus and the synergy between AeADF5 and AeADF1 need further elucidation.
In flowering plants, wind and animals distribute pollen in different environments, driving the spread of genetic variation within a species. Successful fertilization of flowers in unpredictable climates is a key factor in determining plant yields 50,51 . Under proper conditions, to complete fertilization, mature pollen needs to germinate on the stigma and then extend its pollen tube to deliver sperm cells to the ovule. The factors that control the successful germination of pollen have always been the focus of plant reproduction, evolution, and breeding research 52,53 . With continuous research, we now have a certain understanding of the regulatory mechanisms that control the fertilization process, such as signal transduction pathways and cytoskeletal proteins. However, most research has focused on model plants. Our related research in a non-model plant, A. esculentus, proved the regulatory effect of the ADF1 protein on pollen germination and pollen tube growth. At the same time, it was found that the hyperoside content increased significantly during flowering. As a flavonoid, hyperoside has many physiological properties, such as anti-inflammatory, antispasmodic, diuretic, and antitussive properties, and can lower blood pressure, lower cholesterol, and protect the heart and cerebral blood vessels; thus, it is an important natural product. The development of new varieties of okra with increased hyperoside content has an important impact on the medical and economic benefits of the plant [54][55][56] . This research lays a molecular foundation for the development of fine varieties of okra and provides a new research direction related to flower development in other non-model plants.
Plant materials
A. esculentus seeds were sown in the greenhouse in April, and they were transplanted to the field after 20 days of growth. Plants began to bloom in early July, and blooming dynamics were recorded beginning on July 15th. Blooming flowers, pollen, and other tissues were collected and stored at −80°C until use.
Application of hyperoside solution
As reported by Yang et al. 6 , we prepared the stock solutions. When applied externally, 1 liter of solution was sprayed for every ten plants; the sprayed plants had an average height of 1 m and an average canopy width of 50 cm. The okra plants were sprayed every 2 days for a total of four applications, and the control was sprayed with buffer only. When hyperoside was used to treat N. benthamiana leaves, a 0.1 mM solution of hyperoside was first prepared. Then, 1 cm pieces cut from N. benthamiana leaves were soaked in the hyperoside solution for 5 min and observed under a microscope.
Effective pollination period and pollen tube growth assay
As established by our previous research, it takes ~24 h for okra to transition from flowering to withering 6 . Therefore, we expressed the effective pollination period as the proportion of this 24 h flowering window during which pollination remained effective.
As reported by Meng et al. 57 , the pistils were collected after pollination and fixed in phosphate-buffered saline. For aniline blue staining, the fixed pistils were rinsed three times with running distilled water and then softened in 1 M NaOH for 12-16 h. After softening, the samples were rinsed five times with distilled water. Finally, the pistils were placed in a 0.1% aniline blue solution in the dark for 12-16 h. The pollen tubes were then observed under a confocal microscope (Leica SP8), and the average length of the pollen tubes in each pistil was measured and used as the indicator of pollen tube growth.
Pollen germination rate and pollen tube length analysis
Fresh okra pollen was picked and placed in okra pollen germination solution for 2 h at 25°C in the dark. Then, the pollen germination rate and pollen tube length were observed under a microscope. Each treatment had three biological replicates, and 20 visual fields were selected for each replicate under a tenfold microscope for statistical analysis.
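The per-treatment statistics described above (three biological replicates, 20 microscope fields each) reduce to a mean and sample standard deviation of the per-replicate germination rates. A minimal sketch; the counts below are hypothetical placeholders, not data from this study:

```python
import statistics

def germination_summary(replicates):
    """replicates: list of per-replicate (germinated, total) pollen counts,
    each pooled over the 20 microscope fields of that biological replicate.
    Returns (mean rate %, sample standard deviation %)."""
    rates = [100.0 * germinated / total for germinated, total in replicates]
    return statistics.mean(rates), statistics.stdev(rates)

# Hypothetical counts for three biological replicates:
mean_rate, sd = germination_summary([(82, 100), (78, 100), (86, 100)])
print(f"{mean_rate:.1f} ± {sd:.1f} %")  # 82.0 ± 4.0 %
```

`statistics.stdev` uses the sample (n−1) denominator, which is the appropriate choice for a small number of biological replicates.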
RNA isolation and qRT-PCR analysis
As reported by Meng et al. 58 , the CTAB method was used for RNA extraction. After removing DNA contamination with RQ1 DNase (Promega, WI, USA), an RT reagent kit (Takara) was used to reverse transcribe 1 μg of total RNA into cDNA. Fast SYBR Mixture (CWBIO, Beijing, China) was used on an iCycler iQ5 instrument (Bio-Rad, CA, USA) to perform qRT-PCR experiments according to the corresponding instructions. Each sample was run with three technical replicates, and the expression of housekeeping genes was used as the internal reference for sample normalization. Transcription levels were calculated using the 2 −ΔΔCt method. The qRT-PCR primers for all genes are listed in Supplemental Table 1.
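The 2^−ΔΔCt calculation used above can be sketched as follows. The Ct values and the four-fold example are hypothetical illustrations, not data from this study:

```python
# Relative expression by the 2^(-ΔΔCt) method.
# All Ct values below are hypothetical placeholders.

def ddct_relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Return the fold change of a target gene versus a calibrator sample.

    ct_target / ct_ref:         Ct of target and housekeeping gene (treated sample)
    ct_target_cal / ct_ref_cal: same two genes in the calibrator (control) sample
    """
    dct_sample = ct_target - ct_ref              # normalize to housekeeping gene
    dct_calibrator = ct_target_cal - ct_ref_cal  # normalize the calibrator too
    ddct = dct_sample - dct_calibrator
    return 2.0 ** (-ddct)

# Example: the target amplifies 2 cycles earlier (relative to the reference
# gene) in the treated sample than in the control -> ~4-fold up-regulation.
fold = ddct_relative_expression(22.0, 18.0, 24.0, 18.0)
print(round(fold, 2))  # 4.0
```

The method assumes near-100% amplification efficiency for both target and reference genes; where that does not hold, an efficiency-corrected model would be needed instead.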
Bioinformatics analysis of AeADF1
AeADF1 and other candidate protein sequences were queried against the SMART database (http://smart.embl-heidelberg.de/, accessed October 10, 2016) to test whether they were predicted to be members of the ADF family. Structural motif annotation was performed using DNAMAN7 software. SWISS-MODEL (https://swissmodel.expasy.org) was used for three-dimensional structure prediction.
Subcellular colocalization with F-actin
AeADF1 and eGFP were fused and inserted into the pCAMBIA1300 vector under the control of the 35S promoter. The Lifeact-mCherry fusion protein was used as an F-actin marker. Constructs were transformed into Agrobacterium tumefaciens strain GV3101. As mentioned above, the Agrobacterium culture was centrifuged to remove the supernatant, and the pellet was resuspended in buffer 59 . Using a 5 mL syringe, the resuspension was infiltrated into the leaves of N. benthamiana, which were then incubated in a greenhouse at 22-24°C for 72 h. The N. benthamiana leaves were collected, and protein fluorescence was observed under a fluorescence microscope.
AeADF1-eGFP and Lifeact-mCherry were added to the pEZS-NL vector. Then, the vector plasmid was evenly mixed with spermidine, 2.5 M calcium chloride, and a standard concentration of gold for use. Next, particle bombardment was used to transfect the recombinant plasmid into onion epidermal cells, and the bombarded onion epidermis was cultured in the dark for 12-18 h. Then, the onion cells were placed under a laser confocal microscope to observe the subcellular localization of genes, as described by Meng et al. 60 .
We calculated the Pearson correlation coefficient in the cell according to the method described by Yang et al. 61 , using it as an indicator of the linear correlation of the eGFP and mCherry fluorescence intensities. First, ImageJ software was used to convert the eGFP fluorescence values in the cell to gray values, defined as X, and the mCherry fluorescence values to gray values, defined as Y. The expected values of X and Y were defined as E(x) and E(y), respectively, and the covariance Cov was derived from them. Next, the standard deviations of the X variable (rX) and Y variable (rY) were calculated, and the final Pearson coefficient was calculated as Cov/(rX · rY). The closer the value is to 1, the greater the colocalization of eGFP and mCherry.
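The calculation above is the standard per-pixel Pearson correlation of the two gray-value images. A minimal NumPy sketch; the toy arrays stand in for the ImageJ gray-value exports:

```python
import numpy as np

def pearson_colocalization(gfp, mcherry):
    """Pearson correlation between per-pixel eGFP and mCherry gray values.

    r = Cov(X, Y) / (sigma_X * sigma_Y); values close to 1 indicate
    strong colocalization of the two fluorescence signals.
    """
    x = np.asarray(gfp, dtype=float).ravel()
    y = np.asarray(mcherry, dtype=float).ravel()
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / (x.std() * y.std())

# Toy 2x2 gray-value arrays (hypothetical, standing in for ImageJ exports):
gfp = np.array([[10, 20], [30, 40]])
mch = np.array([[12, 22], [33, 41]])
print(round(pearson_colocalization(gfp, mch), 3))  # 0.998
```

In practice the arrays would be restricted to a cell mask so that background pixels do not inflate the correlation.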
Total internal reflection fluorescence microscopy assay
As described by Yang et al. 61 , the severing of F-actin was visualized. Before use, G-actin was labeled with rhodamine and then centrifuged for 2 h. F-actin was added to the flow cell and incubated for 5 min, and the combination of AeADF1 and hyperoside was added to each flow cell. A 100×/1.45 oil objective lens was used immediately to observe the severing of F-actin with a spinning disk confocal microscope. Images were captured and collected by TIRFM, acquired every 2 s for 220 s, and analyzed with ImageJ.
Fluorescence microscopy assay
As previously described by Zhou et al. 62 , imaging of F-actin was performed in vitro. First, all proteins were centrifuged at 50,000 × g for 30 min. F-actin (0.5 μM) was incubated with 0.5 μM AeADF1 at room temperature for 10 min and then fixed with 1% glutaraldehyde. Aliquots (1 μL) of the samples were placed onto a slide and observed using a confocal microscope.
Protein purification
AeADF1 was cloned into a vector with a His-tag and transformed into Escherichia coli BL21. Protein expression was induced with 1 mM isopropyl-β-D-thiogalactopyranoside (IPTG), and the E. coli BL21 cells were grown at 16°C for 16 h. A chromatographic column containing 2 mL of Ni-NTA Sepharose was used to purify the AeADF1 protein from cell lysates.
Fresh rabbit muscle tissue was removed, and the tendon was buried in ice for preservation. After the muscle was minced, it was extracted with KCl, EDTA, double-distilled H 2 O, and acetone in sequence and then subjected to a series of precipitation and dialysis reactions to obtain actin protein.
Antisense oligonucleotide transfection
As mentioned before 60 , phosphorothioate antisense oligodeoxynucleotides (as-ODN) and sense oligodeoxynucleotides (s-ODN) for AeADF1 were designed, and s-ODN was used for comparison. Here, 10 mM as-ODN, s-ODN, and transfection agent were added to the pollen grain germination solution for 40 min, and the pollen germination rate and pollen tube length were determined.
Response to Neutrons and γ-rays of Two Liquid Scintillators
UltimaGoldTM AB and OptiphaseTrisafe are two liquid scintillators made by PerkinElmer and the EG & G Company, respectively. Both are commercially promoted as scintillation detectors for α and β particles. In this work, the responses to γ-rays and neutrons of the UltimaGoldTM AB and OptiphaseTriSafe liquid scintillators, without and with reflector, have been measured with the aim of using these scintillators as γ-ray and neutron detectors. Responses to γ-rays and neutrons were measured as pulse height spectra in a multichannel analyzer. The scintillators were exposed to gamma rays produced by 137Cs, 54Mn, 22Na and 60Co sources. The response to neutrons was obtained with a 241AmBe neutron source measured at 25 and 50 cm from the scintillators. The pulse height spectra due to gamma rays are shifted to larger channels as the photon energy increases, and these responses are different from the response due to neutrons. Thus, UltimaGoldTM AB and OptiphaseTrisafe can be used to detect γ-rays and neutrons.

Hector Rene Vega-Carrillo1/*, Martha Isabel Escalona-Llaguno1, Luis Hernandez-Adame2, Sergio M. Sarmiento-Rosales1, Claudia A. Márquez-Mata1, Guillermo E. Campillo-Rivera1, V.P. Singh3, Teodoro Rivera-Montalvo4 & Segundo Agustin Martínez-Ovalle5
Introduction
Outside the atomic nucleus, neutrons are unstable. Neutrons are produced artificially in anthropogenic activities, as well as naturally, mainly during the interaction of cosmic rays with nuclei in the atmosphere; neutrons are also produced during thunderstorms and in nuclear reactions between alpha particles and the nuclei of the earth [1,2].
In some interactions with matter neutrons generate secondary charged particles and photons. Due to its radiobiological efficiency and the way in which it interacts with matter, it is important to determine, by means of calculations and/or measurements, the neutron energy distribution or neutron spectrum [3]. In order to measure the neutron spectrum it is necessary to use neutron detectors such as activation foils, proportional counter ( 10 BF 3 and 3 He), bubble detectors, inorganic, and organic scintillators [4,5].
Organic scintillators (liquid or plastic) have fluorescent materials dissolved in a solvent base. The fluorescent materials are aromatic hydrocarbons whose mean atomic number varies from 3.5 to 5; the most commonly used are PTP (C 18 H 14 ), B-PBD (C 24 H 22 N 2 O), PPO (C 15 H 11 NO), and POPOP (C 24 H 16 N 2 O 2 ). The energy released in the interaction of ionizing radiation with these fluorescent organic materials excites the aromatic solvent molecules. Neutrons are used in basic science, in fusion and fission research, in the detection of special nuclear materials, in dosimetry, and in the characterization of materials [5].
Knowing the fluence rate (flux) and energy distribution of neutrons is important in their use as a neutron source in molten salt nuclear reactors [6]. It is also important to have methods to measure neutrons during the calibration and neutron analysis of deuterium plasma experiments [7]. In fusion with deuterium reactions, 235 U chambers are used to monitor the neutron flux and spectrometers with Stilbene scintillators to determine the neutron spectrum; a 252 Cf source is used to calibrate these detectors [8]. In certain fusion experiments the neutron fluence is usually of the order of 10 14 n/cm 2 , and it is necessary to look for detectors that withstand this intensity of radiation [9].
Despite being a mature discipline, neutron detection remains an important issue in several areas. Thus, updated measurements of neutrons produced in 235 U(n, f) reactions induced with neutrons from 0.7 to 20 MeV have been reported [10]. Also, 4 He gas scintillators have been used to detect neutrons produced during the nuclear fission of natural uranium samples bombarded with 2.45 MeV neutrons produced in deuterium-deuterium reactions [11].
In problems related to nuclear safety, such as the control and prevention of illicit traffic of Special Nuclear Materials, the characterization of transuranic waste, safeguards, and the decontamination and dismantling of nuclear facilities, it is important to measure neutrons. Ports, customs, and access points in several countries use gantry detectors with 3 He proportional detectors; however, the worldwide scarcity of 3 He has led to the need to search for new detection options [12,13]. Guzmán-García et al. characterized the performance of ZnS(Ag) and 10 B scintillators and determined their response to neutrons produced by 241 AmBe and 252 Cf sources [14]. Agreements on the control of nuclear weapons require inspection procedures where neutron detectors such as bubble detectors can be used [15].
Personal neutron dosimetry faces several challenges, since the neutron fluence-to-dose conversion coefficients depend strongly on the angular and energy distribution of the neutrons. The need to overcome these difficulties motivates the search for new solutions; thus, in nuclear power plants a bare 3 He thermal neutron detector and two spherical moderators (3 and 9 inches in diameter) have been used to measure thermal, epithermal, and fast neutrons [16]. Around linear accelerators for medical use, P-I-N silicon diodes have been used to verify the dose due to photoneutrons [17].
Regardless of the area where neutrons are used, it is important to have a reliable method for measuring them and for estimating the dose. The need to measure neutrons has led to innovation in neutron detection; thus, Amaro et al. used 10 BF 3 detectors where 10 B was added as an aerosol of nanoparticles [18]. 3 He detectors moderated with high- and low-density polyethylene are used to measure neutrons produced in delayed β neutron decay, or β-n emission, in order to have evidence that supports certain astrophysical theories related to isotope abundances [19]. Large liquid scintillators are used in experiments to measure solar neutrinos, proton decay, and dark matter [20]. The DarkSide-50 system has 30 tons of liquid scintillator in which background signals of terrestrial and cosmic origin are suppressed [21].
In the last decade, neutron detectors have been developed for use in different conditions [22]. Liquid scintillators, when built in large sizes, have disadvantages such as liquid leaks, chemical risks in handling, and toxicity; most of these difficulties are eliminated if plastic scintillators such as EJ-299-33A are used [23].
Commercially, there are several types of scintillation liquids, such as UltimaGold TM AB and Optiphase Trisafe. These are well known in research for the detection of betas and alphas and are routinely used for measurements of environmental samples. The liquid scintillator UltimaGold TM AB was used to optimize the determination of 3 H in aqueous samples [24]. This scintillator was also used to determine the concentration of 210 Po, which is radiotoxic, in tobacco from India [25]. Broda et al. studied the influence of cocktail composition on the standardization of radionuclides [26], and several scintillators were used to determine their response to 3 H/ 14 C [27]. Another type of commercial liquid scintillator has been used to measure the radon-to-radium ratio [28] and in the area of biomaterials [29].
UltimaGold TM AB and OptiphaseTrisafe scintillator liquids are marketed and promoted for the detection of alpha and beta particles. To be used, the sample must be mixed with the scintillator liquid; therefore both scintillators have low density. If γ-rays interact with these liquid scintillators, electrons will be produced through the photoelectric effect, Compton scattering, and pair production; due to the low Z of the scintillators, the most probable interaction is Compton scattering, where the scattered electron induces scintillations. If fast neutrons collide with the scintillators, (n, p) reactions will occur in the hydrogen, and the recoil proton will induce scintillations with different features from those induced by electrons. This difference will be noticeable in the pulse height spectra; thus both scintillators can be used to detect γ-rays and neutrons. Therefore, the objective of this work was to determine the response to gamma rays and neutrons of two organic liquid scintillators that are commercialized as detectors for beta and alpha particles.
Materials and Methods
In this study, we used two scintillation liquids commercially promoted as alpha and beta detectors: UltimaGold TM AB, from PerkinElmer, and OptiphaseTriSafe, from the EG & G company.
Both scintillation liquids are organic compounds with a high content of carbon and hydrogen; therefore they should detect gamma rays through the excitation produced by the electrons released in the scintillator, and detect neutrons through the excitation produced by the protons released by fast neutrons in (n, p) reactions with hydrogen, in the same way that alphas induce scintillation.
In order to determine the response of both scintillators to gammas and neutrons, two 25 ml containers were prepared, one with each scintillator liquid. The containers are made of glass with low potassium content, since potassium absorbs the type of light emitted by the scintillator liquid. The liquid scintillator container was optically coupled to a Dupont model 6292 photomultiplier tube (PMT), as shown in Figure 1. The PMT has a base made by EG & G Ortec, model 296. This array was connected to a spectrometric system with a high-voltage supply, an amplifier, and a multichannel analyzer. The scintillator container and the PMT were isolated from light using a thick black polyethylene bag, and the response to gamma rays was measured by placing the γ-ray source above the scintillator container, as shown in Figure 1 (b). The pulse height spectra of the 0.511, 0.662, 0.834, 1.17, 1.27, and 1.33 MeV γ-rays produced by 137 Cs, 54 Mn, 22 Na, and 60 Co sources were measured with both scintillators under the same measuring conditions. The gamma-ray sources were produced by Eckert & Ziegler Isotope Products; their initial activities, referenced to November 15th, 2015, are shown in Table 1. For each of these sources, and for the radiation background, the pulse height spectrum was measured in the multichannel analyzer.
In order to determine the response to neutrons, a 241 AmBe source of 3.7x10 9 ± 10% Bq was used. The pulse height spectrum due to neutrons was measured at 25 cm and 50 cm source-to-scintillator distances.
Background, gamma-ray, and neutron pulse height spectra were measured using 1024 channels during a live time of 1800 seconds, using the same electronics settings. The first group of measurements was carried out with no reflector on the scintillator liquid containers. Then the glass containers were painted white using nail polish, and the background, γ-ray, and neutron pulse height spectra were measured again.
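Because all spectra were acquired for the same 1800 s live time, the background correction applied later to the pulse height spectra reduces to a channel-by-channel subtraction. A minimal sketch (function and array names are illustrative, not from the paper):

```python
import numpy as np

def background_correct(spectrum, background,
                       live_time_s=1800.0, bkg_live_time_s=1800.0):
    """Channel-by-channel background subtraction of a pulse height spectrum.

    Counts are scaled to the same live time before subtracting; channels
    driven negative by statistical fluctuation are clipped to zero.
    """
    scale = live_time_s / bkg_live_time_s
    net = np.asarray(spectrum, float) - scale * np.asarray(background, float)
    return np.clip(net, 0.0, None)

# Toy 5-channel example (the real spectra here span 1024 channels):
net = background_correct([120, 80, 50, 10, 3], [20, 15, 10, 12, 2])
print(net)  # [100.  65.  40.   0.   1.]
```

With equal live times the scale factor is 1, so the subtraction is direct; the `live_time_s` arguments only matter if spectra with different acquisition times are compared.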
Results
Figure 2 shows the responses of the UltimaGold TM AB scintillator without reflector, in terms of the pulse height spectra due to the radiation background, to 137 Cs, 54 Mn, 22 Na, and 60 Co γ-rays, and to neutrons produced by the 241 AmBe isotopic neutron source at 25 and 50 cm from the scintillator. Figure 3 shows the pulse height spectra produced by the OptiphaseTrisafe scintillator without reflector, Figure 4 those of the UltimaGold TM AB scintillator with reflector, and Figure 5 those of the OptiphaseTrisafe scintillator with reflector. Figure 6 shows the pulse height spectra (PHS) of 137 Cs and 60 Co γ-rays on the UltimaGold TM AB scintillator without and with reflector, and Figure 7 the corresponding spectra for the OptiphaseTrisafe scintillator. In Figures 6 and 7 the PHS have been corrected by background.
The effect, on the PHS due to neutrons, of adding the reflector in both scintillators is shown in Figure 8 for UltimaGoldTM AB and in Figure 9 for OptiphaseTrisafe.
Discussion
Figures 2 and 3 show the pulse height spectra for both scintillators, without reflector, for the background, gamma rays, and neutrons. The pulse height spectra for neutrons are different from those for γ-rays. In the responses to gamma rays, the Compton edge shifts to the right as the gamma-ray energy increases. For 60 Co γ-rays the pulse height spectrum ends practically at channel 350.
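The shift of the Compton edge with photon energy follows directly from the kinematics of 180° Compton scattering. A short sketch using the source energies listed in Materials and Methods (the corresponding channel positions depend on the detector calibration, which is not given here):

```python
# Compton edge: maximum energy transferred to an electron in a single
# Compton scattering event (photon backscattered through 180 degrees):
#   E_C = E_gamma / (1 + m_e c^2 / (2 * E_gamma)),  with m_e c^2 = 0.511 MeV

ME_C2 = 0.511  # electron rest energy, MeV

def compton_edge(e_gamma_mev):
    """Compton edge energy (MeV) for an incident photon of energy E_gamma."""
    return e_gamma_mev / (1.0 + ME_C2 / (2.0 * e_gamma_mev))

# Source photon energies used in this work (MeV):
for e in (0.511, 0.662, 0.834, 1.17, 1.27, 1.33):
    print(f"{e:.3f} MeV gamma -> Compton edge at {compton_edge(e):.3f} MeV")
```

For example, the 0.662 MeV line of 137 Cs gives an edge near 0.478 MeV, and the 1.33 MeV line of 60 Co gives an edge near 1.116 MeV, consistent with the monotonic rightward shift seen in the spectra.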
In the case of the neutron responses, for both scintillators the pulse height spectra have the same shape from channel 50 to channel 750. However, the pulse height spectrum is larger when the source is at 25 cm than when it is at 50 cm, because the neutron flux at 25 cm is greater than at 50 cm. The response of the scintillators to gammas differs from the response to neutrons because gamma rays interact mainly with the electrons of the scintillators, while neutrons interact with the hydrogen of both scintillators.
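For an isotropic point source the fluence rate follows the inverse-square law, which is why the 25 cm spectrum is larger than the 50 cm one. A minimal sketch; the neutron yield is an assumed typical figure for a 3.7×10^9 Bq (≈0.1 Ci) 241 AmBe source and is not stated in the paper:

```python
import math

def fluence_rate(neutrons_per_s, r_cm):
    """Neutron fluence rate (n / cm^2 / s) at distance r from an isotropic
    point source, neglecting room scatter and air attenuation."""
    return neutrons_per_s / (4.0 * math.pi * r_cm ** 2)

# Assumed yield: ~2.2e5 n/s for a ~0.1 Ci 241AmBe source (typical value,
# not given in the paper).
S = 2.2e5
ratio = fluence_rate(S, 25.0) / fluence_rate(S, 50.0)
print(ratio)  # 4.0
```

Halving the distance therefore quadruples the fluence rate, independent of the absolute source yield; in the measured spectra the ratio is smaller than 4 because room-scattered neutrons add a distance-independent component.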
Figures 4 and 5 show the pulse height spectra for background, γ-rays, and neutrons for both scintillators with reflector. It can be noticed that the shape and end point of the pulse height spectra for gamma rays are the same as for the scintillators without reflector. The shape of the pulse height spectra for neutrons resembles the spectra measured without reflector; however, when the reflector is used the end point is at channel 1000. Therefore, the reflector increases the amount of scintillation light reaching the PMT.
The shape of pulse height spectra (PHS) for γ-rays and neutrons (Figures 2, 3, 4, and 5) are consistent with those reported by Wang, Seidaliev & Mandapaka for their design of a neutron rem meter [30].
The PHS for 241 AmBe neutrons of both scintillators are similar to the pulse height spectra reported by Becchetti et al, although they used C6D6 scintillators; NE230, BC537 and EJ315 which are plastic scintillators, and an organic scintillator in liquid state of deuterated Xylene (C8D10; EJ301D) [31].
The responses of both scintillators to neutrons are similar to those reported for a plastic scintillator EJ-299-33A that together with BC501A were irradiated with almost monoenergetic neutrons [23].
For both scintillators, without or with reflector, the pulse height spectrum shifts to higher channels as the γ-ray energy increases. For 0.662 MeV γ-rays, approximately from channel 80 to channel 120, the pulse height spectrum is larger when measured with the UltimaGold TM with reflector. The same effect appears for 60 Co γ-rays, approximately from channel 150 to channel 320 (Fig. 6). For Optiphase (Fig. 7) the same effect is noticed, with the difference that for 137 Cs photons a larger number of counts appears approximately from channel 90 to 140, and from channel 160 to 340 for 60 Co γ-rays.
The effect of adding the reflector is also noticeable, for both scintillators, in the PHS corrected by background for the 241 AmBe neutrons. For both scintillators the spectrum measured with the reflector has a larger number of counts per channel above channel 400. In the case of UltimaGold TM , beyond channel 400 the count rates measured with the reflector are larger than those obtained without it (Fig. 8); this difference is larger than for the OptiphaseTrisafe scintillator (Fig. 9).
Conclusions
In this work, the gamma-ray and neutron responses of two organic scintillation liquids, UltimaGold TM AB and OptiphaseTrisafe, which are commercially promoted to measure β and α particles, have been measured.
The two organic liquid scintillators, UltimaGold TM AB and Optiphase Trisafe, in addition to being used to measure α and β particles, can be used to measure γ-rays and neutrons, because the PHS due to γ-rays is different from the PHS due to neutrons.
The PHS due to gamma rays show the Compton edge, which shifts to the right as the energy of the incident photon increases, as does the PHS end point; the end point for 60 Co γ-rays is around channel 350.
The PHS due to neutrons has a different shape from the PHS due to γ-rays; for neutrons the PHS end point is at channel 1000.
For both scintillators, the use of a white reflector increases the count rates in the PHS.
A limitation of this work is that a 241 AmBe neutron source was used to measure the neutron response. Besides neutrons, this source produces 4.44 MeV γ-rays from the decay of 12 C* formed in the 9 Be(α, n) 12 C* reaction, and 59.5 keV γ-rays during the decay of 241 Am. Also, the neutron source is enclosed in a polyethylene shell where neutrons are moderated and captured by H, producing 2.2 MeV photons. Therefore, the count rates in the neutron pulse height spectra are due to both gammas and neutrons. If the pulse shape discrimination technique is not available, the scintillators can be used to measure neutrons by assuming that beyond channel 400 the count rates are due only to neutrons.
Nucleosomal Barrier to Transcription: Structural Determinants and Changes in Chromatin Structure
Packaging of DNA into chromatin affects all processes on DNA. Nucleosomes present a strong barrier to transcription, raising important questions about the nature and the mechanisms of overcoming the barrier. Recently it was shown that DNA sequence, DNA–histone interactions and backtracking by RNA polymerase II (Pol II) all contribute to formation of the barrier. After partial uncoiling of nucleosomal DNA from the histone octamer by Pol II and backtracking of the enzyme, nucleosomal DNA recoils on the octamer, locking Pol II in the arrested state. Histone chaperones and transcription factors TFIIS, TFIIF and FACT facilitate transcription through chromatin using different molecular mechanisms.
Description
Transcribing RNA polymerase II (Pol II) induces extensive chromatin remodeling, facilitated by histone chaperones and elongation factors and accompanied by limited histone exchange [1]. At the same time, histones are fully evicted only from highly transcribed genes [1]; thus Pol II typically encounters a nucleosome in every ~200 bp of transcribed DNA. Nucleosomes remaining on transcribed genes form two types of barriers for transcribing Pol II [2,3]. In yeast and Drosophila each nucleosome presents a barrier where Pol II is paused after transcribing ~15 and ~50 bp from the nucleosome boundary [2,3]; these barriers are also universally observed in vitro [4]. A much higher barrier of the second type is formed when the active center of the enzyme is positioned ~10 bp upstream of the first (+1) transcribed nucleosome in Drosophila [3]. However, the relative contribution to this pause from the +1 nucleosome versus negative elongation factors is not clear, particularly for highly expressed genes [3].
When Pol II encounters a barrier during transcript elongation (either DNA-bound proteins or DNA sequences that disfavour addition of the next NTP), the polymerase backtracks by sliding the transcription bubble and RNA-DNA hybrid upstream along the template. This displaces the RNA 3' end from the Pol II active site, resulting in transcriptional arrest. Rapid relief of arrest requires the protein factor TFIIS, which acts along with the Pol II active center to drive cleavage of the transcript. This restores alignment of the 3' end with the active center and releases the downstream RNA segment [5]. Arrest sites are rare within DNA, but backtracking and arrest are general properties of Pol II complexes halted just downstream (~+17 to +32) of the transcription start [6]. This is potentially important for the interaction of newly-initiated Pol II complexes with the +1 nucleosome.
A single nucleosome typically forms a high, asymmetrical barrier of the first type for Pol II transcription in vitro [4,7]; however, the putative regulatory -10 barrier of the second type observed in vivo has not been recapitulated in vitro. The strong +15 and +50 nucleosomal barriers are nucleosome-specific, Pol II-specific, and have been described in all organisms analysed, from yeast to human.
The nucleosomal barrier is largely relieved after Pol II advances beyond position +49. Initially a small, Pol II-containing intranucleosomal DNA loop (Ø-loop) forms on the surface of the histone octamer at position +49 [12,15]. The Ø-loop is stabilized by Pol II-histone interactions that transiently and locally replace DNA-histone interactions [16]; the high efficiency of Ø-loop formation is characteristic of the Pol II-specific mechanism of transcription through chromatin [17]. Formation of the Ø-loop induces uncoiling of the ~100-bp DNA region in front of the enzyme, allowing further transcription through the nucleosome and efficient survival of nearly all histones (with the exception of one H2A/H2B dimer that is displaced by Pol II) during this process [12,15]. The high efficiency of histone survival during transcription is explained in part by allosterically stabilized intranucleosomal histone-histone interactions [18]. Recent structural analysis indicates that after Pol II encounters the strong +50 barrier, the enzyme backtracks and nucleosomal DNA recoils on the octamer, locking Pol II in the arrested state (Figure 1) [18].
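The pausing-and-rescue picture described above can be caricatured with a toy kinetic Monte Carlo model. Only the +15/+50 barrier positions follow the text; all rate values, the TFIIS effect size and the 147-bp traversal length are illustrative assumptions, not measured parameters:

```python
import random

# Toy kinetic Monte Carlo of Pol II transcribing through one nucleosome.
# Barrier positions (+15, +50) follow the text; all probabilities are
# illustrative assumptions, not measured rate constants.
BARRIERS = {15: 0.30, 50: 0.10}  # chance of advancing per step at a barrier
STEP_P = 0.95                    # chance of advancing per step elsewhere

def transcribe(seed, tfiis=False):
    """Return the number of simulated time steps to traverse 147 bp."""
    rng = random.Random(seed)
    pos, t = 0, 0
    while pos < 147:
        t += 1
        p = BARRIERS.get(pos, STEP_P)
        if tfiis:
            p = min(1.0, p * 2)  # toy stand-in for TFIIS-assisted rescue
        if rng.random() < p:
            pos += 1
    return t

slow = sum(transcribe(s) for s in range(50)) / 50
fast = sum(transcribe(s, tfiis=True) for s in range(50)) / 50
print(round(slow, 1), round(fast, 1))  # barrier relief shortens traversal
```

Even this crude model reproduces the qualitative point of the section: most of the traversal time is spent at the discrete barrier positions, and a factor relieving arrest there speeds up overall elongation disproportionately.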
Two general mechanisms should facilitate nucleosome traversal: holding Pol II in its active state, including facilitating recovery from arrest, and disrupting critical histone-DNA interactions. As noted, TFIIS mediates transcript cleavage to restart arrested polymerases and facilitates transcription through chromatin in vitro [8,19,20] (Figure 1). In metazoans, TFIIF maintains Pol II's catalytic readiness and thus substantially increases overall elongation rates. Both TFIIF and TFIIS are associated with the body of active genes [21]. In vitro studies showed that these two factors together modestly facilitate elongation through a single nucleosome. However, with a nucleosome containing a Sin mutant histone, which weakens the critical octamer-DNA interactions near the nucleosome dyad, elongation in the presence of TFIIF and TFIIS nearly matched the efficiency and rate of elongation on histone-free DNA [22,23]. These in vitro studies with a minimal transcription machinery demonstrate that efficient and rapid nucleosome traversal is clearly possible when Pol II is optimized and DNA unwrapping from the octamer is facilitated [24]. Histone chaperone FACT is an example of a factor that facilitates DNA unwrapping from H2A/H2B dimers to relieve the nucleosomal barrier and facilitate nucleosome traversal by Pol II [25,26] (Figure 1). Histone acetylation [27] and/or multiple molecules of Pol II [28,29] also help to overcome the barrier, affecting different steps during transcription through chromatin in vitro. While wrapping of DNA on the central core of the histone octamer provides the primary block to transcript elongation, in vitro studies using histones lacking the N-terminal tails showed that the tails also contribute to the nucleosomal barrier [27,30].
Future studies in this area should address more fully the mechanisms through which Pol II overcomes the two classes of nucleosome-induced pauses described above. It has been suggested that the nearly universal pause by metazoan Pol II at ~50 nt downstream of transcription start is directly linked to the barrier imposed by the +1 nucleosome [3], consistent with the general tendency of Pol II to backtrack early in elongation [6]. While pausing ~10 bp upstream of a promoter-proximal nucleosome has not been observed in vitro, earlier studies did not incorporate known negative elongation factors, including NELF and DSIF (reviewed in [31]). Once Pol II has overcome promoter-proximal pausing, the polymerase will encounter barriers at ~15 and 50 bp within each downstream nucleosome [3]. Entry into productive elongation in vivo requires at least the activity of P-TEFb, but the full set of factors essential for pause relief and rapid long-range transcription has not been identified [31]. While proof-of-principle experiments with TFIIF and TFIIS have shown that the nucleosome is not an insurmountable barrier to elongation by Pol II [22], a major long-term challenge will be to evaluate the roles of the much larger set of elongation-associated factors [31] in studies which require Pol II to rapidly and effectively traverse long arrays of nucleosomes in vitro.
|
v3-fos-license
|
2018-12-11T11:00:15.763Z
|
2017-10-01T00:00:00.000
|
55554326
|
{
"extfieldsofstudy": [
"Computer Science",
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/898/8/082026/pdf",
"pdf_hash": "34c6d919aa3f6c9e3f94f8c943ca7172657ecc1b",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1230",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "8baf45ef25c2676119838b283eab9f0c7756d5fd",
"year": 2017
}
|
pes2o/s2orc
|
Advancing data management and analysis in different scientific disciplines
Over the past several years, rapid growth of data has affected many fields of science. This has often resulted in the need to overhaul or exchange the tools and approaches in the disciplines' data life cycles. However, it also allows the application of new data analysis methods and facilitates improved data sharing. The project Large-Scale Data Management and Analysis (LSDMA) of the German Helmholtz Association has successfully addressed both specific and generic requirements in the data life cycle since 2012. Its data scientists work together with researchers from fields such as climatology, energy and neuroscience to improve the community-specific data life cycles, in several cases covering all stages of the data life cycle, i.e. from data acquisition to data archival. LSDMA scientists also study methods and tools that are of importance to many communities, e.g. data repositories and authentication and authorization infrastructure.
Introduction
The project Large-Scale Data Management and Analysis (LSDMA) [1,2] aims at joint research and development by data scientists and domain scientists of various fields. The organizational structure of LSDMA, with five domain-specific Data Life Cycle Labs (DLCL) and the Data Services Integration Team (DSIT), makes it possible to address both community-specific and generic aspects of data management and analysis. In this note, we discuss selected highlights of LSDMA's research and development activities. Specifically, we address how the results have advanced the user communities in their data-driven research. We conclude with the lessons we have learned in the past few years.
DLCL Climatology
Filtering and visualization of large amounts of data are important aspects of data analysis. In the DLCL Climatology, a server-client application was designed and developed that allows users to visualize data from earth-observing satellites [3]. The distributed system consists of three building blocks. Semi-structured data are stored in a NoSQL database serving as a horizontally scalable storage back-end. The visualization application runs as a web application inside a browser, i.e. without any need to install additional software. The web client benefits from the 3D visualization capabilities of WebGL, which is supported by all modern browsers and supports GPU-accelerated image processing. The storage back-end and the web client are connected via a middle layer called Node Scala [4]. Node Scala provides a REST interface for clients to request a specific selection of data. Node Scala not only fetches data from the storage back-end but also performs predefined pre-processing tasks in parallel. This design results in fast response times while minimizing the data transfer rates to the client and the client-side CPU demands. Climate researchers benefit from an easy-to-use tool for interactive analysis of large amounts of data.
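The middle-layer pattern described here, server-side pre-processing behind a REST interface so that only reduced data reach the browser, can be sketched in a few lines. The endpoint, the in-memory "store" and the subsampling step are hypothetical stand-ins; the actual Node Scala implementation differs:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer  # HTTPServer for serving
from urllib.parse import urlparse, parse_qs

# Stand-in for the NoSQL back-end: a few satellite samples per region.
STORE = {"eu": [3.1, 2.9, 3.4, 3.0], "us": [1.2, 1.5, 1.1]}

def reduce_region(region, stride):
    """Server-side pre-processing: subsample before shipping to the client."""
    return STORE.get(region, [])[::stride]

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /data?region=eu&stride=2 returns a reduced selection
        q = parse_qs(urlparse(self.path).query)
        region = q.get("region", [""])[0]
        stride = int(q.get("stride", ["1"])[0])
        body = json.dumps(reduce_region(region, stride)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8080), Handler).serve_forever() would start the service.
print(reduce_region("eu", 2))  # [3.1, 3.4]
```

The design choice mirrors the one in the text: moving the reduction to the server keeps both the transfer volume and the client-side CPU load small.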
DLCL Energy
In the DLCL Energy, a concept for an energy management system has been refined and adapted [5]. To that end, a detailed analysis was conducted: in a first phase, a technical data life cycle analysis was performed, and in a second step, a privacy analysis from a user's perspective was carried out. The resulting requirements were used to improve the existing design. Furthermore, the concept has been implemented in the form of a demonstrator: the Data Custodian Service (DCS). The DCS is the only energy data sink in a household. In order to potentially gain access to that data, interested parties have to file a data request at the DCS. This request is evaluated with regard to privacy implications for the user. The result is presented to the respective data owners, and depending on their decision, the data request is granted or declined. Thus, users are able to fully understand the consequences of sharing data and can make an informed decision regarding the circulation of their data. On the technical side, the demonstrator uses an SQL-based storage for the metadata and an HDF5 storage for the time series data. This enables efficient searching and filtering on the metadata as well as efficient access to large amounts of energy data.
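The storage split can be illustrated with a minimal sketch: relational metadata for searching, and a packed binary file standing in for the HDF5 time-series store. The table layout and file format here are illustrative only; the real demonstrator uses HDF5:

```python
import array
import os
import sqlite3
import tempfile

# Metadata lives in SQL (searchable); bulk samples live in binary files
# (a stdlib stand-in for the demonstrator's HDF5 time-series storage).
tmp = tempfile.mkdtemp()
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE series (id INTEGER PRIMARY KEY, sensor TEXT, path TEXT)")

def store(sensor, samples):
    path = os.path.join(tmp, sensor + ".bin")
    with open(path, "wb") as f:
        array.array("d", samples).tofile(f)  # packed 8-byte doubles
    db.execute("INSERT INTO series (sensor, path) VALUES (?, ?)", (sensor, path))
    db.commit()

def load(sensor):
    # SQL lookup finds the file; the bulk read never touches the database.
    (path,) = db.execute(
        "SELECT path FROM series WHERE sensor = ?", (sensor,)
    ).fetchone()
    a = array.array("d")
    with open(path, "rb") as f:
        a.fromfile(f, os.path.getsize(path) // 8)
    return list(a)

store("meter-1", [0.5, 0.7, 0.6])
print(load("meter-1"))  # [0.5, 0.7, 0.6]
```

Keeping the two concerns separate is what makes both operations efficient: filtering stays in the indexed relational store, while sequential bulk I/O goes straight to the time-series files.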
DLCL Key Technologies
Light microscopy is a routine imaging technique in biological and medical research and diagnosis. Localization microscopy, especially Spectral Position Determination Microscopy, can scale the optical resolution down almost to the electron microscopy level in the 10 nm range, which is important for biological and medical research and diagnosis [6,7]. However, these techniques produce image data in the range of GB/s and require the handling, processing and evaluation of image stacks of up to thousands of frames per single cell. These data have to be stored and made accessible for the research community. To this end, the DLCL Key Technologies has designed a Localization Microscopy Open Reference Data Repository (LMORDR). These developments are a step towards data management and curation with long-term perspectives under the aspects of sustainability and potential of re-use for different analyses. LMORDR is a system for transmitting the data, adding metadata for characterization and retrieval, and storing the data. It offers programs and processing procedures for fast evaluation, either on individual machines or in clusters. A Generic Client Service API [8] for connecting disparate services is designed and implemented, integrating seamlessly with the KIT Data Manager [9] and the Large Scale Data Storage. A structured metadata model based on the Core Scientific Metadata Model (CSMD) is established for describing the extremely large datasets of localization microscopy research. Standardized descriptions of the workflow steps with an automated execution of the workflow, based on extended image analysis programs, are achieved by a workflow management system. A provenance manager collects, models and stores the entire provenance information generated during the execution of a localization microscopy workflow [10]. Provenance information is stored in the W3C ProvONE standard format using a graph database.
Due to the interdisciplinary collaboration of computer scientists, biophysicists and biomedical users, constructive approaches led to an implementation that fulfills the requirements of the involved communities.
HiDRA Petra III and Flash
The development of detectors at 3rd-generation light sources is currently outpacing experimental methods and data acquisition. Single clients will produce 0.5 GBytes/sec, and the next generation is already pushing for 6 GBytes/sec. For 30 beamlines the expected aggregated rate is 50 to 80 GBytes/sec, depending on detector deployments. Measurements last from a few hours to a few days, resulting in many data sets of up to tens of TBs each. From next-generation detectors we also expect multiple GBytes/sec spread over many 10GE connections. The requirements will vary considerably due to the very dynamic experimental setups with an inherent burst nature and a very heterogeneous environment regarding technology and social context. In order to support better data control and shorter turnaround cycles for analyses, the new system has to allow high-speed data access within seconds after data have been generated by the detector, within a few minutes (shorter is better) for full-scale data analysis using multiple CPUs, and within hours for archival to tape media and availability for external (remote) access. The next generation of experiments will require controlled and fast access (bandwidth and latency) to the most recently generated data to allow immediate experiment control [11]. Scientists at DESY have developed experimental setups where samples are constantly flowing in a liquid or gaseous jet across a pulsed X-ray source with a repetition rate of up to 120 Hz. Significant amounts of sample are consumed in a very short time, and the data generated by the instruments require a large amount of storage space. Furthermore, experimental parameters, such as the degree of molecular alignment in controlled imaging experiments, or the hit rate and resolution in an SFX experiment, must be kept within acceptable bounds.
By monitoring experimental conditions in close to real time, the experiment may be maintained in optimal alignment, or alternatively, one may pause the experiment to correct unfavorable conditions, thereby preventing the collection of unfavorable data while preserving valuable sample.
OnDA (Online Data Analysis) is a fast online feedback framework which makes it possible to decide in near real-time about the quality of the data produced in serial X-ray diffraction and scattering experiments. It is designed on a highly modular basis and provides stable and efficient real-time monitors for most common types of experiments. Recent beamtimes completed at the Petra III facility show a smooth integration with the new storage system supporting all required criteria. This integration work is ongoing, and more experiments with similar demands are expected. Building a generic solution, supporting all types of data flow control and dispatch and meeting all performance criteria, is the base goal for the ongoing development effort. The developed HiDRA software package introduces a generic layer that allows a flexible data flow configuration between the detector and the first 'touch down' of the data in the GPFS storage system for any type of online synchronous data analysis [12].
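The throughput figures quoted for this setting can be sanity-checked with simple arithmetic; the 8-hour run length below is an illustrative assumption within the stated "few hours to a few days" range:

```python
# Back-of-envelope check of the quoted detector data rates.
client_rate_gb = 0.5  # GBytes/sec for a single current-generation client
beamlines = 30

# One client streaming for an 8-hour measurement:
dataset_tb = client_rate_gb * 8 * 3600 / 1000
print(round(dataset_tb, 1))  # 14.4 TB -- consistent with "tens of TBs" per set

# Naive aggregate if every beamline ran one such client at once:
print(client_rate_gb * beamlines)  # 15.0 GB/s; the detector mix pushes the
                                   # expected aggregate to 50-80 GB/s
```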
FAIR
The experiments at the Facility for Antiproton and Ion Research (FAIR) in Darmstadt have large storage requirements of several tens of PB per year. To support the large I/O demands of the experiment data analysis, the GSI Helmholtzzentrum für Schwerionenforschung operates a large-scale, high-performance Lustre shared file system for the FAIR experiments. However, the local and WAN data access and management from the experimental frameworks are based on the XRootD protocol. In the context of LSDMA, work has been done to couple both worlds. Many aspects of this work have already been used in production in the context of the ALICE Tier 2 centre operated at GSI [13,14]. The storage resources pledged at GSI to the global ALICE community are provided via a Grid Storage Element which consists of a set of XRootD daemons running on top of the Lustre file system. The compute jobs of the ALICE Tier 2 centre are submitted to GSI's HPC cluster, which is considered an isolated environment where direct connections between the cluster's worker nodes and the internet are partially or fully restricted. Therefore, an XRootD forward proxy has been set up, which enables the site admin to allow worker nodes to read input files from, and write output files to, remote sites, while adhering to the aforementioned restriction.
All clients using XRootD to access ALICE Grid data request it through XRootD data servers. This means that I/O traffic needs to go through the limited link that one data server can provide and that Lustre's full I/O speed cannot be utilized directly.
The proposed solution is to use the XRootD client plug-in API to redirect underlying access to data on Lustre directly, bypassing the need to read indirectly via the XRootD data servers when the data are locally available. The XRootD client plug-in API makes it possible to change XRootD's underlying I/O operations, so that the plug-in transparently improves the performance of higher-level software (e.g. ROOT, xrdcp). Current tests show that such a plug-in can be used to adapt XRootD to the specific needs described above.
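The plug-in's redirect decision can be sketched as follows. The real plug-in is written against the C++ XRootD client API; the mount path, file names and the remote stub here are illustrative only:

```python
import os
import tempfile

# Sketch of the bypass logic: if the requested file is visible on the
# locally mounted Lustre file system, read it directly; otherwise fall
# back to the XRootD data server (stubbed out below).
LUSTRE_ROOT = tempfile.mkdtemp()  # stand-in for the local Lustre mount

def read_remote(path):
    """Placeholder for the indirect read via an XRootD data server."""
    return b"<fetched via XRootD data server>"

def smart_read(path):
    local = os.path.join(LUSTRE_ROOT, path.lstrip("/"))
    if os.path.exists(local):
        with open(local, "rb") as f:  # bypass: direct Lustre read
            return f.read()
    return read_remote(path)          # fall back to the limited server link

# Demo: one file exists locally, the other must go through the server.
os.makedirs(os.path.join(LUSTRE_ROOT, "alice"), exist_ok=True)
with open(os.path.join(LUSTRE_ROOT, "alice/run1.root"), "wb") as f:
    f.write(b"payload")

print(smart_read("/alice/run1.root"))    # read directly from "Lustre"
print(smart_read("/alice/missing.root"))  # falls back to the remote stub
```

Because the decision is made inside the client library, higher-level software sees the same interface either way, which is exactly what makes the plug-in transparent.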
DLCL Neuroscience
Three-dimensional Polarized Light Imaging (3D-PLI) is a neuroimaging technique used at the Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Juelich, to reconstruct the three-dimensional nerve fiber architecture in postmortem mouse, rat, and human brains at the micrometer scale [15,16]. The examination of a human brain with 3D-PLI generates about 2,500 histological sections, which are digitized at 1.3 μm pixel size, resulting in image sizes of about 70,000 x 100,000 pixels per section with a color depth of 32 bit. The subsequent post-processing and the extraction of fiber orientations from the microscope images require a complex chain of tools. These tools have been integrated in a UNICORE (Uniform Interface to Computing Resources) [17] workflow towards a fully automated and parallelized image processing pipeline, utilizing advanced supercomputing infrastructure efficiently. This significantly reduces the processing time from days to hours, which is a relevant factor considering the thousands of sections to be analyzed in a whole human brain study. UNICORE turned out to be a valuable tool serving both software developers, by integrating their image processing tools, and scientific users lacking deep knowledge of how to use a supercomputing infrastructure. Neuroscientists were able to perform complex data analysis and routine data production without knowing all details about the different data inputs, calls and requirements of the individual software packages of the workflow. This setup clearly minimized operation failures compared to manual processing with individual software packages. Furthermore, the workflow could be used as a performance measurement tool for the utilized supercomputers. As a result, the benefit of specific features of different supercomputers (e.g., GPU vs. faster CPU) could be assessed with the workflow.
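The raw data volumes implied by these figures follow directly from the quoted section size and count:

```python
# Rough size of a whole-brain 3D-PLI study from the figures quoted above.
pixels_per_section = 70_000 * 100_000
bytes_per_section = pixels_per_section * 4  # 32-bit colour depth = 4 bytes/px
sections = 2_500

print(round(bytes_per_section / 1e9))              # 28 -- ~28 GB per section
print(round(bytes_per_section * sections / 1e12))  # 70 -- ~70 TB per brain
```

At roughly 70 TB of raw imagery per brain, it is clear why automated, parallelized processing on supercomputing infrastructure is needed rather than manual handling of individual sections.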
Data Services Integration Team (DSIT)
In the context of LSDMA the Data Services Integration Team set out to develop solutions that are relevant to multiple DLCLs. Below, highlights of six fields are presented:
Authentication and Authorization Infrastructures (AAI)
The goal in this field is to allow globally operated infrastructures and global collaborations to access resources and share data in secure ways. DSIT provided a general roadmap to integrate several different authentication mechanisms with one another. This includes token translation services, account linking and identity harmonization services. The goal is that users will be able to access or manage the data they possess, regardless of which technology they used to authenticate with a specific access protocol. This work builds on top of concepts found in today's WLCG grid middleware. Several ideas were introduced into successful EU proposals and are implemented in INDIGO-DataCloud and continuously improved within Authentication and Authorization for Research Communities 1 and 2 (AARC/AARC2) and EUDAT-2020.
In cooperation with AARC/AARC2, DSIT developed the Blueprint Architecture for a pan-European AAI [18], a document that describes how authentication should work in future infrastructures. A prototype with AARC and EUDAT led to the integration of B2ACCESS (the EUDAT SP/IdP proxy) into services that allow shell access to unmodified ssh daemons. This allows logging in to an ssh host via a home identity, Google or ORCID. Avoiding the need for source code modifications is one step towards ensuring a sustainable solution.
In the context of INDIGO, the DSIT plan for a TTS (on-site token translation service) was implemented. This tool provides an extensible web and REST service that returns access credentials to a user authenticated via OpenID Connect (OIDC). OIDC [19] was chosen initially since it is used in INDIGO; extension to the Security Assertion Markup Language (SAML) [20] is straightforward. As an example, users can now use an OIDC authentication token to obtain an ssh key or an X.509 certificate should they wish to access ssh or gridftp, respectively. To ensure that different credentials are mapped to the same user at a given site, identity harmonization has been developed, which allows multiple accounts of the same user to access the same data. This is accomplished by mapping all UIDs of a user to a primary UID, as defined by the user at an external service. One such service is IAM [21], which is developed within the INDIGO project by partners at INFN/CNAF.
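The harmonization idea, several authenticated identities (OIDC subject, SAML identifier, X.509 DN) linked to one primary account, can be modelled in a few lines. The identifier formats and the UID value below are made up for illustration:

```python
# Toy model of identity harmonization: every external identity a user
# links is resolved to one primary UID, so all credentials reach the
# same account and the same data. Identifier strings are hypothetical.
LINKS = {}  # external identity -> primary UID

def link(identity, primary_uid):
    LINKS[identity] = primary_uid

def resolve(identity):
    """Map any linked identity to the site account that owns the data."""
    return LINKS.get(identity)

link("oidc:sub=4711@indigo-iam", 1001)
link("saml:[email protected]", 1001)
link("x509:/DC=org/CN=Jane Doe", 1001)

# Different authentication technologies, same account:
print(resolve("saml:[email protected]") == resolve("x509:/DC=org/CN=Jane Doe"))  # True
```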
In the context of the Human Brain Project [22] and of DSIT, the HPC grid middleware UNICORE [17,23] was extended from SAML and the Simple Object Access Protocol (SOAP) to support federated authentication via OIDC. Supporting REST interfaces across the whole UNICORE stack was a further major achievement.
Federated storage
The high-level objective of federating storage is the provisioning of tools for conveniently storing, federating, accessing and sharing huge quantities of data. The resulting toolbox mainly targets scientific communities who are not willing or not able to develop their entire data management framework themselves. The selection of services and products within that toolbox is based on their use of open standards and their availability on the open-source market. Even more importantly, significant focus has been put on evaluating the potential self-sustainability of components, due to an active user community or due to the commitment of the product teams to further maintain their products. Besides integrating well-established and sustainable data management components, LSDMA evaluated gaps in existing data management procedures and, in response, either established working groups in international scientific organizations, like the Research Data Alliance (RDA), or joined existing task forces in industry, like the Storage Networking Industry Association (SNIA), on those topics. As those activities naturally require agreements on the European and possibly international level, DSIT partners successfully joined European projects, like INDIGO-DataCloud or AARC, engaging a larger group of communities.
Metadata
Managing metadata in a generic way is of essential importance in scientific data life cycles [24]. It needs to be both efficient and seamless. Such a concept was designed and implemented within the MoSGrid [25,26] Science Gateway. The utilized UNICORE Metadata Management is a generic service within the UNICORE HPC middleware. The concept resulted in the DFG project MASi [27], which utilizes the repository framework KIT Data Manager to build up a generic metadata-driven research data management service. The initial use cases are situated in geography, chemistry, and digital humanities. Furthermore, metadata developments in general, and provenance support specifically, for the generic web processing framework birdhouse are introduced. This includes their applications within the earth sciences. We highlight the valuable contributions in generic and specific metadata management that were designed and implemented.
Archives
Scientific and cultural organizations, international collaborations and projects have a need to preserve and maintain access to large volumes of digital data for several decades. Existing systems supporting these requirements span from simple databases at libraries to complex multi-tier software environments developed by scientific communities. All communities see an increasing volume of data that must be stored efficiently and economically, which today usually means a combination of storage on disk and tape. Development and integration of components that enable secure and reliable archival storage and make use of existing computer centre infrastructures is a long-standing goal in LSDMA. The project brings together diverse communities and functions as a pivot for generic solutions. At the same time, requirements have been collected to support long-term access to data for multiple scientific domains and international projects. The material was used in accompanying projects to implement an infrastructure for long-term storage, develop easy access to archives and enable new user groups.
Performance analysis
Managing and analysing large amounts of data requires high performance storage systems that can keep up with the applications' I/O demands. Additionally, energy efficiency plays an important role as storage systems are often responsible for a significant part of the total cost of ownership. Within the Performance and Power Optimization work package of the Data Services Integration Team, we have focused on both of these aspects. Based on demands observed in real systems and applications, we have developed tools and solutions to improve both performance [28] and cost efficiency [29,30].
Data Intensive Computing (DIC)
Besides data management, the analysis of research data is another important aspect covered by LSDMA. The overarching goal of all data efforts must be to start managing the scientist's data right after the data have left the acquisition device, as this is the only way to capture gapless provenance information. This is indispensable for transparent and reproducible science. However, it also means that the research data are at the very beginning of their life cycle. In order to obtain publishable results, the data often have to go through several processing steps until the final results are available [31]. These steps can range from basic scripts to complex scientific workflows consisting of several dependent processing steps. In order to achieve the aforementioned reproducibility, capturing the data provenance, i.e. what happened to the data and which results it led to, should be an essential part of every step.
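Per-step provenance capture can be sketched as a thin wrapper around each processing function: record the step name, its parameters and a checksum of its output, so the chain from acquisition to result can later be audited or replayed. The step names and the toy pipeline are hypothetical; real systems would serialize to a standard such as ProvONE:

```python
import hashlib
import json

# Minimal provenance capture: every processing step is run through a
# wrapper that records what was done, with which parameters, and a
# checksum of what came out.
provenance = []

def step(name, func, data, **params):
    out = func(data, **params)
    provenance.append({
        "step": name,
        "params": params,
        "out_sha1": hashlib.sha1(json.dumps(out).encode()).hexdigest(),
    })
    return out

raw = [4, 1, 3]
cleaned = step("drop-below", lambda d, limit: [x for x in d if x >= limit],
               raw, limit=2)
result = step("sum", lambda d: sum(d), cleaned)

print(result)                            # 7
print([p["step"] for p in provenance])   # ['drop-below', 'sum']
```

Because the record is produced as a side effect of running the pipeline, not as separate documentation, there is no gap between what was done and what is reported, which is the point made above about gapless provenance.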
Conclusions
After five years it is fair to claim that LSDMA has advanced selected scientific communities in their data management and analysis. Some selected highlights are presented and referenced in this note. From our experience in the project, we draw the following conclusions on the lessons we learned. The variety of topics presented here already shows that the needs of communities vary immensely, even within research areas. The communities' interest in new tools and methods is fueled by need and by new research potential. The automation of procedures and workflows may still boost a scientist's efficiency in performing research significantly. One important contribution to interoperability is the design of an Authentication and Authorization Infrastructure; this work continues in the context of EU projects. Policies (e.g. open data) and legal regulations (e.g. data privacy) are additional challenges. The clear separation of domain-specific methods and generic data methods was not always obvious, as certain communities are the main drivers for generic methods or tools. However, the success of various community projects and the highlights presented here justify the dual approach with DLCLs and the DSIT.
|
v3-fos-license
|
2016-05-04T20:20:58.661Z
|
2014-09-22T00:00:00.000
|
2394895
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcvetres.biomedcentral.com/track/pdf/10.1186/s12917-014-0216-5",
"pdf_hash": "1f3fbd12952ebcb61208a1413b00521716d15e0e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1231",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "a95873b6b138584a296d5ac3943da8386dd93de4",
"year": 2014
}
|
pes2o/s2orc
|
Validation of a magnetic resonance imaging guided stereotactic access to the ovine brainstem
Background Anatomical differences between humans and domestic mammals preclude the use of reported stereotactic approaches to the brainstem in animals. In animals, brainstem biopsies are required both for histopathological diagnosis of neurological disorders and for research purposes. Sheep are used as a translational model for various types of brain disease and therefore a species-specific approach needs to be developed. The aim of the present study was to establish a minimally invasive, accurate and reproducible stereotactic approach to the brainstem of sheep, using the magnetic resonance imaging guided Brainsight™ frameless stereotactic system. Results A transoccipital transcerebellar approach with an entry point in the occipital bone above the vermis between the transverse sinus and the external occipital protuberance was chosen. This approach provided access to the target site in all heads. The overall mean needle placement error was 1.85 ± 1.22 mm. Conclusions The developed transoccipital transcerebellar route is short, provides accurate access to the ovine caudal cranial fossa and is a promising approach to be further assessed in live animals.
In animals, brainstem biopsies are required both for histopathological diagnosis of neurological disorders and for research purposes. Sheep are used as a translational model for various types of brain diseases in humans [25][26][27] and brainstem biopsies are necessary to investigate the neuropathogenesis of listeric rhombencephalitis, the most frequent central nervous system disease of ruminants. However, the anatomical differences between species do not allow methods developed in humans to be transferred directly to the sheep.
The tetrapod gait of domestic mammals entails a horizontal brain axis where the brainstem is not situated underneath, but caudal to the forebrain [28], which precludes the use of a transfrontal route to the brainstem ( Figure 1). Furthermore, the frontal sinuses of most domestic mammals are larger than in humans and cover a greater portion of the rostral brain surface [29]. Traversing the frontal sinuses is inadvisable [30] because of increased morbidity due to intrasinusoidal bleeding/epistaxis [31], wound infections [32] or subcutaneous emphysema [33]. It also prolongs the trajectory and poses the risk of instrument deviation [34] at the compact bone [23] between the sinus and the dura where the needle cannot be controlled visually.
The intraparenchymal transcerebellar approach to the brainstem is inadequate in domestic mammals because of the lateral position of the cerebellar peduncles [28]. Moreover, the caudal contour of the cerebellum is covered by the squama occipitalis, and the attachment of nuchal muscles precludes a suboccipital transcerebellar approach ( Figure 1).
Consequently, stereotactic approaches to the brainstem used in human medicine cannot be applied to sheep. In veterinary medicine, brain targets located within the caudal cranial fossa were rarely addressed stereotactically [35,36], but the employed approach was not mentioned. In the present study an applicable transoccipital transcerebellar magnetic resonance imaging guided stereotactic approach to the brainstem of sheep is described and its target accuracy was determined using the modified Brainsight™ stereotactic system.
Results
The transoccipital transcerebellar approach with its entry point in the occipital bone above the vermis between the transverse sinus and the external occipital protuberance allowed access to the target site in all of the eighteen cadaver heads (Figures 2, 3). Attachment of the fiducial marker post, acquisition of both sets of magnetic resonance images (MRI), planning of the trajectory and establishment of the target coordinates and the stereotactic injection of the contrast medium took 30, 40, 30 and 45 minutes, respectively.
The mean needle placement error for the midbrain (n = 6), pons (n = 6) and obex (n = 6) targets was 1.77 ± 1.47, 2.48 ± 1.16 and 1.28 ± 0.83 mm, respectively. The overall mean needle placement error for all target sites (n = 18) was 1.85 ± 1.22 mm. The mean target depth for the midbrain, the pons and the obex targets was 36.9 ± 2.36, 33.18 ± 0.82 and 29.6 ± 1.50 mm, respectively. There was no statistically significant relationship between needle placement error and target depth (P = 0.28).
Macroscopic evaluation revealed that toluidine stains were visible in the region of sixteen of the eighteen targeted sites. In two brains, a blue-stained margin was observed at the edge of the obex. The toluidine stain was located at the target site in thirteen sheep heads, whereas the dot was displaced in three brainstems: it was found rostral to one target in the left midbrain, medial to one target in the right pons and lateroventral to one target in the left pons.
Discussion
The stereotactic transoccipital transcerebellar approach to the ovine brainstem used in this study was applicable in all the heads examined. The overall mean needle placement error in the brainstem of 1.85 ± 1.22 mm is comparable to previous results obtained with the magnetic resonance imaging-guided Brainsight TM stereotactic system for targets in the canine rostral and middle cranial fossa (mean needle placement error of 1.79 ± 0.87 mm) [36]. Reported mean needle placement errors of CT-guided stereotactic systems used in veterinary medicine are larger for targets within the rostral, middle and/or caudal cranial fossa [30,31,37], and only slightly smaller (1.7 ± 1.6 mm) for targets exclusively located in the rostral cranial fossa [33].
The error obtained was also smaller than the reported in vivo accuracy of frameless MRI-guided stereotactic brainstem biopsy sampling in human medicine [7,38]. Frameless systems replace the stereotactic frame with a method of registration that relies on anatomic landmarks (such as the nose, eyes and ears) or artificial markers, called 'fiducials'. The latter are attached to the patient's head before the brain scan, and a three-dimensional digitizer matches them to the corresponding points in the image. Frameless systems provide a wide range of motion for the instrument guidance arm [39].
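The marker-matching step described above can be illustrated with a standard least-squares rigid alignment (the Kabsch/SVD method). This is only a sketch of the general technique, not the algorithm actually used by the Brainsight TM software, and all marker coordinates below are hypothetical.

```python
import numpy as np

def register_fiducials(image_pts, subject_pts):
    """Least-squares rigid transform (R, t) mapping image_pts onto subject_pts
    via the Kabsch/SVD method."""
    ci, cs = image_pts.mean(axis=0), subject_pts.mean(axis=0)
    H = (image_pts - ci).T @ (subject_pts - cs)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cs - R @ ci
    return R, t

# Five hypothetical fiducial markers (mm), as in a five-marker array hub.
img = np.array([[0., 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [10, 10, 5]])

# The same markers seen in "subject" space: an arbitrary rotation + translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
t_true = np.array([5., -3., 2.])
subj = img @ R_true.T + t_true

R, t = register_fiducials(img, subj)
# Fiducial registration error: mean distance between mapped and measured markers.
fre = np.linalg.norm((img @ R.T + t) - subj, axis=1).mean()
print(round(fre, 6))  # 0.0 for these noise-free markers
```

With real marker localizations the residual (the fiducial registration error) is non-zero, which is one reason the monitoring of the bone-implanted markers described in the Methods matters.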
We consider the transcerebellar approach, with its entry point in the occipital bone above the vermis between the transverse sinus and the external occipital protuberance, to be the only approach that can be safely applied to the whole ovine brainstem (Figure 1). The brain surface through which a trajectory to the brainstem can be placed is confined rostroventrally by the large frontal sinuses [30][31][32][33]. As in people, the trajectory should avoid the membranous tentorium of the cerebellum [4]. Therefore, the transfrontal approach, which permits access to all divisions of the brainstem [4,9,16], has limited value in domestic mammals, and the transcerebellar route is more promising. However, a transcerebellar trajectory cannot enter through the squama occipitalis, the part of the occipital bone situated caudal to the cerebellum, because the required shallow angle between bone and drill bit would result in slippage of the drill bit [36]. Furthermore, the occipital squama of sheep is covered by considerable muscle mass, whose dissection is known to cause massive postoperative wound pain in people [7,11,40]. Consequently, the entry point has to be placed rostral to, or on, the external occipital protuberance (Figures 1 and 3).
Attention must be paid to the vasculature, notably the transverse sinus [7,12,18,19,41], as well as the dorsal cerebellar veins and dorsal rami of the caudal cerebellar arteries between the vermis and the cerebellar hemispheres [28], which limit the area through which the caudal cranial fossa can be accessed cranially and laterally, respectively.
Usually, the suboccipital transcerebellar trajectory to the brainstem in people is placed through one of the middle cerebellar peduncles [13,16,17,19,20,23,24,[40][41][42][43]]. This approach, however, is not feasible in domestic mammals because of the more lateral position of these structures [28]. Therefore, an access route through the zone of greatest contiguity between the brainstem and the cerebellum, which spares the fourth ventricle [7,44] and is used as an alternative approach in people [18,45], was applied in the present study (Figure 4).
Using this approach, there is no lateral restriction to the described trajectory. Another important advantage is the shortness of the trajectory [12,13,16,17,19,21,23,40], which results in minimized tissue trauma as well as increased accuracy [17,46].
Disadvantages of the human suboccipital transcerebellar approach compared to the transfrontal approach, such as the need for general anesthesia [9], prone positioning [23,40] and considerable muscle dissection [7,11,40], are irrelevant in veterinary medicine: all patients need to be subjected to general anesthesia, thus rendering participation in an intraoperative neurological examination impossible [9,11]. The quadruped anatomy suggests prone positioning during the MRI examination and the interventional procedure, which is a comfortable operating position for the surgeon [19] and complies with the physiological patient posture, so that brain shift is a minor concern [23]. In this cadaver head study, loss of cerebrospinal fluid (CSF) and of elasticity of the brain parenchyma, as well as loss of continuity of the brainstem with the spinal cord, may have induced brain shift [23,40,47]. On the other hand, the lack of parenchymal excursions in temporal synchrony with systole [47] could have led to underestimation of the targeting error, but this impact can only be assessed in a clinical setting.
Other causes of needle placement error might also have occurred during target registration, fiducial registration, and target positioning [48]. In order to minimize these errors, rigid head fixation was ensured and the registration was checked after each step that could have caused slippage of the head in the clamp or movement of the free guide arm [46]. In accordance with published recommendations, the fiducial markers implanted into the bone were carefully monitored throughout the whole procedure by the same previously trained person [32,34,46].
As judged from macroscopic evaluation, the toluidine dot was displaced in three specimens. In all three, the actual needle placement error was higher than the mean needle placement error (4.35 mm, 3.15 mm and 2.01 mm). Nevertheless, in some specimens with a large needle placement error (4 mm, 3.23 mm), the toluidine stain was not judged to be off target macroscopically, possibly because an error with a large deviation in a single plane is more striking than an error with a small deviation in all three planes. In the two brains in which no toluidine stain but a blue margin of contrast medium at the edge of the obex was detected, the contrast medium presumably leaked when the injection needle pierced the meninges, a known drawback of stereotactic drug delivery [49,50] that is probably of minor importance in stereotactic brain biopsy sampling.

Access to the rostral mesencephalon is limited by the tentorium and the external occipital protuberance, which prohibits more caudal tilting of the trajectory. Therefore, in our experience, the crura cerebri are the most rostral area that can be accessed through the cerebellum. The caudal restriction of the transcerebellar approach is given by the caudal border of the cerebellum, making the obex the most caudal area that can be reached. Additionally, the transverse sinus prevents further rostral tilting of the trajectory.
Consequently, the aforementioned transoccipital transcerebellar approach gives access to all structures within the ovine caudal cranial fossa including the cerebellum, cerebellar peduncles and lateral regions of the brainstem.
Intracranial hemorrhage and postoperative neurological deficits are a major concern in human stereotactic brainstem biopsy [13,18,43,45] and could also arise in sheep. Although the herein developed approach to the ovine brainstem avoids the transverse sinus, the dorsal cerebellar veins, the dorsal rami of the caudal cerebellar arteries between the vermis and the cerebellar hemispheres and cranial nerves within the brainstem, the risk of bleeding from smaller vessels as well as the occurrence of neurological deficits remains to be systematically assessed in living sheep.
In contrast to sheep, the canine skull has more prominent bony crests in the occipital area. Depending on breed and size, these may prohibit an entry point in the midline of the occipital bone. However, the anatomy of the ovine, canine and feline head is otherwise similar, so that the approach to the brainstem described here can theoretically be translated to dogs and cats [29]. The authors have already employed a transoccipital transcerebellar approach in canine cadavers.
Conclusions
This study demonstrated that a stereotactic transoccipital transcerebellar approach is suitable for accessing targets along the whole axis of the ovine brainstem with good accuracy; the approach is currently being used to sample the brainstem in live sheep. Associated complication rates and the applicability of the approach in other species can now be assessed in further studies.
Methods
Eighteen one-year-old healthy sheep of different breeds (Swiss White Alpine Sheep (n = 3), Black-headed Mutton (n = 3) and Brown-headed Mutton (n = 12)) were slaughtered in the context of food production. The study was performed in agreement with the local ethics regulations (Swiss Veterinary Service, Office of Agriculture and Nature (LANAT)). The heads were disarticulated at the atlantooccipital junction, stored at +6 °C upon arrival and used within 48 hours. Mean weight of the heads was 2.96 ± 0.91 kg (1.92-4.62 kg). The stereotactic procedures were performed using the Brainsight TM frameless stereotactic system (Rogue Research Inc., Montreal, Canada) [36] with a modified bone-implanted fiducial marker system [46,48,51,52]. After clipping of the coat and placement of the sheep head in the C-clamp, the fiducial marker post was fixed on the frontal bones caudal to one of the zygomatic processes to ensure that the registration markers were in different planes and did not obstruct the surgical site. Hence, the distance from the centroid of all fiducial markers to the planned target was small [46,48,52] and suitable bone thickness for post fixation was ensured. The implant post was attached with at least three 8 mm ceramic screws. A fiducial array hub with five fiducial markers was screwed onto the post.
The cadaver head was scanned in a 1.0 Tesla MRI system (Philips Panorama HFO, Philips System, Best, The Netherlands) in prone position using a head coil. A T1-weighted gradient echo 3D sequence was acquired using the following parameters: TR = 25 ms, TE = 6.9 ms, flip angle = 30°, number of signal averages = 2, slice thickness = 1.8 mm without interslice gap. The field of view was adjusted according to skull size and position of the fiducial markers.
Three different target sites within the brainstem were defined: the ventral part of the midbrain directly adjacent to the pons, the pons at the level of the emergence of the facial nerve, and the obex, each on either the right or the left side (Figure 2). This resulted in three cadaver heads per target site. For each target, the coordinates (X, Y, Z) were read out using the Brainsight TM neuronavigation software. The trajectory for all targets was planned via a transoccipital transcerebellar access (Figures 3 and 4).
Following acquisition of the MRI, the sheep head with the fiducial marker system was again fixed in the surgical head clamp in prone position using four skull screws. The open side of the surgical C-clamp was directed caudally. The two skull screws at the caudal end of the clamp were placed in the temporal fossa. The rostral skull screws made contact with the nasal bone.
The subject-to-image registration was performed [36] with the Polaris® optical position sensor placed rostral to the cadaver head, and was checked directly before and after drilling and after removal of the manual ruler guide from the instrument sleeve. The neuronavigation pointer was inserted into the instrument sleeve of the articulated arm. The entry point was kept as perpendicular to the skull surface as possible to prevent slippage of the drill bit. Tight locking of the articulated arm [46,52], correct use of the stabilization pin and good contact between the drill guide tube and the skull [46] were ensured. Furthermore, the instrument receptacle of the articulated arm was fixed manually during the drilling of a 5 mm burr hole to prevent residual movement of the assembly.
The neuronavigation pointer was inserted in the instrument sleeve and lowered down to a zeroing platform. The distance from the zeroing platform to the target was determined by the software [36]. The manual ruler guide, with a 10 μl syringe and an attached 26-gauge, 6-inch needle (Hamilton Company, Reno NV/Bonaduz, Switzerland) filled with the contrast solution and straightened by a guiding cannula, was then placed in the instrument sleeve of the articulated arm. The needle was lowered manually to the zeroing platform, the manual ruler guide was set to zero and, after removal of the platform, the needle was lowered to the target.
Ten minutes after the needle reached the predetermined target depth, 0.5 μl of the contrast solution was injected. This contrast solution was made by adding 0.4 ml of gadodiamide (Omniscan®, GE Healthcare Inc., Glattbrugg, Switzerland) and 0.05 g of toluidine blue to 100 ml of 0.9% NaCl. The needle was kept in place for 5 min after the injection to prevent contrast medium leakage along the needle track. Immediately following the contrast injection, a second MRI study was performed using the same parameters as before. These images were uploaded and saved to the neuronavigation computer later on [36].
Thereafter, the brain was removed and fixed in formalin for eight weeks. Subsequently, all brains were sliced and the contrast stains were judged to be present or absent by a neuropathologist (A.O.). The position of the toluidine dot was assessed subjectively as being at or off the targeted site (Figure 5).
To determine the mean needle placement error, the postoperative MR images were registered to the preoperative ones using the Image Feature of the Brainsight TM neuronavigation software. The preoperative MRI served as the reference, so that a common coordinate space was used between the preoperative and postoperative MRI. The planned injection site represented target A and the center of the gadodiamide deposition target A′ [36]. The coordinates of A′ (X′, Y′, Z′) were read off. In two cases, the contrast bloom was not clearly visible but a gas bubble was depicted at the tip of the needle track on the MRI; the center of these gas bubbles was taken as the target. The precision of the system in bringing the needle to the target (needle placement error) was calculated for each target site using the formula: Error = √[(X − X′)² + (Y − Y′)² + (Z − Z′)²]. The mean needle placement error and standard deviation (SD) were then assessed based on all target sites. Linear regression was used to evaluate the relationship between needle placement error and target depth in the brain. A P value < 0.05 was considered significant [36].
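The accuracy calculation described above can be sketched in a few lines. The coordinate values below are hypothetical; only the formulas follow the text (the Euclidean error, its mean ± SD, and an ordinary-least-squares slope of error on target depth).

```python
import math
from statistics import mean, stdev

def placement_error(a, a_prime):
    """Error = sqrt((X - X')^2 + (Y - Y')^2 + (Z - Z')^2), in mm."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, a_prime)))

# Hypothetical planned (A) and achieved (A') coordinates for three injections (mm).
pairs = [((12.0, -4.5, 30.2), (13.1, -3.9, 31.0)),
         ((-8.3,  2.1, 35.6), (-8.0,  2.5, 34.9)),
         (( 0.0,  6.7, 29.4), ( 1.2,  6.1, 29.9))]
errors = [placement_error(a, ap) for a, ap in pairs]
print(f"mean needle placement error = {mean(errors):.2f} ± {stdev(errors):.2f} mm")

# Ordinary-least-squares slope of error on target depth (here taken as the
# planned Z coordinate), the relationship the text tests with linear regression.
depths = [a[2] for a, _ in pairs]
mx, my = mean(depths), mean(errors)
slope = (sum((x - mx) * (y - my) for x, y in zip(depths, errors))
         / sum((x - mx) ** 2 for x in depths))
print(f"slope = {slope:.3f} mm of error per mm of depth")
```

Assessing the significance of the slope (the reported P = 0.28) would additionally require the standard error of the slope and a t-test, which a statistics package normally supplies.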
Fungal networks shape dynamics of bacterial dispersal and community assembly in cheese rind microbiomes
Most studies of bacterial motility have examined small-scale (micrometer–centimeter) cell dispersal in monocultures. However, bacteria live in multispecies communities, where interactions with other microbes may inhibit or facilitate dispersal. Here, we demonstrate that motile bacteria in cheese rind microbiomes use physical networks created by filamentous fungi for dispersal, and that these interactions can shape microbial community structure. Serratia proteamaculans and other motile cheese rind bacteria disperse on fungal networks by swimming in the liquid layers formed on fungal hyphae. RNA-sequencing, transposon mutagenesis, and comparative genomics identify potential genetic mechanisms, including flagella-mediated motility, that control bacterial dispersal on hyphae. By manipulating fungal networks in experimental communities, we demonstrate that fungal-mediated bacterial dispersal can shift cheese rind microbiome composition by promoting the growth of motile over non-motile community members. Our single-cell to whole-community systems approach highlights the interactive dynamics of bacterial motility in multispecies microbiomes.
Multispecies microbial communities (microbiomes) play key roles in agricultural productivity, human health, and ecosystem services [1][2][3], but our understanding of the ecological processes and mechanisms that structure the diversity of microbiomes is still in its infancy [4][5][6]. Small-scale (micrometer-centimeter) dispersal of bacterial cells is one key ecological process that may impact the dynamics of microbial community assembly. After a propagule (cell, spore, etc.) of a microbial species colonizes a potential habitat, the ability to grow and rapidly spread may determine both the distribution and functions of that particular species within the community.
Many bacteria use active motility, via extracellular appendages or secreted metabolites, to disperse over small spatial scales up or down gradients of resources or attractants 7,8 . A significant body of work from just a few model bacterial species has determined the genetic and biophysical mechanisms of active bacterial dispersal, including swimming, swarming, gliding, twitching, and sliding 8,9 . Almost all of these studies have used monocultures of bacteria in highly simplified laboratory environments to dissect modes and mechanisms of bacterial motility. How these bacterial motility mechanisms, discovered in highly idealized laboratory systems, translate to complex multispecies microbiomes where microbes interact is largely unknown.
Changes in the abiotic and biotic environment, due to interactions with neighboring microbial species, have the potential to alter modes and mechanisms of bacterial motility and subsequent dispersal dynamics. Metabolites secreted into the environment by neighboring species may act as chemoattractants that can direct cell movement 10,11 or alter quorum sensing 10 . Modification of the physical environment by neighboring microbes could also impact cell dispersal. Solid surfaces can exert forces on swimming cells and guide them over long distances 12,13 . These same forces facilitate interactions between cells in biofilms, which can result in collective cell motility and dispersal 14,15 .
One potentially widespread interaction that may shape the dynamics of bacterial cell dispersal is the migration of bacterial cells on fungal hyphae 16,17 . Multicellular filamentous fungi form mycelial networks that enable bacteria to migrate across simplified environments 16,18 . The specific biological and physical mechanisms underlying these interactions are not fully understood, but fungi likely maintain microenvironments that allow motile bacteria to swim and/or swarm in otherwise dry conditions 19 . Most previous studies characterizing fungal-mediated bacterial dispersal relied on artificial combinations of bacteria and fungi with unknown natural histories and limited ecological contexts 16,20,21 . The taxonomic breadth of bacteria that can disperse on fungal networks is also poorly characterized because prior work has largely focused on a limited number of bacteria and fungi in soil systems 16,19,[21][22][23] . Moreover, the potential contribution of these strong, pairwise bacterial-fungal interactions to the assembly of microbiomes has not been tested. Fungal networks may shape the composition of bacterial communities by promoting the dispersal and growth of motile bacteria over nonmotile community members.
Cheese rind biofilms are an ideal system for exploring the mechanisms and consequences of fungal-mediated bacterial dispersal in multispecies microbiomes. Rind biofilms form on the surfaces of cheeses that are aged in caves around the world, and several different genera of filamentous fungi commonly co-occur with motile Proteobacteria in cheese rinds 24,25,26,27 . Ecological dynamics in cheese rinds are easy to dissect due to the limited diversity of these microbiomes and the ability to culture most bacterial and fungal species that grow in these communities 27 . Previous work has demonstrated that strong bacterial-fungal interactions occur in cheese rinds [27][28][29] , but mechanisms underlying these interactions are largely unknown.
Here we report the patterns, mechanisms, and consequences of bacterial dispersal on fungal networks in cheese rind microbiomes.
We focus on one common cheese rind bacterium, Serratia proteamaculans, to characterize the mechanisms of bacterial dispersal on different fungal networks. We then place these pairwise interactions in an ecological context and quantify how fungal networks can shape the composition of multispecies cheese rind communities through dispersal facilitation. Our work highlights the ability of diverse cheese Proteobacteria to disperse on fungal networks and how fungal-mediated bacterial dispersal can promote the growth of motile bacteria over non-motile community members.
Results
S. proteamaculans disperses on cheese rind fungal networks. During a culture-based survey of cheese rinds, we observed unusual streams of bacterial cells of the bacterium S. proteamaculans (strain BW106; hereafter Serratia) on hyphae of the filamentous fungus Mucor lanceolatus (strain SN1; hereafter Mucor) (Fig. 1, Supplementary Movie 1). These growth patterns suggested that Serratia used Mucor networks to disperse, possibly through the use of active motility mechanisms. To experimentally characterize this interaction, we first quantified bacterial dispersal using a co-spotting assay on standard lab media (brain heart infusion agar or BHI agar) with three different fungal networks: Mucor, Galactomyces geotrichum (hereafter Galactomyces), and a Penicillium strain closely related to P. commune (hereafter Penicillium) (Supplementary Table 1). All three fungi were isolated from cheese rinds. We chose these three fungi because they are the dominant fungi in natural and washed rind cheeses 27,30, and they represent three different types of fungal networks: Mucor is a fast-growing fungus with diffuse network growth 31,32, Galactomyces is also a fast-growing fungus but forms a dense network 32,33, and Penicillium is slow-growing and forms very dense fungal networks 32,34. The cells of Serratia were co-spotted on BHI agar (1.5% agar) with each of these fungi or without a fungus ("No network", Fig. 2a). Fungal networks grew out from the co-spot, and Serratia was able to spread on the networks. After 14 days of incubation, the horizontal dispersal distance of the bacterial colony from co-spot center to colony edge was quantified using a bacterial transfer approach (see Methods). Serratia rapidly spread on networks of both Mucor and Galactomyces, with a 173% and 179% increase, respectively, in dispersal distance across the agar surface compared to Serratia without a fungal network (Fig. 2b).
In contrast, Penicillium networks provided limited dispersal facilitation of Serratia, with only a 23% increase in dispersal. The strong dispersal facilitation of Serratia was not limited to the environment on BHI agar; Serratia spread on Mucor networks on a range of media types, including cheese curd agar (CCA) (Supplementary Fig. 1). To confirm that this dispersal trait was not unique to our cheese strain, we quantified the ability of closely related Serratia strains and species from other environments (Supplementary Table 1) to spread on networks of Mucor. Most Serratia isolates showed substantial dispersal facilitation on Mucor networks, ranging from 145 to 175% increases in dispersal distance (Supplementary Fig. 2). Limited dispersal in a few isolates (e.g., S. proteamaculans strain B-41156, with only a 41% increase in dispersal distance) indicates natural variation in the ability of Serratia species to disperse on fungal networks.
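The dispersal-facilitation percentages reported above follow the usual percent-increase formula. A minimal sketch, in which the 10 mm control distance is hypothetical and chosen so that the Mucor value reproduces the reported 173% increase:

```python
def pct_increase(with_network_mm, control_mm):
    """Percent increase in dispersal distance relative to the no-network control."""
    return (with_network_mm - control_mm) / control_mm * 100

# The 10 mm control distance is hypothetical; 27.3 mm on a Mucor network would
# then reproduce the 173% increase reported in the text.
print(round(pct_increase(27.3, 10.0)))  # 173
```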
Hyphae of filamentous fungi, including Mucor species, grow from the tip 35 , and bacterial dispersal on fungal hyphae could be a result of passive dispersal when bacteria are pushed horizontally across surfaces by the growing fungi. To determine if Serratia spreads using active motility or passive dispersal by the fungus, a synthetic glass fiber network was created on top of a 10-µL spot of Serratia cells on BHI agar. These glass fibers are comparable in diameter (8 µm) to the hyphae of Mucor (10-25 µm) and provided a similar physical network for the movement of motile Serratia cells. After a week of growth, Serratia spread out from the initial spot and followed the topology of the synthetic network ( Fig. 2c), suggesting that active motility drives the movement of this bacterium across physical networks.
In nature, bacterial cells may initially land in a microenvironment where an existing fungal mycelium is available for colonization. To determine if Serratia could disperse across established (static) fungal networks, we transformed S. proteamaculans BW106 with a green fluorescent protein (GFP)-producing plasmid to allow for real-time, non-destructive tracking of its spread across Mucor, Galactomyces, and Penicillium networks. Unlike the assays above, which track only linear dispersal distance, this approach allowed us to track the network area covered by Serratia. As with the co-spot assays above, Serratia rapidly spread across existing networks of Mucor, covering an average of 64 cm² within 48 h, compared to 1 and 0.6 cm² on Galactomyces and Penicillium networks, respectively (Fig. 2d, e). The limited dispersal on static networks of Galactomyces contrasts with the high dispersal facilitation observed on actively growing Galactomyces networks (Fig. 2b). This discrepancy suggests that passive dispersal resulting from bacterial cells being pushed or dragged during fungal growth may contribute to dispersal on Galactomyces networks, while dispersal on Mucor networks is largely due to active motility processes.
The substantial increase in dispersal distance of Serratia on Mucor networks may provide a significant benefit by allowing it to colonize unoccupied niches. However, changes in dispersal distance across a surface may not completely reflect the total impact on Serratia growth, as increased dispersal may actually decrease cell density. Moreover, bacterial dispersal data do not capture impacts on the fungal host. To measure the growth of both interacting partners, we determined the total colony-forming units (CFUs) of both Serratia and Mucor at 7 and 14 days of growth alone and in co-culture. As predicted from the dispersal experiments, Mucor networks have a strong positive effect on Serratia growth at day 7, but this effect diminishes and results in growth inhibition at day 14 (Fig. 2f). The dispersal of Serratia on Mucor networks negatively impacts fungal growth at both day 7 and day 14. Surprisingly, unlike most other examples where Serratia species almost entirely eradicate the fungi with which they interact [36][37][38], Mucor is only partially inhibited by the bacterium and not completely killed (Supplementary Fig. 3).
Studies of bacterial dispersal on fungi have largely focused on macroscopic patterns (millimeter-centimeter scale) of bacterial and fungal hyphae growth 16,21,36 . But the initial phases of colonization occur at the micron scale and the relevant biophysical interactions that regulate bacterial dispersal on hyphae at these scales are poorly characterized. To identify how Serratia cells interact with Mucor networks at a microscopic scale, we used time-lapse microscopy of the two microbes co-cultured on a thin layer of BHI agar (≈0.7 mm in height). This approach revealed that interactions are initiated between individual cells of Serratia and a liquid layer surrounding Mucor hyphae (Fig. 2g). After imaging numerous bacterial-fungal contacts, we were able to consistently observe three phases of interaction initiation. First, the bacterial colony comes into physical contact with the liquid layer that surrounds the fungal hyphae as the hyphae grow near bacterial colonies. Next, pioneer cells at the edge of the bacterial colony rapidly transition from a stationary state to a motile state and swim in the liquid layer around the fungal hyphae. These pioneer cells swim along the fungal hyphae until they encounter physical barriers or reach an intersection and move to other hyphae. In the final phase, a mass of swimming cells colonizes the fungal hyphae. These interaction phases were commonly observed across replicate Serratia-Mucor contacts (Supplementary Fig. 4; Supplementary Movies 2 and 3).
The rapid cellular switch from a static colony state to a motile swimming state demonstrates that Serratia cells can quickly change behavior in the presence of the liquid layer around fungal networks. Collectively, these experiments and observations characterizing Serratia-fungal interactions from cheese rinds demonstrate that Serratia species can rapidly disperse on fungal networks via active bacterial motility mechanisms. We next sought to identify the mechanisms controlling these interactions and their ecological consequences for the assembly of these communities.
Mechanisms driving Serratia dispersal on Mucor networks. To determine the potential genetic mechanisms that drive fungal-mediated dispersal facilitation of bacterial cells, we used three complementary approaches: (1) transcriptome sequencing (RNA-seq) to identify genes that are differentially expressed across the Serratia genome when grown on fungal networks, (2) transposon mutagenesis to identify genes that are essential for the dispersal phenotype described above, and (3) comparative genomics of different Serratia strains with variable dispersal abilities on Mucor networks. Given the rapid change from a stationary to a motile cell population observed above, we predicted that genes controlling quorum sensing, flagellar biosynthesis, and other motility-related processes would be differentially expressed when Serratia was co-cultured with Mucor and would be essential for these interactions.
The presence of the fungal networks caused a shift in global gene expression of Serratia (Fig. 3a; Supplementary Data 1), with 108 genes showing significantly decreased expression and 41 genes showing increased expression when Serratia was grown with Mucor networks. Surprisingly, the most differentially expressed genes were related to metabolic processes and other functions, not motility or quorum sensing (Fig. 3b). Of the 67 genes with decreased expression that had predicted functional annotations, almost half (33 genes) were predicted phage proteins. Significantly lower expression of genes associated with carbohydrate metabolism (8), amino acids and derivatives (4), and membrane transport (8), as well as genes associated with folate and biotin metabolism (4), suggests that growth on Mucor networks alters the supply of nutrients and vitamins available to Serratia. Surprisingly, we also detected downregulation of many chitinase genes. Homologs of these genes have been associated with antifungal properties in other Serratia species 39,40, and downregulation of chitinases may partly explain why Serratia does not completely kill Mucor. Many of the genes with increased expression levels were associated with carbohydrate catabolism (Fig. 3b), again suggesting that growth on fungal networks alters the metabolism of Serratia.
A single time point of RNA-seq data cannot capture dynamic transcriptional responses that may occur during the different stages of microbial interactions. However, the overall pattern of limited differential expression of S. proteamaculans BW106 when grown with Mucor aligns with a previous RNA-seq study of the bacterium Serratia plymuthica grown in the presence of the fungus Rhizoctonia solani 41 . In that study, only 38 genes were differentially expressed, similar to the magnitude of differentially expressed genes observed in our study.
Transposon mutagenesis provides a complementary approach to RNA-seq by identifying specific genes necessary for dispersal on fungal hyphae. We used a Tn5 transposon mutagenesis system 42 to generate Serratia mutants that were then screened on arrays of Mucor networks (Fig. 3c). Using this approach, we initially identified 59 mutants that demonstrated altered colony appearance or dispersal phenotypes on Mucor networks, ranging from complete lack of dispersal on fungal hyphae to killing of the fungal host (Fig. 3d). These 59 mutants were further re-screened for fungal-mediated dispersal using the co-spotting assay described above, and six mutants with distinct phenotypes were selected for whole-genome sequencing to identify transposon insertion sites.
In our dispersal assay, the most striking mutant was Tn5_13, which was entirely dispersal deficient on both Mucor networks (Fig. 3d) and low-agar medium (Supplementary Fig. 5), suggesting a loss of motility. Using whole-genome sequencing, we discovered that the Tn5 transposon had disrupted the fliS gene in mutant Tn5_13 (Supplementary Fig. 6). FliS has not been well-characterized in Serratia species, but in other bacteria, FliS is a flagellin-specific chaperone that coordinates export of flagellin from the cell 43 . Disruption of this key regulator of flagellin biosynthesis leads to the production of short flagella and loss of motility in Bacillus subtilis and Salmonella typhimurium 44,45 and may play similar roles in Serratia. Over 40 genes are predicted to be involved with flagellar biosynthesis and regulation in the S. proteamaculans BW106 genome. Screening 6886 mutants provided ≈1× coverage of the predicted genes in the 5.6 Mb genome of S. proteamaculans BW106. More subtle loss-of-function flagellar mutants may have been difficult to identify using our macroscopic phenotypic approach. Despite this limitation, our screen supports previous targeted knockout studies and confirms a key role of flagella in fungal-mediated bacterial dispersal 18,36,46 .
Other mutants were motile and did not display the same striking loss of dispersal on fungal networks as Tn5_13 (Supplementary Fig. 5), but they did display altered interaction outcomes or dispersal phenotypes (Supplementary Fig. 6) that provided further insights into other genes that may impact the outcomes of Serratia dispersal on Mucor networks. Surprisingly, mutants Tn5_11 and Tn5_54 completely killed Mucor, and thus Serratia did not have a fungal network present to facilitate dispersal across the agar surface (Fig. 3e, Supplementary Fig. 6). The Tn5 transposon inserted into a predicted ADP-heptose synthase in Tn5_11 and into a predicted ferric-binding periplasmic protein in the enterobactin operon in mutant Tn5_54. Why transposon insertions in these two genes caused Serratia to kill Mucor is unclear, but disruption of metabolic pathways associated with these gene products may have resulted in the accumulation of metabolites with antifungal activity.
Three other mutants (Tn5_55, Tn5_57, and Tn5_59) formed Serratia-Mucor co-spots with altered colony edges or thicknesses (Supplementary Fig. 6), but the overall dispersal distance of these mutants on Mucor did not significantly change (Fig. 3d). These mutants had transposon insertions in genes related to phosphate metabolism (PhoU protein in Tn5_55) 47 , a gene with an unknown function (in Tn5_57), and a gene known to be essential in phospholipid biosynthesis (a glycerol-3-phosphate dehydrogenase in Tn5_59) 48 , suggesting that phosphate and phospholipid metabolism of Serratia can impact colony formation on Mucor networks.
To further investigate potential genetic mechanisms underlying Serratia dispersal on Mucor networks, we compared the genomes of the three closely related S. proteamaculans strains that showed different dispersal patterns on Mucor: BW106 and B-41162, which disperse on Mucor, and B-41156, which does not (Supplementary Fig. 2). Of the 62 gene annotations absent in the genome of B-41156, but present in BW106 and B-41162, one stood out: the gene fliQ, which is part of the fliLMNOPQR flagellar biosynthesis operon (Supplementary Data 2). In motile species of the Enterobacteriaceae, FliQ is one of the six transmembrane proteins that make up the flagellar export apparatus 49,50 . A frameshift deletion in fliQ of B-41156 leads to predicted loss of function (Supplementary Fig. 7). This loss of function was supported by a motility assay: while strains BW106 and B-41162 showed rapid dispersal across high-motility (0.6% agar) plates (515% and 525% increase in growth, respectively), strain B-41156 showed a limited increase in growth (152%) (Supplementary Fig. 8). While other genomic differences could also contribute to the loss of spreading on fungal networks by B-41156, this observation reinforces the important role of flagella and motility in dispersal facilitation of bacteria by fungal networks.
Fungal networks shape cheese rind microbiome composition. Previous studies have described the potential existence of fungal-mediated bacterial dispersal in soil systems through pairwise interaction studies of a few laboratory strains 16,19,36 . Whether these interactions, which were studied in isolation, can shape the composition of multispecies communities has not been determined. Strong pairwise interactions may be dampened by multispecies interactions that can occur in communities with three or more species 51 . Given our observation that motility is required for Serratia to disperse on fungal networks, we predicted that fungal-mediated bacterial dispersal would be unevenly distributed across cheese rind bacteria: other motile Proteobacteria species would disperse on fungal networks while non-motile Actinobacteria and Firmicutes species would have limited dispersal on fungal networks. We also predicted that this uneven dispersal facilitation would have consequences for the assembly of multispecies communities, with motile Proteobacteria favored over other bacterial taxa in communities when dispersal-promoting fungal networks were present.
To test whether fungal networks can impact cheese rind microbiome diversity by promoting the growth of Proteobacteria, we inoculated CCA with equal CFUs of S. proteamaculans BW106 (Proteobacteria; high dispersal on fungal networks), Staphylococcus equorum BC9 (Firmicutes; medium dispersal on fungal networks), Brevibacterium linens JB5 (Actinobacteria; low dispersal on fungal networks), and Brachybacterium alimentarium JB7 (Actinobacteria; low dispersal on fungal networks) as well as the yeast Debaryomyces hansenii. The yeast was included because it is a common component of cheese rind microbial communities. These communities were grown under one of four network treatments: (1) No network, (2) Mucor, (3) Galactomyces, or (4) Penicillium. Fungi were added as spores and were allowed to form mycelia as they grew with the bacterial communities. After 2 weeks of growth, a rind had formed on the cheese surface and the experimental communities were harvested to determine CFUs of each bacterium present. As predicted, the addition of fungal networks shifted the composition of the bacterial communities compared to no network communities (PERMANOVA F = 72.14, p < 0.001; Fig. 4b), with Mucor communities having the highest relative abundances of Serratia across all treatments (No network: 64.9 ± 2.1%; Mucor: 98.5 ± 0.3%; Galactomyces: 45.2 ± 3.1%; Penicillium: 80.1 ± 3.0%; ANOVA F 3,19 = 89.2, p < 0.001; Fig. 4c). The limited impact of Galactomyces on community composition is surprising, given that this fungus strongly promoted the dispersal of Serratia alone (Fig. 4a), and indicates that pairwise interactions cannot always predict outcomes in multispecies communities. Surprisingly, Penicillium caused an increase in the relative abundance of Serratia in the experimental communities even though it demonstrated limited dispersal facilitation in co-culture experiments.
Previous work in this system demonstrated that Penicillium inhibits the growth of Brevibacterium and Brachybacterium to a greater extent than it does Staphylococcus or Serratia, possibly due to the production of antibacterial compounds 27 , and this differential inhibition may be driving the shift in bacterial composition when Penicillium is present.
To tease apart the relative impact of dispersal of bacteria on networks versus other abiotic or biotic interaction mechanisms, we repeated the same experiment but instead added the glass fiber networks described above to create synthetic networks on the cheese surface (Supplementary Fig. 9). We predicted that the relative abundance of Serratia would increase in the presence of synthetic networks as we observed in the Mucor treatments. After 2 weeks of growth, communities with synthetic networks had shifted in composition (PERMANOVA F = 7.38, p < 0.01; Fig. 4d), with significantly higher relative abundance of Serratia compared to the no network control treatment (ANOVA F 1,13 = 9.25, p < 0.01; No network: 64.1 ± 9.9%; Synthetic network: 77.8 ± 6.6%). The more limited effects of the synthetic networks on community composition compared to living Mucor networks (Fig. 4c) could be because synthetic networks do not grow with the bacterial populations over time and/or because biotic cues or conditions generated by the fungi are missing. Regardless, these data demonstrate that the presence of physical networks alone can cause bacterial communities to shift in composition through differential dispersal facilitation of bacterial species.
Discussion
Fungi and bacteria co-occur in many types of microbiomes, including animal hosts, soils, and food systems 17 . Despite the potential for substantial diversity of bacterial-fungal interactions in such microbiomes, the biology of most bacterial species has been studied without considering the potential role of fungi as mediators of evolutionary and ecological processes. We demonstrate that strong, pairwise interactions between bacteria and fungi can not only shape the small-scale dispersal dynamics of a single bacterial species, but can also impact the diversity of multispecies bacterial communities (Fig. 5). The results presented here, in concert with recent studies from other systems 28,64-66 , suggest that a thorough understanding of mechanisms regulating microbiome assembly requires both eukaryotic and prokaryotic perspectives.
It has been previously demonstrated that motile bacteria can migrate on fungal hyphae 16,18,20,21,36 , but studies of the genetic mechanisms of these interactions are limited and are often based on the selection of a few candidate genes 36 . Our novel interaction-based transposon mutagenesis screen used an untargeted approach to identify any non-essential genes controlling this interaction. We determined that flagella-mediated motility is essential for the dispersal of S. proteamaculans in liquid layers on fungal hyphae (Fig. 5b). Previous studies of fungal-mediated bacterial dispersal have not tested the effect of these pairwise interactions in the context of multispecies communities. Using the tractable cheese rind model microbiome 25,27,28 , we demonstrated that fungal-mediated bacterial dispersal can shape the composition of relatively simple multispecies microbiomes by promoting the growth of motile Proteobacteria (Fig. 5c). This dispersal facilitation was not limited to a single strain of bacterium, but was also identified in numerous strains of S. proteamaculans and Serratia liquefaciens as well as a range of other Gammaproteobacteria species. We acknowledge that our cheese rind microbial communities are relatively low in species diversity, and the impacts of fungal-mediated bacterial dispersal interactions may be more diffuse in complex communities. But given that Mucor and other filamentous fungi often co-occur in environments with motile Proteobacteria 67-69 , we predict that this biophysical interaction can play key roles in determining the composition of other microbiomes. Many aspects of the diversity, mechanisms, and impacts of these interactions remain to be explored. How do co-occurring motile bacteria interact and compete on fungal networks? Do motile pathogenic species, such as Listeria monocytogenes, use fungal networks to disperse across cheese surfaces and other environments? 
Given the high cost of producing flagella 18 , can fungal networks impact the evolution of motility traits in bacteria? Future work using the cheese rind model microbiome and other tractable systems will continue to explore the causes and consequences of fungal-mediated bacterial dispersal.
Methods
Isolation and maintenance of cultures. Strains were isolated from the rinds of cheeses by serially diluting cheese rind scrapings on plate count agar with milk and salt, or PCAMS 27 . For all experiments described below, strain BW106 of S. proteamaculans isolated from the Saint-Nectaire cheese described above was used with strain SN1 of M. lanceolatus (isolated from the same cheese). Inocula for all experiments were created from frozen glycerol stocks of bacterial overnight cultures grown in BHI broth for 16 h. These experimental glycerol stocks were plated to determine CFUs per µL of inoculum, and these CFU densities were used to standardize the inputs into the experiments below. Fungal stocks were created by either scraping the surface of a plate containing spores (Mucor and Penicillium) or from liquid overnight cultures grown in yeast peptone dextrose broth (Galactomyces).
To comprehensively identify the species of Serratia and Mucor isolated from Saint-Nectaire, as well as Serratia strains received from the United States Department of Agriculture ARS Culture Collection (NRRL), we used whole-genome sequencing to create draft genomes as previously described 28 . Draft genome sequences have been deposited in NCBI (see Supplementary Table 1 for accession info). To construct a phylogenomic tree of the Serratia species, we first identified single-copy genes shared by all Serratia species from RAST-annotated 70 genomes that were assembled using CLC Genomic Workbench. An alignment of all single-copy genes was made using MUSCLE 71 , and then a maximum likelihood tree was constructed using RAxML 72 using the General Time Reversible + Gamma (GTRGAMMA) model. To place the Mucor strain SN1 isolated from cheese within a phylogenetic context, we used previously published 18S rRNA, 28S rRNA, ITS, and rpb1 sequences from Mucor isolated from cheese and other environments 73 . Using a low coverage (10×) assembly of reads from a 100-bp, paired-end, genomic library of Mucor strain SN1 (Supplementary Table 1), we extracted 18S rRNA, 28S rRNA, ITS, and rpb1 genes for this phylogeny. An alignment of concatenated sequences from representative strains was used to construct a maximum likelihood phylogeny using RAxML (Supplementary Fig. 10).
Network dispersal co-spot assays. To measure the dispersal of Serratia on actively growing fungal networks, 250 CFUs of Serratia were inoculated in 10 µL of 1× phosphate buffered saline (PBS) onto the center of a BHI agar (1.5% agar) plate with 500 CFUs of a fungus (Mucor, Galactomyces, and Penicillium networks) or with no fungus (No network). Bacterial and fungal cells used in the co-spots were from frozen glycerol stocks with a known number of CFUs/µL. The 10-µL spot of PBS dried within a few minutes of adding it to the plate and did not impact the growth or dispersal of the bacterial cells. We used an initial ratio of 1:2 Serratia to Mucor based on pilot experiments where this ratio gave the most consistent outputs and because final bacterial and fungal densities using this approach reached levels similar to those found in cheese rinds 74,75 . Plates were incubated at 24°C for 14 days.
In addition to BHI agar, various other types of media were used to demonstrate that this interaction is not specific to one medium type (Supplementary Fig. 1). These media included CCA 27 , PCAMS 27 , potato dextrose agar (PDA), and yeast extract sucrose agar (YES). Most experiments in the manuscript were conducted on BHI, PCAMS, or CCA. Because the hydration level of different types of media can impact the motility of bacteria 18,23 , we used media that had similar water activity (a_w) to what is found in fresh cheese curds (BHI a_w = 0.992; PCAMS a_w = 0.992; CCA a_w = 0.972; fresh cheese curds a_w = 0.988).
To quantify the extent of bacterial dispersal across the fungal network, we developed a transect "tap and transfer" method. A sterile toothpick, measuring the radius of the Petri dish (4.2 cm), was tapped on the fungal network from the center of the plate (center of the initial spot of bacterial and fungal cells) to the outside edge of the actively growing fungal mycelium. The toothpick was then removed from the experimental plate and tapped onto the surface of a new BHI agar plate containing cycloheximide (50 mg/mL) to inhibit fungal growth for Penicillium and Mucor networks and natamycin (21.6 mg/L) for Galactomyces, which is resistant to cycloheximide. Transfer plates were incubated for 24 h at 24°C and the length of the transects with Serratia growth was used to infer the distance traveled across the fungal network. ANOVAs were used to determine significant differences in dispersal distance across the fungal network treatments.
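Dispersal distances measured with the tap-and-transfer transects were compared across treatments with one-way ANOVAs. As a minimal sketch of that test (the distances below are hypothetical placeholders, not the study's measurements), the F statistic can be computed directly:

```python
# One-way ANOVA F statistic for dispersal distances (cm) across
# network treatments. Illustrative data only, not measured values.

def one_way_anova_F(groups):
    """Return the F statistic for a list of groups of measurements."""
    k = len(groups)                      # number of treatments
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (treatment effect)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (replicate noise)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical transect lengths for three treatments
no_network = [0.5, 0.6, 0.4]
mucor = [4.0, 4.2, 3.9]
galactomyces = [3.5, 3.8, 3.6]
F = one_way_anova_F([no_network, mucor, galactomyces])
```

In practice this would be done with a statistics package (e.g., scipy.stats.f_oneway), with the F statistic compared against the F distribution with (k−1, n−k) degrees of freedom to obtain a p-value.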
Synthetic fungal networks, made of glass fibers, were used to determine if active motility was necessary for dispersal on fungal networks. After a 10-µL spot of Serratia inoculum in PBS containing 250 CFUs absorbed into BHI agar, sterile glass fibers were placed on the surface of the agar at a similar density to living fungal networks. Glass fibers have a similar thickness (8 µm) to that of fungal hyphae, which range from 4 (Penicillium) to 25 µm (Mucor) in diameter 73,76 . After 2 days of growth, the presence of dispersal across the synthetic network was noted and representative plates were photographed.
To determine if S. proteamaculans could spread on existing fungal networks, we inserted a plasmid (pGLO, BioRad) containing a gene for GFP with an araC promoter into S. proteamaculans BW106. Electrocompetent cells of S. proteamaculans were made by first growing overnight cultures for 16 h in liquid BHI medium. The cultures were then diluted 100-fold in liquid BHI medium in a baffle flask. The cultures were grown with agitation for 2-4 h until the OD 600 measured 0.5. The culture was chilled on ice for 15 min, then centrifuged at 3000×g and 4°C for 10 min. The resulting pellet was washed four times at decreasing volumes (125, 75, 25, and 5 mL) of 10% glycerol. After the final wash, the cells were re-suspended in 2.5 mL 10% glycerol and frozen at −80°C for at least 18 h. These cells were used for electroporation at 1.8 kV, 25 µF, and 200 Ω, to transform the strain with the pGLO plasmid.
Square Petri dishes (12 cm × 12 cm) containing PCAMS with ampicillin (100 mg/mL) and arabinose (2 mg/mL) were inoculated with 10,000 CFUs of either Mucor, Galactomyces, or Penicillium. Plates were incubated at 24°C for 48 h before inoculum of S. proteamaculans BW106 containing the pGLO plasmid was tapped into the top left corner of the plate using the tip of a sterile toothpick. Each fungal network treatment and a control treatment (No network) were replicated four times. After 48 h of growth when the fungal hyphae had formed a complete lawn across the plates, the area colonized by Serratia was determined by photographing plates while exposed to a long-wave UV lamp. ImageJ was used to trace outlines of the area colonized by Serratia and to quantify the total area colonized. ANOVAs were used to determine significant differences in area colonized across the fungal network treatments.
Co-culture growth assays. Mucor growth was quantified by co-inoculating Mucor (500 CFUs) and Serratia (250 CFUs) at the center of a Petri dish. Controls of Mucor (500 CFUs) in 10 µL PBS and Serratia (250 CFUs) in 10 µL PBS were also created. Each treatment was replicated five times each for two time points: 7 and 14 days. At each time point, the population was quantified using a whole plate harvest technique. With a sterile pipette tip, the agar and microbes were excised from the plastic Petri dish and placed into a 710-mL Whirl-Pak® bag with 30 mL of 1× PBS. The agar and microbes were manually homogenized by rolling a closed 50-mL conical tube across the outside of the bag to pulverize and mix the agar with the PBS. The homogenate was serially diluted on selective media (BHI agar with cycloheximide 100 µg/mL for bacteria and BHI agar with chloramphenicol 50 µg/mL for fungi). The CFUs on these plates were counted to quantify the abundance of Serratia and Mucor. We chose CFU quantification and not other measures of bacterial and fungal abundance (biomass) because it is a commonly used measure of bacterial and fungal abundance in cheese rinds and systems with easy to culture microbes, it is a metric that can be easily used across both bacteria and fungi, and it captures the total number of reproductive units in both types of microbes. We acknowledge that in Mucor, CFUs can originate from both hyphal fragments and spores.
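The serial-dilution counts above back-calculate to total abundances per plate. A small sketch of that arithmetic (the colony count and dilution below are hypothetical; the 30-mL homogenate volume is from the harvest protocol described above):

```python
# Back-calculating abundance from a serial-dilution plate count.
# Colony count and dilution factor are made-up example values.

def cfu_per_ml(colony_count, dilution_factor, volume_plated_ml):
    """CFUs per mL of the original homogenate from one countable plate."""
    return colony_count / (dilution_factor * volume_plated_ml)

# e.g., 150 colonies from plating 0.1 mL of a 10^-5 dilution
homogenate_cfu = cfu_per_ml(150, 1e-5, 0.1)  # CFU per mL of homogenate
total_cfu = homogenate_cfu * 30              # scale to the 30-mL whole-plate harvest
```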
Microscopy. Thin layers (≈0.7 mm in height) of BHI agar (1.5% agar) were poured into 60 mm × 15 mm Petri dishes on a level surface and left to set for 4 h. Serratia and Mucor were co-inoculated in the center of the Petri dish in a 5-µL spot of liquid 1× PBS containing 115 CFUs/µL of Serratia and 1150 CFUs/µL of Mucor. This ratio of Serratia and Mucor cells provided an ideal distribution of Serratia and Mucor colonies for imaging initial contacts between the bacterium and fungus. The Petri dishes were incubated at room temperature (24°C) in the light for at least 10 h prior to imaging. Images and movies of Serratia-Mucor interactions were taken on a Nikon TiE inverted microscope using phase contrast imaging with an Andor Zyla 5.5 camera under 40× magnification (0.6 NA). Three replicate Serratia-Mucor contacts were imaged on multiple days to determine the robustness of phases of cell contact and growth. Three replicate time-lapse examples are presented in the manuscript.
RNA sequencing. Agar plates were inoculated with 5000 CFUs of S. proteamaculans (No network) or a mix of 5000 CFUs of S. proteamaculans and 5000 CFUs of Mucor (+ Mucor network) and incubated at 24°C for 27 h. Each treatment was replicated three times. Cells were harvested by scraping the agar surface with a sterile razor blade to remove most of the microbial biomass. Harvested cells were stored in RNAProtect Reagent (Qiagen) to stabilize mRNA and frozen at −80°C. RNA was extracted using a standard phenol-chloroform protocol described previously 77 . This protocol uses a standard bead-beating step in a lysis buffer to release cell contents from pelleted cells in RNAProtect. To ensure that the RNA was of high quality and not degraded, 500 ng of each RNA prep was run and visualized on a 1.5% agarose gel. DNA was removed from the nucleic acid pool using a TURBO DNA-free kit (Life Technologies). 5S rRNA and tRNA were depleted using MEGAClear (Life Technologies) kits. 16S and 23S rRNA were depleted using RiboZero (Illumina) kits. To remove both fungal and bacterial ribosomal RNA, yeast and bacterial rRNA probes from the RiboZero kits were mixed 1:2 and used for rRNA depletion. To confirm that the samples were free of DNA contaminants, a PCR of the 16S rRNA was conducted with standard 16S primers (27f and 1492r).
RNA-seq libraries were constructed from purified mRNA using the NEBNext Ultra RNA Library Prep Kit for Illumina (New England Biolabs) using manufacturer's instructions and sequenced using paired-end 100 bp reads on an Illumina HiSeq Rapid Run by the Harvard Bauer Core Facility. About 16 million reads were sequenced for each library. Only forward reads of the paired ends were used for analysis. Raw reads and differential expression data have been submitted to the NCBI GEO database. The project is available as GEO Series Accession Number GSE85095. Analysis of RNA-seq libraries, including read mapping, normalization, and quantification of transcript abundances, was done using Rockhopper version 1.3.0 78 with default settings as previously described 28 . The S. proteamaculans BW106 genome was concatenated and used as the reference genome for read mapping. It has been deposited at DDBJ/ENA/GenBank under the accession MCGS00000000. Expression values were normalized by the upper quartile of gene expression. We considered differentially expressed genes to be those that were greater than 2-fold change in expression when comparing No network to + Mucor network replicates. Expression differences were deemed statistically significant based on a q-value of <0.05. Rockhopper's q-values are p-values adjusted for false discovery rate using the Benjamini-Hochberg procedure. Functional assignment of differentially expressed genes was determined from the SEED subsystem annotations from the RAST-annotated 70 genome of S. proteamaculans BW106.
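The differential-expression filter described above (≥2-fold change with q < 0.05, where q-values come from Benjamini-Hochberg adjustment of p-values) can be sketched in a few lines; the gene names, fold changes, and p-values below are invented for illustration:

```python
# Benjamini-Hochberg q-values plus the fold-change/q-value filter.
# All gene data here are hypothetical examples.

def benjamini_hochberg(pvalues):
    """Return BH-adjusted q-values in the original input order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ascending p
    qvalues = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        q = min(prev, pvalues[i] * m / rank)
        qvalues[i] = q
        prev = q
    return qvalues

genes = {"geneA": (3.1, 0.001), "geneB": (1.4, 0.20),
         "geneC": (2.5, 0.004), "geneD": (0.4, 0.03)}  # (fold change, p-value)
qs = benjamini_hochberg([p for _, p in genes.values()])
# Keep genes with >= 2-fold change in either direction and q < 0.05
de = [g for (g, (fc, _)), q in zip(genes.items(), qs)
      if (fc >= 2 or fc <= 0.5) and q < 0.05]
```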
Transposon mutagenesis. To identify genes that impact Serratia dispersal on fungal networks, we used the EZ-Tn5 <KAN-2>Tnp Transposome Kit (Epicentre) to generate a transposon mutant library. Electrocompetent cells (as described above) were used to electroporate the EZ-Tn5 Transposome into Serratia. Cells were plated onto PCAMS containing 50 μg/mL kanamycin to select for successful transformants. Colonies were then patched onto Nunc Omni Trays in an 8 × 12 grid to match a 96-pin replicator.
To screen the mutant library for altered dispersal phenotypes on fungal networks, each 96-array of mutants was tapped onto an array of Mucor networks that had been growing for 24 h. Mucor networks were generated by tapping the 96-pin replicator onto a lawn of three-day-old Mucor to collect spores, which were then transferred to a fresh Omniplate containing PCAMS with 50 μg/mL kanamycin. This array of 96 spots of Mucor was incubated at 24°C for 24 h, after which a 96-array of S. proteamaculans mutants was inoculated on the networks using the same 96-pin method. After another 24 h of growth, the resulting co-cultures were visually screened using 4× magnification under a dissecting microscope. We confirmed that lack of dispersal was not the result of poor growth by also tapping the mutant library on Omni Trays without Mucor. A subset of putative mutants was isolated and rescreened using the co-spot assay described above in "Network dispersal co-spot assays".
Transposon insertion sites were determined by using whole-genome sequencing of the putative mutants. DNA was extracted using MoBio PowerSoil DNA extraction kits from streaks generated from a single bacterial colony grown for 2-3 days on BHI agar. Approximately 1 µg of purified gDNA was sheared using NEBNext dsDNA fragmentase (New England Biolabs) to a size range of approximately 300-700 bp, and libraries were prepared using the NEBNext Ultra DNA Library Prep Kit for Illumina (New England Biolabs). Libraries were spread across multiple sequencing lanes with other projects and were sequenced using 100-bp, paired-end reads on an Illumina HiSeq 2500. Approximately 10 million reads were sequenced for each genome. Failed reads were removed from the libraries and reads were trimmed to remove low-quality bases and were assembled to create draft genomes using the de novo assembler in CLC Genomics Workbench 8.0. Insertion sites were identified by generating a BLAST searchable database of the whole genome and then searching for the transposon sequence.
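The insertion-site mapping above used a BLAST search of each draft genome for the transposon sequence. A toy sketch of the underlying idea, using an exact string match on made-up sequences (not the actual EZ-Tn5 sequence, which the real pipeline located via BLAST alignment) to report the genomic flanks of an insertion:

```python
# Toy insertion-site finder: locate a transposon sequence in a contig
# and report its position plus flanking genomic sequence.
# Both sequences below are invented stand-ins.

def find_insertion(contig, transposon, flank=6):
    """Return (position, left flank, right flank) of the first hit, or None."""
    pos = contig.find(transposon)
    if pos == -1:
        return None
    left = contig[max(0, pos - flank):pos]
    right = contig[pos + len(transposon):pos + len(transposon) + flank]
    return pos, left, right

contig = "ATGCGTACGTTTAACC" + "GGGGCCCCGGGG" + "GGATCCATGCAA"
hit = find_insertion(contig, "GGGGCCCCGGGG", flank=6)
```

A real analysis must tolerate sequencing errors and partial matches, which is why an alignment tool such as BLAST is used rather than exact matching.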
Putative mutants were also screened for loss of motility by spotting 250 CFUs of cells in 10 µL of 1× PBS in the center of a Petri dish. Each mutant was screened for motility using five biological replicates. Plates were incubated at 24°C, and the radius of bacterial dispersal was measured after 14 days of growth.
Comparative genomics. To identify genes that might be absent in dispersal deficient S. proteamaculans strain B-41156, we compared the presence of predicted protein coding genes of that strain to two closely related motile strains, BW106 and B-41162. Core and accessory genes were identified using PGAP 79 , as previously described for Staphylococcus species 28 . Species-to-species orthologs were identified by pairwise strain comparison using BLAST with PGAP defaults: a minimum local coverage of 25% of the longer group and a global match of no less than 50% of the longer group, a minimum score value of 50, and a maximum E value of 1e−8. Multistrain orthologs were then found using MultiParanoid in PGAP.
Community experiment. To determine how fungal networks might impact the dispersal of other bacterial taxa that live in cheese rinds, the same network dispersal co-spot assay described above for S. proteamaculans BW106 was used to measure fungal-mediated bacterial dispersal of an additional 22 bacterial isolates spanning the phyla Proteobacteria, Firmicutes, and Actinobacteria (Supplementary Table 1). These bacteria are commonly the dominant species in rind communities found on Saint-Nectaire cheese and other natural and washed rind cheeses 27 . As with co-spot experiments above, bacteria were spotted alone or with each of the three fungal networks (Mucor, Galactomyces, and Penicillium). Based on low variation observed across biological replicates in pilot experiments, three biological replicates were conducted for each treatment. The dispersal distance from the center of the co-spot to the edge of the bacterial colony was measured using the same transect and tap method described above. Data are expressed as change in dispersal distance when on a network compared to alone. Nested ANOVAs were used to test for significant differences in dispersal distances on fungal networks between the phyla Proteobacteria, Firmicutes, and Actinobacteria.
An in vitro community reconstruction approach was used to determine how fungal networks impact bacterial community composition. CCA was used to grow a model cheese rind community consisting of B. linens JB5, B. alimentarium JB7, S. proteamaculans BW106, and S. equorum BC9 (see Supplementary Table 1 for strain origin information). The yeast D. hansenii 135B was also added to these communities because it is widespread in cheese rinds and facilitates deacidification of the cheese curd 27 . Three fungal network treatments were applied to this community: Mucor, Galactomyces, or Penicillium. As with co-spots above, bacterial cells and fungal spores used in this experiment came from −80°C glycerol stocks with a known number of CFUs/µL. All species were mixed in approximately equal concentrations ("Input") for a total CFU of 20,000. A control community with no fungal network ("No network") was also created by adding the same volume of 1× PBS used for the fungal network inocula. The communities were plated onto the surface of 20 mL of CCA dispensed in 100 mm Petri dishes. During this experiment, fungal networks grew with the bacterial cells on the surface of the cheese curd to form a rind. Experimental units were incubated in the dark for 4 days at 24°C and then for 10 days at 14°C in order to mimic the conditions of a cheese cave. After 2 weeks, each community was homogenized in 1× PBS, serially diluted, and then plated onto PCAMS media for the quantification of each species. All four bacteria have unique colony morphologies and pigments (Supplementary Fig. 11) and can be easily distinguished from one another.
To isolate the effect of physical networks for growth and dispersal of bacterial cells from other chemical or biological effects of the fungal networks, we repeated the experiment with a synthetic glass fiber network treatment. The same input inoculum described above containing B. linens JB5, B. alimentarium JB7, S. proteamaculans BW106, S. equorum BC9, and the yeast D. hansenii 135B was added to 14 CCA plates. To seven replicate plates, 1 g of the synthetic glass fiber network described above was spread across the surface of the agar after the inoculum had dried onto the cheese curd surface ("Synthetic network", Supplementary Fig. 9). The remaining seven replicate plates were not manipulated and served as controls ("No network"). Experimental units were incubated in the dark for 4 days at 24°C and then at 14°C for 10 days as above. Bacterial community composition was determined by counting colonies as described above.
In both the fungal network and synthetic network experiments, the differences in community composition across treatments were determined with PERMANOVA and changes in the relative abundance of Serratia across treatments were determined using ANOVA. Principal coordinates analysis was used to visualize differences in community composition across replicates using the Bray-Curtis dissimilarity index.
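The Bray-Curtis index used for the ordination above is the sum of absolute abundance differences between two communities divided by their total abundance. A sketch with hypothetical CFU counts for the four bacteria (the actual counts are in the study's data, not reproduced here):

```python
# Bray-Curtis dissimilarity between two community abundance vectors.
# CFU counts below are hypothetical examples for
# [Serratia, Staphylococcus, Brevibacterium, Brachybacterium].

def bray_curtis(a, b):
    """Return Bray-Curtis dissimilarity (0 = identical, 1 = no overlap)."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(a) + sum(b)
    return num / den

no_network = [650, 200, 100, 50]
mucor = [985, 10, 3, 2]
d = bray_curtis(no_network, mucor)
```

A pairwise matrix of these dissimilarities is what the principal coordinates analysis and PERMANOVA operate on (typically via packages such as scikit-bio or vegan in R).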