Effects of land conversion on soil microbial community structure and diversity Background: To study the impact of land-use change on soil microbial community structure and diversity in Northeast China, three typical land-use types (plough, grassland, and forest) in the Qiqihar region of Heilongjiang Province, representing grassland converted to forest and grassland converted to plough, were taken as research objects. Methods: MiSeq high-throughput sequencing based on the bacterial 16S rRNA gene and the fungal ITS region was used to characterize the community structure of soil bacteria and fungi and to explore the relationships between soil microorganisms and soil environmental factors. Results: The results showed that after the grassland was completely converted to forest, the dominant bacterial phylum changed from Actinobacteria to Acidobacteria, the dominant fungal phylum changed from Ascomycota to Basidiomycota, and the ECM (ectomycorrhizal) functional group increased significantly. After the grassland was converted to plough, the dominant bacterial phylum changed from Actinobacteria to Proteobacteria, and the functional groups of pathogens and parasites increased significantly. There was no significant difference in the diversity of soil bacterial communities, while the diversity of fungal communities increased significantly. CCA showed that soil pH, MC, NO3−-N, TP and AP were important factors affecting the composition of soil microbial communities; changes in land-use patterns altered the physical and chemical properties of soils, thereby affecting the structure and diversity of microbial communities. Conclusions: Our results clarify the impact of land-use change on the characteristics of soil microbial communities and provide basic data for the healthy use of land. 
Background The soil microbial community is the main driving force of ecosystem processes; it completes the decomposition of soil organic matter and plant litter and mediates the carbon (C) and nitrogen (N) biogeochemical cycles in terrestrial ecosystems [1,2]. However, the composition and diversity of these communities are largely controlled by soil environmental conditions. Therefore, understanding the composition and diversity of soil microbial communities can reveal the interrelationships between soil microorganisms and the local environment and how these communities respond to human disturbance [3]. Land-use transformation dominated by human activities has a significant impact on the composition and structure of soil microbial communities [4,5]. It can fundamentally change soil quality and nutrient cycling, thereby affecting the assembly of soil microbial communities, and it can potentially affect soil microbial diversity and ecosystem functions [6,7]. For example, Jangid et al. [8] found that grassland conversion to plough caused significant changes in bacterial and fungal abundance and diversity and determined that land-use change was the main determinant of microbial community composition. Wang et al. [9] found that after grassland was transformed into pine forest, the dominant soil bacterial phylum changed from Proteobacteria to Actinobacteria, the dominant fungal phylum changed from Ascomycota to Basidiomycota, and grassland afforestation increased ECM fungi but reduced biotrophic fungi. Mendes et al. [10] found that the contents of Acidobacteria and Chlamydiae were higher in forest soil, the content of Actinobacteria was higher in forest logging areas, and the contents of nitrifying and thermophilic bacteria were higher in plough. 
Heilongjiang Province, as China's largest commercial grain production base, has fertile soil and a long history of farming. Its soil physical and chemical properties and fertility are crucial to the sustainable development of agriculture [11]. Since the 1950s, with the increase in population and to solve the problems of food and clothing, the area has adopted land reclamation to obtain ploughs. The original grassland was reclaimed into plough, and the natural vegetation disappeared. When the plough land was abandoned, the ground was exposed, which seriously damaged the soil, and it was difficult to restore the original plant community in a short period [12]. At the beginning of the 21st century, the government realized the severity of the ecological and environmental problems and planted some grassland and abandoned land with forest to protect the fragile local ecological environment and promote sustainable and stable economic development [13,14]. In this paper, the western region of Heilongjiang Province was chosen as the study area, and three land-use types (grassland, plough, and forest) were used to determine how long-term land use changed the soil physical and chemical properties and the structure and diversity of soil bacterial and fungal communities; this information is important for maintaining soil fertility in the study area and provides a scientific basis for the protection of soil microbial diversity. Results Physical and chemical properties of soil under different land-use patterns The physical and chemical properties of the soil under the three land-use patterns are shown in Table 1. The pH of all soils was relatively alkaline, with a significant difference between grassland and plough (P < 0.05); the pH value was lowest in plough and highest in grassland. The soil moisture content of the three land-use patterns was significantly different (P < 0.05), with the highest soil moisture content in forest. 
The contents of microbial biomass carbon, microbial biomass nitrogen, total phosphorus, available phosphorus and nitrate nitrogen in plough were significantly higher than those in forest and grassland (P < 0.05). However, there was no significant difference in soil organic matter, total nitrogen, ammonium nitrogen, total potassium or available potassium. Venn diagram of soil microorganisms under different land-use patterns The bacterial Venn diagram of the three land-use patterns is shown in Figure 1A. Effects of land-use patterns on the alpha diversity of soil bacteria and fungi There was no significant difference in the Shannon index or Simpson index of soil bacteria. However, the soil bacterial Ace and Chao1 indexes of the three land-use patterns were significantly different (P < 0.05). The Ace index showed plough > forest > grassland, with significant differences among grassland, forest, and plough (P < 0.05). The Chao1 index also showed plough > forest > grassland, with significant differences among the three land-use types (P < 0.05). The number of soil bacterial OTUs was significantly different and showed plough > forest > grassland. The soil fungal Shannon, Simpson, Ace, Chao1, and OTU indexes were all significantly different (Table 2). The OTU index was ranked plough > forest > grassland; the Shannon index was plough > grassland > forest; the Simpson index was forest > grassland > plough; the Ace index was plough > forest > grassland; and the Chao1 index was plough > forest > grassland. Analysis of soil bacterial and fungal community structure under different land-use patterns From the perspective of the overall bacterial community structure, all OTUs belonged to 55 bacterial phyla; sequences that could not be classified to a known phylum were uniformly grouped as "others". 
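The pattern above, where richness (Ace, Chao1) differs while diversity (Shannon, Simpson) does not, arises because the diversity indexes also weigh evenness. As an illustration, here is a minimal Python sketch of the two diversity indexes computed from a vector of OTU counts; the function names and example counts are illustrative, not from the study:

```python
import math

def shannon(counts):
    """Shannon index H = -sum(p_i * ln p_i) over nonzero OTU proportions."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def simpson(counts):
    """Simpson dominance D = sum(p_i^2); lower D means a more even community."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts if c > 0)

# Two communities with the same richness (4 OTUs) but different evenness:
even = [25, 25, 25, 25]   # perfectly even: H = ln 4, D = 0.25
skewed = [85, 5, 5, 5]    # dominated by one OTU: lower H, higher D
```

With these definitions, the perfectly even community reaches the maximum Shannon value (ln of the richness) and the minimum Simpson dominance, so two samples can share a Chao1 estimate yet differ sharply in Shannon and Simpson, and vice versa.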
According to the relative abundance at the phylum level under the three land-use patterns, the dominant bacterial phyla in the samples were Proteobacteria, Acidobacteria and Actinobacteria (Figure 3A). In the original grassland soil, the relative abundance of Actinobacteria was 30.01%, that of Acidobacteria was 29.52%, and that of Proteobacteria was 17.57% (Figure 3B). The dominant phylum in plough soil was Proteobacteria, with a relative abundance of 31.22%; the relative abundance of Actinobacteria was 8.73%, and that of Acidobacteria was 21.42% (Figure 3C). The dominant phylum in forest was Acidobacteria, with a relative abundance of 35.7%; the relative abundance of Proteobacteria was 20.53%, and that of Actinobacteria was 15.8% (Figure 3D). From the perspective of the overall fungal community structure, all OTUs belonged to 35 fungal phyla, and sequences that could not be classified to a known phylum were uniformly grouped as "others". From the relative abundance at the phylum level under the three land-use patterns, the dominant fungal phyla in the samples were Ascomycota, Basidiomycota, and Zygomycota (Figure 4A). The relative abundance of Ascomycota in grassland was 62.74%, making it the dominant soil fungal phylum; the relative abundance of Basidiomycota was 2.60%, and that of Zygomycota was 0.86% (Figure 4B). After the grassland was converted to plough, the dominant phylum was still Ascomycota, with an abundance of 46.63%; the abundance of Basidiomycota was 11.87%, and that of Zygomycota was 7.28% (Figure 4C). The dominant phylum of the forest was Basidiomycota, with an abundance of 76.68%; it was followed by Ascomycota, with an abundance of 15.90%, and Zygomycota, with an abundance of 1.18% (Figure 4D). 
Functions of soil bacterial and fungal communities under different land-use patterns PICRUSt function prediction was used to analyse the soil bacterial community functions under different land-use patterns. As seen in Figure 9A, the bacterial community functions were mainly amino acid transport and metabolism; energy production and conversion; signal transduction mechanisms; cell wall/membrane biogenesis; transcription; carbohydrate transport and metabolism; inorganic ion transport and metabolism; translation, ribosomal structure and biogenesis; lipid transport and metabolism; posttranslational modification, protein turnover; coenzyme transport and metabolism; secondary metabolite biosynthesis, transport and catabolism; nucleotide transport and metabolism; defence mechanisms; cell cycle control, cell division, chromosome partitioning; RNA processing and modification; and chromatin structure and dynamics. As seen in Table 3, except for three functions (intracellular trafficking, secretion, and vesicular transport; cytoskeleton; and extracellular structures), all other functions differed significantly. Note: mean values (means ± SD, n = 6); significance levels are indicated at *P < 0.05 and **P < 0.01. FUNGuild was used to analyse soil fungal community functions under different land-use patterns. As seen in Figure 9B, the fungal community functions under the three land patterns were: ectomycorrhizal; animal pathogen; endophyte; dung saprotroph; plant pathogen; arbuscular mycorrhizal; fungal parasite; endomycorrhizal-plant pathogen; bryophyte parasite-ectomycorrhizal; and clavicipitaceous endophyte-plant pathogen. As seen from Table 4, there were significant differences in five functional guilds: ectomycorrhizal, animal pathogen, endophyte, dung saprotroph, and fungal parasite. After the grassland was transformed into forest, the ectomycorrhizal functional group increased significantly. 
After the grassland was transformed into plough, the functional groups of animal pathogens, endoparasites, and dung saprotrophs increased significantly. Redundancy analysis of soil bacterial and fungal communities and physicochemical properties under different land-use patterns The relationship between soil physical and chemical properties and the community composition of bacteria and fungi at the OTU level was analysed using CCA, and the results are shown in Figure 10. Discussion Impact of land use on soil bacterial and fungal community diversity The results of this study show that different land-use patterns significantly changed the soil bacterial Ace and Chao1 indexes. Compared with grassland, the soil bacterial richness indexes of plough and forest increased significantly, but the Shannon and Simpson indexes did not change (Table 2), indicating that after the grassland was changed to forest and plough, soil bacterial community richness increased significantly while evenness did not change. In contrast to the bacteria, the Shannon, Simpson, Ace, and Chao1 indexes of soil fungi differed significantly among the three land uses. After the grassland was converted to forest and plough, the diversity of the soil fungal community increased significantly (Table 2). The increase in soil fungal diversity after conversion to forest may be due to abundant litter. This result is consistent with previous research: afforestation often stimulates the growth of soil fungal communities [15], while soil bacteria appear to be less sensitive to land use [16,17]. According to research reports, bacterial community structure, diversity and biomass are more resistant to disturbance than those of fungi [18]. This difference may be because bacteria can produce a wider range of metabolites to adapt to new environments. 
In contrast, fungi depend to a large extent on the presence of their hosts [19], so the structure and diversity of fungal communities change more dramatically with land use. Effects of land-use patterns on soil bacterial community composition At the phylum level, the dominant phyla in the three soil types were Proteobacteria, Acidobacteria, and Actinobacteria, which together accounted for more than 80% of the total bacterial community in each soil sample, consistent with previously reported community structures [20,21]. However, this study also found that when the grassland was changed to forest and plough, the abundance of the dominant bacteria changed significantly. The relative abundance of Actinobacteria was highest in grassland soil; after conversion to forest and plough, it decreased significantly. Several studies have shown that Actinobacteria are the most widely distributed in the four herbaceous vegetation soils, with a relative abundance significantly higher than in forest and plough; they are the dominant phylum in grassland soils [22,23]. Actinobacteria can degrade cellulose and chitin, which is a main source of the soil nutrient supply. They can decompose recalcitrant organic carbon by infiltrating their hyphae into large plant tissues, and the spores they produce can resist unfavourable external environmental conditions, so they are considered dominant in harsh and stressful soil conditions [24]. The relative abundance of Proteobacteria was lowest in grassland soils. Liu et al. [25] found that the relative abundance of Proteobacteria may be controlled by differences in soil nutrients; soil total phosphorus was the main factor affecting the distribution of Proteobacteria, with an explanation rate as high as 85.3%. Other studies have found that Proteobacteria are relatively abundant in nutrient-rich soils but can also be relatively abundant in soils that are nutrient-poor [26]. 
The relative abundance of Planctomycetes was highest in grassland. Fei et al. [27] found a significant positive correlation between Planctomycetes and soil total nitrogen content. The total nitrogen content of grassland was the highest among the three land patterns in this study, which may explain why the relative abundance of Planctomycetes was highest in grassland soil. Fu et al. [28] found that Planctomycetes occupied a certain proportion of the bacterial community in the green space of the Fifth Ring Road in Beijing, reflecting the alkaline, nutrient-poor soils and low soil biological activity in that study area. The results of this study indicate that after the grassland was transformed into forest land, the relative abundance of Acidobacteria was highest, and it became the dominant phylum in soil. Acidobacteria can grow on media using plant polymers as substrates, indicating that they play an important role in the degradation of plant residues and forest litter [29]. Pankratov et al. [30] found that although the degradation capacity of Acidobacteria is not as strong as that of other known cellulose-degrading bacteria, they have strong stress resistance and can survive in cold northern soils, where they play an important role in cellulose degradation. In this study, there was less litter under the grassland and plough patterns, and the forest litter content was significantly higher than that in plough and grassland, so the content of insoluble matter in litter was also high. As a result, forest soils are more strongly shaped by litter composition, making Acidobacteria more abundant in forests. Maestre et al. [31] found, in Northeast China, that the abundance of Acidobacteria decreased in the order of the soils of Yanji, Siping, and Tongliao, which may be the result of increasing soil drought. 
In this study, the water content was highest in forest, which may also explain the increase in soil Acidobacteria. Therefore, the abundance of Acidobacteria appears mainly related to the composition and content of litter and is also considered to be related to soil moisture content. After the grassland was transformed into plough, the soil Actinobacteria content decreased significantly. Clegg et al. [32] found that the addition of inorganic nitrogen reduced the abundance of Actinobacteria compared with non-fertilized grassland soil. In this study, due to the application of chemical fertilizers throughout the year, the soil NO3−-N content increased, the soil structure changed, and the relative abundance of soil Actinobacteria decreased. Therefore, the main reason for the decrease in the relative abundance of Actinobacteria after grassland conversion to plough may be the increase in soil nutrient content in plough. After the grassland was transformed into plough, the relative abundance of Proteobacteria increased significantly, and it became the dominant phylum in plough. Numerous studies have shown that the relative abundance of Proteobacteria in plough soils makes it the dominant phylum [33,34]. When substrates with high resource availability exist in the soil, Proteobacteria are more abundant [35]. Li et al. [36] found that Proteobacteria were the main group in saline-alkali soils. The soil in this study was alkaline, which also verified that Proteobacteria are the dominant community in alkaline soil. Michael et al. [37] found that the dominant phylum changed from Actinobacteria to Proteobacteria after conversion from pasture to plough. Pascault et al. [38] found that Proteobacteria had the fastest decomposition rate of bean residues in plough, indicating that Proteobacteria contribute substantially to the degradation of crop residues. 
Plough soil Proteobacteria were the most abundant in this study, which shows that they play an important role in crop residue decomposition. After the grassland was converted to plough, Gemmatimonadetes increased. Several studies have shown that Gemmatimonadetes has a relatively high abundance in plough soils [39]. Gemmatimonadetes are alkaliphilic microorganisms that can produce spores, which can resist dehydration and adapt to drought and extreme environmental conditions. Some Gemmatimonadetes species have strong nitrogen-fixing effects and play an important role in the production and release of plant hormones and in the biological control of soil-borne plant pathogens (such as fungi) [40,41]. Mahoney et al. [42] found that winter wheat soil bacterial communities were rich in Gemmatimonadetes. Monreal et al. [43] found rich Gemmatimonadetes communities in rapeseed agricultural soils in Ottawa, Canada. These results show that Gemmatimonadetes has a higher abundance in plough soil. Plough is an artificial monoculture ecosystem; because of external nitrogen application, the available nitrogen content in the soil is high, and because Gemmatimonadetes has a strong nitrogen-fixing capacity, its content was highest in plough. After the grassland was converted to plough, Bacteroidetes also increased. Bacteroidetes are mainly anaerobic or facultatively anaerobic bacteria and can be found in a variety of habitats, including soil, sediment and seawater. Li et al. [44] studied black soil plough in Northeast China and found that Bacteroidetes were dominant bacteria in the soil and the most common flora in plough and forest soil. Turner et al. [45] and Donn et al. [46] also found that the abundance of Bacteroidetes was higher in field soils of wheat and pea. Gkarmiri et al. [47] and Xiao et al. [48] found large numbers of Bacteroidetes in soils under rapeseed, alfalfa and other plants. Bergkemper et al. 
[49] found that the relative abundance of Bacteroidetes was positively related to available phosphorus, and available phosphorus may be one of the important factors affecting the bacterial community. In this study, due to the application of chemical fertilizers to the plough land, the available phosphorus content was highest, and the relative abundance of Bacteroidetes increased in the plough soil. Effects of land-use patterns on the composition of soil fungal communities Among the three land-use patterns, the soil fungal groups were mainly Ascomycota, Basidiomycota, and Zygomycota. Ascomycota was the dominant phylum in grassland soil. Cao Hongyu et al. [50] also found that Ascomycota accounted for the highest abundance of grassland soil fungi, which is mainly due to its faster evolution rate, drought resistance and radiation resistance, and its suitability for harsh living environments such as bare sandy land with a low vegetation canopy and grassland [51]. Zhang et al. [52] found that Ascomycota was the dominant phylum in the most primitive grassland, with the dominant orders mainly Hypocreales and Sordariales, and that its abundance in forestland decreased significantly with increasing stand age. Most Sordariales are saprophytic, usually found on faeces or rotten plants. In our study area, animal dung was found in grazed grasslands, and animal and human dung are common fertilizers in plough lands. Therefore, Ascomycota became the dominant fungi in grasslands and ploughs. After the grassland was transformed into forestland, Basidiomycota increased significantly and became the dominant phylum in the soil. This result is consistent with previous studies: after 29 years of pine planting on wasteland, the relative abundance of Basidiomycota increased from 10.9% to 68.7% [24]. 
During fungal succession in the Damma glacier forefield in central Switzerland, a community dominated by Ascomycota was found to become a community dominated by Basidiomycota [53]. Basidiomycota are the dominant ectomycorrhizal species and are more abundant in oak forests. Genera such as Lactarius in Basidiomycota are common mycorrhizal fungi in forest soils that can form symbioses with Pinus sylvestris var. mongolica and thus account for a large proportion of soil fungi [54]. Other Basidiomycota, especially white rot fungi, can break down litter with high lignin and aromatic substrate content. However, only a small group of fungi have the ability to secrete enzymes that catalyse the degradation of complex macromolecules such as lignin [55], and they are largely confined to the Agaricomycetes in Basidiomycota [56]. In this study, litterfall increased significantly after grassland afforestation, requiring more decomposers, which also explains the increase in soil Basidiomycota. After the grassland was transformed into plough land, the dominant fungal phylum was still Ascomycota, but Zygomycota increased significantly. Some Zygomycota are saprophytic fungi that mainly decompose plant litter and change soil chemical properties. Angela et al. [57] found that Zygomycota predominated in soils in Colombia, which is consistent with this study. Qian et al. [58] found that the relative abundance of soil Zygomycota increased after grass was grown in apple orchards, indicating that grass affects the relative abundance of soil Zygomycota; it converts matter to humus and provides a carbon source that increases soil organic carbon. Zygomycota are mostly saprophytic and can make good use of saprophytic environments, but some are also pathogens that can become parasitic when plants are weak, easily causing postharvest diseases. 
Studies have found that the relative abundance of Zygomycota is significantly positively correlated with soil nitrate nitrogen content and increases with it. The nitrate nitrogen content in this study was highest in plough, which may explain the increase in Zygomycota. Li et al. [59] found that the relative abundance of Zygomycota in apple fields and corn fields was greater than in an intercropped field, and the relative abundance of soil fungi at different taxonomic levels also differed, indicating that differences in crop roots, residues, secretions, and crop management affect the physical and chemical properties of the soil and thereby change the microbial species composition and structure. Although the species composition of soil bacterial communities is similar among different land-use patterns, the relative abundances of soil bacterial phyla and genera may differ because of different plant patterns and differences in the form and content of nutrients provided to the soil [60,61]. Effects of land-use patterns on functional changes in soil bacterial and fungal communities After land-use change, the function of the resident bacterial community changed due to differences in the aboveground vegetation community, surface litter composition, decomposition rate, and degree of disturbance from human activities. In this study, except for three functions (intracellular trafficking, secretion, and vesicular transport; cytoskeleton; and extracellular structures), the functions of the other bacterial communities changed significantly (Table 3). This result shows that the change in land-use patterns has a significant effect on the function of surface soil bacterial communities. Zhang et al. 
[62] found that during the transition from secondary forest to larch plantation, due to soil acidification and a reduction in effective nutrient content, land-use patterns had a greater impact on soil bacterial communities. We found that after long-term change of the original grassland, the bacterial functions in the 0-20 cm soil changed, and the bacterial functions of the plough decreased compared with those of the grassland and forest (Table 3). The nutrient conversion and return and the litter quality and quantity of forest and grassland were higher than those of plough, which is consistent with the research results of many scholars [63,64]. After land-use changes, the functional groups of fungal communities changed significantly. After the grassland was transformed into forest, the abundance of Inocybe increased significantly; this genus belongs to Basidiomycota and is an ECM fungus. ECM fungi are reported to be most widely distributed in trees in northern temperate regions [65]. Because ECM fungi are strongly affected by the host, their richness is positively related to the proportion and species richness of ECM plants. In northern temperate deciduous forests, ECM fungi accounted for 34.1% of all taxonomic units, while in grasslands they accounted for only 11.9%, reflecting the lack of host plants in grassland ecosystems [66]. After the grassland was converted to plough, the relative abundances of Mortierella, Chaetomium and Microdochium increased significantly; these are common saprophytic fungi in soil. Liang et al. [67] studied a vineyard and found that the most abundant fungal genera included Mortierella, Chaetomium and Microdochium, which may play a key role in planting soil. Li [68] found that inoculating corn with Mortierella significantly increased soil nutrient transformation, increased the content of indole acetic acid and abscisic acid in corn roots, and increased the biomass of corn seedlings. 
In addition, Mortierella can decompose cellulose, hemicellulose and lignin, increase carbon nutrients, increase soil organic matter and nutrient content, and dissolve phosphorus in the soil; it has therefore been recognized as a beneficial soil microorganism. Most Microdochium fungi are saprophytic, and some species are parasitic, symbiotic, or phytopathogenic [69]. Therefore, the fungal functions of plough soils are mainly saprophytic, parasitic, animal-pathogenic, and mycorrhizal. Effects of soil physical and chemical properties on soil microbial community composition Land-use and management patterns change the type of vegetation on the ground and thereby affect the physical and chemical properties of the soil [70,71]. Changes in soil physical and chemical properties in turn affect the structure and composition of soil microbial communities. Consistent with most other studies, pH was an important factor affecting soil microbial community structure. Barka et al. [72] found a significant positive correlation between Actinobacteria and soil pH; Actinobacteria grow healthily in soils with a neutral pH and grow fastest between pH 6 and 9. Rousk et al. [73] found that both bacterial and fungal communities were affected by soil pH, but bacterial communities were more affected than fungal communities, which may be because the optimal pH range for bacterial growth is relatively narrow, while the pH range for fungal growth is very wide. Although soil pH has a direct impact on microbial community structure, it can also indirectly change microbial communities through other variables, such as nutrient availability and organic carbon content. As an indispensable source of energy and nutrients for microorganisms, SOC plays an important role in shaping the microbial community and significantly changes the proportion of bacteria and fungi in the soil [74]. 
However, the SOC content in this study may not have caused changes in soil microbial communities. In this study, NO3−-N was the most important factor affecting the soil bacterial and fungal communities (Figure 10). Nitrogen limitation is common in most terrestrial ecosystems and often leads to fierce competition between microorganisms and plants [75]. With increasing nitrogen availability, the taxonomic and functional characteristics of soil microbial communities change, including a decrease in the relative abundance of mycorrhizal fungi and of slow-growing bacterial groups. Due to the low soil nitrogen content and low litter mass in forestland, fungi appear to be the main decomposers of complex litter and soil organic matter and largely shape the associated bacterial communities and their activities [76]. Soil moisture is also an important limiting factor that strongly affects soil microbial communities [77]. In this study, MC played a key role in soil fungal diversity: not only can it protect soil organic matter from decomposition and leaching by binding with aggregates, it can also provide a larger surface area for the growth of soil microorganisms [78]. In this study, the TP and AP contents of plough and forestland were significantly higher than those of grassland. The reason may be that the interception of rainwater by the forest canopy reduces surface runoff, so soil surface organic matter and mineral nutrients are retained and losses are smaller, while artificial fertilization in the plough replenishes soil nutrients. Other studies have shown that under eutrophic conditions, the limiting effect of phosphorus on the original microbial community is greatly reduced and the metabolic activity of microorganisms changes, which may change the species composition of microorganisms [79]. 
However, the grassland receives no supplemental external nutrients, and vegetation growth has absorbed phosphorus from the soil, ultimately resulting in lower total phosphorus and available phosphorus. He et al. [80] found that P is the most critical contributor to differences in fungal communities, and phosphorus in forestland is usually lower than in managed ecosystems because of fertilization. Therefore, although the exact mechanism is not yet clear, P may be an important driving force in the assembly of soil fungal communities across land-use types.

DNA collection and high-throughput sequencing

Genomic DNA was isolated from 0.5 g of each pooled soil sample from each sample plot (n = 18) with the PowerSoil DNA Isolation Kit per the manufacturer's instructions. The extracts of three technical replicates were mixed into a single DNA sample. Extracted genomic DNA was checked by 1% agarose gel electrophoresis. PCR was carried out on a GeneAmp 9700 PCR system. Based on previous reports, the primers 338F (5'-ACTCCTACGGGAGGCAGCA-3') and 806R (5'-GGACTACHVGGGTWTCTAAT-3') were used for the 16S rRNA genes. Amplified products were detected by 2% agarose gel electrophoresis, recovered from the gel using the AxyPrep DNA gel extraction kit, washed with Tris-HCl, and verified by 2% agarose gel electrophoresis. PCR products were quantified using the QuantiFluor-ST fluorometer, and the samples were adjusted as needed for sequencing. Sequencing was conducted by Shanghai Majorbio Biopharm Technology (Shanghai, China) on an Illumina MiSeq platform.
Processing of sequencing data

The raw sequence files were analysed and quality-filtered using QIIME (version 1.9.1) with the following criteria: (i) the 250-bp reads were truncated at any site receiving an average quality score of <20 over a 50-bp sliding window; (ii) reads with inexact barcode matches, more than two nucleotide mismatches in primer matching, or ambiguous characters were removed; and (iii) only sequences with >10 bp of overlap were assembled according to their overlap sequence. Reads that could not be assembled were discarded. Chimeric sequences were identified and removed using UCHIME software. Operational taxonomic units (OTUs) were clustered at a 97% similarity cut-off using UPARSE software. The representative sequence of each OTU was taxonomically classified with the Ribosomal Database Project (RDP) classifier against the SILVA (SSU123) database for 16S rRNA and the UNITE database for ITS rRNA, using a confidence threshold of 70%. The sequencing coverage of the soil bacteria and fungi in all samples was >98%, indicating reliable sequencing results.

Statistical analyses

Mothur software was used to calculate the community richness parameters (Chao1, ACE index) and community diversity parameters (Simpson, Shannon index) as part of the alpha diversity analysis. PCoA and h-cluster analysis were based on the Bray-Curtis matrix and were implemented in R. The bioenv method was used to test the soil environmental factors, and environmental factors with significant differences were selected for CCA. One-way analysis of variance (ANOVA) was used to analyse differences in the diversity of both soil bacterial and fungal communities among the plough, grassland, and forest sites. Tukey's HSD (honestly significant difference) test was used for multiple comparisons when the homogeneity of variance test was passed, and significance was assessed at P = 0.05.
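The diversity indices above were computed with mothur, but the underlying formulas are simple. As an illustration only (toy OTU counts, not the study's data; mothur may report a variant of Simpson such as 1 − D), the Shannon and Simpson indices can be sketched as:

```python
import math

def shannon(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over OTU proportions."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def simpson(counts):
    """Classic Simpson dominance D = sum(p_i^2); lower D = more diverse."""
    n = sum(counts)
    return sum((c / n) ** 2 for c in counts)

# Toy OTU abundance table for one hypothetical sample
otu_counts = [50, 30, 15, 5]
h = shannon(otu_counts)   # about 1.142
d = simpson(otu_counts)   # 0.365
```

Richness estimators such as Chao1 additionally use the counts of singleton and doubleton OTUs rather than proportions alone.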
Stepwise regressions were performed to identify the best independent soil factors affecting soil bacterial and fungal diversity. One-way ANOVA, Tukey's HSD test and stepwise regressions were conducted using SPSS 16.0. The functions of bacteria and fungi were analysed using the PICRUSt and FUNGuild function prediction tools, respectively.

Declarations

Ethics approval and consent to participate

Sampling permission was obtained for Qiqihar in Heilongjiang Province, Northeastern China, and field studies were conducted in accordance with local legislation.

Consent

Availability of data and materials

All data generated or analyzed during this study are included in this published article and its supplementary information files. The raw data are available from the corresponding author on reasonable request.

Competing interests

The authors declare that they have no competing interests.

Note: in the figure above, different colors represent different groups, the numbers in overlapping parts represent the number of species common to multiple groups, and the numbers in non-overlapping parts represent the number of species unique to each group. In the figure below, the abscissa is the number of common or unique groups, and the length of the horizontal column above represents the corresponding number of species.

Differences in the abundance of bacterial phyla under different land-use patterns. Note: The Y axis represents the name of a species at a certain taxonomic level, the X axis represents the average relative abundance of the species in different groups, and columns of different colors represent different groups; at the far right is the P value, * 0.01 < P ≤ 0.05, ** 0.001 < P ≤ 0.01, *** P ≤ 0.001. The same below.
Accuracy of Artificial Intelligence-Assisted Landmark Identification in Serial Lateral Cephalograms of Class III Patients Who Underwent Two-Jaw Orthognathic Surgery

To compare the accuracy of artificial intelligence-assisted landmark identification in serial lateral cephalograms of Class III patients who underwent two-jaw orthognathic surgery using a convolutional neural network (CNN) algorithm, 3,188 lateral cephalograms of Class III patients were allocated into the training and validation sets (3,004 cephalograms of 751 patients) and the test set (184 cephalograms of 46 patients, subdivided into the genioplasty and non-genioplasty groups, n = 23 per group). Each patient in the test set had four cephalograms: initial (T0), pre-surgery (T1, presence of orthodontic brackets (OBs)), post-surgery (T2, presence of OBs and surgical plates and screws (S-PS)), and debonding (T3, presence of S-PS and fixed retainers (FR)). Statistical analysis was performed using the mean errors of 20 landmarks between the human gold standard and the CNN model. The total mean error was 1.17 mm, without significant difference among the four time-points. Before and after surgery, ANS, A point, and B point showed an increased error, while Mx6D and Md6D showed a decreased error. No difference in errors existed at B point, Pogonion, Menton, Md1C, and Md1R between the genioplasty and non-genioplasty groups. The CNN model can be used for landmark identification in serial cephalograms despite the presence of OB, S-PS, FR, genioplasty, and bone remodeling.
Introduction

Owing to the high prevalence of Class III malocclusion and negative social recognition of the prognathic appearance, 1,2 Korea has become one of the countries that performs two-jaw orthognathic surgery (TJ-OGS) extensively in patients with skeletal Class III malocclusion. To obtain a successful treatment outcome, the following four steps should be performed precisely: (1) diagnosis and gross treatment planning for pre-surgical orthodontic treatment and orthognathic surgery using initial cephalograms, (2) planning of the direction and amount of surgical movement using pre-surgical cephalograms, (3) assessment of surgical outcome and planning for post-surgical orthodontic treatment using post-surgical cephalograms, and (4) comprehensive assessment of orthodontic treatment and orthognathic surgery using debonding cephalograms. 3,4 In addition, superimposition of serial cephalograms taken at different time-points is also important to assess the outcomes of pre- and post-surgical orthodontic treatment and orthognathic surgery. Accurate detection of cephalometric landmarks is mandatory to perform these procedures. An artificial intelligence (AI) algorithm including a convolutional neural network (CNN) can help clinicians detect cephalometric landmarks, with accuracy close to that of human experts.
[5][6][7][8][9][10][11][12] Previous AI studies have regarded accuracy within a range of 2 mm as clinically acceptable performance in landmark identification. 8,[12][13][14][15] However, this appears to be a lenient standard for appropriate clinical use. Therefore, stricter criteria (i.e., a range within at least 1.5 mm) are necessary in determining the accuracy of landmark identification for clinical relevance. In addition, most AI studies on the accuracy of automated landmark identification 8,[13][14][15] have trained and tested their models using initial lateral cephalograms only, which do not have orthodontic brackets (OB), surgical plates and screws (S-PS), fixed retainers (FR), or bone remodeling changes. To the best of our knowledge, no study has compared the accuracy of automated landmark identification in serial cephalograms at the four time-points covering the initial, pre-surgery, post-surgery, and debonding stages in orthognathic surgery cases. Therefore, the purpose of this study was to compare the accuracy of AI-assisted landmark identification in serial lateral cephalograms of Class III patients who underwent pre- and post-surgical orthodontic treatment and TJ-OGS, using a cascade CNN algorithm and strict criteria for determining the degree of accuracy.

Results

Evaluation of total landmarks (Table 1)

The total landmarks showed a good mean error value (1.17 mm), and the total AP had a high degree of accuracy (74.2%) (Table 1).

Evaluation of skeletal landmarks (Table 1)

Nasion and Sella showed an excellent mean error value and a very high degree of accuracy (0.59 mm and 95.1%; 0.46 mm and 100%, respectively). Porion and Orbitale showed a good mean error value and a high degree of accuracy (1.07 mm and 76.1%; 1.21 mm and 73.9%, respectively). However, Basion showed a fair mean error value (1.64 mm) and a medium degree of accuracy (63.1%).
ANS and A point showed a good mean error value and a medium degree of accuracy (1.39 mm and 65.2%; 1.41 mm and 63.0%, respectively). PNS had a good mean error value (1.19 mm) and a high degree of accuracy (72.7%). Pogonion, Menton and Articulare showed an excellent mean error value and a very high degree of accuracy (0.79 mm and 91.3%, 0.77 mm and 93.5%, 0.77 mm and 93.5%, respectively). B point showed a good mean error value (1.15 mm) and a high degree of accuracy (77.2%).

Evaluation of dental landmarks (Table 1)

Mx1C showed an excellent mean error value (0.44 mm) and a very high degree of accuracy (97.8%). Mx6D had a good mean error value (1.43 mm) and a medium degree of accuracy (64.1%). However, Mx1R and Mx6R had a fair mean error value and a low degree of accuracy (1.55 mm and 57.6%; 1.68 mm and 51.6%, respectively). Md1C demonstrated an excellent mean error value (0.49 mm) and a very high degree of accuracy (97.3%). Md1R had a fair mean error value (1.57 mm) and a low degree of accuracy (58.2%). Md6D had a fair mean error value (1.67 mm) and low accuracy (51.6%). Md6R exhibited an acceptable mean error value (2.03 mm) and a low degree of accuracy (41.3%).

Comparison of the mean errors among the four time-points (T0, T1, T2, and T3) (Table 2)

No significant difference was found in the overall mean errors (P > 0.05). Only three landmarks, ANS, Mx6D, and Md6R, showed a significant difference in the mean errors among the four time-points [ANS, increase in the mean error from T0 and T1 to T2, P < 0.01; Mx6D, decrease in the mean error from T0 to T2, P < 0.05; Md6R, decrease in the mean error from T0 to T2 and T3, P < 0.01].

Comparison of the mean errors between the two time-points [(T0, T1) vs.
(T2, T3)] (Table 2)

ANS, A point, and B point showed an increase of the mean error after TJ-OGS compared with before TJ-OGS [ANS, P < 0.01; A point, P < 0.05; B point, P < 0.01], while Mx6D and Md6D showed a decrease in the mean error after TJ-OGS compared with before TJ-OGS [all P < 0.01].

Comparison of the mean errors between the genioplasty and non-genioplasty groups (Table 3)

No significant difference in the mean errors was found for the landmarks located adjacent to the genioplasty area (B point, Pogonion, Menton, Md1C, and Md1R) at each time-point between the two groups, except for Md1R at T1 (P < 0.05).

Discussion

Since TJ-OGS induces position changes and bone remodeling in the skeletal structures and produces metallic images of the OB, SP-S, and FR, the accuracy and reliability of cephalometric landmark identification in serial lateral cephalograms are important for assessment of treatment outcomes. 16 As the total landmarks exhibited a good mean error value and a high degree of accuracy (1.17 mm and 74.2%, respectively, Table 1) without significant difference among the four time-points (P > 0.05, Table 2), the accuracy of AI-assisted digitization was not significantly affected by the presence of OB, SP-S, FR, or bone remodeling change during orthodontic treatment and TJ-OGS. Regardless of the degree of accuracy of each landmark (Tables 1 and 2), accuracy of the cranial base landmarks can be regarded as a baseline for comparison of serial lateral cephalograms because the positions of these landmarks are not affected by TJ-OGS. Three error patterns were found in the maxillary skeletal landmarks. First, the mean errors of ANS were different among the four time-points (T0, 1.07 mm; T1, 1.22 mm; T2, 1.78 mm; T3, 1.49 mm, P < 0.01; Table 2) and presented an increased error value after TJ-OGS compared with before TJ-OGS [(T0, T1) vs. (T2, T3), P < 0.01; Table 2], which suggested that the metal image of the SP-S adjacent to ANS as well as surgical shape modification of ANS 17,18 (Fig.
1) could affect the accuracy of AI-assisted landmark detection. Second, although the error of A point was not significantly different among the four time-points (T0, 1.27 mm; T1, 1.28 mm; T2, 1.50 mm; T3, 1.59 mm, Table 2), it presented an increase in the mean error value after TJ-OGS compared with before TJ-OGS [(T0, T1) vs. (T2, T3), P < 0.05; Table 2]. This may be because A point is less affected by the metal image of the SP-S installed at the maxilla and has a lower chance of surgical shape modification than ANS (Fig. 1). Third, in cases of posterior impaction and/or anteroposterior movement of the maxilla, the position of PNS had to change. However, for PNS, no significant difference was found either among the four time-points (T0, 1.16 mm; T1, 1.14 mm; T2, 1.29 mm; T3, 1.17 mm; P > 0.05, Table 2) or between the two time-points [(T0, T1) vs. (T2, T3), P > 0.05; Table 2]. This might be due to (1) the absence of the metal image of the SP-S within the ROI of PNS and (2) the fact that the end point of the hard palate can still be easily defined. There are three explanations for the errors in the mandibular skeletal landmarks. First, since there were no metal images within the ROIs of Articulare and Menton, their mean errors were not significantly different among the four time-points or between the two time-points (all P > 0.05, Table 2). Second, the mean error of Pogonion was not significantly different among the four time-points or between the two time-points (P > 0.05; Table 2), which suggests that the metal image of the SP-S adjacent to Pogonion (Fig. 1) might not affect the accuracy of AI-assisted landmark detection. Third, although the mean errors of B point did not differ among the four time-points (T0, 1.00 mm; T1, 1.01 mm; T2, 1.29 mm; T3, 1.31 mm, P > 0.05; Table 2), comparison of the two time-points revealed an increase in error after TJ-OGS compared with before TJ-OGS [(T0, T1) vs. (T2, T3), P < 0.01; Table 2].
These findings suggest that the metal image of the SP-S adjacent to the B point (Fig. 1) might affect the accuracy of AI-assisted landmark detection. There are two sources of errors in the dental landmarks. First, regardless of the degree of accuracy in the dental landmarks (Table 1).

Conclusions

The cascade CNN algorithm proposed in this study can be used for landmark identification in serial lateral cephalograms despite the presence of OB, S-PS, FR, genioplasty, and bone remodeling. Although this study addressed identification of hard tissue landmarks in serial lateral cephalograms, further studies are needed to investigate the accuracy of soft tissue landmark identification.

Methods

Materials. A total of 3,188 lateral cephalograms of 797 patients with Class III malocclusion were used for the training and validation sets and the test set for automated landmark identification using the CNN model. All procedures were performed in accordance with relevant guidelines. The inclusion criteria were as follows: (1) Class III patients who underwent pre- and post-surgical orthodontic treatment and TJ-OGS with/without genioplasty and (2) Class III patients whose serial lateral cephalograms were available. The exclusion criterion was Class III patients who had craniofacial deformities. The training and validation sets for automated landmark identification by the CNN model included 3,004 lateral cephalograms of 751 Class III patients from 10 institutions (Table 4). Some of the patients who belonged to the training or validation set had more than four lateral cephalograms because additional progress lateral cephalograms were taken between time-points, while some of them had missing lateral cephalograms at a specific time-point. For the test set, Class III patients with cephalograms obtained at the following four time-points were selected: initial (T0), pre-surgery (T1, taken at least 1 month before TJ-OGS; presence of OBs), post-surgery (T2, taken at least 2 months after TJ-OGS; presence of OBs and S-PS), and debonding (T3, presence of S-PS, FR, and bone remodeling change).
As a result, the test set consisted of 184 cephalograms of 46 Class III patients from eight institutions (Table 4). It was subdivided into the genioplasty and non-genioplasty groups (n = 23 patients per group). Their characteristics are enumerated in Figure 1. Data sets were obtained from 10 centers using the anonymized Digital Imaging and Communications in Medicine (DICOM) file format. Since finding the exact location of landmarks in a large lateral cephalogram image is relatively difficult, a fully automated landmark prediction algorithm with a cascade network was developed. 12 Two steps were followed: 1) detection of the region of interest (ROI; 256 × 256 or 512 × 512 pixels depending on the landmark) using RetinaNet 19 and 2) prediction of the landmark using U-Net 20 (Figure 2). Definitions of the 12 skeletal and eight dental landmarks are presented in Figure 3 and Table 5. The landmarks were digitized by a single orthodontist with 20 years of experience (human gold standard, MHH) and by the CNN model. The mean values of the absolute errors for each landmark were calculated using the absolute distance between the human gold standard and AI-assisted detection. The degree of error was allocated into excellent (< 1.0 mm), good (1.0-1.5 mm), fair (1.5-2.0 mm), acceptable (2.0-2.5 mm), and unacceptable (> 2.5 mm) groups. Then, the accuracy percentage (AP) was calculated as the percentage of the excellent and good groups among all error groups, which means that an error range within 1.5 mm was considered accurate. The degree of accuracy was defined as "very high" (AP > 90%), "high" (AP, 70-90%), "medium" (AP, 50-70%), and "low" (AP < 50%). Repeated measures analysis of variance (ANOVA) with Tukey's HSD test, repeated measures multivariate analysis of variance (MANOVA), and independent t-tests were performed using SPSS ver. 23.0 (IBM Corp., Armonk, NY, USA). P-values of < 0.05 were considered statistically significant.
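The two-step cascade can be pictured as a crop-then-refine procedure: a detector proposes a region of interest around each landmark, and a heatmap network localizes the landmark inside that crop. The sketch below is only a schematic stand-in (plain NumPy, with RetinaNet and U-Net replaced by placeholder crops and heatmaps; all coordinates are invented), not the authors' implementation:

```python
import numpy as np

def crop_roi(image, center_xy, size=256):
    """Stage 1 stand-in: return a size x size crop around a coarse proposal."""
    x, y = center_xy
    half = size // 2
    x0 = int(np.clip(x - half, 0, image.shape[1] - size))
    y0 = int(np.clip(y - half, 0, image.shape[0] - size))
    return image[y0:y0 + size, x0:x0 + size], (x0, y0)

def landmark_from_heatmap(heatmap, origin_xy):
    """Stage 2 stand-in: map the heatmap argmax back to image coordinates."""
    r, c = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return origin_xy[0] + c, origin_xy[1] + r

# Toy demonstration: a fake 1024 x 1024 cephalogram and a fake heatmap
image = np.zeros((1024, 1024))
crop, origin = crop_roi(image, center_xy=(400, 700), size=256)
heatmap = np.zeros(crop.shape)
heatmap[130, 90] = 1.0            # pretend the network fired here
x, y = landmark_from_heatmap(heatmap, origin)
```

The point of the cascade is that the fine network only ever sees a small, high-resolution crop, which is why ROI detection quality directly bounds the final landmark accuracy.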
Accuracy Percentage (AP); error range within 1.5 mm was considered accurate.
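A minimal sketch of this grading scheme (toy error values, not from Table 1; the boundary convention at exactly 1.0/1.5/2.0/2.5 mm is assumed half-open, which the paper does not specify):

```python
def grade(error_mm):
    """Bin a landmark error into the paper's five categories (assumed half-open bins)."""
    if error_mm < 1.0:
        return "excellent"
    if error_mm < 1.5:
        return "good"
    if error_mm < 2.0:
        return "fair"
    if error_mm < 2.5:
        return "acceptable"
    return "unacceptable"

def accuracy_percentage(errors_mm):
    """AP = share of errors graded excellent or good (i.e., within 1.5 mm)."""
    accurate = sum(1 for e in errors_mm if e < 1.5)
    return 100.0 * accurate / len(errors_mm)

errors = [0.4, 0.9, 1.2, 1.6, 2.1, 2.7]   # illustrative values only
ap = accuracy_percentage(errors)           # 3 of 6 within 1.5 mm -> 50.0
```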
Current Flow Cytometric Assays for the Screening and Diagnosis of Primary HLH

Advances in flow cytometry have led to greatly improved primary immunodeficiency (PID) diagnostics. This is because patient blood cells in suspension require no further processing for analysis by flow cytometry, and many PIDs lead to alterations in leukocyte numbers, phenotype, and function. A large portion of current PID assays can be classified as "phenotyping" assays, in which absolute numbers, frequencies, and markers are investigated using specific antibodies. Inherent drawbacks of antibody technology are the main limitation of this type of testing. On the other hand, "functional" assays measure cellular responses to certain stimuli. While these latter assays are powerful tools that can detect defects in entire pathways and distinguish variants of significance, they require samples with robust viability and skilled processing. In this review, we concentrate on hemophagocytic lymphohistiocytosis (HLH), describing the principles and accuracies of flow cytometric assays that have been proven to assist in the screening diagnosis of primary HLH.

INTRODUCTION

Hemophagocytic lymphohistiocytosis (HLH) can be described as a systemic hyperinflammatory syndrome. It is most often thought to be caused by an inability to clear an inciting infectious or other immunologic trigger. This leads to pathologic immune activation and a positive feedback loop of ever-increasing cytokine secretion and cellular cytotoxicity that ultimately results in self-harm (1,2). HLH can be classified as "primary" or "secondary" depending on whether it occurs as a result of an inborn error leading to a dysfunctional immune system, such as perforin deficiency, or occurs in settings such as infection, malignancy, rheumatologic, or other disease without a known underlying inherited defect in the immune system (3)(4)(5).
Primary HLH can be caused by mutations in a number of genes that affect cytotoxic lymphocyte granule-mediated cytotoxicity, including PRF1, UNC13D, STX11, STXBP2, RAB27A (Griscelli syndrome), AP3B1 (Hermansky-Pudlak syndrome type 2), and LYST (Chediak-Higashi syndrome). Primary HLH can also include other genetic diseases such as XIAP deficiency, which is characterized by inflammasome dysregulation, and SAP deficiency, which has a complicated mechanism of disease, though these diseases are usually classified as X-linked lymphoproliferative disease (XLP) types 1 and 2, respectively. Regardless, the classification of HLH into primary or secondary groups is sometimes difficult due to the varied phenotypes presented and delays or limitations in obtaining genetic results. This has necessitated the development of faster diagnostic screening assays. Many excellent reviews exist on the subject of primary HLH and cytotoxic lymphocyte function, and the reader would be wise to refer to them for a deeper understanding of the subject (1,(6)(7)(8)(9)(10). In this review, we will focus on summarizing the laboratory assays currently used to screen for genetic abnormalities in primary HLH-linked genes and explore their accuracy. We will also briefly discuss possible pitfalls and future directions in diagnosing diseases typically associated with HLH.

PERFORIN DEFICIENCY

NK cells and cytotoxic T lymphocytes are often grouped together as cytotoxic lymphocytes. Their primary role is to kill virus-infected or malignant cells (11,12). Perforin, the pore-forming protein, is encoded by the gene PRF1 and is a key player in this process as well as the archetypical example of primary HLH (13). PRF1 is also historically the first primary HLH gene to be identified and is often referred to as familial hemophagocytic lymphohistiocytosis type 2 (FHL2) (14). Perforin is stored within cytotoxic granules.
Once secreted from cytotoxic lymphocyte granules, perforin oligomerizes on the surface of target cells to create pores that allow the penetration of contents such as granzymes into the target. Perforin is easily stained intracellularly in NK cells using a conjugated monoclonal antibody and has been shown to be absent or highly reduced in persons with biallelic PRF1 mutations. Staining can be performed using fresh whole blood or peripheral blood mononuclear cells (PBMC). First, the various lymphocyte lineages are stained extracellularly, followed by cell fixation and permeabilization. Intracellular perforin is then stained, and the cells are finally analyzed on a flow cytometer (15). Of note, while freshly isolated NK cells contain perforin and are routinely used for perforin analysis, only a minority of cytotoxic T cells in "healthy" individuals express perforin, so perforin expression in resting bulk CD8+ cells varies greatly between individuals. To overcome this, bona fide effector T cells can be gated using CD57 if evaluation of perforin in resting T cells is desired (16,17). This can greatly help in individuals with poor NK cell counts. The diagnostic accuracy of perforin expression in NK cells for detecting biallelic PRF1 mutations has recently been published and is high, with a sensitivity of 96.6% and a specificity of 89.5% for an overall area under the curve (AUC) of 0.971 (Table 1) (18,20). These and other reports have also shown that PRF1 mutation carriers (a mutation in only one allele) often have clearly reduced perforin expression, arguing for allele-dependent perforin expression (19,26,27). The A91V alteration in PRF1 is unique: having a high prevalence of 0.22 to 3.9% depending on the population studied, it has been assumed to be less pathologic (Figure 1) (28)(29)(30)(31). However, in vitro studies have shown that A91V leads to reduced perforin function (32,33).
Individuals with A91V in the compound heterozygous or homozygous state can be identified by laboratory assays and show low to no residual protein expression, and such results may be indistinguishable from those of other pathologic PRF1 mutations (30,34,35). The lack of perforin leads to an inability to kill target cells. This functional defect can be detected as lowered chromium release in the radioactive chromium cytotoxicity assay (36). Because the chromium release assay shows suboptimal accuracy, many have turned to screening for primary HLH diseases with perforin staining coupled with the degranulation/exocytosis/CD107a assay, in place of or in addition to chromium release NK cell function testing. The CD107a assay examines whether cytotoxic lymphocytes (NK cells and CTL) can release secretory lysosomes, as described below, but it does not report whether target cells are killed. Samples from patients with perforin deficiency will not show any degranulation abnormalities, but the assay is nonetheless often run to confirm normal degranulation. Typical perforin deficiency can thus be confidently diagnosed based on the lack of perforin staining and deficient NK cell cytotoxicity with normal degranulation. At this juncture, it is important to differentiate between the terms "NK cell degranulation" and "NK cell function," as they are often thought to be one and the same. The NK degranulation assay, also known as the CD107a or NK exocytosis assay, evaluates whether CD107a-containing secretory lysosomes are able to release their content and thus deposit CD107a on the external cell membrane, where it is measured as a surrogate for degranulation (Figure 2). Under the microscope, CD107a and perforin often co-localize, and so it is assumed that when granules bearing CD107a are externalized, perforin is also most likely released at the immune synapse (43,44).
In the case of perforin deficiency, the CD107a assay is not useful as a screening tool because secretory lysosomes without perforin are still released and CD107a is still expressed on the cell membrane. The CD107a assay is also unable to detect whether granules are headed toward the immune synapse where the target cell is being engaged. When stimulating NK cells in vitro with anti-CD16 antibody, the release of secretory lysosomes is non-polarized, which would not be efficient for target cell elimination (43). The CD107a assay has been found useful for the diagnosis of FHL3-5, GS2, CHS, and HPS2, and possibly ORAI1, STIM1, and HPS10 (45-48), because in all these cases secretory lysosomes are unable to reach the cell membrane or fail to fuse with it, leading to the absence of surface CD107a after relevant stimulation. But in cases of preserved CD107a upregulation, additional testing to evaluate NK cell killing may be needed, as lysosome degranulation does not necessarily equate to the death of target cells. As such, the often-crowned "gold standard" chromium release assay still holds relevance since its description in the 1960s (49,50). In this assay, K562 cells (ATCC, CCL-243) preloaded with radioactive chromium-51 are killed by NK cells, and the extent to which the stored chromium is freed is taken to represent the percentage of K562 killed (51)(52)(53). No published data exist exploring the accuracy of the NK cytotoxicity assay in diagnosing each subtype of primary HLH, possibly due to sample number limitations. Only one recent study attempted to systematically quantify the accuracy of the chromium release NK cell function assay when used in the clinical laboratory setting for diagnosing PRF1, UNC13D, STX11, STXBP2, RAB27A, LYST, and AP3B1 mutations, and found it lacking, with a sensitivity of 60% and a specificity of 72% (Table 1) (20).
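The readout of the chromium release assay is conventionally expressed as percent specific lysis. The formula below is the standard one used with 51Cr assays, not quoted from this review, and the counts-per-minute values are invented for illustration:

```python
def specific_lysis(experimental_cpm, spontaneous_cpm, maximum_cpm):
    """Conventional 51Cr readout:
    % specific lysis = 100 * (experimental - spontaneous) / (maximum - spontaneous),
    where spontaneous = release from targets alone and maximum = release
    from detergent-lysed targets.
    """
    return 100.0 * (experimental_cpm - spontaneous_cpm) / (maximum_cpm - spontaneous_cpm)

# Toy counts for one effector:target ratio
lysis = specific_lysis(experimental_cpm=2200, spontaneous_cpm=400, maximum_cpm=4400)
# 100 * 1800 / 4000 = 45.0 percent specific lysis
```

Subtracting spontaneous release in both numerator and denominator is what makes the readout comparable across wells with different labeling efficiency.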
The low accuracy of this assay, often used during acute-phase HLH, may be partly blamed on the assay's dependency on the NK cell percentage in the sample. HLH patients normally experience large expansions of CD8 T cells, and stressed blood samples from these patients often leave large numbers of RBC and cell debris in the peripheral blood mononuclear cell (PBMC) suspension after Ficoll separation. This leads to an artificially low NK cell percentage that is often unaccounted for, giving an impression of reduced NK function when it is in fact due to the overwhelming number of other cells in the mix. Because the assay is sensitive in this way, care must be taken when interpreting poor NK cytotoxicity results, especially during acute HLH, as they could indicate poor sample quality rather than dysfunctional NK cells. While this assay has many limitations, the result distinctly demonstrates whether or not target cells are finally killed (Figure 2) (54). Numerous flow-, colorimetric-, and imaging-based cytotoxicity assays have been touted as possible chromium release assay replacements, but no large cohort of primary HLH cases has been validated on any of these platforms (55)(56)(57)(58)(59). Pending such reports, the chromium release assay is still the only published clinical standard for NK functional studies. Therefore, we currently rely on the CD107a NK cell degranulation assay for the screening diagnosis of primary HLH related to mutations in UNC13D, STX11, STXBP2, RAB27A, LYST, and AP3B1. The most commonly used NK degranulation assay tests rested PBMC stimulated with the myelogenous leukemia cell line K562 (21). After co-incubation for several hours, the percentage of NK cells bearing surface CD107a or the fluorescence intensity of CD107a-positive NK cells is evaluated. Persons with a defect in secretory lysosome transport or membrane fusion will show greatly reduced surface CD107a levels (Figure 2).
A pan-European study found 97% of FHL3-5 and 85% of GS2 and CHS cases had an abnormal percentage of NK cell degranulation (<5% CD107a+ NK cells), giving an overall sensitivity of 96% and specificity of 88% in diagnosing a genetic degranulation disorder (Table 1) (21). A follow-up study on a North American cohort evaluated CD107a mean channel fluorescence (MCF) of NK cells instead of the percentage of degranulating cells (20). It found 93.8% of patients with biallelic mutations in an HLH-associated degranulation gene with lowered CD107a MCF but only 60.4% of individuals without biallelic mutations in relevant genes with normal CD107a levels, giving an overall area under the curve of 0.86. More recently, a cohort of 21 CHS cases likewise confirmed that the CD107a assay is able to accurately identify primary defects in NK degranulation (22). In the first two studies, a sizable portion of controls were found to have lowered NK degranulation. This could be due to technical issues, stress during blood sample transport, medications leading to reduced lymphocyte reaction, or epigenetic changes resulting in NK cells with a particularly skewed functional response (60)(61)(62)(63). So while better than the chromium release assay, the NK-K562 degranulation assay, like all diagnostic assays, is not perfect.

FIGURE 2 | Cytotoxic lymphocyte evaluation of an STXBP2 patient. We performed NK cytotoxicity as well as NK and T cell degranulation using fresh PBMC from a case with homozygous c.1430C>T (p.Pro477Leu) mutations. While (A) control NK cells and CD8+CD57+ T cells degranulated as expected when stimulated, respectively, with K562 or anti-CD3 antibody, (B) the patient's cytotoxic lymphocytes did not. (C) NK cytotoxicity was also evaluated via 51Cr release and found deficient. In addition, we included cytotoxicity data from a sibling carrying the same homozygous mutation.
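The sensitivity and specificity figures quoted for these cohorts are ordinary 2×2 contingency-table metrics. As a reference for how such numbers are derived, a sketch with illustrative counts (chosen for the example, not taken from the published cohorts):

```python
# Diagnostic sensitivity and specificity from a 2x2 table.
# The counts below are illustrative only, not the published cohort sizes.
def sensitivity(true_pos, false_neg):
    """Fraction of genetically confirmed cases flagged as abnormal."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of unaffected individuals returning a normal result."""
    return true_neg / (true_neg + false_pos)

# e.g. 29 of 30 confirmed cases abnormal, 22 of 25 controls normal
print(f"{sensitivity(29, 1):.0%} sensitivity, {specificity(22, 3):.0%} specificity")
# prints "97% sensitivity, 88% specificity"
```

An area under the curve, such as the 0.86 reported for the MCF readout, summarizes the same trade-off across every possible cutoff rather than at a single threshold.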
To overcome the shortcomings stemming from an overreliance on any single test, NK degranulation can also be evaluated through other means, for example via stimulation with PMA, activating antibodies such as anti-CD16 targeting the Fc receptor, or activation of synergistic NK receptors (16,64,65). Preliminary data found that Fc-stimulation-induced degranulation returns 88% sensitivity and 98% specificity in a cohort of 16 FHL3-5 cases (Table 1) (16). We can thus infer that both NK cell natural cytotoxicity and antibody-dependent cellular cytotoxicity are defective in classical primary HLH. This is an important point to note, as immunodeficiencies can affect only one specific pathway. For instance, a certain CD16 (FcγRIIIA) mutation was found to impair natural NK cytotoxicity while Fc-specific function remained intact (66). Current standard clinical tests limited to K562 stimulation alone would be insufficient for detecting abnormalities in such cases. Cytotoxic T lymphocytes have also been found defective in degranulation in the context of primary HLH due to mutations in the genes required for normal degranulation. Previously, T cell blasts had to be grown over weeks in order to sufficiently stimulate perforin production in T cells and generate enough cell numbers for experimentation (21). More recently, it was noticed that a specific population of T cells, namely CD3+CD8+CD57+ cells, contains perforin and granzymes ex vivo without prior need for stimulation (17). This population of bona fide effector cells, by virtue of perforin expression, was found to efficiently degranulate upon anti-CD3 antibody stimulation. Crucial to our context, when tested on primary HLH samples, CD3+CD8+CD57+ T cell degranulation was defective to a similar level as in NK cells (16). A small confirmatory study found high sensitivity in a cohort of biallelic pathogenic UNC13D variants (23).
With multiple ways to induce degranulation in multiple cell types, we could speculate on possible undiscovered immunodeficiencies that affect only NK cells or T cells and are detectable only with a combination of various degranulation assays. Like perforin, it is possible to directly detect Munc13-4, syntaxin-11, Munc18-2, and Rab27a with antibodies (67)(68)(69). However, this is usually performed by western blot. One exception is Munc13-4 detection in platelets by flow cytometry (70,71). Although this assay has been found to be highly accurate for predicting UNC13D mutations, the antibody used is polyclonal and not commercially available. Taken together, when primary HLH is suspected, performing the triad of perforin staining, NK and/or T cell degranulation, and NK cytotoxicity will give a more complete evaluation of cytotoxic cell activity and improve HLH diagnosis. While all the assays are individually accurate, we suggest moving toward a "multiplexing" of degranulation assays in the future to increase confidence in diagnosis, provide security should any one cell population be poorly represented, and pave the way for detecting degranulation deficiencies in specific pathways or cell types. Additionally, validating a radioactivity-free killing assay that accounts for effector cell counts would be highly useful for true assessment of cytotoxic lymphocyte function.

X-LINKED DISEASES

The genes SH2D1A and XIAP/BIRC4 encode the proteins SAP and XIAP, respectively. Deficiencies in these proteins lead to X-linked lymphoproliferative disease (XLP) types 1 and 2 (72,73). As their names imply, both genes are X-linked and often manifest HLH with Epstein-Barr virus (EBV) infection (74)(75)(76), but beyond that, XLP1 and XLP2 have quite different phenotypes and share little functional or structural similarity (77). Similar to perforin, SAP and XIAP monoclonal antibodies exist and have been validated clinically for direct intracellular protein detection (Figures 3, 4) (78-80).
However, care must be taken when reading such reports, as certain pathologic variants have been found to preserve antibody binding, leading to false negative (falsely normal) results (81)(82)(83). Also, while the absence of binding can be equated with the absence of that protein and thus strongly suggests a defect, the binding of an antibody to its antigen says nothing about the function of the protein bound. As such, patients expressing normal SAP and XIAP levels (and, for that matter, patients with normal results on any direct antibody phenotyping test) should still be sequenced if clinically suspicious. Bimodal staining patterns are also useful for identifying female carriers as well as for estimating the level of chimerism during transplant monitoring (24,79). For XIAP, there have been reports of non-random X inactivation in some female carriers: in some, lymphocytes bearing the wild-type allele are selected, while others show the opposite, skewing toward the defective X chromosome and placing the carrier at risk for disease manifestations (73,84,85). Direct screening of SAP returns 87% sensitivity and 89% specificity for the prediction of pathologic mutations in SH2D1A, while direct screening of XIAP gives 95% sensitivity and 61% specificity (Table 1) (24,86). It has been demonstrated that both SAP and XIAP are required for the development of normal invariant NKT (iNKT) cells and for normal T cell restimulation-induced cell death (RICD) (73,76,87,88). As such, iNKT quantification and RICD assays can be performed in cases where direct staining is inconclusive, or if further supporting data are desired (Figure 3). A more sophisticated cytotoxicity assay looking at inhibitory 2B4 signaling in NK cells has also been reported to discriminate functional SAP deficiency (89). Likewise, a functional test exists in which XIAP function is investigated downstream of NOD2 stimulation in monocytes. Following stimulation with L18-MDP, TNF is normally produced by CD14-positive cells.
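The readout of this monocyte assay is simply the fraction of CD14-positive cells producing TNF after stimulation, compared against the 10% cutoff reported in the cited study. A minimal sketch of applying that threshold (the cutoff value is from the study; the sample percentages are hypothetical):

```python
# Apply the reported 10% cutoff for TNF-producing monocytes after
# L18-MDP stimulation. Sample percentages below are hypothetical.
TNF_CUTOFF_PERCENT = 10.0

def xiap_deficient_pattern(pct_tnf_pos_monocytes):
    """True if TNF production falls below the diagnostic cutoff."""
    return pct_tnf_pos_monocytes < TNF_CUTOFF_PERCENT

print(xiap_deficient_pattern(3.2))   # deficient pattern -> True
print(xiap_deficient_pattern(42.0))  # healthy-control pattern -> False
```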
However, patients with pathologic mutations in XIAP, even those in whom XIAP protein staining was normal or whose clinical phenotype was milder, all had equally defective TNF production and could easily be discriminated (Figure 4) (25). A cutoff of 10% TNF-producing monocytes perfectly distinguished 12 XIAP patients from 29 healthy controls and 6 female carriers (Table 1). Subsequent reports demonstrated the assay's usefulness in diagnosing inflammatory bowel disease (IBD) cases with novel XIAP mutations (90,91). By performing phenotyping and functional assays side by side, it is hoped that future cases might be more accurately identified.

OTHER PRIMARY IMMUNODEFICIENCIES

A host of patients with other diseases such as ALPS, CGD, CVID, and SCID, as well as variants in genes including BTK, CARMIL2, CD27, ITK, LRBA, MAGT1, NEMO, PIK3CD, RAG2, WAS, NLR genes, and STAT genes, have been implicated with possible HLH (92-94). The assays described so far, including NK cell degranulation and cytotoxicity, will be of little diagnostic use here except to rule out defective secretory lysosome transport. For some genes, there exist flow cytometric assays that can assist with diagnosis. For example, T, B, and NK cell subset phenotyping panels can pick up ALPS (increased double-negative T cells), X-linked agammaglobulinemia due to mutations in BTK (low B cell counts or BTK expression), mutations in CD27 (absent surface expression of CD27), mutations in MAGT1 (lowered NKG2D expression), and a variety of SCID disorders (very low B, T, and/or NK counts, reduced recent thymic emigrants and CD45RA expression) (95). The neutrophil oxidative burst assay is an excellent assay for the diagnosis of CGD (96). WAS can be accurately diagnosed through direct staining of intracellular WAS protein (97). Multiple excellent reviews exist for PID diagnostics (98,99). A second group of primary immunodeficiency genes demonstrates defective NK cell activity without pronounced HLH.
However, before suggesting that NK degranulation and cytotoxicity assays could be used to help diagnose these PIDs, larger cohorts of patients must be collected to confirm and explore cytotoxic lymphocyte findings further, including whether both NK cells and CTLs are affected, whether both degranulation and cytotoxicity are defective, and whether the majority of mutations in each gene share the same phenotype. Genes in this group include AP3D1, CTSC, FERMT3, GATA2, IRF8, MYH9, ORAI1, and STIM1 (45,47,48,(100)(101)(102)(103)(104)(105)(106)(107). From this list, we know that not all persons in whom NK cell function is defective should be labeled primary HLH. Moreover, a thorough evaluation is hampered because many of the publications lack NK degranulation or cytotoxicity data, something we hope future endeavors will address. These genes are thus currently not grouped together with the "classical" primary HLH family because clinical HLH is not usually the outstanding feature. Most are also very rare, leading to difficulty in performing large cohort evaluations of cytotoxic lymphocyte activity.

THE FUTURE OF HLH DIAGNOSTICS

The HLH field has come a long way since the HLH-2004 criteria were established (108). A European cohort of cases with clinical HLH and PID other than defects in cytotoxicity found 63 cases, 80% of which were CGD and CID (109). Across the Atlantic, another HLH cohort comprised only 19% primary HLH disorders, with 58% of patients having other PIDs, including genes associated with inflammasome function (92). We reason that the high percentage of "non-classical-HLH" cases is a reflection of improved HLH awareness within the community and should be looked upon positively. These and other studies looking into the specific sensitivities of various HLH-2004 criteria have found them wanting (110)(111)(112). The concern often cited is the inability to distinguish between primary HLH, secondary HLH, and other PIDs.
A simple solution that can easily be adopted today is increased screening. As can be concluded from Table 1, many subtypes of primary HLH can be diagnosed with good accuracy. As such, the fulfillment of HLH criteria should act as an actionable gateway to seriously consider PID by performing the various laboratory tests discussed above. This, in tandem with advanced sequencing, should more often than not provide a conclusive diagnosis for all the common primary HLH cases. As previously mentioned, we believe the field of HLH diagnostics will move toward a "multiplexing" of screening assays to more quickly screen for multiple defects simultaneously. The evaluation of gene expression signatures is an exciting development that could help untangle some of the primary vs. secondary HLH questions going forward. Unique interferon-stimulated gene signatures have been found in systemic lupus erythematosus, differentiating it from rheumatoid arthritis and control samples (113,114). Other studies successfully used the interferon score to identify various Mendelian Type-I IFN-mediated autoinflammatory diseases (115,116). Preliminary work to define an HLH signature has also been performed with favorable results (117,118). While research in this area is in its infancy today, we postulate a future where specific gene expression fingerprints from tens or hundreds of genes would be elucidated for the various shades of HLH. We could then quickly and accurately segregate HLH into several subcategories as well as deduce their disease status. The signatures could not only act as a "precision" diagnostic tool but also afford us a deeper cellular mechanistic understanding of the pathobiology of various closely related diseases, and thus opportunities for "precision" therapeutics. We are excited to see what the future holds in terms of HLH diagnostics.

AUTHOR CONTRIBUTIONS

RM initiated the manuscript, which SC wrote and JB edited.
Crystal structure of anilazine

The title compound [systematic name: 4,6-dichloro-N-(2-chlorophenyl)-1,3,5-triazin-2-amine], C9H5Cl3N4, is a triazine fungicide. The dihedral angle between the planes of the triazine and benzene rings is 4.04 (8)°. In the crystal, two weak C—H⋯N hydrogen bonds and short Cl⋯Cl contacts [3.4222 (4) Å] link adjacent molecules, forming two-dimensional networks parallel to the (112) plane. The planes are linked by weak intermolecular π–π interactions [3.6428 (5) and 3.6490 (5) Å], resulting in a three-dimensional architecture.

S2. Experimental

The title compound was purchased from the Dr. Ehrenstorfer GmbH Company. Slow evaporation of a solution in CHCl3 gave single crystals suitable for X-ray analysis.

S3. Refinement

All H atoms were positioned geometrically and refined using a riding model with d(C—H) = 0.95 Å and Uiso = 1.2Ueq(C) for aromatic C—H groups.

Figure 1. The molecular structure of the title compound. Displacement ellipsoids are drawn at the 50% probability level. H atoms are shown as small spheres of arbitrary radius.

Figure 2. Packing diagram of the title compound with C—H⋯N hydrogen bonds and short Cl⋯Cl contacts shown as dashed lines.

Special details. Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.

Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
Chronic Disease Prevention and Control: Coming of Age at the Centers for Disease Control and Prevention The Centers for Disease Control and Prevention's (CDC's) National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP) has entered its 20th year — at the crossroads between adolescence and adulthood. As the 3 directors of the center to date, we offer our perspective on its developmental path and on the opportunities and challenges that lie ahead. Former CDC Director William Roper described public health as the "intersection of science and politics" (1). This description speaks volumes about the strategies we employ and the resources we have. Spurred by the 2008 presidential election, current national debate includes renewed interest in health care reform. Reform discussions largely revolve around alternative mechanisms and financing needed to achieve universal coverage for medical care. Too often absent in these discussions is the critical need for population-based prevention to protect health in the first place. "Health in all policies and settings" could be a unifying strategy to complement the delivery of clinical preventive services and care. This expanded vision of health reform will depend on a robust public health system that can address the leading determinants of health and health care cost: chronic disease prevention and control. The backbone of a strong public health system is the national, state, and local public health infrastructure. 
Twenty years have brought growth to core chronic disease prevention and control programs at each of these levels, but the programs remain weak and fragmented and are of secondary importance in too many public health departments. The good news is that all 50 states and the District of Columbia have programs that focus on tobacco use, diabetes, breast and cervical cancer screening, comprehensive cancer control, and the Behavioral Risk Factor Surveillance System (BRFSS). These programs succeed by monitoring health risks, reducing tobacco use and exposure to secondhand smoke, detecting early disease, and improving the length and quality of life of people living with diabetes and cancer. Effective state programs reflect a complementary relationship with CDC, in which CDC provides technical and financial support, and state chronic disease programs innovate, test, and share experiences that move the field forward. In contrast, programs that promote improved nutrition and increased physical activity are inadequate. Although increasing obesity rates have raised nationwide concern in the private and public sectors and among children's advocates, only half of the states have federal resources to fund public health activities in this area. Loss of funds and declining purchasing power since 2001 have effectively removed more than $100 million from CDC resources devoted to this issue. Because states must compete for these limited funds, every 5 years programs are built up in some locations and dismantled in others, leaving millions unserved as the result of small distinctions between high-quality proposals. The same situation applies to programs that address heart disease, stroke, arthritis, and oral health. 
Although the past 20 years have brought a deeper and stronger scientific basis for public health approaches to chronic disease prevention, public health remains without the basic resources it needs to establish strong chronic disease prevention programs at the state and local levels. Despite the challenges that patchwork funding presents, national advances in chronic disease prevention programs have been profound. One such advance was the 2006 establishment of CDC's Division for Heart Disease and Stroke Prevention to address the nation's first and third leading causes of death. This new organizational unit, coupled with the existing Division of Cancer Prevention and Control, means that at a national level, public health is tackling the nation's biggest killers. A Public Health Action Plan to Prevent Heart Disease and Stroke -and the public and private partnership created by it -charts the course to achieve national goals for preventing heart disease and stroke through 2020 and beyond. For the first time, an action-oriented, national public health plan has been developed for the leading cause of death in the country. In cancer control, 20 years have seen the maturation of the National Breast and Cervical Cancer Early Detection (B&C) Program. Publicly funded screening services can be provided at a cost consistent with screening in the general population (2). Since 1991, more than 3.3 million women have been screened through the B&C Program (3), and thousands of cancers have been detected at treatable stages (4). The program has reached at-risk women who have historically been missed by screening programs (5). Unfortunately, current resources limit public health authorities to reaching just 15% to 20% of women who are eligible for mammography services. CDC's National Comprehensive Cancer Control Program, established in 1998, is an innovative systems approach to state, tribal, and territorial planning and program delivery in cancer control. 
As a result of this program, every state, the District of Columbia, 7 tribes/tribal organizations, 6 US Pacific Island jurisdictions, and Puerto Rico have a comprehensive cancer control plan and an active cancer coalition that brings together expertise and capabilities to address cancer prevention, control, and survivorship. Through this program, many states are highlighting the importance of colorectal cancer screening, which could prevent 70% to 90% of deaths from colorectal cancer if all precancerous polyps were identified and removed (6). The recent action of Congress to add $25 million to CDC's fiscal year 2009 budget for colorectal cancer will allow CDC to begin to establish a nationwide screening program that covers screening and diagnostic follow-up care to low-income men and women with no or inadequate health insurance coverage for these services. Twenty years have also brought scientific findings that have expanded the ability of the public health community to take action. A key contributor is the CDC-supported network of 33 Prevention Research Centers (PRCs), which collaborate with community, academic, and public health partners to conduct participatory research and to put that research into practice. This network is CDC's largest extramural research program and has helped put community-based participatory research on the national map. Each center has a community advisory committee that considers the community's perspective in light of scientific evidence. A good example of this partnership is the way that researchers at the University of Washington's PRC worked with seniors to develop Enhance Fitness, recognized by the National Council on Aging as one of the top 10 physical activity programs in the United States. Evaluation results demonstrated improved outcomes in physical functioning, enhanced socialization, decreased depression, decreased physical pain, and reduced health care costs (7). 
In 8 years, Enhance Fitness progressed from 1 site to more than 300 sites in 26 states, reflecting the PRC network's emphasis on committed, long-term partnerships to develop, translate, and disseminate effective programs. A noteworthy piece of scientific work comes from the National Institutes of Health-funded Diabetes Prevention Program (DPP), a clinical trial aimed at discovering whether diet and exercise or the oral diabetes drug metformin could prevent or delay the onset of type 2 diabetes (8). DPP results show that type 2 diabetes can be prevented or delayed with moderate weight loss and improvements to nutrition and physical activity behaviors. Unfortunately, data from this well-controlled, well-resourced clinical trial have yet to be translated into widespread public health practice. CDC is conducting several pilot programs to identify people who are at high risk for type 2 diabetes and enroll them in diabetes prevention interventions based on the DPP. Early results suggest that outcomes similar to that of the original trial can be achieved at a fraction of the cost. In addition to applied research, evaluation and surveillance are pivotal to NCCDPHP's public health achievements. A prime example is the BRFSS. BRFSS designers recognized not only the importance of state data in influencing and evaluating the success of public health programs but also the need to tackle multiple public health issues in a single surveillance system. With the BRFSS's built-in flexibility, states can add questions of high salience. Moreover, innovative modules have opened up major areas of public health action, such as work in mental health, quality of life, and experiences of racism and its relationship to health outcomes. Perhaps most noteworthy in the past decade is the use of BRFSS data to document rising obesity rates and to drive public attention and public health response to this epidemic. 
Recent advances, such as the introduction of SMART (Selected Metropolitan/Micropolitan Area Risk Trends) BRFSS, which provides data for hundreds of counties by summing across multiple years to make stable estimates at the local level, demonstrate that BRFSS is an evolving, world-class data system (9). A major structural change for the center occurred in 2006 with the transfer of the Office of Public Health Genomics from CDC's Office of the Director to NCCDPHP. This office continues its work to establish public health genomics as a multidisciplinary field concerned with the effective and responsible translation of genome-based knowledge and technologies to improve population health. The Office of Public Health Genomics has led the way in using genomics-based health applications, promoting family history as a tool for disease prevention, and examining, through CDC's National Health and Nutrition Examination Survey, the prevalence and association of 90 genetic variants with specific disease outcomes. Several other accomplishments are worth noting in this 20th anniversary year. One is the achievements of the youth media campaign VERB. In his last year in office, Congressman John Porter from Chicago called for the use of paid media to market health to children. NCCDPHP embarked on one of the most innovative projects in its history. Funds at a level unheard of in public health (averaging approximately $70 million per year) were provided to use the same advertising strategies that were employed by the best marketers of children's products. By the end of the 5-year campaign, VERB had won more than 50 major industry awards. More importantly, this campaign, which was "by and for kids," achieved a 75% recognition rate among the target audience (9- to 13-year-olds). As the ultimate measure of success, children who were aware of VERB reported engaging in significantly more physical activity than did children who were unaware of VERB (10). 
This story ends with disappointment in terms of sustaining meaningful changes in the health of youth. Despite evidence of nationwide effectiveness, VERB funding was halted at the end of 5 years. Another area of transformation is CDC's work with communities, including programs that show success in eliminating racial and ethnic health disparities. Communities that participate in the Racial and Ethnic Approaches to Community Health (REACH) program are innovators in strategy and intervention. Their documented successes in reducing and eliminating health disparities speak powerfully to the importance of engaging local leaders and organizations, forging strong community partnerships, and recognizing cultural influences and historical legacies (11). The Healthy Communities Program (which builds on the Steps Program established in 2003) simultaneously addresses chronic diseases such as obesity, diabetes, heart disease, physical inactivity, poor nutrition, and tobacco use by creating a groundswell of activity in local communities, through schools, worksites, health care settings, and other community institutions. The Healthy Communities Program emphasizes public health interventions that are evidence-based and that reach beyond public health to community health by bringing together business, transportation, and city planning sectors. The WISEWOMAN (Well-Integrated Screening and Evaluation for Women Across the Nation) program serves women aged 50 to 64 years and builds on the B&C Program's extensive outreach to uninsured and underinsured women who are at or below 250% of the federal poverty threshold. The community-based WISEWOMAN programs provide standard preventive services, including blood pressure and cholesterol testing, as well as lifestyle programs that promote good nutrition, physical activity, and smoking cessation. 
CDC and partners such as the YMCA of the USA, the National Association of County and City Health Officials, the National Recreation and Park Association, the National Association of Chronic Disease Directors, and the Society for Public Health Education are working to share lessons learned from REACH, Healthy Communities, Steps, and WISEWOMAN through carefully developed tools and training. Moving these demonstrations into widespread practice will require political will at the national, state, and local levels to provide resources that enable local communities to take action. As an example of such political will, the Minnesota state legislature recently voted to invest $47 million in 2 years to establish a new statewide health improvement plan -owing to the success of the state's Steps Program in promoting health at the community level. Our work in maternal, child, and adolescent health is also reaching new heights. Together, the Division of Reproductive Health (DRH) and the Division of Adolescent and School Health (DASH) have reduced the rate of unintended teen pregnancy. DRH, through its Safe Motherhood program, tackles a wide range of maternal and child health issues, including infertility, premature birth, gestational diabetes, tobacco use during pregnancy, and postpartum depression. DASH is the nation's "go-to" location for resources and assistance to build healthy youth and healthy schools. DASH's direct involvement with the nation's state and local education agencies and organizations (in concert with traditional public health agencies) has enabled work with the nation's schools and is a model for work with other sectors. Advances in the areas of tobacco, nutrition, physical activity, and alcohol represent some of the most important work of the center. Tobacco control has set a new and powerful paradigm for prevention by documenting the influence of policy and media. 
We see signs of the new paradigm being applied to nutrition through innovative local and state initiatives to influence food choices in day care centers, schools, and hospitals; restrict fast food and liquor store densities; provide calorie information on menu items; improve food labeling practices; limit food advertising to children; provide incentives for full-service grocery stores in urban "food deserts"; and reduce salt in the nation's food supply. Finally, our work with the CDC Foundation, a 501(c)(3) charity, helps donors and CDC scientists achieve common goals. For example, the Avon-CDC Foundation Mobile Access Program provides mammography screening vans that serve women in geographically remote areas. Services are made possible through a $4.1 million gift from the Avon Foundation. Through such partnerships with the CDC Foundation, NCCDPHP is able to extend its reach and capabilities. Current donor investments to NCCDPHP through the CDC Foundation total $60 million, including a sizeable grant from the Bloomberg Philanthropies to establish global surveillance of adult tobacco use. Although the major chronic diseases and their risk factors are distinct in terms of biology, prevention, and treatment, they share many similarities. Populations at risk for 1 chronic disease are often at risk for multiple chronic diseases. Common settings, such as schools, worksites, health care organizations, and communities serve as intervention sites for the prevention of multiple risk factors, early detection of disease, and promotion of self-management programs for chronic disease. Lastly, coordinated strategies, such as those involving supportive public policy, social and physical environments, system changes, media, and technology, are required to address nearly all chronic disease risk factors and conditions. 
Recognizing the necessity for improved program integration, NCCDPHP is working with states and communities to develop and evaluate new models for chronic disease prevention that focus on populations rather than on risk factors and diseases. Four states (Colorado, North Carolina, Massachusetts, and Wisconsin) have established unified work plans that preserve the integrity of Congressional funding lines but do so in the context of a comprehensive plan. These models are precursors to a new way of doing business that maintains focus on evidence-based best practices while maximizing the impact of investments across categorical programs. We have described the substantial advances that NCCDPHP has made in chronic disease prevention and control and the potential, and need, for future development. Public health has played a central role in many of the greatest health achievements of our times and is positioned to achieve much more (12). However, we believe that the greatest challenge to public health is solving the investment problems that have plagued chronic disease prevention and control for too long. We conclude with 4 recommendations for the center and its work. • Prevention parity. Preventive actions to maintain health in the absence of disease are underused and undervalued. NCCDPHP and its partners must be the outspoken leaders on behalf of prevention, its financing, and its delivery. Prevention methods are required to demonstrate cost-effectiveness, if not cost savings, before they are employed. At the same time, costly medical procedures and treatments are not held to the same standards. The nation needs 1) a level playing field for the assessment of both preventive and therapeutic interventions and 2) support for interventions that improve health at a reasonable cost. Public health systems research, cost-effectiveness research, and translation research are all needed to advance and support the prevention mission. • Optimal health for all.
High-coverage, long-lasting, and low-cost strategies, such as laws for clean indoor air and water fluoridation, are the hallmarks of effective public health practice. Constant vigilance is required, alongside these efforts, to ensure that we are reaching populations that face the greatest inequities in health. Intensive community efforts focused on achieving health equity, such as those demonstrated by REACH, also will be critical to success. • Health in all policies and settings. Increasing health care costs and subpar health outcomes are illuminating the importance of prevention. Even the broader health sector cannot deliver optimal health outcomes on its own. Policies and practices in education, housing, transportation, and agriculture have far-reaching health effects but are not engaged or evaluated for those outcomes. Work in the area of social determinants of health highlights the importance of environmental, social, political, and economic conditions on health. Given the influence of multiple sectors on health, the Department of Health and Human Services' Healthy People 2020 health objectives for the nation will have the best chance of success if they explicitly call for the engagement of key sectors and if the objectives are adopted and addressed by the president's full cabinet. • Worldwide engagement. The changing landscape of global disease patterns, from infectious to noninfectious causes, will require more leadership and a more global engagement from NCCDPHP than ever before. NCCDPHP's global surveillance activities establish a foundation for this work by providing public health data in more than 150 countries. CDC's bilateral and multinational work on social determinants of health, health promotion, and tobacco control informs progress in chronic disease prevention and control in the United States and abroad. These efforts should be leveraged and expanded.
We are proud to have been part of the growth of NCCDPHP and to have participated in its support of the remarkable work of state and local health departments, partners, and colleagues in the past 2 decades. What began as a relatively new frontier in public health is now accepted as a centerpiece for health and wellness in the country. Ultimately, matching the intensity and reach of our prevention efforts to the scope of the chronic disease challenges will be necessary to deliver on the promise of optimal health for all.
Integrated multi-omic analysis and experiment reveals the role of endoplasmic reticulum stress in lung adenocarcinoma

Background: Lung cancer is a highly prevalent malignancy worldwide and is associated with high mortality rates. While the involvement of endoplasmic reticulum (ER) stress in the development of lung adenocarcinoma (LUAD) has been established, the underlying mechanism remains unclear.

Methods: In this study, we utilized data from The Cancer Genome Atlas (TCGA) to identify differentially expressed endoplasmic reticulum stress-related genes (ERSRGs) between LUAD and normal tissues. We performed various bioinformatics analyses to investigate the biological functions of these ERSRGs. Using LASSO analysis and multivariate stepwise regression, we constructed a novel prognostic model based on the ERSRGs. We further validated the performance of the model using two independent datasets from the Gene Expression Omnibus (GEO). Additionally, we conducted functional enrichment analysis, immune checkpoint analysis, immune infiltration analysis, and drug sensitivity analysis of LUAD patients to explore the potential biological functions of the model. Furthermore, we conducted a battery of experiments to verify the expression of ERSRGs in a real-world cohort.

Results: We identified 106 ERSRGs associated with LUAD, which allowed us to classify LUAD patients into two subtypes based on gene expression differences. Using six prognostic genes (NUPR1, RHBDD2, VCP, BAK1, EIF2AK3, MBTPS2), we constructed a prognostic model that exhibited excellent predictive performance in the training dataset and was successfully validated in two independent external datasets. The risk score derived from this model emerged as an independent prognostic factor for LUAD. The linkage between this risk model and immune infiltration was confirmed through Gene Set Enrichment Analysis (GSEA), Gene Ontology (GO), and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses. The qPCR results verified significant differences in the expression of the prognostic genes between cancer and paracancer tissues. Notably, the protein expression of NUPR1, as determined by immunohistochemistry (IHC), exhibited an opposite pattern compared to the mRNA expression patterns.

Conclusion: This study establishes a novel prognostic model for LUAD based on six ER stress-related genes, facilitating the prediction of LUAD prognosis. Additionally, NUPR1 was identified as a potential regulator of ER stress in LUAD.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12920-023-01785-4.

Introduction

Lung cancer poses a significant worldwide health issue, wherein non-small cell lung cancer (NSCLC) comprises approximately 80-85% of reported cases [1]. According to global cancer statistics, over 2 million new cases of lung cancer are diagnosed each year [2,3]. Among the various subtypes of NSCLC, lung adenocarcinoma (LUAD) represents approximately 55-60% of cases [4]. Despite significant advancements in immune checkpoint inhibitors and anti-angiogenesis therapies that have improved survival rates, the 5-year survival rate for patients remains around 20% [5-7]. While several conventional clinical models have been utilized to predict the prognosis of LUAD, the inherent heterogeneity of the disease limits their ability to provide accurate results [8]. Consequently, there is a need to develop new prognostic signatures that can enhance prognosis assessment for LUAD patients.
The endoplasmic reticulum (ER) is a multifunctional organelle responsible for protein folding, lipid biosynthesis, and calcium storage [9,10]. Notably, it serves as a central hub for protein quality control, enabling adaptation to adverse synthesis conditions, external stimuli, and other detrimental events. ER stress has been implicated in the development and progression of various human malignancies, as it affects multiple cancer hallmarks [11]. External adverse factors can disrupt the integrity of the ER, leading to the accumulation of unfolded or misfolded proteins within its lumen, a condition known as ER stress. This triggers the activation of the unfolded protein response (UPR) [12,13]. In several cancer types, overexpression of ER stress indicators has been associated with poor prognosis and clinical outcomes [14]. Wei et al. conducted a study confirming that activation of ER stress signals plays a significant role in the initiation and progression of liver cancer [15]. Furthermore, suppressing the ER stress response has been shown to enhance cellular susceptibility to cisplatin therapy in NSCLC [16]. A recent study demonstrated that ER stress induces oral squamous cell carcinoma cells to secrete exosomal PD-L1, leading to upregulated PD-L1 expression in macrophages and driving the polarization of M2 macrophages [17]. However, a comprehensive understanding of ER stress in LUAD, including the interplay between ER stress regulators and the tumor immune microenvironment (TIME), remains elusive.
To investigate and assess the clinical significance of ER stress in LUAD, a comprehensive analysis of endoplasmic reticulum stress-related genes (ERSRGs) was conducted in this study. Additionally, a predictive model based on ERSRGs was constructed to evaluate its prognostic value in LUAD patients. Functional enrichment analysis revealed a correlation between ERSRGs and immune infiltration. The findings of this study offer insights into the potential molecular mechanisms underlying LUAD and provide valuable prognostic information for clinical management.

Data collection

The clinical information and RNA sequencing data for the bioinformatics analysis were obtained from a publicly available database, The Cancer Genome Atlas (TCGA, https://portal.gdc.cancer.gov/). A total of 453 samples diagnosed with LUAD were included in the training set, ensuring the availability of complete clinical information, including survival time, survival status, age, and gender. Additionally, to further validate the findings, datasets comprising 352 patients diagnosed with LUAD were acquired from the Gene Expression Omnibus (GEO) database (https://www.ncbi.nlm.nih.gov/geo/). These datasets consisted of two independent cohorts, GSE31210 (246 samples) and GSE37745 (106 samples), from which mRNA expression matrices were extracted. Subsequently, these datasets were integrated and defined as the validation sets.
Clinical sample collection

For quantitative polymerase chain reaction (qPCR) experiments, tissue samples were obtained from a cohort of eight patients who underwent pulmonary lobectomy at the Affiliated Cancer Hospital of Nantong University between August 2022 and April 2023. The tissue samples comprised both LUAD and paired paracancerous specimens. Additionally, six patients diagnosed with LUAD were included for immunohistochemistry experiments, and their samples were sourced from Nantong Tumor Hospital. Prior to their participation in the study, all patients provided written informed consent. The research protocol was approved by the Ethics Committee of the Affiliated Cancer Hospital of Nantong University.

Quantitative polymerase chain reaction

Eight pairs of cancer and adjacent non-cancerous tissues were collected for qPCR. Total RNA extraction was conducted using TRIzol reagent (Thermo Fisher Scientific, USA), following the manufacturer's instructions. Reverse transcription of mRNA was accomplished using the EvoM-MLV reverse transcription kit (Accurate Biology, China) [18] and the HiScript III RT SuperMix for qPCR (Vazyme, China). The primers used in this study were procured from Sangon Biotech, and their specific sequences are provided in Table 1.
Immunohistochemistry

The tumor tissue microarray, obtained from the Affiliated Tumor Hospital of Nantong University, was utilized for validation of the cohort. Immunohistochemistry (IHC) was performed following previously established protocols [19]. In brief, tissue sections were incubated with a primary anti-NUPR1 antibody (dilution 1:100; catalog number 15056-1-AP, Proteintech, China) and subsequently processed using the appropriate detection system. A scanning microscope (Nikon, Japan) was employed for capturing high-resolution images of the stained sections. The evaluation of NUPR1 staining was conducted by two independent pathologists, who were blinded to the corresponding clinical information. They assessed the staining intensity, distribution, and cellular localization of NUPR1 in a semi-quantitative manner using established scoring criteria. Any discrepancies between the two pathologists were resolved through consensus discussion.

Differentially expressed genes associated with ER stress

Differential gene expression analysis was performed on the TCGA-LUAD dataset using the R package "limma" to identify genes that were differentially expressed between LUAD and healthy samples. The criteria for differential expression were set as |log2FoldChange| > 1 and adjusted p < 0.05. The resulting gene set was then intersected with the ERSRGs, leading to the identification of 106 ER stress-related differentially expressed genes (DEGs).
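The screening step above — a fold-change and adjusted-p filter followed by intersection with the ER stress gene set — can be sketched as follows. This is a minimal illustration of the filtering logic only; the gene names and statistics are invented, and the actual analysis was carried out with the R package "limma" on the TCGA-LUAD matrix.

```python
# Hypothetical differential-expression results: gene -> (log2 fold change, adjusted p-value).
deg_stats = {
    "NUPR1":   (-2.4, 1e-8),
    "BAK1":    ( 1.6, 3e-4),
    "VCP":     ( 0.4, 2e-2),   # fails the |log2FC| > 1 cutoff
    "EIF2AK3": ( 1.2, 9e-2),   # fails the adj.p < 0.05 cutoff
}

# Hypothetical ER stress-related gene set.
ersrg_list = {"NUPR1", "BAK1", "VCP", "MBTPS2"}

# Keep genes meeting both criteria: |log2FC| > 1 and adjusted p < 0.05 ...
degs = {g for g, (lfc, padj) in deg_stats.items() if abs(lfc) > 1 and padj < 0.05}

# ... then intersect with the ER stress gene set to obtain ER stress-related DEGs.
er_stress_degs = degs & ersrg_list
print(sorted(er_stress_degs))  # -> ['BAK1', 'NUPR1']
```

The same Venn-style intersection, applied to the 20,724 LUAD DEGs and the curated ERSRG list, yields the 106 genes reported in the paper.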
To further identify ER stress-related prognostic genes, univariate Cox regression analysis was conducted on the TCGA-LUAD dataset. The genes were subjected to least absolute shrinkage and selection operator (LASSO) regression using the R package "glmnet" to identify genes associated with overall survival (OS). Multivariate stepwise regression was then applied, and the smallest lambda value was considered optimal. The risk score was calculated as: Risk score = Σ (i = 1 to n) Coef(gene_i) × Expression(gene_i). A nomogram was constructed using the R packages "rms" and "survival" to predict the survival of LUAD patients. The nomogram included variables such as age, sex, TN staging, histological grading, and risk score. The accuracy of the nomogram was verified by plotting calibration curves at 1-year, 3-year, and 5-year intervals using the R package "rms".

Based on the median risk score, patients were divided into high-risk and low-risk subgroups. Kaplan-Meier (K-M) curves, time-dependent receiver operating characteristic (ROC) curves, and a risk score plot were generated through multivariate Cox regression analysis to illustrate the distribution and survival status of LUAD patients in the two risk groups.

Consensus clustering analysis

Cluster analysis was performed on the cohort using the 106 ER stress-related DEGs and the Pearson correlation distance measure. To ensure robustness, the clustering process was repeated 10 times on 80% of the samples. The optimal number of clusters was determined by analyzing the empirical cumulative distribution function graph.

Functional enrichment analysis

Gene Ontology (GO) analysis, Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis, and Gene Set Enrichment Analysis (GSEA) were employed to investigate potential mechanisms and pathways associated with the two clusters and the risk score subgroups [20].
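The risk score formula and median split described above can be sketched as below. All coefficients and expression values here are invented for illustration; the published model's fitted Cox coefficients are not reproduced.

```python
from statistics import median

# Hypothetical Cox coefficients for the six signature genes (illustrative only).
coef = {"NUPR1": -0.3, "RHBDD2": -0.2, "VCP": 0.4,
        "BAK1": 0.5, "EIF2AK3": -0.1, "MBTPS2": 0.2}

# Hypothetical per-patient normalized expression values.
patients = {
    "P1": {"NUPR1": 2.0, "RHBDD2": 1.5, "VCP": 0.5, "BAK1": 0.2, "EIF2AK3": 1.0, "MBTPS2": 0.3},
    "P2": {"NUPR1": 0.5, "RHBDD2": 0.4, "VCP": 2.0, "BAK1": 1.8, "EIF2AK3": 0.2, "MBTPS2": 1.1},
    "P3": {"NUPR1": 1.0, "RHBDD2": 1.0, "VCP": 1.0, "BAK1": 1.0, "EIF2AK3": 1.0, "MBTPS2": 1.0},
}

def risk_score(expr):
    # Risk score = sum over signature genes of Coef(gene) * Expression(gene).
    return sum(coef[g] * expr[g] for g in coef)

scores = {pid: risk_score(expr) for pid, expr in patients.items()}

# Split the cohort at the median risk score, as in the paper.
cutoff = median(scores.values())
groups = {pid: ("high" if s > cutoff else "low") for pid, s in scores.items()}
```

In the actual workflow these groups would then feed a Kaplan-Meier comparison and time-dependent ROC analysis; here they simply label each patient as high- or low-risk relative to the cohort median.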
Immune infiltration analysis and single-cell sequencing analysis

The degree of immune cell infiltration and the presence of immune checkpoints were compared across the risk score subgroups. The "ESTIMATE" package was utilized to calculate the stromal score, immune score, and tumor purity for LUAD patients. To ensure the stability of the results, the "TIMER", "EPIC", and "MCP-counter" tools were also employed. Additionally, single-cell RNA sequencing data from the GSE127465 dataset were screened to identify relevant information. The TISCH platform, specifically its built-in tSNE algorithm (http://tisch1.comp-genomics.org/), was employed for dimensionality reduction and visualization of the identified clusters.

Cell culture

Beas-2b cells, a human normal lung epithelial cell line, were cultivated in RPMI 1640 medium supplemented with 10% fetal calf serum (Gibco, Grand Island, NY, USA) and 1% penicillin-streptomycin (NCM Biotech, China). H1299, H1975, and PC9 cells were cultured in RPMI 1640 medium with 10% fetal bovine serum (FBS) at 37 °C. Similarly, A549 cells were cultured in F-12K medium (Gibco, Grand Island, NY, USA) supplemented with 10% FBS. Cell lines were maintained in their respective culture media to ensure optimal growth and experimental conditions.
Cell Counting Kit-8 assay and transwell assay

H1299 cells were seeded in 96-well plates at a density of 5 × 10^3 cells per well. The cells were exposed to trifluoperazine dihydrochloride (MedChemExpress, China) at a concentration of 20 μmol/mL and incubated at 37 °C for 24, 48, and 72 h. Following the respective incubation periods, absorbance at 450 nm was read using a microplate reader (Thermo Fisher Scientific, Waltham, MA, USA) after a 2-hour incubation at 37 °C with 10 μL of Cell Counting Kit-8 (CCK-8; Bimake, Houston, TX, USA) reagent in each well. For the transwell assay, H1299 cells were standardized to a concentration of 1 × 10^5/mL in serum-free RPMI-1640 medium. The upper chamber received 200 μL of the serum-free cell suspension, while the lower chamber was supplemented with 600 μL of RPMI-1640 medium containing 10% fetal bovine serum. The cells were cultured at 37 °C with 5% CO2 for 24 h. After chamber removal, cells remaining in the upper chamber were gently swabbed off with a cotton applicator; the membrane was fixed with 4% paraformaldehyde for 15 min, stained with crystal violet at room temperature for 25 min, and excess staining solution was rinsed off with PBS. Cells that had traversed the membrane were observed under a microscope. For each sample, three randomly selected fields of view were photographed and counted, and the average value was computed.

Drug sensitivity analysis

Half-maximal inhibitory concentration (IC50) metrics for chemotherapeutic agents were acquired from the Genomics of Drug Sensitivity in Cancer (GDSC) database (https://www.cancerrxgene.org/). The "pRRophetic" R package was then used to estimate drug sensitivity in the different subgroups in R software. The outcomes are visually represented as box plots.
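The subgroup comparison underlying the box plots can be illustrated with a toy calculation. All values below are invented; in the paper, per-patient IC50 estimates came from the pRRophetic package trained on GDSC data.

```python
from statistics import median

# Hypothetical predicted ln(IC50) values per patient for one drug (illustrative only).
ic50 = {"P1": 2.1, "P2": 3.8, "P3": 2.5, "P4": 4.0, "P5": 1.9, "P6": 3.5}

# Hypothetical risk-group assignments from the median risk-score split.
group = {"P1": "low", "P2": "high", "P3": "low", "P4": "high", "P5": "low", "P6": "high"}

# Collect predicted IC50 values by risk group.
by_group = {"low": [], "high": []}
for pid, value in ic50.items():
    by_group[group[pid]].append(value)

# A higher median IC50 in the high-risk group indicates greater resistance to the drug.
med_low = median(by_group["low"])
med_high = median(by_group["high"])
print(med_low, med_high)
```

In practice each drug's two distributions would be compared with a rank-based significance test before concluding, as the paper does, that most agents show higher resistance in the high-risk group.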
Statistical analysis

Statistical analyses and data visualization were conducted using R software version 4.1.0 and GraphPad Prism version 9.5.1. Both univariate and multivariate Cox regression analyses were employed to examine the impact of various factors on the prognosis of LUAD. Statistical significance was defined as p < 0.05.

The screening and characterization of ERSRGs in LUAD

The study flow chart is shown in Fig. 1. Differential expression analysis between the LUAD and control groups (TCGA cohort) was conducted using the limma package in R, leading to the identification of 20,724 DEGs. Among these DEGs, 14,019 were up-regulated, while 6,705 were down-regulated (Fig. 2A). To investigate the potential involvement of ER stress in LUAD, a Venn analysis was performed to identify the overlap between the LUAD-related DEGs and ER stress genes (Fig. 2B). The interaction network among the 106 genes is visualized in Fig. 2C, and the intersecting genes were functionally annotated using GO and KEGG analyses. As depicted in Fig. 2D, these genes are enriched in biological processes such as response to ER stress, response to topologically incorrect protein, and the ER-associated degradation (ERAD) pathway; cell components including the ER protein-containing complex, integral component of the ER membrane, and the ER ubiquitin ligase complex; and molecular functions such as ubiquitin-like protein ligase binding, ubiquitin ligase binding, and ubiquitin protease binding. The critical functions of the ERSRGs include ubiquitin-mediated proteolysis, protein processing in the ER, the B cell receptor signaling pathway, and so on (Fig. 2E).

Consensus clustering analysis of ERSRGs in LUAD

Based on the identified set of 106 genes, the cohort of LUAD patients from the training queue (TCGA cohort) was subjected to cluster analysis to classify them into two subgroups. Optimal stability was observed when K = 2, resulting in the classification of 232 patients into cluster 1 and 221 patients into cluster 2 (Fig. 3A-C). Principal component analysis demonstrated a distinct separation of samples into two clusters (Fig. 3D). Notably, the OS rate of patients in cluster 2 was significantly higher than that of cluster 1 (P = 0.03; Fig. 3E). These findings supported the subdivision of LUAD patients into two distinct molecular subtypes associated with differing survival outcomes. Furthermore, a volcano plot depicting the logFC and FDR values of 620 upregulated genes and 321 downregulated genes across the two clusters was generated (Fig. 3F). Subsequent GO enrichment analysis revealed that these genes were significantly enriched in specific molecular processes, such as mitotic sister chromatid segregation, humoral immune response, regulation of humoral immune response, and the condensed chromosome, centromeric region (Fig. 3G; Table 2). Additionally, GSEA revealed that the enriched pathways were predominantly associated with immune infiltration (Fig. 3H). Consequently, it is reasonable to hypothesize that the risk score may influence the prognosis of LUAD by modulating the immune microenvironment.

Prognostic signature construction and validation based on ERSRGs in LUAD

The initial step in developing a prognostic model involved the identification of candidate prognostic ERSRGs through univariate Cox regression analysis. As depicted in Fig. 4A, the OS of LUAD patients exhibited a significant correlation with 18 ERSRGs. Subsequently, LASSO analysis was employed to detect and screen 15 DEGs associated with ER stress (Fig. 4B). Furthermore, multivariate stepwise regression analysis was performed, resulting in the selection of 6 genes for constructing the prognostic model (Fig. 4C). The risk scoring model was established using the fitted coefficients of these six genes. The risk distribution and risk survival status plots revealed that the high-risk subgroup exhibited a worse prognosis, whereas the low-risk subgroup demonstrated prolonged survival (Fig. 4D and F). The prognostic models were assessed by calculating the area under the curve (AUC) for 1-year, 3-year, and 5-year survival, yielding values of 0.68, 0.69, and 0.70, respectively (Fig. 4E).

Assessment and external validation of the ERSRGs signature

The risk distribution curve, survival status, and expression heatmap of the external validation sets (GSE37745 and GSE31210) demonstrated that patients with low risk scores exhibited significantly longer survival times compared to those with high risk scores, thus validating the findings from the training set (Fig. 5A and B). To further consolidate the prognostic model, the clinical information and genetic characteristics from TCGA were integrated, and a comprehensive multivariate Cox regression model was developed, resulting in the construction of a nomogram (Fig. 5C). Calibration plots were employed to assess the predictive accuracy of the nomogram, revealing excellent agreement between the predicted and observed OS rates at 1, 3, and 5 years (Fig. 5D). Moreover, the nomogram model was subjected to decision curve analysis (DCA) to evaluate its clinical utility and potential benefits (Fig. 5E-G). Collectively, the risk score, when combined with the ERSRGs signature, pathological stage, and N stage, emerged as an independent and robust prognostic indicator, providing enhanced prognostic value for patients with LUAD.

Exploring immune infiltration patterns and single-cell analysis of the ERSRGs signature in LUAD

To unravel the potential functions and pathways associated with the prognostic features, we conducted comprehensive GSEA, GO, and KEGG pathway enrichment analyses. The results revealed that the genes linked to the prognostic features were predominantly enriched in pathways related to immune infiltration. Hence, we proceeded to explore the heterogeneity of the immune microenvironment across the ERSRGs-signature subgroups (Fig. 6A-C). Initially, we assessed the correlation between gene expression and immune infiltration in LUAD and observed significant variations in the expression of different genes across immune cells (Fig. 6D). Subsequently, we employed the TIMER and EPIC algorithms to investigate immune infiltration patterns between the low- and high-risk subgroups. The low-risk subgroup exhibited significantly elevated levels of B cells, CD4 T cells, CD8 T cells, and macrophages compared to the high-risk group (Fig. 6E and F). To validate the stability and robustness of these findings, we utilized additional algorithms, namely MCP-counter and ESTIMATE, which yielded consistent results (Fig. 6G and H). Furthermore, we observed substantial differences in the expression of immune checkpoints between the two subgroups (Fig. 6I).

Subsequently, we conducted single-cell sequencing analysis of the ERSRGs signature. Cluster analysis was performed, and Fig. 7A depicts the clusters using t-distributed stochastic neighbor embedding (tSNE), where each color represents a distinct cell type identified within the clusters. Each cell is represented by a point in the scatter plot, and the numbers in the figure correspond to the cluster numbers; 25 distinct cell populations were evident. Figure 7B presents the annotation of clusters based on marker analysis, revealing significant differences in gene expression among different immune cells. After tSNE dimensionality reduction, the mRNA distribution of BAK1, EIF2AK3, MBTPS2, NUPR1, RHBDD2, and VCP is shown in Fig. 7C-H. Finally, we analyzed the differential expression of ERSRGs in the various immune cell clusters. Among them, BAK1 exhibited the lowest expression in immune cells, while VCP demonstrated the highest expression (Fig. 7I). Overall, the risk score demonstrated an inverse correlation with the level of immune infiltration, providing novel insights into the relationship between ERSRGs and the immune status of LUAD.
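The reported inverse relation between risk score and immune infiltration can be illustrated with a toy Pearson correlation. All numbers below are invented; the paper's infiltration estimates came from algorithm-derived scores (TIMER, EPIC, MCP-counter, ESTIMATE).

```python
from math import sqrt

# Hypothetical paired values: ERSRGs risk score vs. an immune-infiltration estimate.
risk = [0.2, 0.5, 0.9, 1.4, 1.8]
immune = [2.1, 1.8, 1.2, 0.9, 0.4]

def pearson(xs, ys):
    # Pearson correlation: covariance divided by the product of standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(risk, immune)
# A negative r mirrors the inverse relation between risk score and infiltration level.
```

With real data one would also report a p-value for r; this sketch only shows the direction of the association.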
Validation of the expression levels of ERSRGs in LUAD

To further investigate the association between the prognostic ERSRGs signature and LUAD, in vitro experiments were conducted using qPCR analysis on peritumoral and tumor tissues. The findings revealed a significant upregulation of BAK1 and EIF2AK3 expression in LUAD tissues, whereas NUPR1, RHBDD2, and VCP exhibited the opposite trend (Fig. 8A-G). Moreover, Human Protein Atlas (HPA) database analysis showed higher expression levels of BAK1 and EIF2AK3 in LUAD tissues compared to normal tissues (Fig. 8G). However, NUPR1 data were unavailable in the HPA database. Therefore, to explore the protein expression of NUPR1 in LUAD patients, IHC analysis was performed at Nantong Cancer Hospital. Interestingly, the protein expression of NUPR1, as determined by IHC, exhibited an opposite pattern compared to the mRNA expression patterns (Fig. 9A).

Experimental validation of NUPR1

In this study, we implemented a comprehensive validation of NUPR1 under authentic laboratory conditions. Initially, IHC analysis was conducted on pathological specimens obtained from 6 LUAD patients. The results revealed a conspicuous accumulation of NUPR1 within cancerous tissue compared to adjacent non-cancerous tissues (Fig. 9A and B). Subsequently, both RNA and protein expression levels of NUPR1 were scrutinized in normal lung epithelial cells and four distinct LUAD cell lines. Surprisingly, NUPR1 RNA exhibited its highest expression in the normal cell line (Fig. 9C), aligning with our bioinformatics analysis outcomes. In contrast, NUPR1 protein displayed heightened expression levels in LUAD cells (Fig. 9D and E, Fig. S2 and S3). We postulated that post-translational modifications may underlie this incongruity. To gain deeper insight into the functional role of NUPR1 in LUAD progression, we procured NUPR1 inhibitors and performed cell proliferation and transwell experiments. The results indicated that upon NUPR1 inhibition, both cell proliferation and invasive capacity were markedly attenuated (Fig. 9F and G). This underscores the contributory role of the NUPR1 protein in the advancement of LUAD.

Correlation between risk score and IC50 values for therapeutic agents

The impact of risk scores on the IC50 values of a set of 30 distinct drug molecules was systematically assessed to discern their therapeutic efficacy. Except for BI-2536 and WIKI4, all other drugs exhibited higher resistance in the high-risk group (Fig. 10 and Fig. S1). This observation underscores the potential utility of our prognostic model in guiding the use of therapeutic agents.

Discussion

LUAD represents the most prevalent subtype of lung cancer, a grave malignancy arising from the accumulation of various genetic mutations. These mutations lead to uncontrolled proliferation of lung cells and the subsequent formation of tumors. Upon recognition by the immune system, these transformed cancer cells elicit an immune response aimed at their elimination [21]. Nonetheless, immune escape not only expedites tumor progression but also impairs the efficacy of cancer immunotherapy [22,23]. The ER pathway serves as a critical regulator of ER homeostasis. Disruption of ER function triggers a phenomenon referred to as "ER stress" [24]. In the context of tumorigenesis, the rapid proliferation rate of cancer cells necessitates heightened activity of ER protein folding, assembly, and transport, thereby inducing physiological stress within the ER [25]. The ER stress response is believed to confer cellular protection and is implicated in tumor growth and adaptation to challenging environments [26]. Sustained ER stress represents a novel characteristic of cancer, resulting from various metabolic and carcinogenic abnormalities that disrupt protein-folding homeostasis in aggressive immune cells. Constitutive activation of the ER stress response enables malignant cells to adapt to carcinogenesis and environmental stressors by coordinating multiple immune regulatory mechanisms while promoting malignant progression [27]. Nonetheless, the precise relationship between ER stress and the immune microenvironment remains inadequately investigated.

In our study, we initially screened 106 genes associated with ER stress to identify differential expression patterns between cancer and para-cancer samples. K-medoids clustering was employed for this purpose. The differential genes in the two resulting clusters were primarily enriched in processes related to the adaptive immune system, humoral immune response, and regulation of humoral immune response. Notably, patients belonging to cluster 1 exhibited a significantly longer survival time compared to those in cluster 2. This discrepancy in prognosis suggests a potential correlation with immune response. Through a series of statistical analyses, including univariate regression, LASSO, and logistic stepwise regression, we identified 6 key ERSRGs. Subsequently, we constructed a novel prognostic risk signature based on the expression of these six genes (the ERSRGs signature). This signature allowed us to classify patients with LUAD into distinct risk subgroups based on their median risk score. Importantly, a higher risk score was associated with worse prognosis.
The prognostic features of interest encompass 6 ERSRGs, specifically EIF2AK3, MBTPS2, RHBDD2, VCP, NUPR1, and BAK1. Among these, EIF2AK3, NUPR1, and RHBDD2 demonstrated protective characteristics, while MBTPS2, VCP, and BAK1 were strongly associated with poor prognosis. To assess their expression levels, qPCR analyses were conducted on cancer and para-cancer samples from 8 patients diagnosed with LUAD. The results revealed significant differential expression of EIF2AK3, RHBDD2, VCP, NUPR1, and BAK1, with NUPR1 and RHBDD2 exhibiting the most pronounced differences. EIF2AK3 has been identified as an immune-related prognostic gene in breast cancer, exerting a role in tumor cell apoptosis and facilitating sustained protective antitumor immunity [28]. MBTPS2, a membrane-embedded zinc metalloprotease, activates signaling proteins involved in transcriptional control of sterols and the ER stress response [29], thus promoting the progression of prostate cancer and colorectal cancer [30]. The RHBDD2 (Rhomboid domain containing 2) gene is overexpressed in advanced stages of colorectal cancer (CRC) and potentially modulates the UPR pathway, thereby favoring cell migration, adhesion, and proliferation [31]. VCP (valosin-containing protein) is crucial for maintaining mitochondrial function, and in prostate cancer cells, it employs self-aggregation to inhibit mitochondrial activity, thereby evading cell death during nutrient deprivation and promoting malignancy [32]. In a cohort study, Tao et al. demonstrated that NUPR1 serves as a protective factor in the survival prognosis of LUAD [33], while Li et al.
suggested NUPR1 to be a potential risk gene [34]. NUPR1, a nuclear protein, plays a critical role in redox reactions [35], and macrophages have been implicated as the most relevant immune cells associated with NUPR1 expression in bladder cancer [36]. Furthermore, the mechanism through which BAK1 promotes cisplatin resistance in NSCLC is believed to involve the inhibition of cell apoptosis [37]. In summary, all 6 identified genes contribute to tumor development and progression by modulating pathways associated with ER stress.

Nuclear Protein 1 (NUPR1) is a small, highly basic transcriptional regulator involved in the regulation of diverse cellular processes, such as DNA repair, ER stress, and the oxidative stress response. The cellular localization of NUPR1 appears to be associated with pathological conditions. Prominent cytoplasmic staining has been observed in large papillary tumors, tumors exhibiting lymph node metastasis, and NSCLC [38]. Our IHC analysis corroborated these findings. However, intriguingly, our real-world cohort study revealed that, in contrast to mRNA expression, NUPR1 accumulates in cancerous tissues, contributing to the malignant progression of cancer, which necessitates further investigation. Garcia Montero et al. reported that under various stress conditions, NUPR1 mRNA expression was rapidly, strongly, and transiently stimulated [39]. Cancer cells endure and adapt to various types of stressful environments over prolonged periods [40], leading us to speculate that NUPR1 mRNA may be consumed more in cancerous tissues compared to adjacent tissues. Additionally, interestingly, the protein expression of NUPR1 has been shown to positively correlate with cell density [41]. Considering that cancer arises from unregulated and excessive cell division and proliferation, resulting in higher cell density [42], we hypothesize that NUPR1 expression is relatively elevated in cancer cells characterized by higher cell density compared to adjacent cells with relatively fewer cells.

Fig. 10 (A-P) Therapeutic drugs showed significant IC50 differences in high- and low-risk groups

To verify the broad applicability of the risk assessment element group, we conducted validation using the external datasets GSE31210 and GSE37745. The signature exhibited robust predictive performance not only in the internal dataset but also in the validation sets. Evidence from ROC curves and K-M analysis demonstrated the remarkable predictive effect of the ERSRGs on the prognosis of LUAD patients. Importantly, even after stratifying clinical features, this signature remained significantly prognostic in LUAD patients. Therefore, we propose that ER stress-related features possess excellent predictive performance for OS and could serve as independent prognostic indicators for LUAD. To facilitate clinical application, we constructed a nomogram model and verified its accuracy using calibration diagrams.
Previous research has highlighted the role of ER stress in promoting immune escape and facilitating metastasis [43,44]. Subsequent GSEA, GO, and KEGG analyses of the two subgroups revealed enrichment in immune-related pathways. Notably, tumor purity has been identified as negatively correlated with immune response, suggesting its potential as an indicator of the immune response level in the tumor microenvironment [45]. To explore this further, we employed four different immune scoring algorithms, and all results consistently indicated that individuals classified as low-risk exhibited higher expression levels of B cells, CD4+ T cells, CD8+ T cells, neutrophils, macrophages, and endothelial cells. The density of CD8+ T cells and mature dendritic cells has been closely associated with the survival rate of lung cancers, with higher CD8+ T cell density correlating with better 5-year survival rates [46], consistent with our findings. Additionally, we observed decreased expression of immune checkpoint genes in the high-risk group, which may be attributed to immune cell dysregulation. Therefore, our new prognostic model holds potential not only to assess the survival prognosis of LUAD but also to shed light on the immune microenvironment.

Several limitations should be acknowledged in this study. Firstly, the model primarily relies on data from the TCGA database and the Nantong cohort, so its generalizability to other datasets may be limited. Therefore, a prospective multicenter cohort study is necessary to validate the findings and ensure their applicability to diverse populations. Secondly, in order to comprehensively elucidate the underlying reasons for the discordance between NUPR1 mRNA and protein expression levels, further evidence from additional experiments and investigations is required.
Overall, this study presents a prognostic model based on six genes associated with ER stress. The model exhibits utility in predicting the survival outcomes of patients with LUAD and offers insights into tumor immune infiltration to some extent. Furthermore, the identification of key genes provides novel insights into the molecular mechanisms underlying LUAD.

Fig. 1 Flow Chart of this Research
Fig. 2 The screening and characterization of ERSRGs in LUAD. (A) Volcano plot showing DEGs between LUAD and control samples. (B) Venn diagram showing the intersection of DEGs and ER stress-related genes. (C) The PPI network shows the interactions of the ERSRGs in LUAD. (D) GO functional enrichment analysis of the intersecting genes with the top three of BP, CC and MF terms. (E) The KEGG enrichment results are displayed, and the node size represents the number of genes enriched
Fig. 3 Clustering analysis of endoplasmic reticulum stress-related genes in patients with LUAD. (A, B) When k = 2, the consistent clustering Delta area curve shows the best model construction. (C) The cluster diagram of the consistency cluster analysis of ERSRGs in 453 samples in TCGA LUAD. (D) PCA analysis of two clusters. (E) KM curve of survival between cluster 1 and cluster 2. (F) Volcano map of differential gene expression between two clusters. (G) The GO enrichments in two clusters. (H) GSEA analysis between cluster 1 and cluster 2
Fig. 4 Identification of ERSRGs-signature. Univariate analysis (A), LASSO analysis (B) and stepwise Cox algorithm (C) were used to identify a prognostic ER stress-related signature. (D) Kaplan-Meier survival curves between high and low subgroups. (E) For this ERSRGs-signature, the area under the ROC curve is 0.69 (1 year), 0.68 (3 years), 0.70 (5 years). (F) Riskscore plot showed the relationship among status, survival time and ERSRGs expression
Fig. 5 Assessment and external validation for ERSRGs-signature. (A) Riskscore plot of 6 ERSRGs-signature in external testing set, with riskscore and survival status in GSE37745 and GSE31210. (B) The Kaplan-Meier survival curves of high-risk and low-risk subgroups in external testing set. (C) Nomogram equipped with the riskscore and clinical parameters (age, gender, T, N and pathological stage) in TCGA. (D) The calibration curves displayed the accuracy of the nomogram. (E-G) Decision curve analysis of nomogram (1-, 3-, 5-years)
Fig. 6 Immune infiltration analysis of ERSRGs-signature in LUAD. (A) The GSEA enrichment analysis between high riskscore subgroup and low riskscore subgroup. Analysis of GO (B) and KEGG (C) in differentially expressed genes. (D) The correlation between ERSRGs-expression and immune infiltrates. The TIMER (E), EPIC (F), MCP-Counter (G) and ESTIMATE (H) algorithms between high and low risk subgroups. (I) The expression of immune checkpoints was compared between the low vs. high riskscore subgroups. *P < 0.05, **P < 0.01
Table 1 The primer sequence of 6 genes
Table 2 GO enrichment analysis of ERSRGs
MATHEMATICAL MODELING OF THIN LAYER DRYING OF SALTED YELLOWTAIL FISH UNDER OPEN SUN AND IN GREENHOUSE DRYER

The thin layer dryer model was used to describe the characteristics of changes in water content of salted yellowtail dried under the open sun and by using a greenhouse dryer. Thirteen different thin layer dryer models were used to predict fish water content values, and the predictions were validated against the results of the conducted experiments. The modeling results showed that the Modified Henderson and Pabis model was the most suitable for open sun drying, while for drying in the greenhouse dryer the Diffusion Approach and Verma et al. models were the most suitable. The modeling performance indicators showed a correlation coefficient (R) approaching 1, while the mean square of deviation between experimental and predicted values (χ²) and the root mean square error (RMSE) had very small values.

Muhfizar (2018) conducted an experimental study of yellowtail fish drying under an active greenhouse dryer. The thin layer equation describes the overall drying phenomenon, regardless of the control mechanism. This equation has been used to estimate the drying time of some products and to generalize the drying curve (Akpinar & Bicer, 2008). Several studies have used the thin layer model for the fish drying process. Kituu et al. (2010) used mathematical thin layer models for the Tilapia fish drying process in a solar tunnel dryer. Guan, Wang, Li, and Jiang (2013) used thin layer modeling for fresh tilapia fillets dried using hot air convection. Sobukola and Olatunde (2011) conducted thin layer modeling for the drying process of African catfish with different brine concentrations and temperatures. Jain and Pathare (2007) used thin layer modeling for the drying process of shrimp and chelwa fish (Indian minor carp) in open sun drying.
Bai, Li, Sun, and Shi (2011) used mathematical modeling for drying fish slices with the electrohydrodynamic (EHD) drying method. Darvishi et al. (2013) conducted thin layer modeling for sardine fish dried in microwave heaters. Yellowtail fish is a type of consumption fish that has essential economic value; it is a reef fish that lives in warm waters around the Indo-Pacific. In this study, yellowtail drying was carried out under open sun drying and in a greenhouse solar dryer. Mathematical thin layer dryer modeling for yellowtail fish drying has not been done before. Therefore, this study aims to model the drying of salted yellowtail fish as a thin layer under open sun drying and in a solar greenhouse dryer.

METHODS OF RESEARCH

Material and Experimental Procedure. The greenhouse dryer shown in Figure 1 has a parabolic roof with paving blocks as a base. The surface area is 6.5 m², with a length and width of 3.25 m × 2 m. The long sides of the greenhouse face north and south to suit the movement of the sun. Polyethylene plastic was used as the material of the greenhouse cover. The greenhouse framework is made of galvanized pipes. Six 12 VDC exhaust fans, with a 100 Wp solar cell as the electric energy supplier, are used to circulate air inside the greenhouse. The tests were carried out with two methods, i.e., open sun drying and drying under the greenhouse dryer. The drying was done at Politeknik Kelautan dan Perikanan Sorong for three days in December 2017 to reduce the water content of the fish products. Yellowtail fish was used in this study and was dried for 8 hours every day. The fish were given salt at 0.2 g NaCl per g of fish mass using the dry salting method. The mass of fish used during the experiment was about 0.3 kg, both for open sun drying and under the greenhouse dryer. After three days of drying, the fish were dried further in an oven for 24 hours at a temperature of 105 °C (Mujaffar & Sankat, 2005).
This method was used to find out the initial water content of the fish. In this study, DHT22 sensors were used to measure air humidity and temperature both in the environment and inside the greenhouse, using a calibrated microcontroller data logger. Air humidity and temperature were recorded once every hour during the drying process. A digital scale was used to measure the changes in fish mass during the drying process; the fish mass data were recorded manually.

RESULTS AND DISCUSSION

In the drying process over three days, as shown in Figure 2, the air temperature in the greenhouse dryer ranged from 32-54 °C. The air temperature in the greenhouse tended to be higher than the ambient air temperature, which ranged only from 29.8-39 °C. Unlike the temperature, the relative humidity of the air in the greenhouse tended to be lower than the ambient relative humidity. Air humidity in the drying greenhouse ranged from 24.90-58.71%, while ambient air humidity ranged from 43.26-82.15%. Figure 3 shows the change in water content of the fish (dry basis) for the two drying methods, open sun drying and the greenhouse dryer. The drying process in the greenhouse dryer reduced the fish water content faster than the open sun drying method. This shows that the drying rate of fish products in the greenhouse is higher than in open sun drying, in line with the higher air temperature and lower relative humidity inside it. Moisture ratio data from the experimental results and from calculations using the thin layer dryer models were processed using a statistics program on the computer. Each thin layer model was evaluated using the correlation coefficient (R), the mean square of deviation between experimental and predicted values (χ²), and the root mean square error (RMSE) in (Eq. 2-4). The results of the statistical analysis are shown in Tables 2 and 3.
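The evaluation of each thin layer model against the measured moisture ratios can be sketched as follows. This is a minimal Python illustration: the model constants and the hourly data below are hypothetical placeholders, not the fitted constants or measurements from these experiments.

```python
import math

def modified_henderson_pabis(t, a, k, b, g, c, h):
    # Modified Henderson and Pabis thin layer model:
    # MR(t) = a*exp(-k*t) + b*exp(-g*t) + c*exp(-h*t)
    return a * math.exp(-k * t) + b * math.exp(-g * t) + c * math.exp(-h * t)

def goodness_of_fit(mr_exp, mr_pre, n_constants):
    # R: correlation coefficient between experimental and predicted MR
    # chi2: mean square of deviation, SSE / (N - n_constants)
    # rmse: root mean square error, sqrt(SSE / N)
    n = len(mr_exp)
    sse = sum((e - p) ** 2 for e, p in zip(mr_exp, mr_pre))
    rmse = math.sqrt(sse / n)
    chi2 = sse / (n - n_constants)
    me = sum(mr_exp) / n
    mp = sum(mr_pre) / n
    cov = sum((e - me) * (p - mp) for e, p in zip(mr_exp, mr_pre))
    var_e = sum((e - me) ** 2 for e in mr_exp)
    var_p = sum((p - mp) ** 2 for p in mr_pre)
    r = cov / math.sqrt(var_e * var_p)
    return r, chi2, rmse

# Hypothetical fitted constants and an 8-hour drying day sampled hourly
const = dict(a=0.5, k=0.20, b=0.3, g=0.15, c=0.2, h=0.10)
times = list(range(9))  # hours 0..8
mr_pre = [modified_henderson_pabis(t, **const) for t in times]
# Synthetic "experimental" values: predictions with a small alternating offset
mr_exp = [p + (0.01 if i % 2 == 0 else -0.01) for i, p in enumerate(mr_pre)]

r, chi2, rmse = goodness_of_fit(mr_exp, mr_pre, n_constants=6)
```

A model is judged suitable when R approaches 1 and χ² and RMSE are close to zero; in practice the model constants would first be obtained by nonlinear least-squares fitting of the experimental moisture ratio data.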
The model best describing open sun drying of yellowtail fish with the dry salting method of 0.2 g NaCl/g fish mass is Modified Henderson and Pabis, as shown in Table 2, while the models best describing drying in the greenhouse dryer are the Diffusion Approach and Verma et al., as shown in Table 3. The values of R = 0.9975, χ² = 0.0002, and RMSE = 0.0121 are the same for both of these models even though the constants in the two equations are different. The modeling results for drying yellowtail fish in the open sun and in the greenhouse dryer were validated using the results of each experiment. Figures 4 and 5 show the comparison between the predictive values from the modeling and the experimental results for both drying methods. The experimental data, indicated by asterisks (Figure 4) or circles (Figure 5), generally lie around a straight line representing the predicted data from the modeling.

CONCLUSION

Thin layer dryer modeling in this study was used to model the drying process of salted yellowtail fish (0.2 g NaCl/g of fish mass) in the open sun and in a greenhouse dryer. Thirteen models were used to describe the changes in moisture content characteristics for both drying methods. Modified Henderson and Pabis is a suitable model to describe the open sun drying process, with R = 0.9934, χ² = 0.00053, and RMSE = 0.01896. The Diffusion Approach and Verma et al. models are suitable for describing the drying process in the greenhouse dryer, with R = 0.9975, χ² = 0.0002, and RMSE = 0.0121.
Knowledge Mapping to Improve Organization Capability in Internal Audit of Indonesia Stock Exchange

In recent years, the Indonesian capital market has seen a significant increase in the number of investors, daily transaction turnover, transaction frequency, and number of listed companies. This tremendous growth directly compels the Indonesia Stock Exchange (IDX) to reconsider its knowledge management process in order to generate optimum results from carrying out its responsibilities. As the sole capital market trading infrastructure provider in Indonesia, some of the stock exchange functions can only be operated using knowledge that IDX employees acquire through years of working experience at IDX. The objectives of this study were to identify the essential knowledge in IDX's internal audit, to define which of that knowledge needs to be improved, and to propose an implementation plan to improve it. The research method used is a qualitative methodology based on document analysis and key-person interviews conducted in April 2023. This study uses a variety of knowledge management approaches to address knowledge mapping, knowledge gaps, and knowledge loss. Based on the study's results, the author identifies the knowledge gap and offers several recommendations: a Knowledge Development Program to close the knowledge gap and a Knowledge Retention Program to build on the knowledge that Internal Audit already possesses.

INTRODUCTION

Despite the COVID-19 pandemic, which has had a detrimental influence on the world economy, the Indonesian capital market saw a significant increase in local retail investors in 2020. This favorable environment persisted through 2021-2023. The number of capital market investors has grown tremendously, from 281 thousand investors in 2012 to 1.1 million investors (299%) in 2018 and 10.3 million investors (818%) in 2022.
Listed companies, which contribute the most to IDX daily turnover from equity transactions, have also grown significantly, from 459 companies in 2012 to 566 companies (23%) in 2018 and 825 companies (46%) in 2022. Considering the Indonesia Stock Exchange's (IDX) high growth and IDX's key responsibility of creating a trusted and credible financial market infrastructure that delivers a fair, orderly, and efficient market, the need to develop a more robust knowledge management system to support stock exchange operations is becoming more critical. This condition is also corroborated by IDX's current position as the sole capital market trading infrastructure provider in Indonesia, which means that some of the knowledge needed to operate the stock exchange functions can only be obtained through years of working experience as an IDX employee. To overcome this challenge, IDX started to establish IDX competency dictionaries in 2017, which comprise behavioural and technical competencies that apply to all divisions in IDX. As the internal audit role grows from a watchdog into a trusted advisor and strategic partner to management, internal audit also needs to identify its knowledge needs and establish methods to obtain and retain that knowledge to keep up with the organization's development.

BUSINESS ISSUE

Internal Audit, as management's strategic business partner in carrying out its advisory and assurance roles in support of the company's strategic initiatives, is expected to add value in all of its operations. Internal Audit also serves as a catalyst for innovation in achieving the company's objectives. Internal auditing must be dynamic and constantly adapt to changes in organizations and businesses. Therefore, in order to deliver high-caliber performance to stakeholders, the internal audit organization must likewise stay current on these developments.
Internal audit should expand its role in consulting and assurance services as a result of rising stakeholder expectations, and this can be done by having the necessary expertise. The development of knowledge workers is necessary for the organization's knowledge to grow. As required by the IPPF, internal audit must have a Quality Assessment and Improvement Program (QAIP) established by the Chief Audit Executive (CAE). The internal QAIP is performed by internal audit personnel every year, while the external QAIP is performed every four years. According to the most recent internal QAIP, some audit employees still have dependence issues in performing engagements. There is also an issue of limited personnel whose skills and knowledge match the engagement requirements. These conditions result in longer engagement durations. Additionally, there have been numerous Quality Assurance (QA) findings about auditors' inadequate documentation of audit results. The Audit Management System (AMS), which is used in the internal audit process from the planning stage through the reporting and monitoring stages, is currently a combination of several Microsoft 365 applications. However, observation shows that the AMS has not been applied consistently. Additionally, Internal Audit currently lacks an adequate knowledge repository. Although a shared folder exists for this purpose, the knowledge store has not been utilized to its full potential. The organization may lose knowledge if these two items are not managed. The CAE has also been the subject of a preliminary interview by the QAIP team. The CAE stated that there are currently fewer auditors who are familiar with information technology than those who are not. Therefore, raising awareness is necessary so that auditors are motivated to keep learning, particularly with regard to information technology.
The aforementioned circumstances show that Internal Audit's existing performance has not yet fully supported the accomplishment of the organization's initiatives. The auditors' expertise is currently insufficient to support the organization's efforts, and knowledge management inside the organization has not been successful. This may ultimately result in a knowledge gap problem. Since knowledge management is regarded as a process through which the company may close the gap (Zack, 1999), it will be used as a reference in the analysis. This will allow internal audit to identify and assess the knowledge that is most important to auditors. In the end, the knowledge gap identified as one of the business issues would be resolved through the knowledge management concept.

RESEARCH METHODOLOGY

The technique used in this thesis is based on action research, a type of self-reflective investigation done by participants in social situations to enhance the fairness and rationality of their own practices, their comprehension of these practices, and the contexts in which they are carried out (Carr & Kemmis, 1986). The author structured the data collection process systematically by breaking the acquisition process into three parts: Data Needed (Input), Acquire Method (Process), and Data Result (Output). To answer the research questions, several kinds of data are needed, acquired through document review and interviews. In-depth interviews are conducted to explore the knowledge held by the auditors and to review the adequacy of this knowledge for carrying out the Internal Audit function. As triangulation is used to gain multiple perspectives, the interviews also involve the CAE in order to reduce bias regarding the knowledge possessed by the Internal Audit division.
Some questions may be elaborated during the interview session, and they will differ from one respondent to another, but the main structure of the questions is the same. The later questions in the interview guide include:

- (Purpose) To determine the knowledge form (tacit/explicit) and the source used in acquiring the knowledge.
- Question 5: Which competencies in the knowledge requirements do you think should be improved? (Purpose: to understand the interviewee's view about competencies in the knowledge requirements that could be improved.)
- Question 6: What technical competencies or knowledge are currently not in the knowledge requirements but should, in your view, be included? (Purpose: to understand the interviewee's view about potential knowledge that could be added to the knowledge requirements.)

Sample selection. The sample selection uses a purposive sampling method, with the criteria of auditors at the level of Auditor, Senior Auditor and Unit Head who have more than 4 years of experience in Internal Audit. The author uses data triangulation to check the accuracy of the data or information acquired from a range of different perspectives while minimizing bias that may occur during data collection and analysis. A document review is one of the methods used in this study to collect data. The documents used are internal audit-related documents that describe roles, responsibilities, and the skills and knowledge required to carry out those duties and responsibilities, specifically: the Internal Audit Competency Framework, Internal Audit Charter, Internal Audit Job Description, Competency Fit Index, Internal Audit Plan, and Internal Audit Head Direction.

PROPOSED BUSINESS SOLUTION

According to the knowledge mapping assessment that was done to map the knowledge assets in Internal Audit, there is a gap between the knowledge requirements and the current knowledge possessed by Internal Audit.
Additionally, because everyone who took part in the survey had the same expectations for these areas, it was concluded that four essential knowledge areas need to be addressed, relating to financial accounting, information system audit, basic information technology, and data analytics. Based on the information about the knowledge gaps and the knowledge that needs to be enhanced, the author divided them into three levels of knowledge understanding, used to determine an auditor's level of comprehension of this knowledge gap: General Level (the auditor has a general understanding of the knowledge and has only received sharing related to it), Intermediate Level (the auditor has received sharing and training related to this knowledge and uses it as part of their audit assignments), and Advanced Level (the auditor has received sharing and training related to this knowledge and has thoroughly applied it in performing audit assignments, including sharing the relevant knowledge with others). The author classifies the knowledge gap based on the key finding of the interview results that there are four critical knowledge areas that must be improved, related to financial accounting, information system audit, basic information technology, and data analytics. Auditors are at different levels, ranging from General to Intermediate. The level of the knowledge gap is summarized in the accompanying table. The proposed recommendations combine the interview results with theories related to Knowledge Management from the Asian Productivity Organization (APO), David W. DeLong and Jay Liebowitz.
Impact of child development at primary school entry on adolescent health—protocol for a participatory systematic review

Background Reducing child health inequalities is a global health priority and evidence suggests that optimal development of knowledge, skills and attributes in early childhood could reduce health risks across the life course. Despite a strong policy rhetoric on giving children the 'best start in life', socioeconomic inequalities in children's development when they start school persist. So too do inequalities in child and adolescent health. These in turn influence health inequalities in adulthood. Understanding how developmental processes affect health in the context of socioeconomic factors as children age could inform a holistic policy approach to health and development from childhood through to adolescence. However, the relationship between child development and early adolescent health consequences is poorly understood. Therefore, the aim of this review is to summarise evidence on the associations between child development at primary school starting age (3–7 years) and subsequent health in adolescence (8–15 years) and the factors that mediate or moderate this relationship.

Method A participatory systematic review method will be used. The search strategy will include searches of electronic databases (MEDLINE, PsycINFO, ASSIA and ERIC) from November 1990 onwards, grey literature, reference searches and discussions with stakeholders. Articles will be screened using inclusion and exclusion criteria at title and abstract level, and at full article level. Observational, intervention and review studies reporting a measure of child development at the age of starting school and health outcomes in early adolescence, from a member country of the Organisation for Economic Co-operation and Development, will be included. The primary outcome will be health and wellbeing outcomes (such as weight, mental health, socio-emotional behaviour, dietary habits).
Secondary outcomes will include educational outcomes. Studies will be assessed for quality using appropriate tools. A conceptual model, produced with stakeholders at the outset of the study, will act as a framework for extracting and analysing evidence. The model will be refined through analysis of the included literature. Narrative synthesis will be used to generate findings and produce a diagram of the relationship between child development and adolescent health.

Discussion The review will elucidate how children's development at the age of starting school is related to subsequent health outcomes in contexts of socioeconomic inequality. This will inform ways to intervene to improve health and reduce health inequality in adolescents. The findings will generate knowledge of cross-sector relevance for health and education and promote inter-sectoral coherence in addressing health inequalities throughout childhood.

Protocol Registration This systematic review protocol has been registered with PROSPERO CRD42020210011.

Supplementary Information The online version contains supplementary material available at 10.1186/s13643-021-01694-6.
Keywords: Child development, Primary School, Adolescent health, Inequality, Public health Background Reducing child health inequalities is a global health priority and evidence suggests that optimal development of knowledge, skills and attributes in early childhood could reduce health risks from childhood through to adulthood [1]. Positive child development in the early years (age 0-3 years) brings about wide-ranging human capital development in later life, which strongly influences wellbeing, obesity, mental health, heart disease, literacy and numeracy, criminality and economic productivity [2]. This evidence that investment in the early years drives human capital development and economic gains in later life [3,4], together with the evidence for the early years as a critical period of development [5], makes this a prime area for public policy and public health investment. However, current policy ('best start in life') and research on health and development have neglected children from age 5 years to adolescence, and there is scope for research and action on child health and development in this period to evolve from an emphasis on the first 1000 days and 'school readiness' to the first 8000 days, in order to support development needs across children's life cycle [6]. Understanding how developmental processes affect health in the context of socioeconomic factors as children age could inform a holistic policy approach to health and development from childhood through to adolescence. Recognising the interconnected nature of health and development in childhood, and the importance of socioeconomic circumstance in determining outcomes, many programmes are in place across the UK which seek to address health and development across the wider determinants of child health, such as quality early years education [7], universal services such as welfare and health visiting [8], parenting programmes [9] and community support through children's centres [10,11]. 
Whilst improvements for children as a whole are being seen for some health outcomes (asthma, epilepsy, diabetes) [12], inequalities in child health are not reducing: outcomes remain patterned by socioeconomic status [12] and inequalities in some outcomes are widening [13]. This is particularly the case for obesity and mental ill health in early adolescence [14], with negative consequences for weight [15] and wellbeing [16] in adulthood. Socioeconomic inequalities in child development are also apparent. Analysis of the Millennium Cohort Study (a nationally representative cohort set up to follow the lives of over 18,000 children born in the year 2000) found that UK children from low- to middle-income families were 5 months behind children from high-income families in terms of vocabulary skills and had more behavioural problems at age 5 years [17]. These inequalities in early child development and health tend to track forward and increase over time to influence inequalities in later health outcomes [18]. There is evidence that programmes which encompass parenting support and early learning opportunities in or out of the home enhance child development in readiness for school, improving cognitive and non-cognitive skills in children [19]. Positive cognitive development on starting school is associated with academic achievement by age 13 years [20] and socio-emotional development by age 10 years [21]. Non-cognitive skills such as social skills and self-regulation on starting school also improve academic success and psychosocial outcomes in subsequent years [22]. 
Whilst the beneficial effects of education on health in adulthood, acquired through knowledge, work and social status, are clear [23], there is less evidence of the effect of early child development interventions on health outcomes in childhood, other than limited evidence for obesity reduction, greater social competence, improved mental health and crime prevention [24] and for reducing childhood hospitalisations for infections and injury [25]. So there is evidence that programmes to enhance child development in readiness for school improve academic success and socio-emotional and psychosocial outcomes, but the evidence for whether and how measures of child development impact subsequent health in childhood is limited. Child development on starting school is defined in this study as cognitive, physical, linguistic or socio-emotional development at school starting age. There is evidence that measures of cognitive development at primary school starting age, as a component part of a model incorporating routinely collected data, predict socio-emotional behaviour and obesity at age 11 years [26]. Moving beyond the predictive value of measures to understanding early education as a developmental process in a social context [27] is important if we are to understand how emerging social and cognitive pathways in children interconnect with pathways stemming from socioeconomic circumstances. To improve child health and address inequality, evidence is needed on the mediating pathways between child development on starting school and these later child health outcomes, and on the socioeconomic and environmental factors which shape this relationship [28]. There is evidence that family stress, material living circumstances and parental behaviours are the main pathways stemming from socioeconomic circumstance which lead to inequalities in child health [29]. These factors are potential modifiers of the relationship between child development on starting school and adolescent health. 
A modifier is a variable which alters the strength of association between an exposure and an outcome. In addition to understanding what might affect the strength of the relationship, it is important to understand what variables may explain the relationship. Identifying direct pathways between child development and health (such as knowledge/literacy and cognitive/social pathways) aids understanding of mediators of the relationship. A mediator is a variable which explains the association between an exposure and an outcome. Increasing understanding of the pathways between child development and health is pertinent for improving health, because it is the interactions between early childhood development and the biological and social changes during mid-childhood, shaped by socioeconomic factors, that influence health-related behaviours in adolescents [30]. However, the relationship between child development and early adolescent health consequences is poorly understood. Better understanding this relationship could provide knowledge on targeted public health interventions in primary school age children, provide a focus for action and policy coherence across the health and education sectors, and help to mitigate the effect of detrimental socioeconomic factors on child development and later health outcomes and inequalities in those outcomes. Therefore, the aim of this review is to summarise evidence on the associations between child development at primary school starting age (3-7 years) and subsequent health in adolescence (8-15 years) and the factors that mediate or moderate this relationship. Protocol registration The present protocol has been registered within the PROSPERO database (registration number CRD42020210011) and is being reported in accordance with the reporting guidance provided in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) statement [31,32] (see checklist in Additional file 1). 
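The mediator/moderator distinction defined above can be made concrete with a small simulation. This is an illustrative sketch only: the variable names (ses, dev, literacy, health) and effect sizes are invented and are not drawn from the review; it shows a product-of-coefficients mediation estimate and an interaction-term moderation estimate recovered by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical variables (names illustrative, not from the review):
# ses      - socioeconomic status (modifier candidate)
# dev      - child development score at school entry (exposure)
# literacy - a mediating pathway
# health   - adolescent health outcome
ses = rng.normal(size=n)
dev = rng.normal(size=n)

# Mediation: dev -> literacy -> health (true indirect effect = 0.5 * 0.6)
literacy = 0.5 * dev + rng.normal(scale=0.5, size=n)
# Moderation: ses changes the strength of the dev -> health association
health = 0.6 * literacy + 0.3 * dev * ses + rng.normal(scale=0.5, size=n)

def ols(y, *cols):
    """Least-squares coefficients (intercept dropped) for y on the given columns."""
    X = np.column_stack([np.ones_like(y), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(literacy, dev)[0]          # exposure -> mediator path
b = ols(health, literacy, dev)[0]  # mediator -> outcome path, adjusted for exposure
indirect = a * b                   # product-of-coefficients mediation estimate

interaction = ols(health, dev, ses, dev * ses)[2]  # moderation = interaction term

print(round(indirect, 2), round(interaction, 2))
```

With 5000 simulated children, both estimates land close to their true values (0.30 each), illustrating how a mediator explains part of the exposure-outcome association while a moderator scales it.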
The planned review will be reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 Statement [33,34]. Review questions The planned review will address the following questions: (1) What are the associations between measures of child development recorded at primary school starting age (3-7 years) and subsequent health in adolescence (8-15 years)? (2) What are the effect modifiers (socioeconomic and environmental factors) of this relationship? (This will identify variables which alter the strength of the observed associations.) (3) What are the mediators of this relationship? (This will identify variables or pathways which explain the observed associations.) Study design We will undertake a participatory systematic review, involving engagement with national and local stakeholders across the health and education sectors. Participation will occur in the following ways: after an initial scoping search and review of papers, discussions with stakeholders will take place to identify any further relevant studies and to develop an initial conceptual model. This initial conceptual model will act as a framework for extracting and analysing evidence identified in the systematic review. The model will be revised and refined through analysis of the included literature. Narrative synthesis will be used to generate findings and produce a diagram of the relationship between child development in the early years of primary school and adolescent health outcomes. This participatory review method adds value over traditional review methods by clarifying underlying theory, ensuring all valued outcomes are captured, adding insight into relationships between outcomes, and aiding understanding of how, when and where interventions may work [35]. Participatory methods to produce diagrams, maps or models help to uncover theories of change and assumptions underpinning pathways between cause and effect [36]. 
They are increasingly recognised for their potential to contribute to systematic review methodology [37], particularly in the field of public health [38]. Information sources and search strategy MEDLINE, PsycINFO, ASSIA and ERIC will be searched for results from November 1990 onwards. The reference lists from all included articles will be searched for eligible articles that may have been missed by the electronic search. Further relevant literature will be identified through stakeholder discussions. Grey literature searching will be undertaken by searching relevant organisations' websites and through discussions with stakeholders, to find all relevant literature for inclusion. The search terms relate to measures of child development in the early years of primary school and health outcomes in early adolescence. Studies will be limited to those that include children, some or all of whom are aged between 3 and 15 years, and those that are in English. A pilot search strategy has been undertaken (Additional file 2). Data management Dates of searches and results will be recorded using Excel. Search results will be downloaded to EndNote desktop software. Studies identified through reference searching, stakeholder discussions and grey literature will be recorded and imported into EndNote. Eligibility criteria Definition of terms In this review, child development refers to a measure of cognitive or physical or linguistic or socio-emotional development at primary school starting age (3-7 years). Inclusion criteria Observational studies (ecological, case-control, cohort (prospective and retrospective)), RCTs, quasi-experimental and review-level studies including theory papers which are the following: Studies of children that include a measure of child development at age 3-7 (the age most children enter pre-school or school) and weight/mental health outcomes between age 8 and 15 years. 
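One routine step in the data management described above is removing duplicate records returned by multiple databases before screening begins. A minimal sketch of that step (record fields and titles are hypothetical; a real workflow would typically rely on EndNote's own deduplication):

```python
# Sketch of deduplicating search records by DOI, falling back to a
# normalised title when no DOI is present (field names hypothetical).

def normalise(title):
    """Lower-case and strip punctuation so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum() or ch.isspace()).split()

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or " ".join(normalise(rec["title"]))
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"doi": "10.1/abc", "title": "Child development and adolescent weight"},
    {"doi": "10.1/abc", "title": "Child development and adolescent weight"},  # same record from two databases
    {"doi": None, "title": "School readiness and mental health."},
    {"doi": None, "title": "School Readiness and Mental Health"},  # same title, different casing
]

print(len(deduplicate(records)))  # → 2
```

Recording how many records were removed at this step also feeds directly into the PRISMA flow diagram counts.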
Studies that explore factors which affect associations between child development and these outcomes. Studies that explore mechanisms or pathways between child development and these outcomes. Cross-sectional studies, conference abstracts, dissertations and studies reporting neither outcomes data nor mechanism will be excluded. The population and context, exposure, outcomes and study designs are described below and summarised in relation to inclusion and exclusion criteria in Table 1. Population and context Studies must include children, some or all of whom are aged between 3 and 15 years, across socioeconomic strata in high-income country settings, defined as OECD membership. Exposure A measure of child development at primary school starting age (3-7 years), defined as cognitive or physical or linguistic or socio-emotional development at school starting age, including: • School readiness, as measured by scales such as the Bracken Basic Concepts Scale Revised (BBCS-R) [39] and Good Level of Development. • Cognitive development, as measured by, for example, non-reading intelligence tests, vocabulary tests, maths tests or parent/teacher ratings. • Language and literacy (as measured by academic achievement test scores such as pre-reading/reading, vocabulary, oral comprehension, phonological awareness, pre-writing/writing or verbal skills). • Emotional well-being and social competence (behavioural assessments of social interaction, problem behaviours, social skills and competencies, child-parent relationship/child-teacher relationship), measured using the Child Behaviour Checklist. • Physical development, as measured by amount of physical activity or assessment of gross motor skills. Primary outcome(s) The primary outcomes of interest will be weight and mental health as quantitative data, including measures of wellbeing. 
The outcome measures are the following: • Weight (BMI) • Mental health (as measured by standard questionnaires or clinically) • Socio-emotional behaviour (as measured by social competence, emotional competence, behavioural problems, self-regulation and executive function). Proxy measures such as dietary habits and behaviour, and measures of wellbeing, will be included. These outcome measures were highlighted in an initial scoping review of the literature and during discussions with stakeholders. Secondary outcome(s) The secondary outcome of interest is educational outcomes, measured as performance at the end of primary school (age 10-11) by standardized tests. The rationale for this outcome is that it facilitates analysis through consideration of possible temporal dynamics in the relationship under study. Development of a conceptual model We have undertaken a scoping review to identify the main factors and pathways between child development at primary school starting age (3-7 years) and subsequent health outcomes at age 8-15 years. Meetings with five stakeholders from the local authority, health, education and voluntary sectors were held in September 2020 to explore perspectives on these pathway areas, considering in particular the following: • How health outcomes in adolescence are most affected by socioeconomic circumstances in child development at the start of primary school • General perceptions of what the mediating pathways are, including how pathways are connected and feedback loops • Where in the system intervening would have most impact on socioeconomic inequality in child development and on later health outcomes in adolescence. Participatory methods and tools, including concept mapping approaches, will continue to be used in stakeholder meetings to finalise a conceptual model of the pathways (see Fig. 1a for a draft). This initial model forms a framework for the review and provides initial categories for extracting and analysing evidence from published studies. 
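Since the conceptual model of pathways is later formulated as a directed acyclic graph (DAG), the pathway structure can be checked for acyclicity programmatically. A minimal sketch with hypothetical nodes and edges (not the review's actual model), using Python's standard-library graphlib:

```python
# Illustrative sketch: encode hypothesised pathways as node -> {parents}
# and verify the graph is acyclic, a prerequisite for DAG-based analysis.
from graphlib import TopologicalSorter, CycleError

edges = {
    "child_development_5y": {"socioeconomic_status"},
    "literacy": {"child_development_5y"},
    "socio_emotional_skills": {"child_development_5y"},
    "adolescent_weight": {"literacy", "socioeconomic_status"},
    "adolescent_mental_health": {"socio_emotional_skills", "socioeconomic_status"},
}

try:
    order = list(TopologicalSorter(edges).static_order())
    acyclic = True
except CycleError:
    acyclic = False

print(acyclic, order[0])  # → True socioeconomic_status
```

A topological ordering exists only if the hypothesised model contains no feedback loops; any loops identified in stakeholder discussions would need to be resolved (for example, by splitting a variable across time points) before the DAG stage.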
The model will then be revised and refined iteratively through analysis of the included literature to produce a final diagram. This will illustrate where factors in the initial diagram were not reported in the literature and where there may be associations and relationships between factors. The model will be used to formulate a directed acyclic graph (DAG) for further statistical analysis of the associations and pathways in a subsequent phase of this study (see Fig. 1b). 
Table 1. Inclusion and exclusion criteria. 
Population and context. Include: studies of children, some or all of whom are aged between 3 and 15 years, across socioeconomic strata in high-income country settings, defined as OECD membership. Exclude: studies of children from non-OECD countries; studies which focus solely on a particular subset of children with a particular health or development need. 
Exposure. Include: a measure of child development at primary school starting age (3-7 years), defined as cognitive or physical or linguistic or socio-emotional development at school starting age, measured by any of the following: • School readiness, as measured by scales such as the Bracken Basic Concepts Scale Revised (BBCS-R) [39] • Cognitive development, as measured by, for example, non-reading intelligence tests, vocabulary tests, maths tests or parent/teacher ratings • Language and literacy (as measured by academic achievement test scores such as pre-reading/reading, vocabulary, oral comprehension, phonological awareness, pre-writing/writing or verbal skills) • Emotional well-being and social competence (as measured by behavioural assessments of social interaction, problem behaviours, social skills and competencies, child-parent relationship/child-teacher relationship) • Physical development. Also included: studies that explore socioeconomic and environmental factors which affect associations between child development at primary school starting age and these outcomes, and studies that explore mechanisms or pathways between child development at primary school starting age and these outcomes. Studies reporting neither data nor mechanism between exposure and outcome will be excluded. 
Outcome. Primary outcome(s): health and wellbeing outcomes reported between the ages of 8 and 15 years, specifically weight (BMI), mental health (as measured by standard questionnaires or clinically) and socio-emotional behaviour; proxy measures such as dietary habits and behaviour, and measures of wellbeing, will be included. Secondary outcome(s): educational outcomes, i.e. performance at the end of primary school (age 10-11), measured by standardized tests. 
Study design and sources. Include: observational studies (ecological, case-control, cohort (prospective and retrospective)), RCTs, quasi-experimental and review-level studies including theory papers. Exclude: cross-sectional studies, conference abstracts, books, dissertations and opinion pieces. 
Selection and data collection process Articles will be screened using the inclusion and exclusion criteria at title and abstract level, and then at full article level by the review team. At each stage, a sample will be checked independently by another member of the review team and inter-rater reliability will be recorded. Any queries regarding inclusion will be discussed with at least one other team member. Data extraction using a bespoke form will be undertaken for all studies that meet the inclusion criteria by the lead reviewer and a sample will be checked independently by another team member. A data extraction form (Additional file 3) has been developed using previous expertise of the team and has been piloted on a sample of different sources. The following data will be extracted: study design, country, year, study population, study characteristics, child development measure, health outcomes, factors affecting associations, pathways, main findings, strengths and weaknesses. 
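The inter-rater reliability recorded for the double-screened sample at each stage is commonly summarised as Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with made-up include(1)/exclude(0) decisions from two reviewers:

```python
# Cohen's kappa for two reviewers' screening decisions (synthetic data).

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    # Chance agreement: product of each reviewer's marginal label rates.
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

reviewer_1 = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
reviewer_2 = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0]

print(round(cohens_kappa(reviewer_1, reviewer_2), 3))  # → 0.6
```

Here the reviewers agree on 8 of 10 records (0.8 raw agreement), but because both include half the records, chance agreement is 0.5, giving kappa = (0.8 − 0.5)/(1 − 0.5) = 0.6.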
In cases where additional data from studies are required, the lead reviewer will contact the study authors. Quality assessment Quality assessment of the included studies will be conducted using the Liverpool University Quality Assessment Tool (LQAT), which allows for a specific tool to be used for each study design [40]. This tool has been independently evaluated against other quality assessment tools [41]. Quality assessments will be done by the main author and second-checked by a member of the review team, and any discrepancies will be discussed. Strategy for data synthesis This review is broad in scope and as such it is anticipated that there will be considerable heterogeneity between studies in terms of design and measurements of the exposures and outcomes. It is anticipated that the data will not allow for a meta-analysis, and as such narrative synthesis will be used for each review question, using the conceptual model referred to above as a way to synthesise and illustrate the associations, mediators and moderators within the identified body of literature. The Synthesis Without Meta-analysis (SWiM) guidelines will be used to guide reporting of results [42]. To describe the associations between exposure and outcomes, studies will be grouped by exposure measure for synthesis. The quality assessment of individual studies will be used to determine the strength of the evidence and greater weight will be given to conclusions drawn from the most methodologically sound and reliable studies. Summary tables will be produced for each grouping to describe the exposures, outcomes and effect sizes. Modifiers and mediators of the relationship will be described narratively using structured headings as determined by the participatory element of the review, as illustrated in the initial conceptual model (Fig. 1a). 
This narrative synthesis will be used to generate findings and will inform a final diagram of the relationship between child development at primary school starting age and health outcomes in early adolescence. Additional analyses Analysis by geographical context, to capture any differences in the relationship by country, will be considered during the data synthesis and will be identified in the narrative synthesis. Confidence in cumulative evidence In addition to assessing the quality of each individual paper, the overall strength of the review findings will be assessed drawing on criteria used by Hoogendoorn [43] and Baxter [37,44] together with principles of GRADE specific to observational studies [45]. The review findings, by typology of papers grouped by exposure, will be assessed for relative strength of evidence. The assessment will be based on volume, quality and consistency in effect sizes in studies. This will allow each review finding to be graded as stronger, weaker, inconsistent or limited evidence. The strength of evidence in relation to mediators and moderators of the relationship may be more difficult to grade using standard tools. Where any findings are based on theory papers or author opinion on proposed mechanisms, this will be reflected in the grading of the evidence. Strength of evidence will also be illustrated in the final diagram. Grading of review findings will be agreed by the whole review team. Discussion This review will address an important knowledge gap by increasing our understanding of the associations between measures of development and health in childhood, and the factors which affect these associations. By using participatory methods alongside systematic evidence synthesis, the review will elucidate how children's development at the age of starting school is related to subsequent adolescent health outcomes in contexts of socioeconomic inequality. 
This will inform ways to intervene to improve health and reduce health inequality in adolescents. The findings will generate knowledge of cross-sector relevance for health and education and promote inter-sectoral coherence in addressing health inequalities [46,47] throughout childhood. Any amendments made to this protocol when conducting the review will be outlined in PROSPERO and reported in the final manuscript. Results will be disseminated through conference presentations and publication in a peer-reviewed journal. Strengths and limitations This review will provide, for the first time, a systematic overview of the association between child development at primary school entry and adolescent health, and of the factors that shape this relationship. It will incorporate stakeholder views to add depth and insight to guide the review process. The involvement of a sample of stakeholders raises the potential for biases to be introduced by selection of stakeholders with particular views, opinions or experiences. The risk of bias will be minimised by the use of transparent and replicable systematic review methods. The review may also be limited by primary studies providing little data on the mechanisms between exposure and outcome. Additionally, risk of bias in observational primary studies may bias the overall review results. This will be addressed at the quality assessment stage by recording risk of bias and using the assessment scores to decide the weight to assign to the conclusions drawn from each study. At review level, the heterogeneity of the study designs, exposure and outcome measures will need careful consideration in the data synthesis, with care taken to group studies to ensure reliable and valid conclusions are drawn.
Plasmonic hot spots reveal local conformational transitions induced by DNA double-strand breaks DNA double-strand breaks (DSBs) are typical DNA lesions that can lead to cell death, translocations, and cancer-driving mutations. The repair process of DSBs is crucial to the maintenance of genomic integrity in all forms of life. However, the limitations of sensitivity and spatial resolution of analytical techniques make it difficult to investigate the local effects of chemotherapeutic drugs on DNA molecular structure. In this work, we exposed DNA to the anticancer antibiotic bleomycin (BLM), a damaging factor known to induce DSBs. We applied a multimodal approach combining (i) atomic force microscopy (AFM) for direct visualization of DSBs, (ii) surface-enhanced Raman spectroscopy (SERS) to monitor local conformational transitions induced by DSBs, and (iii) multivariate statistical analysis to correlate the AFM and SERS results. On the basis of SERS results, we identified that bands at 1050 cm−1 and 730 cm−1 associated with backbone and nucleobase vibrations shifted and changed their intensities, indicating conformational modifications and strand ruptures. Based on averaged SERS spectra, PLS regressions for the number of DSBs caused by corresponding molar concentrations of bleomycin were calculated. The strong correlation (R2 = 0.92 for LV = 2) between the predicted and observed number of DSBs indicates that the model can not only predict the number of DSBs from the spectra but also detect the spectroscopic markers of DNA damage and the associated conformational changes. Figure caption: AFM imaging of pUC19 circular DNA plasmid damaged with bleomycin and fixed on mica: A) untreated DNA; B-I) DNA after reacting for 4 min with Fe(III)-bleomycin solution; linear fragments of various lengths are visible, and the degree of fragmentation corresponds to increasing bleomycin concentration. 
Since gold nanoparticles are more stable and inert (less prone to oxidation) in physiological buffer than silver ones 12, gold nanoparticles were selected in these studies as the SERS substrate. The optimization of the SERS technique for efficient DNA measurements involved examining the quality (signal-to-noise ratio, SNR) of spectra achieved with the use of three different types of gold nanoparticles: stabilized with sodium borohydride (Fig. S4 A), trisodium citrate (Fig. S4 B) and cysteamine (Fig. S4 C). SERS spectra of nanoparticles with aggregating agent (0.05 M NaCl) and DNA were compared with the Raman spectra of concentrated DNA. For sodium borohydride nanoparticles, a well-defined spectrum of the stabilizer was obtained. Because the stabilizer itself is clearly visible in the SERS DNA spectrum, these nanoparticles were not included in further measurements (Fig. S4 A). Comparing the Raman spectrum of concentrated DNA with the SERS spectrum of DNA obtained with trisodium citrate nanoparticles, only one peak, associated with phosphodiester bond vibrations in the DNA molecule at 734 cm−1, is well-resolved. As the SERS spectrum of DNA under the selected conditions was not obtained, these nanoparticles were not considered in further studies either (Fig. S4 B). For nanoparticles stabilized with cysteamine, a well-resolved SERS spectrum of DNA was acquired with low influence of the stabilizer itself (Fig. S4 C). Regarding the concentration of DNA in the sample, 114 mg L−1 was considered the most appropriate due to the relatively low signal from the stabilizer (the band from CH2 torsion at 1266 cm−1) in the acquired spectra (Fig. S5 A). For the ionic strength measurements, the concentration of 36 mM NaCl was selected, as the SERS spectra of DNA collected under these conditions are characterized by the best signal-to-noise ratio and the highest intensity of the characteristic DNA Raman marker bands (Fig. S5 B). 
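Marker bands such as the 734 cm−1 phosphodiester band and the 1266 cm−1 stabilizer band are typically located as local maxima in the spectrum. A small sketch on a noise-free synthetic spectrum (band positions and widths are assumed for illustration, not fitted to the paper's data):

```python
import numpy as np
from scipy.signal import find_peaks

# Two synthetic Raman bands: the DNA marker near 734 cm-1 and the
# cysteamine CH2-torsion band near 1266 cm-1.
wavenumbers = np.linspace(400, 1800, 1401)  # 1 cm-1 spacing
spectrum = (np.exp(-((wavenumbers - 734) ** 2) / (2 * 6.0 ** 2))
            + 0.6 * np.exp(-((wavenumbers - 1266) ** 2) / (2 * 8.0 ** 2)))

peaks, _ = find_peaks(spectrum, height=0.2)
print(wavenumbers[peaks])  # → [ 734. 1266.]
```

On real, noisy spectra one would add a prominence or width criterion to `find_peaks` and a baseline correction step before peak picking.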
Before establishing the final experimental parameters, a number of additional optimization measurements were performed. We also examined the effect of the sequence of experimental steps and various incubation times (Fig. S6). Based on numerous systematic measurements, the most optimal experimental parameters were selected and then applied to the further studies of DNA damage. Synthesis of nanoparticles Cysteamine-stabilized gold nanoparticles (CHSBAuNPs) were prepared according to the modified method of Niidome et al. 16 The ultrafiltration process, conducted according to the protocol described by Oćwieja et al. 18, was used for the purification of each type of obtained AuNP suspension. Physicochemical characteristics of AuNPs dispersed in aqueous suspension. The conductivity and pH of the stock AuNP suspension were measured using a multifunctional pH/conductometer (Elmetron). The concentration of AuNPs dispersed in the purified suspension was determined based on the density measurements described in detail elsewhere 19. A typical TEM micrograph of the AuNPs is presented in Fig. S8 A. Analysing this image, one can observe that the cysteamine-stabilized AuNPs exhibit a nearly spherical shape. Moreover, it was found that the AuNPs were fairly monodisperse (Fig. S8 B) and their average size was equal to 13±3 nm. This size value remains in agreement with the findings presented before 18, hence one can conclude that the preparation method of cysteamine-stabilized AuNPs is highly reproducible. Stability measurements performed as a function of ionic strength and pH at a temperature of 25 °C showed that at pH 5.8 the aggregation process appeared for ionic strengths higher than 10−2 M (Fig. S9 A). Independently of the ionic strength, an aggregation process of AuNPs leading to the formation of aggregates with an average size of 348±24 nm was observed under alkaline conditions (for pH higher than 8) (Fig. S9 A). Thereby, the aggregation process of AuNPs detected by the DLS technique explains the bathochromic shift of the plasmon absorption maximum under alkaline conditions (Fig. S7 B). The electrokinetic properties of AuNPs were evaluated using the electrophoretic light scattering (ELS) technique. It was established that the AuNPs were positively charged over a broad range of ionic strength and pH. The zeta potential of AuNPs dispersed in the stock solution at pH 5.8 was equal to 54±2 mV (Fig. S9 B). This value was comparable with the data described previously 18. Similarly, a drop of zeta potential values with an increase of ionic strength and pH (Fig. S9 B) was also observed. It is worth mentioning that the aggregates of AuNPs formed in the ionic strength range between 10−2 and 5×10−2 M were positively charged. Analogous experiments were performed for DNA treated with bleomycin (Fig. S10) and for pUC19 plasmid exposed to UVC radiation as a damaging factor (Fig. S11). In both cases, the damaging factors (BLM, UVC) induced DSBs (visible on AFM images), and conformational changes can be observed in the SERS spectra. In the SERS spectra of pUC19 plasmid DNA exposed to UVC radiation, we observed spectral changes similar to those induced by BLM treatment, including a partial shift of the phosphate symmetric stretching band from the DNA backbone, which is a marker of DNA conformational change. A comparison of the spectra acquired from control and irradiated DNA shows the intensity
v3-fos-license
2018-12-11T22:11:08.362Z
2015-01-01T00:00:00.000
54640682
{ "extfieldsofstudy": [ "Engineering" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2015/13/matecconf_isot2015_06002.pdf", "pdf_hash": "f9ee31c340ffb187317ea7c0932990d730cc2f23", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1427", "s2fieldsofstudy": [ "Engineering", "Physics" ], "sha1": "f9ee31c340ffb187317ea7c0932990d730cc2f23", "year": 2015 }
pes2o/s2orc
Microrobotic Station for Nano-Optical Component Assembly Compact photonic structures have strategic importance in several fields. In order to fabricate more complex structures with optimized optical functions, robotic nano-assembly is a promising solution because it enables the integration of several types of materials from different fabrication processes into the same structure. This paper presents a microrobotic station designed for the assembly of nano-optical components. The robotic station has 8 DOF for positioning and a 4-DOF microgripper with integrated force sensors to perform dexterous and accurate manipulation of components, considering both position and contact force to achieve precise alignment and parallelism of structures. Introduction The development of compact photonic structures presents strategic issues in many fields: biomedical, astrophysics, multimedia, defence, etc. These structures are very complex to fabricate (limits in terms of precision machining, monolithic fabrication, and the diversity of obtainable shapes) but also to study (coupling difficulties, local analysis of the optical behaviour). Several works have studied the importance of thin metal plates periodically structured with subwavelength holes, which present special or enhanced transmission through resonant excitation (plasmonic modes or guided-wave sub-wavelength cavities) [1,2]. For optical frequencies (visible/near infrared), the main limiting factor of such structures remains the absorption of the metal, which dramatically reduces the light transmission. In addition, it should be noted that their multilayer character adds new experimental constraints (such as alignment and parallelism) at their nanoscale dimensions [3,4].
Robotic nano-assembly presents an interesting approach to develop and fabricate more complex photonic structures by assembling several nano-structures issued from different fabrication processes. It makes it possible to ensure precise alignment and parallelism of the structures to maximize the optical function. In the literature, several works have studied robotic nano-assembly, including the use of scanning probe microscopes (SPM), atomic force microscopes (AFM), scanning electron microscopes (SEM) and nano-tweezers [5]. The main issue is the size of the components to manipulate, but also the predominance of surface and contact forces at this scale relative to volume forces such as the weight of the nanostructures. Integrating force sensors to measure these forces is an interesting approach to master the scaling effect, increase dexterity in the assembly of small components, avoid breaking or damaging components that have very high aspect ratios, and control any contact between different photonic components. Based on the state of the art and previous works conducted on the microassembly of hybrid MOEMS [6][7][8], we propose in this paper a new microassembly station for nano-optical component assembly. For this purpose, a robotic station made of precise positioning and rotation stages and a new two-sensing-fingers microgripper is used to perform dexterous microassembly. The objective of the proposed robotic station is to offer a new range of three-dimensional, heterogeneous photonic structures integrating different functionalities by nano-robotic assembly. The paper is organized as follows. Section 2 introduces some potential applications. The robotic station, including a two-sensing-fingers microgripper, is presented in Section 3. Experimental results are then presented in Section 4. Section 5 concludes the paper.
Nano-optical component assembly The fabrication of nano-optical components has been widely studied and investigated due to its importance in several domains. Previous works have been interested in the fabrication of several types and forms of photonic structures (shown in Figure 1) using cutting and Focused Ion Beam (FIB) milling. Recently, a photonic cavity on a LiNbO3 membrane led to a transmission peak whose spectral position is extremely sensitive to temperature, for an active length of only 13 μm. All these preliminary results on lithium niobate are very attractive, but it is possible to go even further towards integrated structures. By assembling these nanoguides with metallic-dielectric structures and polarization control, a decisive step can be taken towards integrated miniaturized electro-optical components with a total volume of less than 1 mm³ and low power consumption. Fabricating hybrid nano-optical structures by a robotic micro-assembly approach requires addressing several challenges: -The small size of the components to be manipulated (tens of μm laterally and hundreds of nanometers in thickness) -The high aspect ratio of the components, which induces fragility -Assembly requires controlling all 6 degrees of freedom of the manipulated component relative to a reference with high accuracy (better than 1 μm as a first step) -Assembly requires minimizing air gaps between components and thus maximizing contact between assembled components (i.e. controlling contact forces) Microgripper with sensorized end-effectors In the last decade, several research efforts have addressed the development of force sensors for micro- and nano-scale applications. In the literature, the most common solution is a microgripper with one active finger and one force-sensing finger. However, as shown in previous works [9], the use of a two-sensing-fingers microgripper provides an estimation of the lateral contact force between the manipulated micropart and the microassembly substrate in addition to the
gripping forces. In the microgripper design, two main points should be considered: the dimensions of the end-effectors and the force-sensing and displacement specifications. For the first point, the force sensor has been designed to manipulate small components, such as 7 x 7 μm parts, with great dexterity. For this reason, the thickness of the force sensor is 10 μm. For the microgripper specifications, the solution of using two active fingers provides more dexterity to achieve more complex tasks. Thus, the microgripper used is composed of two active fingers with integrated force sensors. Indeed, a piezoresistive force sensor [10] is integrated into each finger of a piezoelectric actuator [11] to realize a dexterous microgripper. An image of one force sensor is shown in Figure 3, where the dimensions of the force sensor are indicated. The performance of the proposed TSFM has been identified experimentally for each finger's actuator and force sensor. The performance of the TSFM is summarized in Table 1. This performance enables its use for precise nano-assembly while measuring nanonewton forces. Experimental results In this section, some preliminary experimental investigations are carried out to provide a proof of concept of the possibility of performing dexterous nano-assembly with the presented robotic station. A handling test, for a component 300 µm long, 50 µm wide and 50 µm thick, is shown in Figure 4-(a), where the forces on both sides of the system are measured precisely and the contact transitions are detected. Figure 4-(b) shows the detection of an undesired contact between the manipulated component and the substrate during assembly. The undesired contact can be detected using the two force measurements, and the contact force can be estimated as the difference Δf (Figure 4-(b)).
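The undesired-contact test described above — estimating the lateral contact force as the difference Δf between the two finger force readings — can be sketched as follows. Function name, units and threshold value are illustrative assumptions, not part of the station's actual software.

```python
def detect_contact(f_left_uN, f_right_uN, threshold_uN=0.05):
    """Estimate the part/substrate contact force as the imbalance
    between the two sensing fingers and flag an undesired contact.

    During free gripping, both fingers see (near-)equal gripping
    forces, so their difference stays close to zero; a contact with
    the substrate loads one finger more than the other.
    """
    delta_f = f_left_uN - f_right_uN          # estimated lateral contact force
    return abs(delta_f) > threshold_uN, delta_f

# Balanced gripping: no contact flagged
contact, df = detect_contact(1.20, 1.18)
# Imbalanced readings: contact with the substrate suspected
contact2, df2 = detect_contact(1.20, 0.80)
```

In practice such a threshold would be set from the sensor noise floor; the point of the sketch is only that two sensing fingers give both the gripping force and a contact-force estimate Δf from the same measurements.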
Conclusion The motivation of this paper is the need for high-performance photonic structures in many application fields. Robotic nano-assembly has been presented as a method to fabricate complex photonic structures with optimized optical functionalities. Indeed, it enables the integration of several types of materials from different fabrication processes into the same structure. This paper has proposed a new robotic setup to realize this nano-assembly. The robotic station has 8 DOF for positioning and a 4-DOF microgripper with integrated force sensors to perform dexterous manipulation of microparts by detecting contact and by providing precise alignment and parallelism of structures. An experimental proof of concept for the nano-assembly has been validated by testing the manipulation and assembly of components whose dimensions are larger than those of photonic structures. The gripping forces have been measured and contact has been detected precisely, enabling dexterous nano-assembly. Future works include the test of this robotic station for the assembly of photonic structures while characterising the precision of the robotic station and the optical results. DOI: 10 .1051/ © Owned by the authors, published by EDP Sciences, 2015. Figure 1. Examples of photonic structures: (a) Ridges obtained by saw dicing and polishing of the bulk material (b) 3D photonic crystal on a membrane (c) photonic component with a surface structured by small bumps. Particularly, its dimensions are 1 mm in length, 100 μm in width and 10 μm in thickness. Table 1. Performances of each finger of the TSFM.
v3-fos-license
2021-12-31T16:14:08.641Z
2021-10-16T00:00:00.000
245586230
{ "extfieldsofstudy": [], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "https://iptek.its.ac.id/index.php/jps/article/download/11099/6179", "pdf_hash": "d4c93e3b09317684d0adeea3869bef9e26665e07", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1428", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "sha1": "a0c852ca681d4bba749e3c2c4963a0ab353dff10", "year": 2021 }
pes2o/s2orc
The Impact of Folding Shutter on the Daylighting Performance in Tropical Climate Abstract — Utilization of the abundant sunlight in tropical climates for daylighting can help to save energy consumption in flats buildings. Based on the daylight conditions in the tropics, there is a need for dynamic shading applied to the façade which can be transformed to respond to environmental change. A folding shutter is a dynamic shading device which can be operated using folding and rotational motion. This paper aims to discuss the impact of the folding shutter on daylight performance in the context of flats buildings in tropical climates. Simulation using Radiance was conducted to test the sub-variables of the folding shutter, including the folding configurations and the shutter's rotation angles, under different conditions in the tropics. The results showed that folding configurations result in a high increase of daylight intensity and should be applied moderately. Meanwhile, the shutter's slats can help to diffuse and reflect incoming light to the deepest areas of the room. A closed configuration with shutter angles of 0° and 45° performed best against the inconsistent cloud movement of a partly cloudy sky. The integration of the folding shutter and a 60 cm overhang produced the most stable performance, which can meet the visual comfort standard for the longest period in the tropics.
I. INTRODUCTION The importance of daylighting has been highlighted by previous research, as it has many positive impacts such as increasing psychological comfort, health, and productivity [1]. Daylighting can also help to decrease the energy consumption of a building [13] and has great benefits when applied in flats buildings, which are mostly occupied by low-income residents. However, the quality of daylighting is greatly influenced by the geometry of the building, the orientation of the building, the allocation of space, and the light openings [20,12]. Moreover, daylight availability also depends on the sky conditions and the sun's movement at a certain time or period in each climate. The tropical climate has abundant sunlight, with solar radiation between 5,500-7,500 Wh/m² [10], and the external illuminance can reach 130,000 lux [14]. The tropical sky is partly cloudy, changes rapidly with cloud movement, and can result in a high daylight intensity [5]. The composition of solar radiation that reaches the earth's surface varies with the sun's altitude angle [13]. To confront the changing environments in the tropics, there is a need for dynamic shading systems that can be easily controlled.
A dynamic shading device is shading that is adaptive and responsive to environmental conditions because of its ability to move [17] and to be controlled according to user preferences [16]. Based on that, this paper uses a dynamic shading device as the daylighting strategy within the context of a tropical climate. The performance of a dynamic shading device is strongly influenced by the control system and the type of motion applied. The types of motion most frequently used are translational and rotational, such as those applied to venetian blinds, whereas studies of more complex dynamic movements such as foldable and deployable shading systems are still limited [11]. Moreover, the integration of different types of movement results in richer and more variable compositions [18]. The folding shutter is a development of the shutter that can be moved by folding and rotational motion. The folding motion offers flexibility as it combines translation and scaling motion [15]. Based on that, this paper aims to investigate the impact of the folding shutter on daylight performance in the context of flats buildings, and to investigate the model configurations which can be applied under different conditions in the tropics. II. METHOD This paper used an experimental method, with computer simulation using Radiance, to investigate the daylighting performance of folding shutter variables in a tropical climate. The experimental method focuses on cause-and-effect relationships using control, independent and dependent variables [6]. Radiance was developed by Greg Ward at Lawrence Berkeley National Laboratory and is widely recognised and validated by lighting professionals. Radiance provides interfaces for modeling and translating space geometry, luminaire data, and material characteristics [19]. However, Radiance has some limitations because it tends to overpredict the direct and global illuminance [4]. A.
Base Case The base case of this paper is Rusunawa Siwalankerto 2, a flats building with a double-loaded corridor and open-plan units in Surabaya. The unit is on the 5th floor and has a vertical window facing north. The unit's dimensions are 4.5 x 5.2 m with a 2.7 m ceiling height. The vertical window's dimensions are 120 x 160 cm, at 70 cm height from the floor. The folding shutter is placed outside the window as a second façade. Figure 1 shows the measurement nodes on the floor plan and section of the unit, and Figure 2 shows the interior of the base case unit. The analysis grid for the simulation is placed at 80 cm height from the floor. There is a total of 14 measurement nodes with a 100 cm distance between each node. B. Variables 1. Control variables: site location, building orientation (North), typology and dimensions of the unit, color and reflectivity of the interior elements, the folding shutter's material, and the shutter slats' geometry. 2. Independent variables: folding shutter models (FS1, FS2, FS3); folding configurations from the fully closed configuration (KO1) to the fully opened configuration (KO5), in which each step is folded by 15°; and shutter rotation angles of 0°, 45°, 90°, 135° for the daylight distribution (illuminance) analysis, and 0°, 15°, 30°, 45°, 60°, 75°, 90°, 105°, 120°, 135°, 150°, 165° for the daylight factor analysis. Figure 3 shows the illustration of the folding shutter models. FS1 is a basic folding shutter, i.e. shutters that fold together; FS2 is an integration of the shutter and a 60 cm light shelf; and FS3 is an integration of the shutter and a 60 cm overhang. Figure 4 and Figure 5 show the sub-variables of the folding shutter from the application of folding motion and rotational motion. It should be noted that the folding motion simulation uses a shutter rotation angle of 0°, and the rotational simulation uses the fully closed configuration (KO1) as the control variables. 3.
Dependent variables: illuminance, standard deviation of illuminance, and daylight factor. C. Simulation Procedure This paper uses the Radiance program to simulate the illuminance and daylight factor for the daylight performance analysis. The simulation procedure for each simulation is explained below. 1. Illuminance simulation for the average condition in the tropics (March 21/equinox) under the intermediate sky. This simulation is carried out in the morning (9 AM), at noon (12 PM), and in the afternoon (3 PM) during the equinox. The illuminance data are used to analyze the daylight distribution from TUU1 to TUU5. 2. Daylight factor simulation for the critical condition in the tropics (June 21/summer solstice) under the overcast sky. This simulation is carried out at noon (12 PM) automatically by the Radiance program. The daylight factor data are compared with the daylight factor standard for tropical climates (MS 1525:2007), with a range from 2% to 5%. A. Daylight Distribution In the morning, when the sun's position is still near the horizon, there is a high contrast of illuminance between TUU1 and TUU5 in the base case. Figure 6 shows the daylight distribution in the morning for the application of different shutter rotation angles in all of the folding shutter models. FS1 can reflect the daylight up to TUU5 by 22% with a rotation angle of 90°. This supports the result of previous research [9] that a sloped shutter can increase the illuminance in the deepest areas of the room. Meanwhile, the integration of the shutter and light shelf (FS2) tends to produce dim daylighting, especially FS2-45°, with a -33% illuminance decrease inside the room. On the other hand, FS3 can decrease the illuminance in the area near the window (TUU1) from -21% to -34%. All the shutter rotation angles of FS3 can also increase the illuminance up to TUU5 quite well, by 15%, with the exception of FS3-45°.
The daylight distribution analysis for folding motion uses the standard deviation of illuminance, in which a lower standard deviation indicates a more even daylight distribution. Figure 7 shows the standard deviation of different folding configurations in all of the models. Both FS2 and FS3 can distribute the daylight inside the room more evenly, especially in the fully closed configuration (KO1). FS2 and FS3 are integrated with a solid plane, such as a light shelf or overhang, that can block the direct sunlight more effectively and reduce the high illuminance near the window. Meanwhile, FS1 performed better in the fully opened configuration (KO5). This is because the sloped shutter on FS1 tends to produce high reflections and is less effective in blocking direct sunlight. Previous research [8] also stated that folding shading with a solid plane functions better in providing full shade and reducing the heating effects of solar radiation. At noon, the source of daylighting is diffused light, which has a lower daylight intensity than direct sunlight. The base case has a high decrease of illuminance from TUU1 to TUU5. Figure 8 shows the daylight distribution at noon for the application of different shutter rotation angles in all of the folding shutter models. Of all the shutter rotation angles, FS1-0°, FS2-90°, and FS3-0° result in the best daylight distribution in the room. FS2-90° can increase the illuminance up to TUU5 by 40% because most of the daylight is reflected by the light shelf. Rotating the shutter's slats to 0° in FS1 and FS3 can increase the illuminance in the deepest areas of the room. Previous research [3] also indicated that a rotation angle of 0° can produce the most adequate indoor illuminance. Between all the folding configurations there is no big difference in standard deviation values, but the fully closed configuration (KO1) performed slightly better than the rest.
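The evenness metric used above — ranking configurations by the standard deviation of the node illuminances, lower meaning more even — can be sketched in a few lines. The node readings below are invented for illustration; only the metric itself comes from the text.

```python
import statistics

def evenness_rank(configs):
    """Rank shading configurations by the (population) standard
    deviation of their node illuminances: a lower standard deviation
    means a more even daylight distribution."""
    scored = {name: statistics.pstdev(lux) for name, lux in configs.items()}
    return sorted(scored.items(), key=lambda kv: kv[1])

# Hypothetical illuminance readings (lux) at nodes TUU1..TUU5
configs = {
    "KO1": [600, 550, 500, 480, 450],    # fully closed: flatter profile
    "KO5": [2200, 1400, 900, 600, 400],  # fully opened: steep falloff
}
ranking = evenness_rank(configs)
print(ranking[0][0])  # most even configuration listed first
```

With these made-up profiles, the flatter KO1 profile ranks as the more even distribution, mirroring how Figure 7 compares the configurations.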
Meanwhile, FS1 results in a more even distribution if the model is fully folded (KO5), as it can block the direct sunlight better. Figure 9 shows the daylight distribution in the afternoon for the application of different shutter rotation angles in all of the folding shutter models. Even though the average illuminance of the base case in the afternoon is quite high (2235 lux), the daylight distribution reaches the deepest areas of the room better than in the morning. The application of FS1-0°, FS2-45° and FS3-45° provides the best shade, resulting in an even daylight distribution. This reinforces the results of previous research [3,14] that a rotation angle of 45° can reduce the daylight contrast and the maximum illuminance for residential buildings. Whereas the rotation angle of 135° produces the worst daylight distribution, with the highest decrease of illuminance inside the room. This is because the rotation angle of 135° is almost parallel to the sun's altitude angle in a tropical climate during the equinox. All of the models can distribute the daylight better with the fully closed configuration (KO1), as it can reduce the high illuminance near the window. B. Comparison with the Daylight Factor Standard From the simulation results, it was found that the base case produces high daylight factors, with a maximum daylight factor of 13.11% and an average daylight factor of 4.97%. The average daylight factor of the base case is almost 5%, which is too bright for human visual comfort [2]. Figure 10 shows the average daylight factors of all the folding shutter models by rotational motion from 0° to 180°. The application of the folding shutter can reduce the average daylight factors of the base case closer to the standard (2-5%). FS1 has high fluctuations of daylight factors; in particular, FS1-45° has the highest value of all the shutter angles. Whereas FS2 has a fairly constant daylight factor value, except for FS2-90°, which has the highest value (4.43%).
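The daylight factor compared against the MS 1525:2007 range above is, by definition, the ratio of indoor illuminance to the simultaneous outdoor horizontal illuminance under an overcast sky, expressed as a percentage. A minimal sketch of that check follows; the illuminance values are invented for illustration.

```python
def daylight_factor(e_indoor_lux, e_outdoor_lux):
    """Daylight factor DF = (E_indoor / E_outdoor) * 100, in percent."""
    return 100.0 * e_indoor_lux / e_outdoor_lux

def meets_ms1525(df_percent, lo=2.0, hi=5.0):
    """Check a node's DF against the 2-5% range cited from MS 1525:2007."""
    return lo <= df_percent <= hi

# Hypothetical node illuminances under a 10,000 lux overcast sky
outdoor = 10_000.0
for indoor in (150.0, 350.0, 900.0):
    df = daylight_factor(indoor, outdoor)
    print(f"DF = {df:.1f}% -> meets standard: {meets_ms1525(df)}")
```

Values below 2% would indicate insufficient daylight, while values above 5% (as in the base case average of 4.97% and maximum of 13.11%) risk excessive brightness.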
This is because the daylight reflection of FS2-90° is high enough to increase the daylight intensity in the rear space of the unit. Meanwhile, the average daylight factors of FS3 tend to keep increasing as the shutter's slats are rotated from 90° to 180°. From the simulation results, all of the folding shutter models can produce daylighting above the minimum daylight factor standard of 2%. This indicates that there are no problems of insufficient daylight. The percentage of nodes that meet the daylight factor standard (2-5%) is shown in Figure 11. In the base case, only 71% of the measurement nodes meet the daylight factor standard, whereas the application of the folding shutter manages to increase the percentage by 15%. FS3 meets the daylight factor standard most consistently; a rotation angle of 60° to 90° on all folding shutter models results in 86% of nodes meeting the daylight factor standard. This result is compatible with previous research [7] in that a shutter rotation angle of 60° facing north tends to produce a low illuminance but is more effective in reducing the side effects of solar radiation. The folding motion from the fully closed (KO1) to the fully opened configuration (KO5) on FS2 and FS3 results in an increase of the average daylight factor by 3%. FS1 produces the highest daylight factors consistently in all the folding configurations. This may be because FS1 has more shutter slats than the other models, which results in more reflection of daylight. The simulation results of all folding configurations also exceed the minimum daylight factor (> 2%). Figure 11 also shows that the configurations with the lowest daylight factor values meet the daylight factor standard better. Among the models, FS3 meets the daylight factor standard with the highest percentage (86%) consistently from KO1 to KO5.
Meanwhile, FS1 meets the daylight factor standard with the highest percentage in the fully closed (KO1) and fully opened (KO5) configurations. That is because the fully folded FS1 has the same form as an overhang, which can provide good shading [2]. On the other hand, the daylight performance of FS2 tends to exceed the daylight factor standard (>5%) from KO3 to KO5. C. Discussion The experimental results showed that the sloped shutters (FS1 and FS2) can reflect daylight better than the horizontal shutter because each shutter slat receives direct contact with sunlight. However, the sloped shutter is less efficient at reducing a high intensity of daylight, especially if it is applied to flats building units on the highest floor. Nevertheless, daylighting control can be achieved by rotating the angle of the shutter on the folding shutter. The rotation angle of 0° was proven to reflect daylight the best on both the sloped and horizontal shutters. In the most critical condition in tropical climates (summer solstice), rotation angles of 45° and 90° can provide the best shade to block the direct sunlight. This is compatible with the results of previous research [14]. Meanwhile, the rotation angle of 135° cannot project the incident light properly because it is almost parallel to the sun's altitude angle in a tropical climate. Of all the folding models that were tested, FS1 reflects daylight to the deepest areas of the space the best. However, the sloped shutter results in a daylight performance that is too bright and exceeds the daylight factor standard. Even though FS1 can distribute the daylight evenly, its application potentially causes visual discomfort because of the high daylight intensity. On the other hand, the integration of the folding shutter with the light shelf (FS2) results in an uneven distribution of light. This is mostly caused by the shutter's position, which becomes an obstruction to the daylight access on the light shelf.
Except for FS2-0° and FS2-90°, the application of rotational motion results in a high decrease of illuminance. Based on that, FS2 does not work well for the flats building because it potentially causes a high contrast of illuminance. The integration of the folding shutter and overhang (FS3) has the most stable daylight performance and the highest percentage of nodes that meet the daylight factor standard. The performance is greatly influenced by the overhang, which can block the direct sunlight effectively to reduce the high daylight intensity near the window while still admitting adequate natural lighting. The overhang was proven able to withstand exposure to light from high altitude angles in the tropics. The shutter, which is positioned under the overhang, also helps to diffuse the incident light before reflecting it inside the room. From the simulation results, the application of folding motion results in a high increase of illuminance; therefore, the folding motion should not be applied during the most critical conditions, when the outside illuminance is already high. The daylight is better distributed through the gaps of the shutter because this diffuses the daylight intensity better. To confront the changing conditions in tropical climates, the application of the closed configuration (KO1) and shutter rotation angles of 0° and 45° can reflect and diffuse daylight effectively. For the most critical condition in the tropics under the overcast sky, the application of the closed configuration (KO1) and shutter rotation angles from 60° to 75° can help to block the direct sunlight while still allowing enough light to enter. Meanwhile, during critical conditions when more daylight is needed, the application of semi- to fully-opened configurations (KO3-KO5) is recommended to increase the average illuminance while providing access to the outside view. IV.
CONCLUSION Utilization of daylighting is one of the energy conservation efforts for flats buildings, which are inhabited by middle- to lower-class residents. In dealing with the dynamic conditions of tropical climates, it is necessary to control the daylight by using a dynamic shading device. Based on the simulation results, the folding motion can be used to control the quantity of incoming light. However, the folding motion can result in a high increase of daylight intensity, and is therefore unsuitable for critical conditions in the tropics. On the other hand, rotating the shutter from 0° to 90° can diffuse daylight effectively and control the direction of the reflected light. However, rotating the shutter by more than 90° should be avoided, as it results in an uneven daylight distribution. The integration of the folding shutter with a 60 cm overhang produced the best daylight performance, meeting the visual comfort standard even in the most critical conditions in the tropics. The integration of the two shadings results in a synergistic daylight performance because it can block the direct sunlight while reflecting the diffused light into the room. Its application can also help to maintain a stable daylight performance to anticipate the changing environments in the tropics. By employing a dynamic shading device to control the daylight in the upper window, the lower part of the window can be utilized for ventilation access and displaying the outside views. It should be noted that the experiment in this paper only uses computer simulation and has not gone through physical testing under real conditions in tropical climates. Figure 10. The average daylight factors by the application of (a) rotational motion; and (b) folding motion. Figure 11. The percentage of nodes that meet the daylight factor standard by the application of (a) rotational motion; and (b) folding motion.
It does not rule out the possibility that new adjustments may occur in the process of preparing the physical model, which could result in a slightly different daylight performance. Nevertheless, the simulation results showed the daylight performance of the application of the folding and rotational motion, especially in dealing with the different conditions in the tropical climate. Providing daylighting that can consistently meet the visual comfort standards can help to reduce the energy consumption of artificial lighting in flats buildings. Besides, by controlling the access of direct sunlight, other side effects from the sun, such as solar heat gains, can also be avoided. Thus, the benefits of the environment can be utilized more in buildings to support the users' comfort. A continuation of this study should include validation of the results through testing of a built prototype. Further studies are also needed on the optimization of the folding motion and the modification of the shutter's slats.
Consumer-Side Decision Factors on Their Selection of Bottled Water Brands: Statistical Method Study in a Kosovo Sample

Abstract

Nowadays, many companies provide bottled drinking water to meet people's daily needs. As this industry grows and competition intensifies, companies should know which aspects influence people to buy bottled drinking water. Although the increase in the number of bottled water producers can be attributed to market demand and technological modernization, the fact that consumers migrate from one brand to another is significant and indicates that there are factors that affect the consumer's decision when choosing a bottled water brand. The aim of this paper is to identify and analyze the factors that most influence consumers when choosing a bottled water brand in the market, using Kosovo as a case study. To define factors based on consumers' preferences and valuations of importance, principal component analysis was applied based on a correlation matrix, using a principal component extraction method with a varimax rotation and a Kaiser-Meyer-Olkin adequacy test. The findings show that the consumer's decision is influenced mostly by six key factors, namely quality, marketing, consumer perception, price, preference and practicality. The research provides new insights for the bottled water manufacturing industry and for marketers in positioning themselves in a competitive environment.

Introduction

The scarce availability of drinking water is becoming more of a worldwide issue every day. Industrialization and the development of transport infrastructure are considered to be among the main water polluters, which is a serious threat to our modern society. Therefore, people today have started to adopt different strategies for fulfilling their drinking water needs, with specific attention to their health.
The Kosovo Agency of Statistics (ASK 2018) has published Water Statistics in Kosovo for 2016-2017. In 2016, 89.59% of Kosovo's population was supplied with potable water through public systems managed by the regional water companies, while about 10.41% of the population did not have access to water supply services. Although a large part of the country is supplied with potable water by local public companies, there is an ongoing increase in the number of businesses that process bottled water. This is also due to the fact that the demand for bottled water is increasing. Data obtained from the Ministry of Trade and Industry and the Kosovo Business Registration Agency show that since 2012 there has been a 36% increase in the number of registered businesses producing bottled water, mineral water and refreshing drinks. According to Ferrier (2001), the increased trend of bottled water consumption reflects our modern way of life. The same study argues that the development of urbanization deteriorates the quality of tap water and, on the other hand, that the growing standard of living enables people to bring home heavier and more expensive bottled water. Furthermore, de França Doria (2010) argues that bottled water consumption is related to demographic factors such as race, income and gender, unlike education and income, which were found to be associated with the perception of risk when drinking tap water. This made the water processing companies realize the need for bottled drinking water and the profit-generation potential of the market. As argued by Nikitaeva (2012), in today's highly competitive business environment, an attractive, valuable package may be the seller's last chance to influence the buyers' purchasing decisions. Therefore, advertisers spend millions each year to familiarize consumers with their product attributes and brand image.
This growth can largely be attributed to perceptions created by bottling companies through advertising that promotes their water as "pristine" and as having "healing" attributes. The research purpose of this paper is to identify and gain a better understanding of the factors that most influence consumers when selecting a brand of bottled water. The research objective is to analyze the main factors based on consumers' preferences and valuations. The findings will contribute to the bottled water industry, particularly in understanding the factors that customers consider when choosing their water brand. This study will also show water processing and packaging manufacturers how to efficiently utilize their resources in meeting the needs of their consumers.

Literature Review

Bottled water consumption has been an increasing global trend during the last decade. Development activities and improved living standards play an important role in increasing bottled water sales and consumption, which is why the number of bottled water distributors and sales points has grown. Although the increase in the number of bottled water producers can be attributed to market demand and technological modernization, the fact that consumers migrate from one brand to another is significant: it indicates that there are factors that affect the consumer's decision when choosing one particular brand of bottled water. The brand name of the water company is a fundamental indicator of the success of water processing companies. According to Keller (1993), the brand name is a very significant choice, because it sometimes captures the central theme or key association of a product in a very condensed and reasonable fashion. Some authors (Aaker 1991; Keller 1993) argue that the set of associations that consumers have for a brand is an important component of brand equity.
Such brand associations include both user imagery and psychological benefits. Many consumer researchers (Escalas & Bettman 2003; Setterlund & Niedenthal 1993) have found that people choose situations, including products and brands, by imagining the prototypical users for each item in the choice set and choosing the item that maximizes their similarity to a desired prototypical user. In addition to the brand name, another important factor for bottled water is taste and odor. The importance of the latter is well recognized for drinking water; many people prefer bottled water simply because of its taste and odor (Foote 2011). Bottled water, packed at a dedicated source or plant, may have a more consistent taste than tap water, which comes from surface sources and must travel through pipes to reach homes (EPA 2005). Therefore, the perception of water quality is an important factor when choosing which bottled water to drink. On the other hand, the influence of price on the customer's choice of bottled water brand is the key rational factor influencing brand choice. In fact, for some customers, price is the main factor when choosing a bottled water brand. For most, however, there is a direct trade-off between price and quality and, according to Mullarkey (2001), customers will pay a higher price if the brand is of sufficient quality. Some customers sense value if the price is low, whereas others perceive value if a balance exists between quality and price. In other words, the factors of perceived value can be weighted differently depending on the consumer. Building trust in customers through fair pricing has a positive long-term effect. Another important factor noted by several authors is the packaging of the product, with its different functionalities to ease use and to communicate with consumers.
There is no doubt about the increasing importance of packaging as a strategic tool to attract consumers' attention and shape their perception of product quality (Deliya & Parmar 2012). Packaging materials and shapes are also found to attract attention; in fact, pictures on packages are emphasized to attract attention, particularly when consumers are not very familiar with the brands (Vieira 2015). Authors like Silayoi and Speece (2007) argue that packaging innovations should be designed in such a way that the product can be handled without damaging the quality of the contents; furthermore, Deliya & Parmar (2012) add that packaging should also be designed to promote product sales. Innovative packaging may add value to the product if it meets a consumer need, such as portion control, recyclability, tamper proofing, child-proofing, easy-open, easy-store, easy-carry and non-breakability (Deliya & Parmar 2012). Advertising is also an important marketing element in the bottled water industry, and everyone should realize the role that advertising plays in modern life (Kotler 2012). In today's dynamic world, it is almost impossible for advertisers to deliver a message and information to buyers without the use of advertising. Certainly, this may be because of globalization and the accessibility of hundreds of channels to the viewers of this modern era. Today, people mostly rely on advertisements rather than other sources (Zhang 2015). Consumers are spending all that extra money on billions of gallons of bottled water because they have bought into the beverage industry's marketing magic that water in a plastic bottle is safer and healthier than tap water (Food & Water Watch 2007). According to Collins and Wright (2014), advertisements represent bottled water as being a healthy alternative to tap water.
The bottled water industry has become extremely profitable over the last decade; therefore, the consumer's experience with a product is a significant factor. Of all the aforementioned factors, the main ones continually highlighted and defined as significant by authors with regard to bottled water are:
• Interactive marketing;
• Advertising;
• Innovative packaging;
• Trust in the product;
• Perceived value;
• Price;
• Quality;
• Brand name;
• Taste and odor.
Since consumers must choose between many bottled water brands, they are always challenged to consider not one but several factors before choosing their brand of bottled water. For most of the aforementioned researchers, findings differed depending on the location where the study was conducted. Therefore, researchers cannot always identify and define universal factors that will influence all customers in choosing their bottled water; this occurs mainly because of the following circumstances:
• Differences in the environment and the circumstances where the water is processed and bottled;
• Differences in the attitudes and behavior of consumers where the bottled water is sold.
The fact that the influential factors in choosing a particular bottled water brand may differ depending on the environment, conditions and circumstances where the water is sold increases the importance of this study and is therefore another reason why each market deserves attention.

Research Methodology

The methodology of this research study is quantitative. The data were obtained through a survey of random consumers from Kosovo. The survey was conducted with the help of fifteen volunteer students from the Marketing Department at the Faculty of Economics, Hasan Prishtina University of Prishtina. The same students participated in a pilot study of the questionnaire, on the basis of which the questionnaire was refined and corrected.
Consumer participation in the interviews was completely voluntary, and in cases where respondents did not answer, additional respondents were approached. This study focuses on consumers in Kosovo, selecting the largest cities (included in this study) with a total population of 940,743 according to the Kosovo Agency of Statistics (ASK 2017). The number of questionnaires was calculated according to the Yamane formula (Yamane 1973), n = N / (1 + N·e²), where N is the population size and e the level of precision. Based on the sample size calculation, the result we obtained was n = 399.94 ≈ 400. Regarding sample size, Tabachnick and Fidell (2007) advise that 50 cases are very poor, 100 is poor, 200 is fair, 300 is good, 500 is very good and 1,000 or more is excellent. As a result, we collected a total of 500 questionnaires, 100 more than the calculated sample size, which is also in compliance with the advice of Tabachnick and Fidell. The survey was conducted over a one-month period from July to August 2017. The factor analysis is based on a correlation matrix using the principal component extraction method with a varimax rotation and a Kaiser-Meyer-Olkin adequacy test. The purpose of these analyses is to eliminate irrelevant factors or those that have less impact. The results are presented in tables for statistical analysis and interpretation. The analysis was conducted using SPSS version 20.0 for Windows.

Factor Analysis

The initial factors used in the questionnaire originated mainly from a review of the literature, discussions, interviews and consultations with experts in the field. Our initial factors derived from the perspectives of consumers and bottled water manufacturers. A principal component analysis approach was used to reduce a large set of variables to a smaller number of underlying factors, called principal components (or factors), which enables their later comparison and interpretation.
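The Yamane sample-size calculation above can be reproduced in a few lines. With the conventional precision level e = 0.05 (an assumption; the paper does not state e explicitly), the formula gives ≈ 399.8 for N = 940,743 — close to, but not exactly, the reported 399.94, which may reflect a slightly different precision or population figure.

```python
import math

def yamane_sample_size(population: int, error: float = 0.05) -> float:
    """Yamane (1973) finite-population sample size: n = N / (1 + N * e^2)."""
    return population / (1 + population * error ** 2)

# Population of the largest Kosovo cities as reported in the study.
n = yamane_sample_size(940_743, error=0.05)
print(round(n, 2))   # ~399.83 with e = 0.05; the paper reports 399.94
print(math.ceil(n))  # rounded up to 400 questionnaires
```

Collecting 500 responses, as the authors did, simply oversamples this minimum, which is consistent with the Tabachnick and Fidell guideline quoted above.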
The extracted factors were interpreted according to their correlations with the initial variables, and the analysis then enabled us to synthesize the information contained in those variables by identifying the most important ones. After running the analysis, to decide whether to keep all the variables in the model or eliminate any, we first studied the variables to identify those that were poorly correlated with all the other variables.

Correlation

The correlation coefficient takes values from -1.00 to 1.00 and measures whether there is a relationship between variables and at what level. For example, there is a strong positive relationship of 0.739 between the local water brand variable and the international brand variable, and a weak negative relationship of -0.139 between the local brand and alternative water sources (see Table 1). Since we identified several such variables in our model, before making any decision we also ran the Kaiser-Meyer-Olkin measure and Bartlett's test of sphericity, which tell us whether the overall correlation between the initial variables is strong enough. The KMO value is .746, which means that our sampling adequacy is medium. The p-value of Bartlett's test of sphericity is lower than 5%; therefore, we reject the null hypothesis and conclude that the correlation among the variables in our model is significant. The measure of how much of the variance of the observed variables is explained by a factor is known as the eigenvalue. Field (2009) explains that an eigenvalue equal to or greater than one describes more variance than a single observed variable. Exploratory factor analysis of our data leads to the identification of six factors whose eigenvalues are > 1 and which explain 64% of the variation of the twenty-four initial variables that we had at the beginning (see Table 3). Deciding how many factors need to be retained is known as extraction (Field 2009).
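The KMO measure and Bartlett's test reported above can be computed directly from the correlation matrix. The sketch below uses numpy/scipy on synthetic factor-structured data (not the study's survey responses, which are not available here); KMO compares squared correlations against squared partial correlations obtained from the inverse correlation matrix.

```python
import numpy as np
from scipy.stats import chi2

def kmo_bartlett(X: np.ndarray):
    """KMO sampling adequacy and Bartlett's test of sphericity for an n x p data matrix."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    # Partial (anti-image) correlations from the inverse correlation matrix.
    S = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(S), np.diag(S)))
    P = -S / d
    mask = ~np.eye(p, dtype=bool)
    r2, p2 = (R[mask] ** 2).sum(), (P[mask] ** 2).sum()
    kmo = r2 / (r2 + p2)
    # Bartlett: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, df = p(p - 1)/2
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    pval = chi2.sf(stat, p * (p - 1) / 2)
    return kmo, stat, pval

rng = np.random.default_rng(0)
# Synthetic data: two latent factors driving six observed items.
F = rng.normal(size=(500, 2))
X = F @ rng.normal(size=(2, 6)) + 0.5 * rng.normal(size=(500, 6))
kmo, stat, pval = kmo_bartlett(X)
print(round(kmo, 3), pval < 0.05)  # KMO in (0, 1); Bartlett rejects sphericity
```

On real survey data, a KMO near .746 with a Bartlett p-value below 5%, as in the study, licenses proceeding with the factor extraction.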
For the extraction process, we applied the so-called Kaiser criterion, according to which only the factors whose eigenvalues are higher than 1 were retained. We also reviewed the Evrard extraction criterion, according to which the component that corresponds to the turning point in the scree chart signifies the last variable that should be included and retained for the final solution (see Figure 1). The plot of each eigenvalue against the associated factor on a graph is known as a scree plot. In this way, our final solution contains the correlation coefficients of six extracted components (factors) out of the twenty-four initial variables, which are coded with numbers from one to six. Starting from the component matrix in Table 4, to obtain a clear factor structure and to ensure that there are no significant cross-loadings, as Field (2009) explains, we applied a varimax factor rotation, the results of which are shown in Table 5. As requested in the application, only coefficients with an absolute value greater than .55 are presented. The analysis resulted in a clear factor structure, and we came to the following conclusions:
1. The first factor is cross-linked with variables that have to do mainly with Quality, like the source of the water, packaging design, packaging volume and product advertisement.
2. The second factor is cross-linked with variables related to Marketing, like promotion, brand recognition and water composition.
3. The third factor is cross-linked with variables that have to do with Consumer Perception, like the local brand, the international brand, whether the water has healing properties, and preferences.
4. The fourth factor is cross-linked with variables that have to do with Price, like price and quantity.
5. The fifth factor is cross-linked with variables that have to do with Preference: availability in the shop, the diversification of the same brand and consumption on a daily basis.
6.
The sixth factor is cross-linked with variables that have to do with Practicality: handy packaging and the water being easy to carry around.

Discussion and Conclusion

This research intends to contribute to water treatment producers in Kosovo by providing new knowledge to bottled water producers positioning themselves in the new competitive environment. The survey findings were analyzed to find out which factors customers consider most when choosing a brand of bottled water. The results of the study were obtained through a paper-based survey with random customers in Kosovo. From the twenty-four variables extracted from the literature review, using exploratory factor analysis we identified the level of significance of each individual variable and thus obtained six important factors. People who buy bottled water are influenced by the brand, water quality and packaging. It is important that these factors be maintained in the drinking water business. Entrepreneurs in this business need to ensure that their companies address these factors well. First, they should offer consumers an attractive, consistent and well-known brand. Second, they should ensure that innovations in water treatment can improve product quality while maintaining the good taste and smell of the water. After that, they should also pay attention to the packaging of the product (the bottle in this case). Based on the above findings, people like bottles that are easy to carry, store and open. In conclusion, the findings corroborate the literature review, namely that factors such as Quality, Marketing, Consumer Perception, Price, Preference and Practicality are the main factors influencing customer decision-making when choosing a brand of bottled water.
Water bottle manufacturers, in designing their marketing plans and strategies, should focus more on factors like Quality, Marketing, Consumer Perception, Price, Preference and Practicality to generate profit and be successful in operating their businesses.
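The extraction-and-rotation pipeline described in the Factor Analysis section — correlation matrix, Kaiser criterion (eigenvalues > 1), then varimax rotation — can be sketched in Python. The data here are synthetic stand-ins for the survey responses, and numpy's eigendecomposition with a hand-rolled varimax (Kaiser's classic algorithm) stands in for SPSS.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    """Kaiser's varimax rotation of a p x k loading matrix (orthogonal rotation)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0))))
        R = u @ vt
        if s.sum() < var * (1 + tol):
            break
        var = s.sum()
    return loadings @ R

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
X[:, :4] += rng.normal(size=(500, 1))  # one latent factor drives items 0-3
X[:, 4:] += rng.normal(size=(500, 1))  # another drives items 4-7
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
k = int((eigvals > 1).sum())           # Kaiser criterion: keep eigenvalues > 1
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
rotated = varimax(loadings)
print(k)                               # two components retained for this data
```

Varimax leaves each variable's communality (row sum of squared loadings) unchanged; it only redistributes loadings across components to sharpen the simple structure reported in Table 5.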
Revealing mass-degenerate states in Higgs boson signals

The observed Higgs boson signals to date could be due to having two quasi-degenerate 125 GeV scalar states in Nature. This kind of scenario tallies well with the predictions of the Next-to-Minimal Supersymmetric Standard Model (NMSSM). We have analysed the phenomenological NMSSM Higgs boson couplings and derived a parameterization of the signal strengths within the two quasi-degenerate framework. With essentially two parameters, it is shown that the combined strengths of the two quasi-degenerate Higgs states in the leptonic (and b-quark) decay channels depart from the Standard Model values in the opposite direction to those in the vector boson channels. We identify experimental measurements for distinguishing a single from a double Higgs scenario. The proposed parameterization can be used for benchmarking studies towards establishing the status of quasi-degenerate Higgs scenarios.

Introduction

The Higgs boson discovery represents the beginning of a new epoch for fundamental physics. The precise measurement of its couplings is an important aim for particle physics which could possibly give a hint of physics beyond the Standard Model. With current data, the Higgs properties are compatible with the predictions of the Standard Model [1,2]. These same properties could also be due to the combination of effects arising from having two quasi-degenerate scalar states around 125 GeV. Such a tantalizing possibility has been predicted by new physics models such as the Next-to-Minimal Supersymmetric Standard Model (NMSSM).
The impact of the Higgs properties and precision measurements on the NMSSM scenarios with two quasi-degenerate scalars will contribute towards sharpening our understanding of the Higgs boson data and Nature: it could be that the data already contain some indications of new physics.

a e-mail: abdussalam@sbu.ac.ir
b e-mail: meugenia@ecfm.usac.edu.gt

The current state of findings from the Large Hadron Collider (LHC), i.e. the absence of direct signals of physics beyond the Standard Model (BSM), was forecast for the case of supersymmetry (SUSY) by pre-LHC global fits of models to data. For instance, as pointed out in [3-5], the large mass of the Higgs was already an indication of heavy supersymmetric mass spectra. Within such models, phenomenological studies can be done via two main approaches, namely the simplified-models approach [6,7] and the phenomenological model parameterization [5,8-11]. In this article, the latter approach will be used. Several groups have addressed mass-degenerate Higgs scenarios within the NMSSM. Refs. [12-14] have considered two quasi-degenerate Higgs states for the real and complex NMSSM, with a mass difference large enough to use the narrow width approximation. Ref. [15] has gone beyond the narrow width approximation and showed that interference effects can account for up to 40% of total cross sections. To be able to conclude that departures from the SM prediction are a consequence of the existence of more than one resonance, Refs. [16,17] have proposed statistical tests based on the analysis of a signal strength matrix, where all the channels are considered independent. A simplified version of their results agrees with what was proposed previously in [12]. In this article, we focus on the possibility of having two mass-degenerate states with different coupling structures that, when combined, mimic the features of a single Higgs.
The main aim is to derive a set of NMSSM parameters most relevant for quasi-degenerate Higgs studies vis-à-vis collider data. For this, the NMSSM doublet-singlet mixing structure [15,18,19] of the Higgs sector will be used. In Sect. 2 we review the production and decay ratios of the two lightest NMSSM CP-even Higgs states. We focus on their couplings to vector bosons and heavy quarks. In Sect. 3 we perform a scan of the parameters of the NMSSM while imposing that the two lightest CP-even Higgs states reproduce the mass of the standard Higgs measured at the LHC. We describe the allowed parameter space regions and relevant parameter correlations. In Sect. 4 the sample is then used together with analytical relations for the couplings and signal strengths to show that the quasi-degenerate Higgs properties can be explained approximately by using just two free parameters. We also show how the superposition of two quasi-degenerate Higgs states around 125 GeV could be in agreement with current experimental results. Finally, in Sect. 5 we analyse the sample based on signal strength ratios that can discriminate between the single and double resonance scenarios.

Higgs couplings to fermions and vector bosons

Right after the discovery of the Higgs, the search for signals of physics beyond the Standard Model in the production and decay of the Higgs became a priority. A possible excess in the γγ channel motivated a lot of work, some of it within the NMSSM framework [12,19-22]. In particular, King et al. [19] pointed out that the signal in the γγ channel could be enhanced for large singlet-doublet mixing. We will take these as a starting point for analysing two quasi-degenerate CP-even Higgs states. For the discussion in the following sections it is important to have a clear picture of how the widths, and therefore the Higgs branching ratios, depend on the singlet-doublet mixing.
Let us start by introducing some notation: we define ψ = (H_d, H_u, S) and φ = (h⁰, H⁰, S) in such a way that ⟨h⁰⟩ = v and ⟨H⁰⟩ = 0. The Higgs states h = (h_1, h_2, h_3) are related to ψ and φ through the mixing matrix U, whose elements are U_ij. We consider it convenient to use the elements of U to parameterise the couplings; for example, U_i1 and U_i2 are respectively the h⁰-component and the H⁰-component of h_i. In this way it is easier to make the comparison with the standard Higgs. Using the above notation, we write the tree-level Higgs couplings to vector bosons and heavy quarks as given in Eq. (4). In the H⁰ decoupling limit (i.e. U_12 = U_22 = 0) all the couplings are proportional to U_11, the h⁰-component of h_1. We are interested in the departure of the production and decay signals of h_1 in the Z_3-invariant NMSSM with respect to those of the standard Higgs. To weigh this we will use the signal strength. Because of the small width of the Higgs states, we assume they are produced on-shell; therefore the total cross sections are evaluated as the production cross section times the branching ratio. Now, in order to obtain the required properties for the Higgs states to reproduce the ATLAS and CMS measurements, we consider two possibilities: (I) h_1 or h_2 is the Higgs state detected at the LHC, and (II) h_1 and h_2 are the Higgs states measured at the LHC, where h_1 and h_2 are mass degenerate. We will show that these two possibilities correspond, respectively, to (I) small singlet-doublet mixing and (II) large singlet-doublet mixing. Let us analyse the case with small singlet-doublet mixing where h_1 is mainly h⁰, in other words U_11 ∼ 1. For this case it is a good approximation to consider that the width of h_1 is dominated by the decay rate of h_1 → bb̄, and therefore the variation of the width is controlled by the square of the Higgs coupling to bottom quarks, g_{h_1 bb}. Using the couplings described in Eq.
(4), the signal strengths of the vector-boson-fusion production of h_1 with subsequent decay to WW/ZZ and bb̄ are approximately as follows, where ĝ = g_NMSSM/g_SM, the couplings g_NMSSM are those in Eq. (4), and g_SM are the Standard Model (SM) couplings. The enhancement or suppression of the first signal strength depends on tan β U_12/U_11. As such, the absolute value and sign of this factor determine respectively the magnitude of the ratio between the signal strengths and whether there is an enhancement or suppression of μ_{VBF→h_1→WW/ZZ} with respect to μ_{VBF→h_1→bb}. A similar analysis holds when h_2 is considered the Higgs state measured at the LHC. Next, let us examine the case with large singlet-doublet mixing, where h_1 has a non-negligible S content. In this case the approximation U_11 ∼ 1 is no longer valid. The assumption that the width of h_1 is almost totally controlled by h_1 → bb̄ is no longer a good approximation either. The size of tan β U_12/U_11 may take very large values, and therefore the branching ratios could differ significantly from those of the standard Higgs. So, we would like to have a simple expression for the widths appropriate for all values of U_i1. One can write the width of h_i in terms of the standard Higgs decay rates, where h_i → SM rest represents the rest of the decay channels. For large singlet-doublet mixing the widths of h_1 and h_2 could be much smaller than Γ_SM, producing large departures of the branching ratios with respect to those of the standard Higgs, unless the widths and the decay rates of each Higgs state change in the same proportion. From now on we will use Eq. (10) as the enhancement (suppression) rate of the width with respect to the SM value.
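The narrow-width structure invoked above (production cross section times branching ratio, each normalised to the SM) can be written out generically. This is a reconstruction sketch in standard signal-strength notation, not a reproduction of the paper's numbered equations, whose exact form was lost in extraction:

```latex
% Generic narrow-width-approximation signal strength for VBF production of h_i
% followed by decay to a final state XX (a sketch; normalisations may differ).
\mu_{\mathrm{VBF}\to h_i\to XX}
  = \frac{\sigma_{\mathrm{VBF}}(h_i)\,\mathrm{BR}(h_i\to XX)}
         {\sigma_{\mathrm{VBF}}^{\mathrm{SM}}\,\mathrm{BR}^{\mathrm{SM}}(h\to XX)}
  \simeq \hat g_{h_i VV}^{2}\,\frac{\hat g_{h_i XX}^{2}}{\hat\Gamma_{h_i}},
\qquad
\hat g \equiv \frac{g_{\mathrm{NMSSM}}}{g_{\mathrm{SM}}},\quad
\hat\Gamma_{h_i} \equiv \frac{\Gamma_{h_i}}{\Gamma_{\mathrm{SM}}}.
```

In this form it is transparent why a modified total width, driven mainly by the bb̄ coupling, pushes the WW/ZZ and bb̄ channels of the same state in opposite directions.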
The analytic expressions for the signal strengths for vector-boson-fusion production and decay to WW/ZZ and bb̄ can be written in the same way for h_2, e.g. μ^an_{VBF→h_2→bb}. Note that for a large singlet-doublet mixing the relative size of tan β U_12/U_11 has a larger range of variation than in the case of small singlet-doublet mixing; as a consequence there might be larger enhancements (suppressions) of the signals. Moreover, since the H⁰-component of the Higgs states is the one responsible for large variations of the branching ratios, it is interesting to consider the H⁰ decoupling limit (U_12 ≈ 0 and U_22 ≈ 0). Hence, for large singlet-doublet mixing it is not possible to reproduce the experimental data with a single Higgs state. But if h_1 and h_2 are mass quasi-degenerate, assumed to be unresolved from each other by experiments, the superposition of the two states could show up in signals as a single standard Higgs. Notice that the last (approximate) equalities require U_31 ≈ 0 to fulfil the unitarity condition for U. It is interesting to compare the departure of the signal strengths for different channels of the same Higgs state. As described earlier, the ratio between signal strengths depends on tan β U_12/U_11 for h_1 and on tan β U_22/U_21 for h_2. As such, the departure of the global signal strength will depend on the relation between U_12 and U_22. In the following sections we analyse the scenario with large singlet-doublet mixing. We will assume that the Higgs signal measured by ATLAS and CMS is a superposition of the production and decay of two Higgs states. To get the global enhancement (suppression) we will sum the contributions of the two Higgs states. Notice that for this approximation to be valid the widths should be much smaller than the mass difference between h_2 and h_1.
The phenomenological NMSSM parameters scan

Let us consider the case where the Higgs signal measured by ATLAS and CMS is a superposition of the production and decay of h_1 and h_2, meaning that the Higgs states are close enough not to be resolved by the experiments, but with a large enough separation to have negligible interference effects. To study the region of the parameter space of the NMSSM where this condition is fulfilled, we perform a parameter scan as done in [23].

The phenomenological NMSSM (pNMSSM)

We shall consider an R-parity-conserving NMSSM with the usual superpotential and the corresponding soft SUSY-breaking terms. A tilde over the superfield symbol represents the scalar component; an asterisk over a superfield, as in ũ*_R, represents the scalar component of Ū. The SU(2)_L fundamental representation indices are denoted by a, b = 1, 2, and the generation indices by i, j = 1, 2, 3. ε¹² = ε₁₂ = 1 is a totally antisymmetric tensor. In an approach similar to that of the pMSSM [5,8-10], the pNMSSM parameters are defined at the weak scale, with the non-Higgs sector set as follows. Here, M_{1,2,3} and m_f̃ are respectively the gaugino and sfermion mass parameters. A_{t,b,τ} represent the trilinear scalar couplings. With electroweak symmetry breaking, an effective μ-term, μ_eff = λ v_s, develops. The μ-term, the ratio of the MSSM-like Higgs doublets' vevs, tan β = ⟨H_2⟩/⟨H_1⟩, and the Z-boson mass m_Z lead to the tree-level Higgs sector parameters. Next, including four SM nuisance parameters, namely the top and bottom quark masses m_{t,b}, m_Z and the strong coupling constant α_s, completes the pNMSSM parameter set.

3.2 The scanning procedure

tan β is allowed between 2 and 60. To minimise fine-tuning, we subjectively let μ_eff = λ v_s vary within 100 to 400 GeV, not too far from the Z-boson mass. The remaining Higgs-sector parameters were set within the ranges shown in Table 1.
Constraints on the parameters of the Higgs sector

From the pNMSSM parameter scan, we use a sample with two quasi-degenerate lightest CP-even Higgs bosons. It was required that h 1 and h 2 have masses equal to 125 ± 3 GeV, where the ±3 GeV accounts for the theoretical errors associated with the values of the masses computed by NMSSMtools. In addition it was required that the mass difference satisfy m h 2 − m h 1 < 3 GeV. 1 We focus on the regions of the Higgs-sector parameters in order to study the correlations within those parameters and to relate them to other parameters which are directly connected with the signals measured at the LHC, such as the CP-even Higgs mixing matrices.

1 The CMS resolutions for Higgs bosons are channel dependent and typically around 2.5 to 4 GeV [74,75] for bosonic channels. As such, m h2 − m h1 < 3 GeV can be considered as a mass-degeneracy condition for which the two Higgs states cannot be resolved by CMS run-2.

It is useful to have an explicit form for the Higgs mixing matrix U. We parameterise it using three angles θ 13 , θ 12 , and θ 23 ; here c ij = cos θ ij and s ij = sin θ ij . Given the mixing matrix, obtained numerically by the SUSY spectrum calculator NMSSMtools, the mixing angles can be extracted. Now, considering that we want to reproduce a standard Higgs signal, we determine the expected ranges for the mixing angles. In order to get the ratio between μ VBF→h 1,2 →WW/ZZ and μ VBF→h 1,2 →bb close to one, either the value of μ VBF→h i →WW/ZZ /μ VBF→h i →bb for each Higgs state has to be close to one, or a fine cancellation should take place. In this work we focus on the first case. 2 From Eqs. (12)-(15) one can see that this condition is possible when U 12 and U 22 are very small, and as a result s 12 and s 23 should also be very small according to Eq. (29). On the other hand, Eq. (16) implies that the superposition of h 1 and h 2 can reproduce the standard Higgs signal for U 31 ∼ 0 (i.e. large values of m H 0 ).
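Numerically, a three-angle parameterisation of this kind and its inversion can be sketched as below. The rotation ordering R23 · R13 · R12 is an illustrative assumption, since the paper's explicit matrix is not reproduced in the extracted text, but any such product of plane rotations gives an orthogonal U whose angles can be recovered from its entries:

```python
import numpy as np

def rotation(i, j, theta):
    """3x3 rotation acting in the (i, j) plane."""
    R = np.eye(3)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = c
    R[j, j] = c
    R[i, j] = s
    R[j, i] = -s
    return R

def mixing_matrix(th13, th12, th23):
    """Orthogonal CP-even mixing matrix built from three plane rotations.
    The ordering R23 @ R13 @ R12 is an assumption for illustration."""
    return rotation(1, 2, th23) @ rotation(0, 2, th13) @ rotation(0, 1, th12)

def extract_angles(U):
    """Recover the three angles from the matrix entries for this ordering."""
    th13 = np.arcsin(U[0, 2])
    th12 = np.arctan2(U[0, 1], U[0, 0])
    th23 = np.arctan2(U[1, 2], U[2, 2])
    return th13, th12, th23
```

The round trip mixing_matrix followed by extract_angles returns the input angles, which is the kind of extraction applied to the NMSSMtools output.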
For this to happen either θ 12 has to be very small or θ 23 has to be close to ±π/2. In summary, θ 12 ∼ 0 and θ 23 ∼ 0 will guarantee that we are working in the regime where the superposition of the two Higgs states agrees with experimental measurements. In the limit of small θ 12 and θ 23 we can expand the mixing matrix, neglecting O(θ 2 ) terms; for the results of our scan this approximation works with a 0.5% error. We have been able to constrain the parameters of the mixing matrix by requiring conditions that give a standard-like Higgs signal. These conditions will affect the masses or couplings of the heaviest and pseudoscalar Higgs bosons. To see this, it is useful to relate the mixing angles θ 13 , θ 23 and θ 12 to the fundamental parameters of the Higgs sector. Using Eq. (29) we relate the terms of the mass matrix to the physical masses by introducing two new parameters: m 2 h , the central value of the squared masses of the two lightest CP-even Higgs states, and δm 2 h , half of the squared mass difference. To simplify the expressions obtained from Eq. (33) we factorise c 12 and c 23 to write U in terms of t kl ≡ tan θ kl , with kl = 12, 23 and mn = 12, 23, and use small-angle approximations. Finally, we focus on the relations in terms of the mass-matrix elements M 22 and M 23 , since M tree 22 and M tree 23 reproduce pretty well the values computed by NMSSMtools, and because we wish to get simple relations between the Higgs-sector parameters, masses and mixing angles. We have checked numerically that for the rest of the mass-matrix elements the tree-level expressions are not precise enough. We can further simplify Eq. (35), where in the last equation we have also used δm 2 h cos(2θ) ≪ m 2 h , together with the approximation of large tan β and large M A from reference [115]. We have checked numerically that m h 3 ≈ M A is a good approximation for the pNMSSM points considered. Now, let us take M 23 from reference [115], 3 where v s = √2 μ/λ and ξ = μ/M A . The left panel of Fig.
1 shows on the x-axis the value of θ 23 computed by NMSSMtools and on the y-axis the analytical approximation described in Eq. (41). As one can see in the figure, there is good agreement between the analytical expression and the numerical value (green points), and it is clear that the main contribution to θ 23 comes from the first term of Eq. (41) (blue points). The right panel of Fig. 1 shows the relation between θ 23 and m h 3 for constant values of λ. There is a trend: larger values of |θ 23 | correspond to smaller values of m h 3 , except for very small values of |θ 23 |, where the two parameters seem to be uncorrelated. Still, Eq. (41) shows that the value of tan θ 23 is not directly related to the scale of the heaviest Higgs; instead it is related to the values of λ, μ and tan β. 4 Although the Higgs boson masses get important contributions from loop corrections, it is possible to get some information from the tree-level expressions for m h 1 and m h 2 for large values of tan β and M 2 A , where v s = √2 μ/λ (see Eq. (32) of [115]). In order to get a constraint on the initial parameters from the condition of a small mass difference between the two lightest Higgs states (recall that we work in the decoupling limit of H 0 ), we require a small mass difference between the tree-level masses shown in Eq. (42). But, since the tree-level expressions do not precisely reproduce the masses of the Higgs states, we request the mass-squared difference at tree level to be smaller than M 2 Z , meaning that both terms inside the square root should be smaller than M 4 Z . Let us focus on the first term: for A κ ≫ M Z there should be a correlation between A κ and κv s such that a cancellation leads to an order-M 2 Z value. Note that the average of the tree-level squared masses also requires this cancellation to occur in order to get the masses of the Higgs states in the desired range.
For |A κ | ≫ M Z we expect the approximate relation of Eq. (43). Figure 2 shows the relation between A κ and κv s ; as manifested in the figure, for |A κ | ≳ 600 GeV the approximation of Eq. (43) works within an error smaller than 5%. Furthermore, using Eq. (43) it is possible to simplify other parameters relevant in the Higgs sector: Eq. (30) of [115] gives a simplified expression for the mass of the light pseudoscalar, Eq. (44). Putting Eq. (43) into Eq. (44) we can write the mass of the lightest pseudoscalar in terms of κ and v s , Eq. (45). Figure 3 shows the comparison between Eq. (45) and the value computed by NMSSMtools. It can be seen that for m A 1 > 500 GeV, Eq. (45) is a pretty good representation of the light pseudoscalar mass. For completeness, it is worth mentioning that the second term inside the square root of Eq. (42) is suppressed by a factor v −2 s ; as such, we do not expect to get any good correlation of parameters from there. (Figure 3 caption: comparison between the mass of the lightest pseudoscalar computed by NMSSMtools and the approximate analytical value described in Eq. (45); the colour code shows the value of A κ , which, as described in Eq. (43), is related to the value of κv s .) All the information presented above is useful for determining an optimal range of parameters in order to perform a specialised parameter scan dedicated to studying the mass-degenerate Higgs region(s).

The two lightest CP-even Higgses at the LHC

In this section we use the results of the scan and the analytical relations for the couplings and signal strengths to study the parameter space where the two lightest CP-even Higgs states mimic the SM Higgs signals. First, we have to verify the validity of the analytic expressions for the signal strengths by comparing these expressions with the numerical values computed by NMSSMtools.
Figure 4 shows the comparison between the signal strengths computed by NMSSMtools, μ num , and the analytic approximations shown in Eqs. (12)-(15), μ an , for VBF→ h i → W W/Z Z (left panel) and VBF→ h i → bb (right panel). 5 (Footnote 5: to perform this comparison we flip the order of the mass eigenstates computed by NMSSMtools, in such a way that h 1 has the largest component of h 0 and is not necessarily the lightest mass eigenstate; the need for this transformation is due to the convention used by NMSSMtools for the Higgs states.) From the figure we see that there is good agreement between the analytical approximation and the numerical computation. Now, let us identify the relevant parameters that produce deviations from the experimental measurements. We write the couplings, widths and signal strengths in terms of the mixing angles for small values of θ 12 and θ 23 (see Eqs. (4) and (32)); using Eqs. (10) and (46), Eqs. (12)-(13) can then be written in terms of the mixing angles as Eqs. (51)-(54). From Eqs. (51)-(54) we see that the signal strengths depend on four parameters: θ 13 , θ 23 , θ 12 and tan β. However, in the limit where θ 12 tan β ≪ θ 13 , which is the case for the pNMSSM posterior sample analysed, the number of parameters reduces to two. From Eqs. (51)-(54), one can see that the dependence on θ 12 always appears as a factor in the expression cos θ 13 (1 + θ 12 tan β) or sin θ 13 (1 + θ 12 tan β); therefore for θ 12 tan β ≪ 1 the contribution of θ 12 is negligible. To understand the dependence of the signal strengths with respect to θ 13 and θ 23 tan β, let us start by analysing the relation between the signal strengths for a given Higgs state. The top row of Fig. 5 shows the correlations between μ VBF→h i →WW/ZZ and μ VBF→h i →bb for h 1 (top left) and h 2 (top right); for h 1 we can see that the difference between the bb and W W/Z Z channel signal strengths is not small.
In fact, this could be taken to imply that it is not possible to reproduce the experimental results with such differences. However, looking at the right panel of the figure and using the colour code to select regions with constant values of θ 13 , it is possible to compare the rates of the signal strengths for both Higgs bosons. The plots show that the enhancement (suppression) of one channel of h 1 is more or less compensated by a suppression (enhancement) in the same channel of h 2 . The analytic expressions for the widths of the Higgs states, Eqs. (49) and (50), show that the term proportional to θ 23 tan β has a minus sign in the width of h 1 and a plus sign in the width of h 2 , decreasing (increasing) the decay rate of h 1 → bb while increasing (decreasing) the decay rate of h 2 → bb as |θ 23 | increases. The bottom row of Fig. 5 shows the widths of h 1 and h 2 as a function of θ 13 . Let us now analyse the global signal strengths. Figure 6 shows the sum of the signal strengths for vector-boson fusion production and decay to W W/Z Z (left panel) and to bb (right panel); these factors represent the global enhancement or suppression of the superposition of the two signals with respect to the signal of the standard Higgs. It is important to keep in mind that to get the global signal strengths we sum the contributions of the individual signal strengths, which is allowed since we require the mass difference of the two lightest CP-even Higgs states to be small enough not to be resolved by current experiments, but much larger than the widths of the particles, so that interference effects can be neglected. 6 There are several points from Fig. 6 we would like to comment on. The departure of the signal strength increases with the size of θ 23 tan β, as in the case of the individual signal strengths.
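The compensation pattern between the two states can be illustrated with a toy computation. The reduced couplings below are a schematic form consistent with the behaviour described in the text (VV couplings split by θ13, bb couplings shifted in opposite directions by θ23 tanβ); they are not the paper's exact Eqs. (51)-(54), and the width model keeps only the bb and VV channels with an approximate SM branching ratio BR(bb) ≈ 0.58:

```python
import numpy as np

BR_BB_SM = 0.58  # approximate SM Higgs BR(h -> bb) at 125 GeV

def reduced_couplings(th13, x23):
    """Reduced couplings of (h1, h2) for small theta12/theta23,
    with x23 = theta23 * tan(beta). Schematic form only."""
    c, s = np.cos(th13), np.sin(th13)
    g_vv = np.array([c, s])                      # couplings to WW/ZZ
    g_bb = np.array([c - s * x23, s + c * x23])  # couplings to bb
    return g_vv, g_bb

def signal_strengths(th13, x23):
    """mu(VBF -> h_i -> VV) and mu(VBF -> h_i -> bb) for i = 1, 2,
    normalising production and width to the SM values."""
    g_vv, g_bb = reduced_couplings(th13, x23)
    width = BR_BB_SM * g_bb**2 + (1 - BR_BB_SM) * g_vv**2  # Gamma_i/Gamma_SM
    mu_vv = g_vv**2 * g_vv**2 / width
    mu_bb = g_vv**2 * g_bb**2 / width
    return mu_vv, mu_bb
```

In this toy model, for θ23 tanβ = 0 the summed strengths are exactly SM-like (both totals equal one), while a non-zero θ23 tanβ pushes μ(h1 → bb) and μ(h2 → bb) in opposite directions, mirroring the compensation seen in Fig. 5.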
The modification of the signal strengths for h 1 is "compensated" by the modification of the signal strengths for h 2 , and therefore the total effect is smaller than for the individual rates, but still not negligible. Regarding the relation between the two global signal strengths, there are two regions that seem to be in full agreement with the SM (the signal strength is 1): the region where θ 23 ≈ 0 and the region where θ 13 ≈ 0, as we expected. There is a third region, where θ 13 is between 0.2 and 0.4, in which for a very precise value of θ 23 the signal strength is very close to one. On the other hand, for small values of θ 23 , say θ 23 tan β ≳ −0.25, the deviation of the signal strength from one is very small, and very precise measurements will be necessary to resolve it. There is one last comment about Figs. 5 and 6: we are able to fully describe the rates and the widths of h 1 and h 2 in terms of two parameters, θ 13 and θ 23 tan β, instead of three, indicating that θ 12 tan β ≪ 1 for the set of successful scanned points. So far we have focused our study on two channels, VBF→ h i → W W/Z Z and VBF→ h i → bb, but the current measurements of the Higgs couplings constrain several more channels. Let us comment on the most relevant production and decay modes: (a) Production processes like gluon-gluon fusion (GGF) and Higgs production in association with top quarks (ttH) are very important. To analyse these, let us go back to Eq. (4), which describes the couplings of the Higgs states to top quarks. Comparing ĝ h i tt with ĝ h i bb , we see that the contribution from θ 23 is cot 2 β times smaller for ĝ h i tt than for ĝ h i bb ; therefore we expect the contribution of θ 23 to be very tiny, and the production processes GGF and ttH to behave as vector-boson fusion for given values of θ 13 and θ 23 tan β.
(b) The Higgs decay to photons was one of the most important channels for the discovery of a new particle; the main contribution to the decay of the standard Higgs to photons is through a loop of W bosons. We expect the decay of the Higgs states to photons, relative to the standard Higgs value, to scale as the decay to WW/ZZ. (c) The decay of the Higgs states to taus, relative to the standard Higgs value, will scale as the decay of the Higgs states to bottom quarks. To complete the description of the signals of the two lightest CP-even Higgs states, in Fig. 7 we show the signal strengths for GGF→ h 1,2 → W W/Z Z (left panel) and GGF→ h 1,2 → γ γ (right panel). As we expected, the gluon-gluon fusion production of the Higgses with decay to WW/ZZ is pretty similar to the vector-boson fusion production; on the other hand, the decay to photons shows a larger departure. So far we have seen that the leading behaviour of the signal strengths is given by θ 13 and θ 23 tan β. In the limit where θ 12 ≈ 0, we could write a one-to-one function to determine one of these parameters in terms of the other. An approximate relation between θ 13 and θ 23 tan β might be useful to study the region around 0.2 ≲ θ 13 ≲ 0.4, where it seems possible to mimic the signal of the standard Higgs and make it indistinguishable even for very precise experimental measurements. To determine the relation between the parameters we choose to solve Eq. (55). By taking μ VBF→h 1 →WW/ZZ and μ VBF→h 2 →WW/ZZ from Eqs. (51) and (52), neglecting the terms proportional to θ 12 , and rewriting sin θ 13 and cos θ 13 in terms of sin(2 θ 13 ) and cos(2 θ 13 ), we can simplify Eq. (55) to get a quadratic equation in cot(2 θ 13 ). So, there are two solutions for θ 13 , Eq. (56), where B̄R bb = 1 − BR bb . For δ = 0 the solution simplifies. With Eq. (56) we are able to determine θ 13 in terms of θ 23 tan β and δ. Figure 8 shows the comparison between the semi-analytical relation in Eq.
(56) and the numerical results from our scans. Although it is not a precise relation, Eq. (56) gives a very good approximation to the correlation between θ 13 and θ 23 for a fixed value of δ.

Searching for mass-degenerate Higgses

As commented in references [12,16], there are ways to test the existence of mass-degenerate states. The determinant of a square matrix of signal strengths can give information about the number of resonances: if the determinant is equal to zero, then a single Higgs resonance is enough to reproduce the signal strengths. For simplicity we use a compact notation, μ ij = μ i→j , where i represents the production mode and j the decay channel. Considering two square matrices, the condition for the determinant to be non-zero can be written in terms of the ratios in Eq. (59). To check whether it is possible to establish the existence of two resonances in the NMSSM, we consider the set of pNMSSM posterior samples described in Sect. 3 and select points which are within one and three sigma of the particular signal strengths listed in Table 3. Figure 9 shows the comparison between the ratios of the signal strengths in Eq. (59). The upper (lower) panel shows all the points that are within three (one) sigma of the values listed in Table 3 (note that the one-sigma range does not include the SM value). The left panel of Fig. 9 shows that the ratios between the W W and γ γ signal strengths are basically the same, meaning that the determinant of R A is approximately zero and therefore in agreement with the single-resonance hypothesis. On the other hand, the ratios between the τ τ and γ γ signal strengths are slightly separated from the dotted line: the determinant of R B is different from zero. In general we would expect that, if there is more than one Higgs state, the ratio between two signal strengths with the same production process and different decay products is not going to be equal to one.
However, we find that this ratio is almost the same for gluon-gluon fusion and for vector-boson fusion production processes, which indicates that both production cross-sections are very similar for a given Higgs state. Therefore, it doesn't seem possible to distinguish between single and double resonances from those measurements for this set of scanned points. Is there any observable that could be used to distinguish between single- and double-resonance signals? From the discussion of the previous sections we have learned that μ VBF,bb behaves oppositely to the other signal strengths we have considered; therefore we may suspect that comparing Higgs production in association with bottom quarks to production associated with vector bosons would give a larger departure from the SM signal than comparing vector-boson fusion with gluon-gluon fusion. Let us consider the matrices R C and R D , where BBF represents Higgs production in association with bottom quarks. Obtaining a determinant different from zero requires that the ratios of the signal strengths differ. To compute the signal strength of Higgs production associated with bottom quarks we use the reduced couplings to bottom quarks computed by NMSSMtools. Figure 10 shows the comparison of these ratios for points that fulfill the experimental signal strengths listed in Table 3 within three sigma. The figure shows that the determinants of R C and R D are different from zero for a large part of the points, and therefore give a clear signature for the existence of more than one Higgs resonance. It may be surprising to see such a large deviation from zero in the determinants of R C and R D and not in the determinants of R A and R B ; the main reason lies in the difference between the production processes.
Although it does not seem straightforward, from the analytic expressions of the full signal strengths, to single out these differences and relate them directly to the values of the determinants, one can always compare the production cross-sections for each Higgs state separately. If they are approximately the same, then the ratios shown in Figs. 9 and 10 will be the same and the determinant of the matrix R will be approximately equal to zero. For simplicity let us consider that the gluon-gluon fusion cross-section is dominated by the coupling of the Higgs to top quarks; this consideration allows us to gain more insight into the source of the discrepancy between the determinants. Equations (4) show that ĝ tth i has an extra factor −U i2 / tan β with respect to the coupling to vector bosons; using the approximation of small θ 23 and negligible θ 12 , the extra factor simplifies to θ 23 / tan β times cos θ 13 (sin θ 13 ) for h 1 (h 2 ), a factor suppressed by tan β. Therefore, unless tan β is close to one, or θ 23 is large, we would expect very similar signal strengths for gluon-gluon fusion and vector-boson fusion for each Higgs state; in consequence the total signal strengths for the same final state will also be very similar, and the determinants of R A and R B will be close to zero. On the contrary, if instead of the gluon-gluon fusion production process we consider Higgs production in association with bottom quarks, Eq. (4) shows that ĝ bbh i has an extra factor U i2 tan β with respect to the vector-boson coupling, a factor tan 2 β larger than in the case of ĝ tth i . For non-negligible values of θ 23 there will be a significant departure of the signal strength of Higgs production associated with bottom quarks with respect to vector-boson fusion for the same final state. When computing the ratio of the total signal strengths for different final states we would therefore expect a larger deviation; in consequence the determinants of R C and R D will be different from zero.
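The rank argument can be checked with a toy signal-strength matrix. Below, each unresolved state contributes an outer product of production factors (VBF scaling as ĝ_VV², bbH as ĝ_bb²) and decay factors, so a single state gives a rank-one matrix with vanishing determinant, while two mixed states generically do not. The couplings and width model are schematic, not NMSSMtools output:

```python
import numpy as np

def strength_matrix(states):
    """mu[i, j] summed over unresolved Higgs states.
    Each state is (g_vv, g_bb): reduced couplings to vector bosons / b quarks.
    Production rows: VBF (~ g_vv^2), bbH (~ g_bb^2);
    decay columns: VV, bb. Widths keep only the bb and VV channels."""
    br_bb = 0.58  # approximate SM BR(h -> bb)
    mu = np.zeros((2, 2))
    for g_vv, g_bb in states:
        width = br_bb * g_bb**2 + (1 - br_bb) * g_vv**2  # Gamma_i/Gamma_SM
        prod = np.array([g_vv**2, g_bb**2])               # VBF, bbH
        dec = np.array([g_vv**2, g_bb**2]) / width        # -> VV, -> bb
        mu += np.outer(prod, dec)                         # rank-1 per state
    return mu

# One SM-like state: rank-1 matrix, determinant vanishes
single = strength_matrix([(1.0, 1.0)])
# Two mixed states with schematic couplings: determinant is non-zero
two = strength_matrix([(0.955, 0.867), (0.296, 0.582)])
```

With the two-state example the determinant comes out clearly non-zero, which is the kind of signature exploited through R C and R D .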
These arguments describe very well a set of points with medium to large values of tan β. For small values of tan β and large enough values of θ 23 , the determinants of R A and R B will also show a departure from unity. Figure 11 shows the values of θ 12 , θ 23 , tan β and m h 3 for the pNMSSM posterior sample with m h 3 larger than 1 TeV and values of tan β larger than 10. As we expected, the value of θ 23 / tan β is tiny, which explains why the determinants of R A and R B are very close to zero. The large values of tan β also explain the large departure from one of the determinants of R C and R D . Since our scan focused on the region of the parameter space with medium to large values of tan β, to complete our analysis we consider a new set of points with smaller values of tan β relative to the first sample set. We perform another small scan giving more preference to the region of small tan β and small m h 3 , covering tan β in the range [2.5, 21] and m h 3 in the range [435 GeV, 2 TeV]; the results are summarized in Fig. 12. The top row of the figure shows the values of θ 12 and θ 23 with respect to m h 3 and tan β. To analyse these two plots in comparison with Fig. 11, we have used the same ranges for the variables plotted in the colour bar to make the comparison easier. First let us focus on the top-left plot of Fig. 12. Note that the range of values for |θ 23 tan β| is almost the same for both samples, suggesting that this parameter is directly constrained by the experimental measurements of the Higgs couplings. Smaller values of m h 3 are correlated with larger values of θ 12 ; still, |θ 23 | is one order of magnitude larger than |θ 12 |, meaning that the approximation θ 12 ∼ 0 is still valid. The top-right plots of Figs. 11 and 12 compare the values of θ 23 tan β with θ 23 / tan β, which illustrate the contribution of θ 23 to Higgs production associated with bottom quarks (x-axis) and gluon-gluon fusion production (y-axis). The bottom row of Fig.
12 shows the values of R B and R D for the new set of scanned points. Here, points with θ 23 tan β ∼ 0.7 correspond to |θ 23 / tan β| up to 0.030, which is around fifty times larger than in our first scan. This increment is reflected in the value of R B , which involves the ratio plotted in the left panel of the figure. Previous studies, like [12-14], pointed out that the determinants of R A and R B would be useful to determine the existence of more than one resonance. Our analysis indicates that this is indeed the case, but mostly for pNMSSM regions with relatively smaller tan β values and lighter h 3 . The bottom-right plot of Fig. 12 shows the relevant ratios needed to compute the determinant of R D . There is a discrepancy in the region with |θ 23 tan β| larger than ∼ 0.65. According to the top-row plots of Fig. 12, points with |θ 23 tan β| > 0.7 correspond to m h 3 smaller than 1 TeV and tan β smaller than 10. Getting relatively larger values of |θ 23 tan β| in the new set of scanned points compared to the first pNMSSM posterior sample is in accord with the fact that |θ 23 | increases as m h 3 decreases for a fixed value of λ (as discussed in Sect. 3.3). So in the new scan, by exploring m h 3 < 1 TeV, we expand the range of exploration for |θ 23 tan β|.

Conclusions

We studied the phenomenology of two mass-degenerate CP-even Higgs bosons in the NMSSM using a sample set from the parameter scan of the pNMSSM. In this scenario it is possible to reproduce the experimental signal measured by ATLAS and CMS. We parameterised the Higgs boson signal strengths using three angles and found that it is possible to write approximate expressions in terms of two parameters, θ 23 tan β and θ 13 , where θ 23 is the mixing between the singlet and the heaviest neutral Higgs of the Higgs doublet, H 0 , and θ 13 is the mixing between the lightest neutral scalar of the Higgs doublet and the singlet.
We have focused our analysis on observables that could help to determine the existence of more than one Higgs state, leading to the following conclusions.
• To obtain two mass-degenerate CP-even Higgs bosons, tuning is required, associated with large values of A κ , λ, κ and μ. An approximate relation between those parameters can be obtained from the tree-level mass relations; although this relation simplifies the expression for the mass of the lightest pseudoscalar, it does not point to specific mass relations.
• An approximate expression for θ 23 can be written in terms of μ/λ and tan β. The allowed range for |θ 23 tan β| is between 0.0 and 0.7. Greater values can be obtained if m h 3 ≲ 1 TeV and tan β ≲ 8 are imposed. There are no direct constraints on the mass spectra from specific values of θ 23 , but it is possible to reproduce various values of m h 3 for a fixed value of θ 23 and different values of λ.
• Analysing the Higgs boson couplings to fermions and vector bosons, and the signal strengths, we found that the signal of the superposition of the Higgs bosons decaying to leptons (and bottom quarks) departs from the SM signal in the opposite direction with respect to vector-boson final states; the departure is proportional to |θ 23 tan β|.
• With respect to expectations from previous studies, it was surprising to find that for medium to large values of tan β it is rather difficult to distinguish the two degenerate Higgs bosons from the single-Higgs scenario when the matrix of signal strengths is built from vector-boson and gluon-gluon fusion Higgs production (with the Higgs decaying to vector bosons).
• By including Higgs production in association with bottom quarks in the square matrix of signal strengths, we found that the matrix determinant departs significantly from the single-resonance value. Therefore the process pp → bbh can be an important channel in searches for multiple Higgs states degenerate around 125 GeV.
Acknowledgements Thanks to Alberto Casas for very useful comments and discussions, and to Fernando Quevedo for encouragement with the NMSSM project. Maria Cabrera thanks ICTP and the CERN Theory Division for hosting and supporting her as a short-term visitor. Data Availability Statement This manuscript has no associated data or the data will not be deposited. [Authors' comment: The scanned points used in this analysis can be obtained using the free source codes as described in Sect. 3.2.] Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Reversible Brain Abnormalities in People Without Signs of Mountain Sickness During High-Altitude Exposure

A large proportion of lowlanders ascending to high altitude (HA) show no signs of mountain sickness. Whether their brains have indeed suffered from the HA environment, and whether sequelae persist after return to lowland, remain unknown. Thirty-one sea-level college students, who had a 30-day teaching program on the Qinghai-Tibet plateau, underwent MRI scans before, during, and two months after HA exposure. Brain volume, cortical structures, and white matter microstructure were measured. In addition, serum neuron-specific enolase (NSE), C-reactive protein, and interleukin-6 as well as neuropsychiatric behaviors were tested. After 30-day HA exposure, the gray and white matter volumes and cortical surface areas significantly increased, with cortical thicknesses and curvatures changed in widespread regions; anisotropy decreased with diffusivities increased at multiple sites of white matter tracts. Two months after HA exposure, cortical measurements returned to basal levels. However, increased anisotropy with decreased diffusivities was observed. Behaviors and serum inflammatory factors did not change significantly across the three time-point tests. NSE significantly decreased during HA exposure but increased after it. Results suggest brain swelling occurred in people without neurological signs at HA, but no negative sequelae in cortical structures and neuropsychiatric functions were left after the return to lowlands. Reoxygenation changed white matter microstructure.

Results

Physiological characteristics. Lake Louise Score and body temperature were measured only during the subjects' stay at the plateau. The maximum (1.8 ± 1.6) Lake Louise score was recorded for all of the subjects on the first day at HA, and with time the scores gradually decreased to 0 before the subjects returned to sea level (SL). The body temperature gradually decreased from 36.7 ± 0.24 °C to 36.4 ± 0.33 °C.
De-acclimatization symptom scores were 8 in two subjects, 3 in three subjects, and < 2 in the others. In Test 2 compared with Test 1, heart rate, circulating erythrocyte count, hemoglobin level, and diastolic pressure were significantly increased, while SaO 2 and pulmonary forced vital capacity (FVC) and forced expiratory volume in one second (FEV1) were significantly decreased (Table 1); circulating leukocytes (p = 0.002) and leukomonocytes (p < 0.001) were also significantly decreased. There were no significant differences between Test 3 and Test 1 in any of the measurements taken. Metabolic measurements. Compared with Test 1, serum NSE was significantly decreased in Test 2 (p < 0.001) and significantly increased in Test 3 (p = 0.016). There were no significant differences in serum IL-6 and CRP between the three tests. Neuropsychiatric characteristics. Neuropsychiatric characteristics of the HA-exposed college students are shown in Table 2. There were no significant differences in the scores of the Beck Depression Inventory and Beck Anxiety Inventory between the tests at the three time points. Although an increase of the behavioral scores in the Wechsler Memory Scale subset tests (accumulation, figure memory, figure recognition, touch score, and Digit Span backward task) and the Rey-Osterrieth Complex Figure (ROCF) test across the three time points could be observed within both the HA exposure and SL control groups, two-way repeated measures ANOVA did not discover any significant differences between the groups. Total brain volume. Compared with Test 1, there was a markedly increased percentage brain volume change (PBVC, 2.6 ± 0.5%) and decreased cerebrospinal fluid (4.7 ± 0.8%) in Test 2. The enlarged regions in Test 2 included the bilateral inferior frontal gyrus, frontal pole, precentral gyrus, postcentral gyrus, lateral occipital cortex, temporal pole, paracingulate gyrus, and insula cortex as well as the brain stem and multiple edges of the cerebellum (Fig. 1A).
Compared with Test 1, there were slightly increased PBVC (0.4 ± 0.4%) and decreased cerebrospinal fluid (0.2 ± 0.5%) in Test 3, which did not represent significant differences between the tests at these two time points (Fig. 1B). The NBV, GMV, and WMV were significantly increased in Test 2 compared with Test 1 (p < 0.001; p = 0.003; p < 0.001, respectively), but showed no significant differences between Test 3 and Test 1 (Fig. 2). Cortical thickness, surface area, and curvature. In Test 2, compared with Test 1, the cortical thickness (size > 350 mm 2 ) was significantly decreased in the bilateral superior frontal gyrus, rostral anterior cingulate gyrus, superior parietal gyrus, supramarginal gyrus, and insula, the left fusiform gyrus, and the right inferior parietal gyrus, and increased in the bilateral pericalcarine gyrus and precentral gyrus ( Fig. 3A; Table 3). Furthermore, the surface area was significantly increased across the whole cortex, except for the right precentral gyrus and bilateral posterior insula (Fig. 3C). Finally, the curvature was significantly increased in the bilateral precentral gyrus, superior frontal gyrus, supramarginal gyrus, inferior frontal gyrus, paracentral lobule, precuneus, superior parietal cortex, temporal gyrus, parahippocampal gyrus, insula, and fusiform gyrus, and decreased in the postcentral gyrus and right cingulate gyrus (Fig. 3E). In Test 3, compared with Test 1, no significant differences in cortical thickness, surface area, or curvature were detected (Fig. 3B,D,F). Correlations of cortical thickness and surface area with physiological measurements. At HA, total cortical surface area and regional cortical thickness in the left postcentral gyrus were negatively correlated with SaO 2 , while cortical thickness in the right supramarginal gyrus was positively correlated with SaO 2 . Average global cortical thickness was negatively correlated with body temperature (Fig. 4).
At HA, cortical thickness in the bilateral supramarginal gyrus, left postcentral gyrus, and left fusiform gyrus was negatively correlated with both FVC and FEV1, and cortical thickness in the left rostral anterior cingulate cortex was negatively correlated with FVC (Fig. 5).

Discussion

The subjects in our study had low Lake Louise scores during the entire period spent at HA, indicating no experience of AMS. However, brain swelling was still found in both GM and WM. Further analysis found an enlarged cortical surface area across the whole brain as well as changed cortical thickness and curvature. The increase of the total brain volume possibly reflects the extensive enlargement of the surface area rather than the changes of cortical thickness and curvature. Disrupted integrity of fiber tracts may have contributed to the increase of WM volume. In addition, HA exposure did not seem to affect individual neuropsychiatric function. Two months after the subjects returned to SL, all of the cortical measures returned to baseline values. However, unexpectedly, increased FA with decreased axial and radial diffusivities was observed in the WM tracts. Serum NSE significantly decreased during HA exposure and subsequently increased at SL; however, this was not associated with cortical changes. To our knowledge, this is the first study to look at the brain both before and after HA exposure in which brain images were obtained at both the lowland and the extreme plateau altitude. In our study, 20 controls were also scanned and rescanned at an interval of 30 days, and no differences in any cerebral measurements were found between the two MRI scans, which suggested that MRI instrument-related factors did not affect the morphological measurements. This is in line with previous findings by Han et al. 24 showing that the MRI scanner field strength, manufacturer, machine upgrade, and pulse sequence had little effect on the reliability of cortical thickness measurements.
The scan-rescan reliability of automated segmentation algorithms for cortical measurements was first confirmed by Morey et al. 25 . Later on, to assess the robustness of different post-processing algorithms applied to images acquired from different MRI systems, Durand-Dubief et al. 26 scanned patients with multiple sclerosis over one year (at three time-points) on Intera and Sonata systems, using different sequences, and only small differences of 0.07% and 0.79% between the two systems were shown for the FreeSurfer and SIENAX analyses, respectively. Reuter et al. 27 suggested that FreeSurfer could be a useful tool for the investigation of longitudinal brain development and pathophysiological changes. Based on the results of those experiments, a reasonable assumption of reliability for scans at one- to two-month intervals on two different scanner platforms was made. An increased extent of vasogenic edema can significantly decrease FA and increase RD and AD 28 . Therefore, our results suggest that vasogenic edema occurred in the WM during HA exposure, which may have contributed to the increase of the WM volume. Our findings were consistent with the results from several previous studies on hypoxic/ischemic brains 8,29 . Hypoxia-induced regional changes in autoregulation, cerebral blood flow, and cerebral capillary pressure may be sufficient to produce vasogenic edema 8 . However, with time spent at HA, the Lake Louise scores gradually decreased, suggesting brain edema was not associated with AMS. The increases of FA with lower MD in WM tracts two months after return to SL were consistent with our previous observations in young HA Han residents one to three years after descent to the low altitude 30 and in HA native Tibetan adolescents four years after descent to the lowlands 31 . A study conducted by Hackett et al. 8 showed that patients with AMS recovered from edema six weeks to 11 months after return to SL. However, the report did not provide further details on the microstructural characteristics of the WM tracts.

Table 3. Regional information of changed cortical thickness (mean (SD)) in subjects tested before (Test 1) and during high-altitude exposure (Test 2).

The simultaneous increase of FA and decreases of AD and RD found in our study were consistent with the findings in children during their first year of life 32,33 , which were explained by fiber organization and axonal myelination 33 . However, for the young adults in our study, these changed DTI scalars should not be interpreted globally as "good" or "bad". Previous studies showed increase of FA as potentially reflecting both compensatory mechanisms 34 and poor cognitive functioning 35 . In our study, the gradual improvement of behavioral performance across the tests at the three time points may be attributable to a learning effect. Moreover, increase of FA and decreases of MD, AD, and RD have been found in burning mouth syndrome 36 . Neurogenesis induced by low-to-moderate level hyperoxia has been demonstrated in in vitro and in vivo observations 37 , and cytoskeletal α-tubulin and β-tubulin levels were strongly increased after hypobaric hypoxia/reoxygenation 38 . Therefore, the proliferation of glia and intracellular compartments of neuronal axons may be associated with a decrease in RD and AD 33 . Definitive conclusions about these DTI scalars of WM can only be derived from direct microscopic examination of brain tissue in future animal studies. The increases of GM at HA may be associated with hypoxia-induced gliogenesis. Glial cells comprise more than 85% of the total population of brain cells. They are sensitive to changes in oxygen partial pressure 39 and can be activated by hypoxia 40 . The decreases of cortical GM may be due to neuronal loss 41 , but it seems that this is not the case, as no related behavioral disorders occurred. Vasculature accounts for about 5% of GM 42 .
Capillary length per unit volume of tissue, capillary dilation, and capillary density in the cortex increased after three weeks of hypoxia exposure, while the hypoxia-induced increase of blood flow had returned to baseline levels 43,44 . Furthermore, our study detected high blood pressure during HA exposure, which could lead to thickening and hardening of the walls of arterioles and narrowing of the lumen, resulting in cerebral hypoperfusion 45,46 . Therefore, unbalanced development between angiogenesis and reduced cerebral blood flow could also determine regional cortical thickness. Previous reports have shown that cortical lesions persisted for more than several months after brief episodes of mountain climbing 7,12 . However, in our study, measured parameters in cortical GM reverted to baseline within two months of the subjects' return to SL. The mechanism underlying this reversibility is likely reoxygenation reversing the increased capillary density observed in hypoxia to normoxic values 47,48 . In the present study, alterations in GM volume (thickness and surface area) at HA were identified in the anterior insular cortex, anterior cingulate cortex, dorsolateral prefrontal cortex, supplementary motor area, posterior parietal cortex, supramarginal gyrus, and fusiform gyrus, and the cortical thickness in most of these regions was significantly correlated with FVC and FEV1, which is evidence that these regions play an important role in respiratory control and perception [49][50][51] . Furthermore, the decrease of thickness in the anterior insula and anterior cingulate cortex may be associated with the increased heart rate and blood pressure, as lesions of the right posterior insula increased baseline heart rate and blood pressure, electrical stimulation of the left insula of awake epileptic patients produced bradycardia, and a decrease of neuronal activity in the right anterior cingulate cortex was correlated with higher heart rate 52 .
The increase of cortical thickness in the visual cortex, which was also found in our previous study on adults who immigrated to the Qinghai-Tibet Plateau (2300-4400 m) for 2 years 14 , may be a compensatory mechanism to overcome damage to visual function caused by ultraviolet radiation at HA 53 . The changes in the cortical GM of particular brain regions observed in our present study had been previously found in patients with obstructive sleep apnea and in our previous studies in subjects exposed to HA 14,30 . Hypoxia, hypobaria, and cold can exert their effects together on the brain at HA. Cerebral edema has been shown to occur after acute exposure to hypoxia in various normobaric conditions 29,54,55 , indicating that the changes in brain structure could be induced by hypoxia alone. Our results support this suggestion, showing direct correlations of SaO 2 with cortical measurements. However, several studies have shown that AMS scores were higher 56-58 , while visual sensitivity was lower 59 , in hypobaric hypoxia than in normobaric hypoxia, suggesting hypobaria at HA could also be a factor contributing to the brain lesions. WM lesions and the aggravation of depression-like behavior have been reported after exposure to non-hypoxic hypobaria 5,60 . The temperature at HA fluctuated between 4 and 17 °C, which was far lower than that in Xiamen (25-33 °C), and thus led to a gradual decrease of body temperature. This hypothermia is considered neuroprotective in cerebral hypoxia 2,3 . Moreover, across all tests in this study, no significant changes in IL-6 and CRP were found, suggesting inflammation may not have contributed to the structural changes of the brain.
In support of our findings, gradual increases of serum NSE were found in 613 soldiers two to 15 days after return to the lowlands from a 116-day stay at HA (3700 m) 61 and in railway construction workers 19-66 months after return to the lowlands from six- to 60-month work periods at the Qinghai-Tibet plateau (3080-5072 m) 62 . A gradual increase of NSE expression, which peaked after five days of reoxygenation, has also been detected in rat brain during an experiment involving hypoxia/reoxygenation 22 . NSE is expressed not only in neurons, but also in peripheral neuroendocrine tissues and in amine precursor uptake cells 63 . In the present study, no direct correlations between serum NSE and brain structural measurements or de-acclimatization score (which partially reflects brain function) were identified, suggesting the changes of serum NSE may result from oxygen-induced abnormal metabolism in both neuronal and non-neuronal cells. The NSE level in the cerebrospinal fluid can be employed as a direct indication of neuronal damage; however, measuring it was not feasible in our study. There were several limitations in our present study. One limitation was that the lifestyle at the plateau was different from the one the subjects were used to, for example in the absence of crowds, which could have affected the subjects' emotional well-being. However, the subjects did not show obvious signs of depression or anxiety before their descent to the lowlands. In addition, the subjects could also have been challenged by the cultural change. Diet was not likely to have been an important factor for the observed brain changes, because the food in Xiamen and at the Qinghai-Tibetan Plateau was similar and the subjects were able to eat without any significant alterations to their dietary habits. In summary, this is the first detailed longitudinal investigation of the total cerebral volume, cortical GM, and subcortical WM during a stay at HA and after the return to the lowlands.
The increase of WM volume at HA may be attributed to vasogenic edema, while the mechanisms underlying the changes of GM volume remain speculative. The cerebral cortices changed in regions associated with cardiovascular and respiratory regulation. The cerebral effects of HA hypoxic exposure were reversible. However, reoxygenation at the lowlands simultaneously increased fractional anisotropy and decreased diffusivities in WM. Future studies should be conducted in animals to verify these findings and to clarify the mechanisms. Moreover, it seems no neuropsychiatric sequelae accompanied the brain structural changes.

Methods

Participants. The subjects were 31 healthy college students (16 men and 15 women, average age 19.7 ± 0.7 years) from Xiamen University in Xiamen (China). They took part in a 30-day teaching experience as volunteer teachers on the Tibetan plateau during the summer holidays in August 2014. They were lowlanders, born and living in the lowlands (< 500 m), without any prior exposure to HA. All subjects had a normal body mass index. The whole group successfully completed the teaching work, without the use of supplementary oxygen. Subjects were excluded if they developed mountain sickness during their teaching period, had a documented neurological disorder, or had a history of head injury. Another 20 healthy college students of comparable age, gender, and educational background were recruited from Xiamen University as controls, both for verification of the reliability of the cerebral measurements that could be affected by MRI instrument-related factors (including scan-rescan using the same or different sequences) and as controls in the behavioral tests performed at the three time-points. These control students were tested in Xiamen. Procedures were fully explained, and all subjects provided written, informed consent before participating in the study. The experimental protocol was approved by the Research Ethics Review Board of Xiamen University.
All experiments were carried out in accordance with the approved guidelines. Plateau trip and experimental design. During the first three days, the subjects travelled from Xiamen (sea level, SL) to Lasa (3650 m, Tibet, China). After a four-day stay at Lasa, the subjects spent four hours travelling to Dangxiong city (4300 m, Tibet). On the 29th day, they finished the teaching work and descended to Lasa. Four days later, they returned to Xiamen. At Dangxiong, the subjects had access to similar food and drink as in Xiamen. No subject was a smoker, and alcohol was not permitted. During the teaching period, the subjects stayed only at 4300 m. A baseline set of sea-level physiological and neuropsychiatric tests, metabolic measurements, and MR images was initially acquired in Xiamen before the ascent to HA (Test 1); the same set of tests was performed at the plateau one to four days before the descent to Xiamen (Test 2); the final set of tests was performed after the participants had been living at SL again for two months (Test 3). Physiological measurements. Physiological tests included heart rate, blood pressure, hematological measures, arterial oxygen saturation (SaO 2 ), and pulmonary function measures. Blood samples were taken in the morning between 07:00 and 07:30 h. The measurements at HA also consisted of the daily observations of the body temperature and the Lake Louise score, with a score greater than 4 being defined as AMS 58 . Axillary temperature and Lake Louise score were measured on the days between the subjects' arrival at Lasa and their descent to Lasa, in the afternoon between 19:00 and 19:30 h. Moreover, de-acclimatization was tested within three days of the subjects returning to SL. Symptom scores and diagnostic criteria for de-acclimatization syndrome were adopted from He et al. 64 . Pegboard performance was assessed by calculating the number of pins that the subject was able to place in the holes in 30 seconds 65 . The test involved gross movements of arms, hands, and fingers, and fine motor control.
Poor Pegboard performance is a sign of deficits in complex, visually guided, or coordinated movements. (5) In addition, all subjects completed the Beck Depression Inventory and Beck Anxiety Inventory, which assessed the severity of depression and anxiety. Metabolic measurements. Serum NSE and IL-6 were measured by electrochemiluminescence immunoassay (ECLIA) on the Roche MODULAR ANALYTICS E170 (Elecsys module) immunoassay analyzer (NSE, Roche Diagnostics GmbH, D-68298 Mannheim). The sensitivity of the assay was < 0.05 ng/ml. The inter- and intra-assay coefficients of variation were 3.8% and 1.6%, respectively. High-sensitivity serum CRP was measured with particle-enhanced immunonephelometry using the BN-II system Nephelometer (Dade-Behring, Marburg, Germany), with a detection limit of 0.159 mg/L. Data analysis of physiological, neuropsychiatric, and metabolic measurements. Paired samples t-tests were applied to analyze the differences between the tests performed at the three time-points. Differences in the results of the behavioral tests over the three time-points between the experimental group and the control group were analyzed using two-way ANOVA with repeated measures. SPSS 16.0 software was used for data analysis. Statistical significance was set at p < 0.05. SIENA and SIENAX analysis. SIENA (http://www.fmrib.ox.ac.uk/fsl) was used to estimate two time-point PBVC and percentage ventricle volume change. Non-brain tissue was removed from all T1-weighted images through a combination of manual and automatic processing. For each subject, the two brain images obtained from the two time-points were first aligned to each other (using the skull images to constrain the registration scaling), with both brain images resampled into the space half-way between the two. Secondly, tissue-type segmentation was carried out to find brain/non-brain edge points, and then perpendicular edge displacement (between the two time-points) was estimated at these edge points.
Finally, the mean edge displacement was converted into a (global) estimate of PBVC between the two time-points. NBV was estimated with SIENAX (http://www.fmrib.ox.ac.uk/fsl). After brain extraction, the brain images were affine-registered to MNI152 space; this was done primarily in order to obtain the volumetric scaling factor, to be used as a normalization for head size. Tissue-type segmentation with partial volume estimation was then carried out in order to calculate the total volume of the brain tissue. The NBV can be optionally split into GMV and WMV. A paired samples t-test was performed to detect global brain volume differences between the two time-points. The statistical parametric map was generated at p < 0.05 (threshold-free cluster enhancement corrected for multiple comparisons). FreeSurfer analysis. FreeSurfer (version 5.1.0; http://surfer.nmr.mgh.harvard.edu/) was used for cortical thickness, surface area, and curvature analyses. The processing stream consisted of the removal of non-brain tissue, mapping to Talairach-like space, and segmentation of the gray-white matter and pial boundaries. The maps of these measurements were obtained by reconstructing representations of the GM/WM boundary and the GM/cerebrospinal fluid (pial) boundary, and then calculating, at each vertex of the tessellated surfaces, the closest distance between those surfaces. All subjects' data were resampled to the FreeSurfer default common surface template using a high-resolution surface-based averaging technique that aligned cortical folding patterns. Finally, the surface data were spatially smoothed using a Gaussian kernel of 10 mm full-width at half-maximum. Regional variations of the cortical thickness, cortical surface area, and cortical curvature were compared using paired samples t-tests. The statistical parametric map was generated at p < 0.05 (FDR corrected for multiple comparisons). TBSS analysis.
Diffusion-tensor images were processed using the FSL 5.0.7 software package (http://www.fmrib.ox.ac.uk/fsl/). The images were corrected for head movement and eddy currents by applying affine alignment of each diffusion-tensor image to the b0 image, and then the b0 image was used to generate a binary brain mask with the Brain Extraction Tool. Subsequently, images were analyzed with the FMRIB's Diffusion Toolbox (FDT) to generate FA, MD, AD, and RD maps. Statistics on FA maps were performed using the TBSS package in FSL. To create a fractional anisotropy skeleton, the FA images of all subjects were aligned by nonlinear registration to a template, which was arbitrarily selected from those FA images, and the aligned FA images were transformed to the 1 × 1 × 1 mm MNI152 space. The mean fractional anisotropy skeleton was then thresholded at FA ≥ 0.2 to exclude peripheral and intersecting tracts and to reduce the possibility of partial volume effects. Following these steps, individual FA was projected onto the mean FA skeleton. The MD, AD, and RD images were analyzed using the FA images to achieve the nonlinear registration and skeletonization stages and to project the MD, AD, and RD values from each individual subject onto the mean FA skeleton. Finally, cross-subject, voxel-wise, statistical analyses of FA, MD, AD, and RD were carried out. In all cases, the null distribution was built up over 5000 permutations by the FSL randomise program. Paired samples t-tests were performed to examine between-test differences. The statistical parametric map was generated at p < 0.001 (threshold-free cluster enhancement corrected for multiple comparisons across space). Correlation analyses of brain structures with physiological variables. Pearson correlations were used to assess the correlations of regional cortical thickness and surface area values with body temperature, SaO 2 , and pulmonary variables.
Scientific Reports | 6:33596 | DOI: 10.1038/srep33596
Statistical significance was set at p < 0.05.
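The four DTI scalars analyzed above are simple functions of the diffusion tensor's eigenvalues. As an illustrative sketch (the study itself used FSL's FDT, not this code), they can be computed per voxel as follows:

```python
import math

def dti_scalars(l1, l2, l3):
    """Compute FA, MD, AD, and RD from the three diffusion-tensor
    eigenvalues (sorted so that l1 >= l2 >= l3)."""
    md = (l1 + l2 + l3) / 3.0   # mean diffusivity
    ad = l1                     # axial diffusivity: the principal eigenvalue
    rd = (l2 + l3) / 2.0        # radial diffusivity: mean of the two minor eigenvalues
    norm = math.sqrt(l1**2 + l2**2 + l3**2)
    if norm == 0.0:             # degenerate voxel with no diffusion signal
        return 0.0, md, ad, rd
    # fractional anisotropy: normalized deviation of eigenvalues from their mean
    fa = math.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)) / norm
    return fa, md, ad, rd

# Isotropic diffusion gives FA = 0; a strongly elongated tensor gives FA near 1.
print(dti_scalars(1.0, 1.0, 1.0))
print(dti_scalars(1.7, 0.3, 0.2))
```

With this convention, the pattern seen two months after descent (higher FA with lower AD and RD) corresponds to eigenvalues that shrink overall while becoming relatively more unequal.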
A Field Study in Benin to Investigate the Role of Mosquitoes and Other Flying Insects in the Ecology of Mycobacterium ulcerans

Background Buruli ulcer, the third mycobacterial disease after tuberculosis and leprosy, is caused by the environmental mycobacterium M. ulcerans. There is at present no clear understanding of the exact mode(s) of transmission of M. ulcerans. Populations affected by Buruli ulcer are those living close to humid and swampy zones. The disease is associated with the creation or extension of swampy areas, such as the construction of dams or lakes for the development of agriculture. Currently, it is supposed that insects (water bugs and mosquitoes) are hosts and vectors of M. ulcerans. The role of water bugs was clearly demonstrated by several experimental and environmental studies. However, no definitive conclusion can yet be drawn concerning the precise importance of this route of transmission. Concerning mosquitoes, DNA was detected only in mosquitoes collected in Australia, and their role as host/vector has never been studied by experimental approaches. Surprisingly, no specific study has been conducted in Africa. In this context, the objective of this study was to investigate the role of mosquitoes (larvae and adults) and other flying insects in the ecology of M. ulcerans. This study was conducted in a highly endemic area of Benin. Methodology/Principal Findings Mosquitoes (adults and larvae) were collected over one year in a Buruli ulcer-endemic area of Benin. In parallel, to monitor the presence of M. ulcerans in the environment, aquatic insects were sampled. qPCR was used to detect M. ulcerans DNA. DNA of M. ulcerans was detected in around 8.7% of aquatic insects but never in mosquitoes (larvae or adults) or in other flying insects. Conclusion/Significance This study suggests that mosquitoes do not play a pivotal role in the ecology and transmission of M. ulcerans in the studied endemic areas.
However, the role of mosquitoes cannot be excluded, and we can reasonably suppose that several routes of transmission of M. ulcerans are possible throughout the world. Introduction Buruli ulcer, which is caused by M. ulcerans, is a neglected tropical disease affecting mostly poor rural communities in West and Central Africa. In 2013, 75% of all new cases of Buruli ulcer worldwide were declared by Ivory Coast, Ghana and Benin. This skin disease, which mostly affects children, causes large ulcerative lesions often leading to permanent disabilities [1,2,3]. The cutaneous lesions are caused by a M. ulcerans toxin called mycolactone, with cytotoxic, immunomodulatory and analgesic effects [4]. At early stages, Buruli ulcer can be treated with a combination of streptomycin and rifampin for eight weeks; at later stages, antibiotic therapy is associated with extensive surgery [5,6,7,8]. Buruli ulcer occurs mostly in low-lying swampy areas [9,10]. Epidemiological studies have shown that the aquatic environment is the main reservoir of M. ulcerans, with many aquatic vertebrates and macro-invertebrates harboring this bacillus. The exact ecological features and mode of transmission of M. ulcerans to humans remain to be identified. In recent decades, several studies have suggested that water bugs and mosquitoes may play a role in M. ulcerans transmission [11,12,13,14,15,16,17,18,19,20,21,22,23,24,25]. Water bugs have been implicated as potential hosts and vectors of the bacillus in laboratory experiments and field ecology studies in Africa [26,27,28,29,30]. Outside the aquatic environment, adult mosquitoes tested positive for M. ulcerans DNA in an area of endemic Buruli ulcer in Australia, leading to the suggestion that these insects might transmit the bacterium to humans [26,28,29,30]. However, this hypothesis was not confirmed by laboratory experiments, and, surprisingly, no study has investigated the possible involvement of mosquitoes in M.
ulcerans ecology in Africa, the continent with the highest level of endemicity for Buruli ulcer. The objective of this study was to investigate the presence of M. ulcerans DNA in flying insects, including mosquitoes, in an area of Buruli ulcer endemicity in Benin. We monitored, in parallel, the levels of M. ulcerans in the aquatic environment, as a marker of the presence of the bacterium in the study area. Study area The study was carried out in the Oueme administrative area in South-East Benin, where Buruli ulcer has been endemic for several decades [31,32,33,34,35,36]. Sampling was carried out in three districts crossed by the Oueme River (Bonou, Adjohoun and Dangbo). The districts were selected for study because they are accessible throughout the year (including the rainy season) and because data were available for relevant epidemiological studies. Flying insects were sampled at four sites and aquatic sampling was carried out at nine sites (Fig 1). The Oueme River originates in the Taneka hills in the Atacora Mountains and flows into the Atlantic Ocean close to Cotonou. The study area is characterized by a subequatorial climate with two rainy seasons. The first rainy season extends from April to July and the second extends from October to November. Mean annual precipitation is 1122 mm, and temperatures range from 22°C to 26°C. There are two main types of soil: alluvial soils, which are fertile but liable to flooding, and sandy soils, which are less fertile but suitable for growing coconut, palm, and other tropical trees. Most of the population in this area is engaged in farming (rice, maize, cassava, cowpeas, market garden crops, etc.), fishing and trade. The natural vegetation consists of grassy savannah and swampy mangrove forest. Flying insect sampling This study focused on the adult stage of mosquitoes and other flying insects and the immature stages of mosquitoes. 
Insects were collected during four surveys in June, July, November, and December 2013, at four sites in the Bonou Centre, Kode, Gbada and Houeda areas (Fig 1). The collection periods correspond to the start, middle and end of the rainy season and the dry season, respectively. Flying insects were collected with Centers for Disease Control (CDC) light traps. A CDC light trap consists of a 150 mA incandescent light bulb and a fan, powered by 6 V batteries. At each survey, once consent had been received from the heads of household, insects were trapped from two selected houses in each village, over a period of two days. Traps were placed both indoors and outdoors at each house, from 6:00 pm to 6:00 am, corresponding to the period from dusk to dawn. The indoor traps were suspended from the ceiling, about 2m above the ground. The outdoor traps were hung on trees at about the same height. The insects collected were identified in the field in two steps. In the first step, mosquitoes were separated from the other insects. All mosquitoes were identified to species level under stereoscopic microscopes, according to morphological criteria in dichotomous keys [37,38,39]. They were counted and stored, in pooled groups of up to 15 individuals of the same species, in 70% ethanol for transport to the laboratory. In the second step, the remaining flying insects were identified to order level on the basis of their morphology under a stereoscopic microscope, with the appropriate keys [40,41]. They were stored in 70% ethanol, in pooled groups of up to 15 individuals from the same order, and were transported to the laboratory for PCR analysis (Fig 2). Sampling of mosquito larvae During each survey, mosquito larvae were collected throughout the selected area by dipping with a 350 ml ladle. Samples were collected from various temporary and permanent bodies of water constituting potential habitats for the development of populations of mosquito larvae. 
All larvae were transported in clean water, in plastic containers, to the field laboratory. Larvae were identified to genus level with appropriate morphological keys [37,38,39]. The larvae of each genus were then separated into two groups. The larvae of the first group were preserved in 70% ethanol, in pools of 20 individuals for each genus. The larvae of the second group were reared to emergence. The resulting adults were then stored in 70% ethanol, in pools of up to 15 individuals. Exuviae were also preserved in 70% ethanol, in pools of 20, for laboratory analysis (Fig 2). Aquatic sampling Samples were collected from the principal sources of water for domestic washing, bathing, fishing and recreation. The sampling sites were located in nine villages in the three districts: Bonou Centre, Agbonan, Agbomahan, Agonhoui, Gbame, Kode, Assigui, Houeda, and Mitro (Fig 1). Aquatic sampling was carried out with the same methods at each site, at least twice, between January 2013 and December 2013. Invertebrates and fish were captured with a square net (32 x 32 cm and 1 mm in mesh size), from the surface down to a depth of 0.2 to 1 m, over a distance of 1 m. A sample was considered to correspond to all the insects collected in 10 such sweeps with the net. All insects were preserved in 70% ethanol for laboratory identification. For the detection of M. ulcerans DNA, the insects were sorted into pooled groups, each including no more than 20 specimens from the same family. For each body of water, we collected plant samples from the predominant and the second most frequent types of living plant. Each of these plant samples consisted of one to five plant leaves, stems or roots, depending on the size of the plant sample. They were placed directly in a clean 100 ml bottle containing 70% ethanol (Fig 2). Extraction and purification of DNA Pooled insect bodies were ground and homogenized in 50 mM NaOH. Tissue homogenates were heated at 95°C for 20 min. 
The samples were neutralized with 100 mM Tris-HCl, pH 8.0. DNA was extracted from the homogenized insect tissues with the QIAquick PCR purification kit (Qiagen), according to the manufacturer's recommendations. Negative extraction and purification controls were included in each series of manipulations. The homogenizers were decontaminated by overnight incubation in 1 M NaOH, to eliminate any traces of DNA. For each aquatic plant sample, the material was cut into small pieces with a scalpel and then ground in 50 mM NaOH. The extract was heated and neutralized, and the DNA was purified with the Mobio purification kit, according to the manufacturer's recommendations.

Quantitative PCR

Oligonucleotide primer and TaqMan probe sequences were used for detection of the IS2404 sequence and the ketoreductase B (KR) domain of the mycolactone polyketide synthase (mls) gene from the plasmid pMUM001 [13,42,43]. PCR mixtures contained 5 μl of template DNA, 0.3 μM of each primer, 0.25 μM probe, and Brilliant QPCR Master Mix (Agilent Technologies) in a total volume of 25 μl. Amplification and detection were performed with a StepOne thermocycler (Applied Biosystems), using the following program: heating at 95°C for 10 min, followed by 40 cycles of 95°C for 15 s and 60°C for 1 min. DNA extracts were tested at least in duplicate, and negative controls were included in each assay. Quantitative readout assays were set up, based on an external standard curve generated with five tenfold serial dilutions of M. ulcerans (strain 1G897) DNA. Samples were considered positive only if both the IS2404 sequence and the gene sequence encoding the ketoreductase B domain (KR) were detected, with threshold cycle (Ct) values strictly < 35 cycles. An inhibition control was performed as previously described [44], and 10% negative controls (water alone) were included in each assay.

Data analysis

Mosquito abundance was compared between sites and between seasons with nonparametric Kruskal-Wallis tests.
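The qPCR positivity rule stated above (both the IS2404 and KR targets detected, Ct strictly below 35, extracts run at least in duplicate) can be expressed as a small decision function. A minimal sketch; the function and parameter names are ours, not code from the study:

```python
def is_positive(is2404_cts, kr_cts, ct_cutoff=35.0):
    """Call a sample positive only if both the IS2404 target and the
    KR-domain target amplified with Ct strictly below the cutoff.

    is2404_cts / kr_cts: replicate Ct values; None means no amplification.
    (Whether every replicate or just one must pass is our reading of the
    criterion, stated here explicitly: all detected replicates must pass.)
    """
    def target_positive(cts):
        detected = [ct for ct in cts if ct is not None]
        return bool(detected) and all(ct < ct_cutoff for ct in detected)

    return target_positive(is2404_cts) and target_positive(kr_cts)

is_positive([31.2, 31.5], [33.0, 33.4])  # both targets below 35 -> True
is_positive([31.2, 31.5], [36.1, 35.8])  # KR target above cutoff -> False
```

The two-target requirement guards against false positives from either marker alone, which is why a single-target hit is never reported as positive.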
Diversity of flying insect orders collected

We collected 7230 flying insects from nine orders: Coleoptera, Diptera, Heteroptera, Homoptera, Hymenoptera, Lepidoptera, Neuroptera, Odonata and Trichoptera. At all sites, Diptera was by far the most frequent order of flying insects caught, accounting for 84% of all insects trapped. Heteroptera was the least abundant order at each site and was not detected at Gbada and Houeda (Table 1).

Diversity of mosquito species collected

The 6047 dipteran specimens collected during the four surveys included 4322 mosquitoes from 10 species. Mansonia africana (50%), Culex nebulosus (27%), and Culex quinquefasciatus (22%) were the most abundant species, accounting for 98% of all the mosquitoes trapped. The four least represented species were Anopheles pharoensis, Aedes vittatus, Culex decens, and Culex fatigans, with no more than four individuals each (S1 Table).

Spatio-seasonal variation of the total mosquitoes and flying insects collected

No significant differences in the abundance of the mosquitoes and other flying insects caught were found between sites (p>0.05). Flying insects were significantly more abundant (p<0.05) in the wet season than in the dry season, whereas no significant difference in mosquito densities was observed between seasons (p>0.05; S2 Table).

Collection of larvae

During the surveys, we collected a total of 5407 mosquito larvae. These larvae were identified as Culex spp., Anopheles spp. and Aedes spp. In total, 3146 mosquito larvae belonged to the genera Culex and Anopheles. Culex spp. were the most abundant, accounting for 66.35% of the mosquito larvae collected. Of the adults that emerged in the laboratory from the rearing of field-collected larvae, 2261 individuals belonging to the genera Culex, Anopheles and Aedes were identified. Culex was the most abundant genus, accounting for 79.08% of the sample (S3 Table).
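The site and season comparisons above rest on the Kruskal-Wallis rank test. A minimal sketch of the H statistic with made-up per-trap-night counts (not the study's data); the p-value uses the chi-square approximation, which for k = 3 groups (df = 2) reduces to exp(−H/2):

```python
import math

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic on k groups (average ranks for ties,
    no tie correction -- an illustration, not a stats library)."""
    pooled = sorted(v for g in groups for v in g)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1] == pooled[i]:
            j += 1
        avg = (i + j) / 2 + 1          # ranks are 1-based
        for k in range(i, j + 1):
            ranks[pooled[k]] = avg
        i = j + 1
    n = len(pooled)
    return 12 / (n * (n + 1)) * sum(
        sum(ranks[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)

# Hypothetical mosquito counts per trap-night at three sites
h = kruskal_h([120, 95, 140, 110], [100, 130, 90, 115], [105, 125, 98, 108])
p = math.exp(-h / 2)   # chi-square survival function for df = 2
# p ≈ 0.79 here, i.e. no significant difference between these made-up sites
```

With p > 0.05 the null hypothesis of equal distributions is retained, mirroring the non-significant between-site result reported above.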
Diversity of aquatic sampling

During the survey, we collected 3377 aquatic vertebrates and macro-invertebrates from various bodies of water in the Oueme administrative area (Table 2). Insecta accounted for 72% of the animals collected, with a majority of Hemiptera. The bodies of water studied were of various natures (flooded land, river and swamp) and were scattered around the Oueme, making it possible to sample diverse types of specimens from different ecological niches. In total, 95 plants were collected from the various bodies of water. They were identified as belonging to the Poaceae, Lemnaceae, Nymphaeaceae, Araceae and Potamogetonaceae families.

Detection of M. ulcerans DNA in environmental samples

We tested flying insects, larvae, aquatic vertebrates and invertebrates, and plants collected in 2013 from various sites in Oueme for the presence of M. ulcerans DNA. All 942 pools of flying insects and larvae (corresponding to the 7230 captured flying insects and the 5407 collected larvae) tested negative for M. ulcerans DNA by PCR. Positive PCR results were obtained for 8.7% (28/322) of aquatic animal sample pools from the various bodies of water. No positive specimens were obtained at two sites, and 5.5 to 25% of the sample pools at the other seven sites tested positive (Table 2 and S4 Table). Decapoda was the invertebrate group with the highest level of mycobacterial contamination (26%). We performed 295 PCR analyses on the 95 plants collected. These analyses were carried out on leaves, stems and roots, and three samples tested positive for M. ulcerans DNA by PCR: a leaf pool and a stem pool from the same plant from a water body in Kode, and a leaf pool from Mitro (S4 Table). Both plants concerned belonged to the Poaceae family.

Discussion

The ecological characteristics and mode of transmission of M. ulcerans are not entirely understood, and several fundamental questions remain unanswered. One key concern relates to the routes by which M.
ulcerans crosses the human skin barrier. There are currently two main hypotheses: (i) direct contact between an existing wound and water containing M. ulcerans; (ii) the inoculation of M. ulcerans into the skin [2,45]. Comparisons with the modes of transmission of other environmental mycobacteria in immunocompetent humans (e.g. M. fortuitum, M. chelonae, M. xenopi) and recent studies of M. ulcerans [46] have suggested that direct inoculation into the skin is the most likely mode of transmission. In this context, the two most likely scenarios for inoculation with the bacterium are either inoculation by an active vector harboring M. ulcerans, as described for various microorganisms, including parasites (e.g. Leishmania sp. or Plasmodium sp.), arboviruses (e.g. the Dengue and Chikungunya viruses), and bacteria (e.g. Yersinia pestis and Borrelia sp.), or inoculation by a mechanical vector, such as aquatic plant thorns or sharp leaves, or biting or sucking insects (bacilli present on the outside of the insects) [13,15,16,17,18,19,20,21,25,26,27,28,29,30,47,48,49,50,51,52]. M. ulcerans ecology is highly complex. It is therefore possible for these scenarios to co-exist, and their importance or significance depends on a number of different criteria (e.g. human behavior, including access to drinking water, rural or urban life and work, fauna and flora biodiversity, presence of permissive species, season). Several experimental studies have explored the role of aquatic hemipterans as passive or active vectors of M. ulcerans. These approaches were supported by various environmental and epidemiological studies conducted in Africa. However, the importance (unique, major, or marginal) of this transmission route has yet to be established, and other transmission routes should therefore be explored. For instance, it has been suggested that mosquitoes act as vectors of M. ulcerans in Australia, but, surprisingly, this possibility has never been explored in Africa.
In this context, the aim of our study was to assess the role of mosquitoes in M. ulcerans ecology. We carried out an extensive field study in an endemic area in Benin, involving temporal and spatial monitoring of the presence of M. ulcerans in mosquitoes and other flying insects, with the distribution of M. ulcerans in aquatic flora and fauna used as a control. M. ulcerans DNA was detected in various aquatic macro-invertebrates and vertebrates, and in some aquatic plants. The global rate of detection was about 9%, consistent with the findings of other environmental studies [26,27,28,29,30]. M. ulcerans DNA was not detected in any of the flying insects collected in CDC light traps inside and around houses over the same period (including mosquito families in which M. ulcerans DNA has been detected in Australia). As only one type of sampling method was used to collect flying insects (CDC light traps), it is possible that this introduced a bias in terms of species diversity. Nevertheless, in a recent study performed in the same area with three other types of sampling method for mosquito collection, the three most abundant mosquito species were the same as in our study, and eight of the 14 species identified were common to our study [53]. Our results suggest that mosquitoes and non-aquatic flying insects are not involved in the ecology and dissemination of M. ulcerans in an area of South-East Benin in which Buruli ulcer is highly endemic, and confirm that the aquatic environment is the main environmental reservoir of the bacillus. However, a role for mosquitoes in other areas, including Australia, cannot be definitively excluded. The ecology and mode of transmission of micro-organisms may differ between geographic locations, with biological diversity and human activities affecting bacterial adaptation. This concept could also apply to M. leprae, a mycobacterium that likewise causes a dermatosis.
Indeed, a recent study has suggested that the ecological features, reservoirs and transmission routes of M. leprae may differ between continents. It has been shown that, in North America, wild armadillos harbor the same strain of M. leprae as leprosy patients. Leprosy may thus be a zoonosis in this region [54]. This situation cannot be transposed to other continents in which leprosy is highly endemic, such as Africa and Asia, where there are no armadillos and no other mammal is known to harbor the bacillus. A similar situation may apply to M. ulcerans. In Australia, mammals such as possums have been shown to be hosts of M. ulcerans and may play a key role in its dissemination, together with mosquitoes. However, there are no possums in Africa, and M. ulcerans has never been detected in the tissues of any mammal other than humans in Africa. Based on the results of various studies performed in recent decades aiming to decipher the ecological characteristics of M. ulcerans, it seems likely that M. ulcerans can be transmitted via several routes, potentially differing between locations in different parts of the world.
Anti-Herpetic, Anti-Dengue and Antineoplastic Activities of Simple and Heterocycle-Fused Derivatives of Terpenyl-1,4-Naphthoquinone and 1,4-Anthraquinone

Quinones are secondary metabolites of higher plants associated with many biological activities, including antiviral effects and cytotoxicity. In this study, the anti-herpetic and anti-dengue activities of 27 terpenyl-1,4-naphthoquinone (NQ), 1,4-anthraquinone (AQ) and heterocycle-fused quinone (HetQ) derivatives were evaluated in vitro against Human Herpesvirus (HHV) types 1 and 2 and Dengue virus serotype 2 (DENV-2). Cytotoxicity on the HeLa and Jurkat tumor cell lines was also tested. Using plaque forming unit assays, cell viability assays and molecular docking, we found that NQ 4 was the best antiviral compound, while AQ 11 was the most active and selective molecule on the tested tumor cells. NQ 4 showed fair antiviral activity against the herpesviruses (EC50: <0.4 µg/mL, <1.28 µM) and DENV-2 (1.6 µg/mL, 5.1 µM) at pre-infective stages. Additionally, NQ 4 disrupted the attachment of HHV-1 to Vero cells (EC50: 0.12 µg/mL, 0.38 µM) with a very high selectivity index (SI = 1728). The in silico analysis predicted that this quinone could bind to the prefusion form of the E glycoprotein of DENV-2. These findings demonstrate that NQ 4 is a potent and highly selective antiviral compound and suggest its ability to prevent Herpes and Dengue infections. Additionally, AQ 11 can be considered of interest as a lead for the design of new anticancer agents.

Introduction

In recent years, the treatment and control of some viral agents has become a major challenge for the pharmaceutical industry due to their pathophysiological features and the development of drug resistance.
Among these, Human Herpesviruses type 1 and 2 (HHV-1 and HHV-2), associated with cold sores and genital herpes respectively [1], are neurotropic and neuro-invasive double-stranded DNA (dsDNA) viruses.

Antiviral Activity

The in vitro antiviral evaluation of the 27 substances against Human Herpesvirus type 1 (HHV-1) and 2 (HHV-2) was performed on infected Vero cells, using the end-point titration technique (EPTT) [22].
The compounds that showed a fair reduction of viral titer at concentrations ≤50 µg/mL, after 48 h, were considered active. Table 2 shows the reduction values of viral titer (Rf) and the antiviral activity (µg/mL) of those NQs, AQs and HetQs active against at least one virus serotype.

Table 2. Reduction of viral titer and antiviral activity against Human Herpesvirus type 1 (HHV-1) and 2 (HHV-2) on infected Vero cells of selected 1,4-naphthoquinones (NQ) and 1,4-anthraquinones (AQ).

According to the estimate of Vlietinck et al. [23], a purified natural molecule is considered to have relevant or moderate antiviral activity when the reduction factor (Rf) of the viral titer is ≥1 × 10³ or 1 × 10², respectively. In this study, we define an Rf of 1 × 10¹ or ≥1 × 10² for mildly or moderately active substances, respectively. These results reveal that, in general, 1,4-NQs are more potent anti-herpetic substances than 1,4-AQs, and that NQ 4 is the only quinone that showed moderate anti-herpetic activity against both the HHV-1 and HHV-2 serotypes, suggesting a wide spectrum of antiviral activity for this molecule. From the chemical point of view, the presence of one or two chlorine atoms in the quinone ring, as in NQs 2, 4 and 6, and AQs 10 and 12, contributes to enhanced antiviral activity. This feature probably correlates with the broad spectrum of activity of these substances, in accordance with our previous study, in which the chlorinated quinones also showed significant antifungal activity [20]. In general, we note that the molecules tested were effective mainly against HHV-1 (Table 2). This is probably related to the molecular nature of the compounds and to the dose of infectious virus employed in our assays. Nonetheless, it is important to note that each HHV serotype uses different cellular receptors to attach to and enter the host. Notably, HHV-1 has more receptors available for infection than HHV-2 [24].
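The reduction-factor thresholds above map directly onto a small classifier. A sketch; the labels follow the text, while the function name and the "inactive" fallback are ours:

```python
def classify_rf(rf):
    """Map a viral-titer reduction factor (Rf) to the activity classes
    used in the text: >= 1e3 relevant, >= 1e2 moderate, >= 1e1 mild."""
    if rf >= 1e3:
        return "relevant"
    if rf >= 1e2:
        return "moderate"
    if rf >= 1e1:
        return "mild"
    return "inactive"

classify_rf(1e2)  # -> "moderate"
```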
This greater receptor availability can be a disadvantage for HHV-1, considering that a greater number of binding sites are also accessible to antiviral agents, thus leading to possible disruption of viral entry and the subsequent replication steps. To define the stage of the viral replicative cycle at which NQ 4 exerts its action, simultaneous and post-infection treatments against HHV-1 and HHV-2 were performed. Additionally, this experiment was extended to DENV-2 to probe the potential broad antiviral spectrum of NQ 4 (Figure 2). First, we determined the 50% inhibitory concentration (IC50) for the compound and the drug positive controls on infected cells and their effects on non-tumoral Vero cells, to define the concentration of evaluation and to calculate the antiviral selectivity index (SI). Then, the concentration of NQ 4 that reduced the number of viral plaques by 50% (EC50) was determined from the dose-response curves. In this study, we have considered that a molecule has interesting antiviral selectivity for an SI (IC50/EC50) value > 10. On Vero cells, NQ 4 showed an IC50 value > 200 µg/mL, and the antiviral controls dextran sulfate (DS), heparin (H), acyclovir (A) and ribavirin (R) showed IC50 values > 400 µg/mL (r² = 0.84, data not shown). In the simultaneous treatment, NQ 4 showed significant anti-herpetic activity (p < 0.001) at all concentrations tested (0.4-3.1 µg/mL) for both HHV-1 (Figure 2A) and HHV-2 (Figure 2B); thus, it was not possible to calculate dose-response curves or an antiviral SI. As expected, the dextran sulfate positive control was active against HHV-1 and HHV-2 at 5 µg/mL. Against DENV-2 (Figure 2C), NQ 4 was evaluated at lower concentrations (0.4-1.6 µg/mL) than against the HHV serotypes, showing significant antiviral activity only at 1.6 µg/mL (p < 0.001). This effect was comparable to that displayed by the positive control heparin (10 µg/mL).
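The EC50 and SI arithmetic used throughout can be sketched as follows; the interpolation scheme and the dose-response numbers are ours and purely illustrative, not the study's data or fitting method:

```python
def ec50_from_curve(concs, pct_inhibition):
    """Linearly interpolate the concentration giving 50% plaque reduction
    from a monotone dose-response series (concs ascending)."""
    points = list(zip(concs, pct_inhibition))
    for (c_lo, y_lo), (c_hi, y_hi) in zip(points, points[1:]):
        if y_lo <= 50.0 <= y_hi:
            return c_lo + (50.0 - y_lo) * (c_hi - c_lo) / (y_hi - y_lo)
    raise ValueError("50% inhibition not bracketed by the data")

def selectivity_index(ic50, ec50):
    """SI = IC50 (cytotoxicity) / EC50 (antiviral potency); SI > 10 is the
    threshold used in the text for interesting antiviral selectivity."""
    return ic50 / ec50

# Hypothetical attachment-assay data (µg/mL vs % plaque reduction)
ec50 = ec50_from_curve([0.1, 0.2, 0.4, 0.8], [20.0, 40.0, 70.0, 95.0])
si = selectivity_index(200.0, ec50)   # IC50 > 200 µg/mL on Vero cells
# si is well above the SI > 10 selectivity threshold here
```

A large SI means toxicity appears only far above the antiviral dose, which is why SI, not EC50 alone, drives the selectivity claims in the text.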
In post-infective stages, NQ 4 showed no activity against HHV-1, HHV-2 or DENV-2, whereas the positive controls acyclovir and ribavirin, employed against the HHV serotypes and DENV-2 respectively, were active. These results suggest that NQ 4 produces its antiviral effect during early stages of the infectious cycle, i.e., on attachment and/or viral entry.

To prove this hypothesis, we evaluated whether NQ 4 exerts its effect on attachment, when the reversible interaction between viral glycoproteins and cellular receptors occurs, or on viral entry, when membrane fusion and virus internalization happen. Taking into account the high activity of this molecule in the simultaneous treatment, lower concentrations (0.1-0.8 µg/mL) of NQ 4 were employed. This assay was performed only against HHV-1, considering, as mentioned above, that this serotype uses more cellular receptors for attachment and entry. Results showed that NQ 4 significantly reduced (*** p < 0.001) HHV-1 attachment to Vero cells (EC50 = 0.12 µg/mL and SI = 1728) compared with the DS and DMSO controls, whereas its effect on viral entry was not significant (Figure 3).

Some reports indicate that HHV-1 makes initial contact with host cells by binding to glycosaminoglycan receptors, such as heparan sulfate; this interaction is reversible but necessary for the virus's location on the cell surface and thus for the binding of viral ligands to specific receptors [25]. Additionally, different routes of HHV entry into cells have been described, including low pH-dependent or -independent endocytosis and fusion at the plasma membrane. Even though these viral entry pathways are cell-type dependent, it should be noted that glycoproteins gB, gD and gH/gL are required for both entry pathways [26].
In recent years, antiviral activity has been described for certain quinones and other structurally related molecules with therapeutic potential. NQs and AQs have shown antiviral activities against DNA and RNA viruses by inhibiting viral entry, replication and genome transcription, as well as by affecting the function of enzymes important for the viral replicative cycle [27][28][29][30][31][32][33]. Emodin, a 9,10-anthraquinone isolated from the roots of Rheum tanguticum, showed in vitro and in vivo anti-herpetic activity affecting viral replication, i.e., late stages of the infectious cycle [15]. Likewise, other antiviral mechanisms have been described for this quinone, including inhibition of UL12 [34], a protein related to uncoating and DNA processing, and inhibition of enzymes involved in viral protein phosphorylation, such as casein kinase [35]. Moreover, denbinobin, a phenanthrenequinone, has been reported as a dual inhibitor of the HIV-1 LTR promoter and the transcription factor NF-kB, affecting viral transcription [36]. Our results demonstrate that NQ 4 acts selectively on early infection stages of the HHV-1 strain, specifically on viral attachment. This suggests that this NQ mainly disrupts the interaction between the viral glycoproteins and glycosaminoglycan receptors; therefore, viral entry would be partially affected (Figure 3). This antiviral approach is attractive because these steps are mandatory for a successful viral infection.

Molecular Docking Study of NQ 4 on DENV-2

To reinforce the hypothetical mode of action of NQ 4 on DENV-2 attachment or entry, a molecular docking analysis of this compound with the β-OG (n-octyl-β-D-glucoside) binding cleft of the pre-fusion form of the envelope protein (ENV) (PDB: 1OKE) was run. Based on the co-crystallized complex, we determined the potential pocket site for the discovery of small-molecule fusion inhibitors [37].
On this pocket, several amino acids have been reported to be critical for membrane fusion during virus entry, among which are Thr48, Glu49, Ala50, Lys51 and Gln52 [37,38]. Our studies show that NQ 4 fits its aliphatic substructure into the β-OG binding cleft of the dengue virus E glycoprotein dimer with a predicted score of −7.7 kcal/mol. The molecule forms a key hydrogen bond with Ala50 and Gln200, and a set of hydrophobic interactions within the cleft with Glu49, Leu135, Phe193, Leu198, Leu207 and Ile270 (Figure 4A). According to the predicted mode of action, the enthalpic contribution is mostly governed by hydrophobic interactions that are positively induced by the preferred opposite orientation of the two chlorine atoms. To understand the impact of such interactions, we also docked the anthracycline antibiotic doxorubicin. This drug was evaluated by Kaptein et al. [16], together with a doxorubicin derivative (SA-17) bearing a squaric acid amide ester moiety at the carbohydrate group. Both were active against DENV-2 at low concentrations during the very early stages of the viral replication cycle (i.e., virus attachment and/or virus entry). In time-of-addition studies of SA-17 at a concentration of 10 µg/mL (15 µM), the drug attained 98% and 42% inhibition of virus replication when added at 0 and 2 hours of infection, respectively.
Additionally, this compound failed to efficiently inhibit viral replication when added between 4 and 12 h post-infection and did not inhibit DENV-2 RNA replication. Using the same β-OG binding site of the ENV protein, Kaptein et al. [16] reported molecular docking studies for SA-17, which formed hydrogen bonds with the amino acids Ala50, Tyr137 and Gln200, and had hydrophobic contacts with Thr48, Pro53, Lys128, Leu135, Phe193, Leu198, Ala205, Ile270, Gln271, Thr280 and Gly281, some of which are critical in membrane fusion during virus entry [16]. In our case, we found that doxorubicin forms hydrogen bonds with Glu26 and His27 and displays π-stacking interactions with Phe279 (Figure 4B). The molecule fits part of its structure into the β-OG binding cleft of the dengue virus E glycoprotein dimer with a similar but less favorable score (−6.5 kcal/mol) than NQ 4. However, no interactions were formed with any of the amino acids found for NQ 4, or with those reported for SA-17 [16]. Considering this information, it is possible to propose that NQ 4 acts in a similar way to SA-17, probably affecting DENV-2 entry rather than its attachment, as we found for HHV-1. The antiviral activity of several quinones against herpesviruses in late stages of infection has been reported [29][30][31]. In this study, we have found a different kind of antiviral effect of quinones, showing that NQ 4 exerts anti-HHV-1 activity in early stages, specifically by avoiding viral attachment. This compound possibly disrupts the entry of DENV-2. Some quinone derivatives may dramatically affect phospholipid membranes and may be responsible for remarkable changes in their physical and biological properties. These alterations might consist of changes at the lipid/water interface of negatively charged phospholipids and disruptions of the core of the lipid bilayer [39].
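To put the predicted docking scores (−7.7 kcal/mol for NQ 4, −6.5 kcal/mol for doxorubicin) on a rough affinity scale, one can invert ΔG = RT·ln(Kd). This is only an order-of-magnitude sketch of our own, since docking scores approximate, but do not equal, binding free energies:

```python
import math

R_KCAL = 1.987e-3  # gas constant in kcal/(mol*K)

def kd_from_dg(dg_kcal_per_mol, temp_k=298.15):
    """Approximate dissociation constant (M) implied by a binding free
    energy, via dG = R*T*ln(Kd); dG must be negative for binding."""
    return math.exp(dg_kcal_per_mol / (R_KCAL * temp_k))

kd_nq4 = kd_from_dg(-7.7)   # ~2e-6 M, i.e. low-micromolar
kd_dox = kd_from_dg(-6.5)   # on the order of 1e-5 M, roughly 10x weaker
```

The roughly 1.2 kcal/mol score difference thus corresponds to close to an order of magnitude in predicted affinity, consistent with the stronger pose found for NQ 4.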
The membrane effects reported above [39] may be consistent with our findings, considering that lipid membrane reconfiguration caused by this type of quinone may affect the initial virus-cell receptor interaction and the subsequent viral entry into the host cell. Indeed, on the basis of the predicted logP (clogP) values 3.47 (6.487) and the low tPSA (total Polar Surface Area) value of 44.6, estimated through the ChemDraw algorithms [40] for NQ 4, its membrane affinity appears fairly well supported. However, additional experiments are necessary to confirm or refute this mechanistic hypothesis.

Cytotoxicity

The 27 NQs, AQs and HetQs were tested to evaluate their in vitro cytotoxicity on tumor (HeLa and Jurkat) and non-tumor (Vero) cells. The common anticancer drug doxorubicin was included in the assays as a reference. As seen in Table 3, most of the molecules were cytotoxic for at least one cancer cell line, and from the results obtained, some general considerations can be made for the three groups of quinones tested. To analyze the results, we consider that compounds with IC50 ≤ 10 µM have good cytotoxicity against neoplastic cells, while those with IC50 values between 10 and 50 µM have moderate cytotoxicity, and IC50 values ≥ 50 µM correspond to low to null activity. Additionally, several recent studies have defined a selectivity index (SI) value of 14.3 with respect to HepG2 cells as indicative of potential therapeutic use for anticancer agents [41]. Such an SI value serves as the criterion in this work to define a selective anticancer substance. As a general observation, AQs were more cytotoxic than NQs, and both showed better results than HetQs, in accordance with our previous studies [20,21]. Among the NQs, the presence of halogens on the quinone ring improved the cytotoxicity on both cancerous and normal cells, while in the AQ group an arylamino substituent induced higher selectivity, leading to the best neoplastic cytotoxicity with low toxicity for normal cells.
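The IC50 bands and the SI criterion stated above map onto a small classifier. A sketch; the function names and return labels are ours:

```python
def classify_cytotoxicity(ic50_um):
    """IC50 <= 10 uM: good; 10-50 uM: moderate; >= 50 uM: low/null
    (the bands used in the text for tumor-cell cytotoxicity)."""
    if ic50_um <= 10:
        return "good"
    if ic50_um < 50:
        return "moderate"
    return "low"

def is_selective_anticancer(ic50_normal_um, ic50_tumor_um, threshold=14.3):
    """SI = IC50(normal cells) / IC50(tumor cells); SI >= 14.3 is the
    criterion adopted in the text for a selective anticancer substance."""
    return ic50_normal_um / ic50_tumor_um >= threshold

classify_cytotoxicity(5.6)   # -> "good" (e.g. AQ 16 on HeLa)
# AQ 11: IC50 of 0.01 uM on HeLa with SI near 14e3 implies a Vero IC50
# of roughly 140 uM (derived from the reported SI, not a reported value)
is_selective_anticancer(140.0, 0.01)   # -> True
```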
Within the group of HetQs, only the imidazole-fused HetQ 23 and the…

The quinone with the highest in vitro cytotoxicity on HeLa cells was AQ 11, bearing a p-methoxyphenylamino substituent. It showed the lowest IC50 value of 10 nM and the best SI value, near 14 × 10³, with respect to Vero cells, much better than that of the reference drug doxorubicin. In addition, AQs 16 and 17, which were cytotoxic for HeLa cells with IC50 values of 5.6 µM and 7.5 µM, displayed SI values of 32 and 16, respectively. An SI of 14 was also observed for the dichlorinated NQ 4, though it was in the range of low cytotoxicity (IC50 = 46.6 µM). On Jurkat cells, AQ 11 again showed high cytotoxicity (1.4 µM) and a relevant SI value of 168, followed by NQs 2 and 3 and AQs 13 and 17, which showed IC50 values of 6.2, 1.6, 6.3 and 9.6 µM, respectively, though with moderate to low SI values of 15, 8, 11 and 12, respectively. In addition, AQ 11 and AQ 17 were the quinones with fair cytotoxicity on HeLa and Jurkat cells at the lowest concentrations. Finally, according to their SI values, AQs 11, 16 and 17 were the most selective compounds on both types of cancer cells, unlike the NQ derivatives. It must be noted that in addition to its very high selectivity towards cancer cells, AQ 11 also showed an interesting selectivity depending on the type of cancer, being some 800 times more potent against HeLa than against Jurkat cells (Figure 5). This fact also defines a qualitative difference with respect to doxorubicin, which, conversely, was much less selective, being only some ten times more cytotoxic for Jurkat than for HeLa cells. Aiming to obtain further validation of AQ 11, its structure was submitted online to predictive screenings to obtain data on its druggability potential.
Therefore, on examination under the prediction algorithms of the Molinspiration virtual screening engine v2018.03 [42], AQ 11 was qualified as a potential kinase inhibitor (score: 0.29) and as a potential nuclear receptor ligand (0.23), that is, recognizing its probable intrinsic bioactivity. Further examination under the Osiris property explorer, in addition to physicochemical data such as MW (361.44) and clogP (4.14) values within those permitted by the Lipinski Rule of Five, revealed that AQ 11 was potentially free of carcinogenic, mutagenic, irritant and reproductive-cycle effects, that is, devoid of the main adverse effects [43]. All these experimental facts and calculated or predicted data support the selection of AQ 11 for carrying out mechanistic studies, target definition and structure optimization oriented to configuring preclinical toxicity and efficacy assays. Among the mechanisms of action of quinones in different tumor types, there have been reports of a decrease in mitochondrial membrane potential through a ROS-mediated pathway [44,45], G2/M-phase arrest through down-regulation of the G2/M regulatory proteins cyclin B1 and Cdc25B [46], and DNA damage induction through double-strand breaks by inhibition of topoisomerase II and glutathione depletion [47]. Additionally, the quinone moiety is present in some clinically useful anticancer agents such as mitomycin C and doxorubicin. Mitomycin C is used in the treatment of gastrointestinal tumors, acting as a double-strand DNA alkylating agent [48], and doxorubicin is commonly used in cancer treatment, including breast, lung and gastric carcinomas among others, acting as a DNA-intercalating agent and generating free radicals that damage cellular membranes, DNA and proteins [49]. All these facts and these examples of clinically useful drugs demonstrate the potential of quinone derivatives as antitumor agents.
Furthermore, our results suggest that the cell death induced by AQ 11, AQ 16 and AQ 17 could involve one or more of the above-mentioned mechanisms of action, or even different ones. Consequently, additional studies are required to establish the proper cytotoxic mechanism for these AQs.

Chemistry

NQs 1, 2, 3, 5 and 8 were obtained by previously described procedures [50]. AQs 10 and 12 were also obtained as described before [51]. NQs 4, 6, 7 and 9, and AQs 11, 13, 14-17 and 18 were obtained as described previously [20]. HetQs 19-21 and 22-27 were obtained as described previously [21].
Biological Evaluation

Screening for Anti-Herpetic Activity

The antiviral activity of the molecules against 1 and 10 cell culture infectious doses 50% (TCID50) of HHV-1 and HHV-2, respectively, was determined using the end-point titration technique (EPTT) [22]. Vero cells grown in 96-well plates at a density of 2.0 × 10⁴ cells/well were incubated at 37 °C in a 5% CO2 atmosphere until reaching 80% confluence of the cell monolayer. Then, viral suspensions of HHV-1 or HHV-2 with compound concentrations of 6.25 µg/mL to 50 µg/mL were prepared in DMEM supplemented with 2% FBS containing 1% and 0.5% carboxymethylcellulose (CMC) for HHV-1 and HHV-2, respectively.
The mixture was incubated for 15 min at room temperature and added to the cell monolayer. After 48 h of incubation at 37 °C (5% CO2), the cytopathic effect was examined, and the microplates were fixed with 3.5% formaldehyde and stained with 0.2% crystal violet. Two independent experiments in quadruplicate were carried out for each viral serotype and each compound. Positive controls, dextran sulfate (DS) for early stages and acyclovir (A) for late stages of infection, were included.

Simultaneous and Post-Infection Treatment on HHV-1, HHV-2 and DENV-2

The potential antiviral mechanism of NQ 4 was evaluated by the plaque reduction assay as previously described by us [52]. Vero cell monolayers grown in 24-well plates were infected with 100 PFU/well of each virus. For the simultaneous treatment, the compound and the virus were added simultaneously to the cell monolayers and incubated for 1 h at 37 °C (5% CO2). Then, cells were washed with phosphate-buffered saline (PBS, pH = 7.0), and CMC at 1% and 0.75% was added for HHV-1 and HHV-2, respectively. In the post-infection treatment, the virus was added to the cell monolayer and incubated for 1 h at 37 °C (5% CO2). After incubation, the cells were washed with PBS, and the molecule, previously prepared in CMC 1% and 0.75% for HHV-1 and HHV-2, respectively, was added. In both treatments, NQ 4 was prepared at concentrations from 0.4 µg/mL to 3.1 µg/mL, and the cell monolayers were allowed to incubate for 72 h. Subsequently, cells were fixed and stained with a solution of 3.5% formaldehyde with 0.2% crystal violet, and the viral plaques were counted. Dextran sulfate (DS) was included as the positive control in the simultaneous assay, and acyclovir (A) was the positive control in the post-infection treatment. Against DENV, the effect of NQ 4, either simultaneous or post-infection, was also evaluated by the plaque reduction assay as previously described. The concentrations employed for this assay were from 0.4 µg/mL to 1.6 µg/mL.
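In these plaque reduction assays, the antiviral effect is quantified as the percent reduction in plaque-forming units relative to the untreated virus control; a minimal sketch of that standard calculation (the helper name is ours, not from the paper):

```python
def percent_inhibition(plaques_treated: float, plaques_control: float) -> float:
    """Percent reduction in plaque-forming units relative to the
    untreated virus control well."""
    return (1.0 - plaques_treated / plaques_control) * 100.0
```

For instance, 50 plaques in a treated well against 100 in the control corresponds to 50% inhibition.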
In this test, BHK-21 cell monolayers grown in 24-well plates were infected with 100 PFU/well of DENV-2, and treatments were performed in the same way as described for the herpesviruses. The monolayers were incubated for 6 days, fixed and stained with a solution of 3.5% formaldehyde with 0.2% crystal violet, and the viral plaques were counted. Heparin (H) was included as the positive control in the simultaneous assay, and ribavirin (R) was the positive control in the post-infection treatment.

Evaluation of the Anti-HHV-1 Mechanism of Action in Pre-Infective Stages

The effect of NQ 4 in the initial phases of HHV-1 viral replication was assessed as previously described by Cardozo et al., 2011 [53]. In the attachment assay, pre-chilled (1 h at 4 °C) Vero cell monolayers were exposed to viruses (100 PFU/well) in the presence or absence of the compound and were incubated for 2 h at 4 °C. After incubation, the substance and unbound viruses were removed with cold PBS; cells were overlaid with CMC 1% and incubated for 72 h at 37 °C (5% CO2). In the entry assay, pre-chilled cells were infected with viruses (100 PFU/well) and incubated for 2 h at 4 °C. After this time, the unbound viruses were removed with cold PBS; cells were treated with different concentrations of pre-warmed compound and then incubated for 1 h at 37 °C. Unabsorbed viruses were inactivated using citrate buffer (pH = 3.0), and then the cells were washed with PBS, overlaid with CMC 1% and incubated for 72 h at 37 °C (5% CO2). In both assays, the further procedures were the same as mentioned previously for the plaque reduction assay, and dextran sulfate (DS) was included as the positive control. In these tests, the evaluated NQ 4 concentration range was from 0.1 to 0.8 µg/mL, and the fifty percent effective concentration (EC50), the concentration that reduces plaque-forming units by 50%, was determined from dose-effect curves by linear regression methods for each compound.
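The linear-regression EC50 estimate and the antiviral selectivity index (the ratio of the cytotoxic IC50 on Vero cells to the antiviral EC50) described in this section can be sketched as follows; this illustrates the stated approach and is not the authors' actual script:

```python
import statistics


def ec50_linear(concs, inhibitions):
    """Fit a straight line (ordinary least squares) to the dose-effect
    data and solve for the concentration giving 50% inhibition."""
    mx = statistics.fmean(concs)
    my = statistics.fmean(inhibitions)
    sxx = sum((x - mx) ** 2 for x in concs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concs, inhibitions))
    slope = sxy / sxx
    intercept = my - slope * mx
    return (50.0 - intercept) / slope


def antiviral_si(ic50_vero, ec50_virus):
    """Antiviral selectivity index: IC50 on Vero cells over the EC50."""
    return ic50_vero / ec50_virus
```

With perfectly linear dose-effect data, e.g. 10-80% inhibition over 0.1-0.8 µg/mL, the estimate returns the concentration at which the fitted line crosses 50%.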
EC50 values were expressed as the mean ± SEM (standard error of the mean) of at least four dilutions in quadruplicate. Additionally, to determine whether NQ 4 was selective for infected rather than uninfected cells, the antiviral selectivity index (SI) was calculated, defined as the ratio between the fifty percent inhibitory concentration (IC50) on Vero cells and the EC50 for each virus.

Molecular Docking with the DENV-2 Prefusion Envelope Protein

Parametrization of the ligands (NQ 4 and doxorubicin) and the DENV-2 prefusion envelope protein (PDB: 1OKE) was done using the AutoDock Tools suite [54]. Hydrogen atoms were added to the polar side chains, and partial charges were calculated through the Gasteiger methodology. Then, a grid box was delimited in a binding site previously reported with some studied inhibitors [16]. Molecular docking was run with a modified version of AutoDock Vina that includes a scoring function also parameterized for halogen interactions [55]. We used an exhaustiveness (number of internal repetitions) of 20 for each protein-compound pair. The interactions (hydrogen bonds and hydrophobic interactions) and the predicted free energy scores in kcal/mol were obtained. Visualization of the docking results was generated using the Discovery Studio package.

Cytotoxicity Assay

The in vitro cytotoxicity evaluation of the quinones was performed using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT, Sigma, Cream Ridge, NJ, USA) assay as described by Betancur-Galvis et al., 2002 [22]. Briefly, Vero and HeLa cell lines were seeded at 2.0 × 10³ cells per well in 96-well plates in DMEM supplemented with 10% inactivated FBS and were incubated for 24 h at 37 °C, 5% CO2. Then, each diluted molecule was added to the cells and incubated for a further 48 h at 37 °C, 5% CO2.
The Jurkat cell line, at 3 × 10³ cells per well in a 96-well round-bottomed plate, and the diluted substances in RPMI-1640 medium (Sigma) supplemented with 10% FBS were plated simultaneously. After 48 h of treatment at 37 °C, 5% CO2, the medium was carefully removed, 28 µL of MTT solution (4 mg/mL) was added to each well, and the plates were incubated for 2 h at 37 °C, 5% CO2. DMSO was then added to dissolve the formazan crystals formed, and the absorbance was determined spectrophotometrically at 570 nm. The minimal dilution of compound that caused 50% inhibition of the cells (IC50) was calculated by linear regression analysis of the dose-response curves generated from the absorbance data with GraphPad Prism 5.0. IC50 values were expressed as the mean ± standard deviation (M ± SD) of two independent experiments done in quadruplicate. To define which molecules were more selective against cancerous cells, the selectivity index (SI), defined as the Vero IC50 over the HeLa or Jurkat IC50 value, was calculated.

Statistical Analysis

Statistical analyses were performed using the statistical software GraphPad Prism® v. 5.0 (GraphPad Software Inc., La Jolla, CA, USA). In all cases, a p value < 0.05 was considered statistically significant.

Conclusions

Currently, it is necessary to discover new and better antivirals with novel mechanisms of action for the treatment of Human Herpesvirus type 1 and 2 infections, mainly for the treatment of immunocompromised and transplanted patients, considering the continuous emergence of acyclovir-resistant HHV strains. Moreover, discovering medicines for dengue disease is imperative, taking into account the impact of this disease on public health.
In our study, the naphthoquinone NQ 4 showed important anti-herpetic (EC50 < 0.4 µg/mL, <1.28 µM) and anti-dengue (1.6 µg/mL, 5.1 µM) activities at early infection stages, mainly in the initial formation of complexes between the viral glycoprotein and the host cell surface receptors, thereby preventing all events related to fusion, subsequent viral genome replication and the production of new virions. Additionally, NQ 4 disrupted the attachment of HHV-1 to Vero cells (EC50 = 0.12 µg/mL, 0.38 µM) with a very high selectivity index (SI = 1728). In silico analyses performed with NQ 4 predicted that it could bind to the prefusion form of the E glycoprotein of DENV-2. In this context, our findings are a starting point for biopharmaceutical and preclinical toxicity and efficacy evaluations, focused towards the development and formulation of a pharmaceutical product that can prevent herpes and dengue infections while avoiding the appearance of new drug-resistant strains. Regarding the potential of these quinones as antineoplastics, the anthraquinone AQ 11 was the most cytotoxic on both the HeLa (IC50 = 0.01 µM) and Jurkat (IC50 = 1.4 µM) cell lines, with low toxicity against Vero cells (IC50 = 321.9 µM) and therefore a high SI, much better than the reference drug doxorubicin. These facts make AQ 11 a new, highly selective and promising lead compound, due not only to the experimental results found but also to its favorable physicochemical properties and ADME (absorption, distribution, metabolism and excretion) data predictions, as well as the lack of major toxicity risks. However, its drug-likeness score (0.26) must be increased, and further virtual and experimental studies oriented to the development of a more consistent candidate for experimental preclinical toxicity and anticancer assays should be carried out.
Asymmetric Alternating Current Electrochemical Method Coupled with Amidoxime-Functionalized Carbon Felt Electrode for Fast and Efficient Removal of Hexavalent Chromium from Wastewater

The large amount of Cr (VI)-polluted wastewater produced by the electroplating, dyeing and tanning industries seriously threatens water ecological security and human health. Due to the lack of high-performance electrodes and the coulomb repulsion between hexavalent chromium anions and the cathode, traditional DC-mediated electrochemical remediation technology suffers from low Cr (VI) removal efficiency. Herein, by modifying commercial carbon felt (O-CF) with amidoxime groups, amidoxime-functionalized carbon felt electrodes (Ami-CF) with high adsorption affinity for Cr (VI) were prepared. Based on Ami-CF, an electrochemical flow-through system powered by asymmetric AC was constructed. The mechanism and influencing factors of the efficient removal of Cr (VI)-contaminated wastewater by the asymmetric AC electrochemical method coupled with Ami-CF were studied. Scanning electron microscopy (SEM), Fourier transform infrared (FTIR) and X-ray photoelectron spectroscopy (XPS) characterization results showed that amidoxime functional groups were successfully and uniformly loaded onto Ami-CF, whose adsorption capacity for Cr (VI) was more than 100 times higher than that of O-CF. In particular, the high-frequency anode-cathode switching (asymmetric AC) inhibited the coulomb repulsion effect and the side reaction of electrolytic water splitting, increased the mass transfer rate of Cr (VI) from the solution to the electrode, significantly promoted the reduction of Cr (VI) to Cr (III) and achieved highly efficient removal of Cr (VI).
Under optimal operating conditions (positive bias 1 V, negative bias 2.5 V, duty ratio 20%, frequency 400 Hz, solution pH = 2), the asymmetric AC electrochemistry based on Ami-CF achieves fast (30 s) and efficient (>99.11%) removal of 0.5-100 mg·L−1 Cr (VI) at a high flux of 300 L h−1 m−2. At the same time, a durability test verified the sustainability of the AC electrochemical method: for Cr (VI)-polluted wastewater with an initial concentration of 50 mg·L−1, the effluent concentration still reached drinking water grade (<0.05 mg·L−1) after 10 cycling experiments. This study provides an innovative approach for the rapid, green and efficient removal of low and medium concentrations of Cr (VI) from wastewater.

Introduction

Global water pollution and water shortage are key challenges for human society in the 21st century. Among them, the large amount of chromium-polluted wastewater produced by the metal plating, leather manufacturing, textile dyeing and other industries seriously threatens human and environmental health [1][2][3]. In the natural environment, chromium exists mainly as trivalent chromium (Cr (III)) and hexavalent chromium (Cr (VI)) [4]. Cr (III) is a micronutrient that maintains the normal physiological activities of organisms and is a common form in nature [5]. When chromium exists as hexavalent oxygen-containing anions (e.g., Cr₂O₇²⁻, CrO₄²⁻, HCrO₄⁻) [6], it can cause cell damage at low concentrations, causing skin and stomach allergies after short-term exposure, as well as liver, kidney and nervous tissue damage after long-term exposure [3,7]. The World Health Organization (WHO) has set the maximum allowable level of Cr (VI) in drinking water at 0.05 mg·L−1 [8], and the Ministry of Ecology and Environment of China has stipulated that the Cr (VI) content of treated domestic sewage should not exceed 0.2 mg·L−1 [9].
Therefore, developing efficient technologies for Cr (VI) removal from polluted water has become an area of great interest. In general, Cr (VI) is highly mobile and soluble over a wide pH range, increasing its migration and potential harm, whereas Cr (III) has about 500-1000 times lower toxicity and mobility than Cr (VI) and is very easy to remove by precipitation and adsorption [10]. Therefore, reducing Cr (VI) to Cr (III) is one of the important ways to treat chromium-containing wastewater. Although the traditional adsorption method has the advantages of low cost and simple operation, it is difficult to use on a large scale because the pores of adsorbents (such as activated carbon, zeolite, resin, etc.) clog easily and the regeneration efficiency is low [11]. Recently, advanced sorbents such as porous carbon [12], graphene-based nanomaterials [13] and metal-organic frameworks [14] have been developed. They exhibit good adsorption performance; however, complicated fabrication processes and high costs hinder their application. Moreover, Cr (VI) removal via adsorption is only a phase transfer: the highly toxic Cr (VI) is not detoxified to Cr (III). Although chemical reduction can effectively reduce and remove medium/high concentrations of Cr (VI) in wastewater, the continuous use of reducing agents (such as FeSO4, Na2S2O5 and HS2, etc.) and hydroxides (such as NaOH, KOH and Ca(OH)2, etc.) produces a large amount of Cr-containing sludge and highly alkaline solution, resulting in potential secondary pollution [15]. In contrast, electrochemical technology has the advantages of no additional chemical reagents, mild reaction conditions, simple operation and high efficiency [16][17][18]. Traditional electrochemical methods mediated by direct current (DC) systems include electrocoagulation [18], electrodialysis [19], electrodeionization [20] and electrochemical oxidation [21]/reduction [22] with sacrificial anodes (such as Fe and Al).
However, due to the action of the coulomb force, the cathode repels the negatively charged hexavalent chromium oxyanions (e.g., Cr₂O₇²⁻ and HCrO₄⁻), so they cannot be effectively reduced on the electrode surface, which decreases the reduction and removal efficiency of Cr (VI). At the same time, water splitting on the electrode surface leads to a large amount of energy loss, which is a common problem in electrochemical methods [23]. In addition, the parallel electrodes (iron plates, graphite plates, etc.) in traditional electrochemical reactors lack effective active sites, which is not conducive to the convective diffusion of Cr (VI) to the electrode; such reactors often show low current efficiency and high energy consumption, and it is difficult to achieve rapid and effective removal of Cr (VI) [24]. The development of electrodes that possess interconnected macropores, high conductivity and abundant active sites is necessary to enhance both mass transfer and current efficiency. Whereas DC can only adjust the voltage (current) to regulate the electrochemical reaction, asymmetric pulsed square-wave alternating current (AC) can achieve accurate regulation of the electrode interface reaction because it has four parameters: frequency, duty ratio, and positive and negative bias. It has shown great advantages in electrodeposition (electroplating) [25], lithium extraction from seawater [26] and the remediation of soils contaminated by heavy metals [27]. For example, Yue et al. [17] successfully recovered a large amount of Pb from wastewater using a chitosan-modified carbon felt electrode based on an AC electrochemical system. Lu et al. [27] designed an asymmetric AC electrochemical system-mediated soil remediation technology to achieve sustainable remediation of soil contaminated by multiple heavy metals (Cu²⁺, Zn²⁺, Pb²⁺, Cd²⁺). However, there has been no research on the application of AC electrochemical technology to Cr (VI) removal from wastewater.
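The four parameters named above (positive and negative bias, duty ratio and frequency) fully define the asymmetric pulsed square wave; a minimal sketch, assuming the duty ratio refers to the fraction of each period spent at positive bias (defaults follow the optimum reported in this paper: +1 V, −2.5 V, 20%, 400 Hz):

```python
def asymmetric_ac_voltage(t: float, v_pos: float = 1.0, v_neg: float = -2.5,
                          duty: float = 0.2, freq: float = 400.0) -> float:
    """Electrode potential (V) at time t (s) for an asymmetric pulsed
    square wave: v_pos during the first `duty` fraction of each period
    (period = 1/freq), v_neg during the remainder."""
    phase = (t * freq) % 1.0  # position within the current period, 0..1
    return v_pos if phase < duty else v_neg
```

At 400 Hz the period is 2.5 ms, so with a 20% duty ratio the electrode spends 0.5 ms at +1 V and 2 ms at −2.5 V in every cycle.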
Herein, an amidoxime-modified carbon felt electrode (Ami-CF) was prepared, and a novel flow-through asymmetric AC electrochemical system was constructed as a research platform to reveal the AC electrochemistry-mediated Cr (VI) removal mechanism. Due to the amidoxime groups, Ami-CF is very hydrophilic and can make full use of the high surface area of the electrode. Meanwhile, the amidoxime groups on Ami-CF provide strong chelating sites that can bind Cr (VI), giving Ami-CF a saturated adsorption capacity 2.1-5.5 times that of other adsorbents reported in the literature. The influences of applied voltage, duty ratio, initial Cr (VI) concentration, pH, flow rate and coexisting ions on Cr (VI) removal by the asymmetric AC electrochemical system were investigated and discussed. By employing Ami-CF as the working electrode coupled with high-frequency cathode-anode switching, AC electrochemistry not only enhances the mass transfer process while reducing side reactions, but also periodically attracts Cr (VI) to the active sites under positive bias, reduces Cr (VI) to Cr (III) and repels Cr (III) under negative bias, thereby releasing the active sites and constantly regenerating the Ami-CF. Under optimal operating conditions, the asymmetric AC electrochemical system based on Ami-CF can achieve fast (30 s) and efficient (>99.11%) removal for wastewater over a wide Cr (VI) concentration range (0.5-100 mg·L−1) at a high flux of 300 L h−1 m−2, which is superior to other reported methods. Furthermore, the removal mechanism of Cr (VI) in the AC electrochemical system is thoroughly discussed in combination with advanced characterizations. This study provides a new idea for the future treatment of Cr (VI)-containing wastewater by AC electrochemical technology.

Chemicals and Materials

Carbon felt (CF020, thickness 2 mm) was purchased from Carbon Energy Technology Co., LTD
(Taiwan, China), Super P carbon black was purchased from Alfa Aesar (UK), and polyacrylonitrile, N,N-dimethylformamide, hydroxylamine hydrochloride, sodium carbonate, dibenzoyl hydrazine, potassium dichromate, anhydrous copper sulfate, anhydrous zinc sulfate, anhydrous calcium sulfate, copper nitrate, phosphoric acid and sulfuric acid were all purchased from Alding Chemical Reagent Co., LTD (Shanghai, China). The DC and AC power supplies were purchased from UNI-T Co., LTD (Dongguan, China). The peristaltic pump was purchased from River Fluid Technology Co., LTD (Baoding, China). Deionized water was used in all experiments.

Electrode Modification and Characterization

The original carbon felt (O-CF) was cut into discs with a diameter of 1.0 cm; then, polyacrylonitrile (PAN), Super P carbon black and N,N-dimethylformamide (DMF) were mixed at a mass ratio of 1:1:30 and stirred for 12 h to form a uniform slurry. The PAN-CF electrode was prepared by dipping the round carbon felt discs in the slurry and drying them in an oven (70 °C). Then, the dried PAN-CF was put into a 70 °C water bath (100 mL), and 8 g of hydroxylamine hydrochloride and 6 g of sodium carbonate were added successively for the hydroxylamine (amidoximation) reaction (90 min). After the reaction, the carbon felt discs were washed with deionized water and dried in a vacuum oven (80 °C) to obtain the amidoxime-functionalized electrode (Ami-CF). The electrode surface morphology was characterized by scanning electron microscopy (SEM, Hitachi Regulus 8100, Tokyo, Japan). Surface functional groups were determined by Fourier transform infrared spectroscopy (FTIR, Nicolet 6700, Thermo Scientific, Waltham, MA, USA) with a scanning range of 400 to 4000 cm−1 and a scanning accuracy of 2 cm−1. The surface chemical properties of the electrodes were analyzed by X-ray photoelectron spectroscopy (XPS, EscaLab 250Xi, Thermo Fisher Scientific, Waltham, MA, USA).
Batch Adsorption Experiments for Cr (VI)

Adsorption experiments were conducted using a batch approach as described in our previous studies [28,29]. The electrodes were placed in centrifuge tubes containing 50 mL of Cr (VI) solution with an initial concentration of 50 mg·L−1 (pH = 6 ± 0.05). Equal samples of 0.6 mL were taken at 0 min, 10 min, 30 min, 1 h, 3 h, 6 h, 12 h, 24 h, 36 h, 48 h, 60 h and 72 h, respectively. Samples were filtered and diluted properly before analysis of the final Cr (VI) concentration. Each data point, including blanks (without O-CF, PAN-CF and Ami-CF), was run in triplicate. The quasi-first-order kinetic equation (Equation (1)) and the quasi-second-order kinetic equation (Equation (2)) were used to fit the adsorption kinetic data [4,28,30]:

Qt = Qe (1 − exp(−K1 t))  (1)

Qt = K2 Qe² t / (1 + K2 Qe t)  (2)

where Qe and Qt are the adsorption capacities (mg·g−1) of Cr (VI) on the electrode at adsorption equilibrium and at adsorption time t, respectively; t is the reaction time (h); and K1 and K2 are the adsorption rate constants of the quasi-first-order and quasi-second-order kinetics, respectively. For the adsorption isotherms, 20~25 mg of O-CF, PAN-CF and Ami-CF were weighed and placed in centrifuge tubes, and 50 mL of 0, 0.5, 1, 2.5, 5, 10, 25, 50, 100, 250, 500 and 1000 mg·L−1 Cr (VI) solution (pH = 6 ± 0.05) was added. The centrifuge tubes were then placed in a thermostatic water bath oscillator and agitated under the same conditions as described above. The adsorption time was 36 h, the equilibrium time obtained in the kinetic experiment. After adsorption was completed, 0.6 mL samples were taken for determination of the Cr (VI) content and calculation of the adsorption capacity and removal rate.
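For reference, the kinetic models of Equations (1) and (2), together with the Langmuir and Freundlich isotherm models (Equations (3) and (4)) fitted to the isotherm data below, can be written in directly evaluable form; a minimal sketch with symbols as in the text:

```python
import math


def pseudo_first_order(t: float, qe: float, k1: float) -> float:
    """Equation (1), quasi/pseudo-first-order: Qt = Qe * (1 - exp(-K1 * t))."""
    return qe * (1.0 - math.exp(-k1 * t))


def pseudo_second_order(t: float, qe: float, k2: float) -> float:
    """Equation (2), quasi/pseudo-second-order:
    Qt = K2 * Qe^2 * t / (1 + K2 * Qe * t)."""
    return k2 * qe ** 2 * t / (1.0 + k2 * qe * t)


def langmuir(ce: float, qm: float, kl: float) -> float:
    """Equation (3), Langmuir isotherm: Qe = Qm * KL * Ce / (1 + KL * Ce)."""
    return qm * kl * ce / (1.0 + kl * ce)


def freundlich(ce: float, kf: float, n: float) -> float:
    """Equation (4), Freundlich isotherm: Qe = KF * Ce**(1/n)."""
    return kf * ce ** (1.0 / n)
```

Given the measured (t, Qt) and (Ce, Qe) data, a nonlinear least-squares routine (e.g., scipy.optimize.curve_fit) can recover Qe, K1, K2, Qm, KL, KF and n from these model functions.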
The Langmuir model (Equation (3)) and the Freundlich model (Equation (4)) were used to fit the adsorption isotherm data [28,31,32]:

Qe = Qm KL Ce / (1 + KL Ce)  (3)

Qe = KF Ce^(1/n)  (4)

where Ce is the equilibrium concentration (mg·L−1), Qe is the equilibrium adsorption capacity (mg·g−1), Qm is the maximum adsorption capacity (mg·g−1), KL is the Langmuir constant (L·mg−1), KF is the Freundlich adsorption coefficient (mg·g−1·L^(1/n)·mg^(−1/n)) and n is a heterogeneity factor.

Electrochemical Experiments

As shown in Figure 1, the flow-through electrochemical treatment device used for the asymmetric AC electrochemical reduction of Cr (VI)-containing wastewater is composed of plexiglass plates in a "sandwich" structure. Three plexiglass plates are fixed by four stainless steel screws, and silicone gaskets are placed between the plexiglass plates to increase airtightness and prevent water leakage. The central position of the middle plexiglass plate is provided with a groove for the parallel placement of the functional carbon felt electrodes. The distance between the anode and the cathode is 5 mm. The carbon felt electrodes are connected to the AC power supply (Youlide UTG2025A) through copper sheets. The plexiglass plates on the left and right sides have ports connecting the inlet bottle and the outlet bottle through silicone tubes. During the experiment (at room temperature), 50 mL of Cr (VI) solution is placed in the inlet bottle, and an appropriate flow rate (mL·min−1) is set through the peristaltic pump. The solution flows slowly through the silicone tube to the electrochemical treatment device, where appropriate AC power parameters (bias, duty ratio, etc.) are set; Cr (VI) is finally reduced to Cr (III), and the solution flows into the outlet bottle.
Analysis Method

Cr (VI) was determined by the diphenylcarbazide spectrophotometric method (GB 7467-87). Total chromium was determined by ICP-OES [9]. The adsorption capacity Qe of O-CF, PAN-CF and Ami-CF for Cr (VI) was calculated according to Equation (5), and the removal rate R of Cr (VI) according to Equation (6):

Qe = (C0 − Ct) V / M (5)

R = (C0 − Ct) / C0 × 100% (6)

where C0 is the initial concentration of Cr (VI) (mg·L−1), Ct is the concentration of Cr (VI) at time t (mg·L−1), V is the volume of solution (L) and M is the mass of the carbon felt electrode (g). All experimental data are expressed as the average of three replicates with standard deviation. To compare the pollutant removal efficiencies of the different treatment strategies, statistical analyses were performed with SPSS Statistics GradPack (IBM Inc., Chicago, IL, USA), including analysis of variance and Bartlett's and Levene's tests for homogeneity of variance and normality. Differences between individual means were identified using the Tukey HSD procedure at the 5% significance level; Tamhane's T2 test was used when equal variances between groups could not be assumed.
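As a worked instance of Equations (5) and (6) (all numbers below are illustrative, not measured values from this study):

```python
# Worked example of Equations (5) and (6) with HYPOTHETICAL numbers.
c0 = 50.0    # initial Cr(VI) concentration, mg/L
ct = 36.5    # Cr(VI) concentration at time t, mg/L
v  = 0.050   # solution volume, L (50 mL)
m  = 0.020   # carbon felt electrode mass, g (20 mg)

qe = (c0 - ct) * v / m          # Equation (5), mg/g
r  = (c0 - ct) / c0 * 100.0     # Equation (6), %

print(f"Qe = {qe:.2f} mg/g, R = {r:.1f} %")   # Qe = 33.75 mg/g, R = 27.0 %
```

Note how a ~20 mg electrode in 50 mL of 50 mg·L−1 solution yields capacities on the same scale (tens of mg·g−1) as those reported below.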
Electrode Characterization

SEM characterization showed that both O-CF and Ami-CF were composed of fibers with a diameter of about 10 µm (Figure 2a,b). In contrast to the smooth surface of O-CF, the Ami-CF surface is evenly coated with a film made of carbon black nanoparticles and a polymer. Figure 2c,d compares the hydrophilicity/hydrophobicity of O-CF, PAN-CF and Ami-CF. O-CF holds water droplets stably on its surface for more than 6 h, PAN-CF is soaked by water droplets after 20 s, and Ami-CF is wetted the moment it contacts a water droplet (less than 0.1 s).
This is because the hydroxylamine reaction converts the nitrile group (-C≡N) of PAN into an amidoxime group. The Ami-CF electrode is more hydrophilic than the O-CF and PAN-CF electrodes owing to its abundant O- and N-containing functional groups (amidoxime functionalization), which facilitates the diffusion of Cr (VI) to the active sites on the electrode surface in aqueous solution [33]. FTIR results confirmed the presence of amidoxime functional groups on the surface of Ami-CF after the hydroxylamine reaction (Figure 2e). The characteristic peaks at 2924 cm−1 and 2854 cm−1 are due to the symmetric and asymmetric stretching of -CH2- on the O-CF surface [27]. The -C≡N peak at 2240 cm−1 indicates that PAN coats the CF before the hydroxylamine reaction, and this peak disappears after the reaction. Meanwhile, the peaks at 3100-3500 cm−1, 1641 cm−1 and 903 cm−1 represent the -NH/-OH, -C=N- and N-O of amidoxime, respectively [34], verifying the transformation of the nitrile group into the amidoxime group. XPS results further confirmed the existence of the amidoxime group on the surface of Ami-CF: Ami-CF shows a strong N 1s peak (Figure 2f), and peak deconvolution reveals N-H (398.3 eV), C=N (399.7 eV) and N-O (400.7 eV) groups [33].
Adsorption Kinetics and Adsorption Isotherms

The adsorption kinetics and isothermal adsorption of Cr (VI) on the carbon felt electrode materials are shown in Figure 3a,b. At an initial Cr (VI) concentration of 50 mg·L−1, the adsorption capacities of O-CF, PAN-CF and Ami-CF for Cr (VI) differ markedly over the course of adsorption. The adsorption of Cr (VI) on Ami-CF is a typical kinetic process: the adsorption capacity increases rapidly within the first 12 h, adsorption equilibrium is reached in about 36 h (taken as the equilibrium time in the subsequent isothermal adsorption experiments), and the highest adsorption capacity is 33.86 mg·g−1.
This is much higher than the equilibrium adsorption capacity of O-CF and PAN-CF (~1.1 mg·g−1). Table S1 lists the fitting parameters of the quasi-first-order and quasi-second-order kinetic models for the adsorption kinetics of Cr (VI) on Ami-CF; the fitting coefficients R2 are 0.976 and 0.993, respectively. Pseudo-first-order kinetics assume that the adsorption rate is controlled by the diffusion step, whereas pseudo-second-order kinetics assume that the adsorption rate is controlled by a chemisorption mechanism [35][36][37]. Therefore, the quasi-second-order kinetic equation better describes the adsorption of Cr (VI) on Ami-CF, indicating that the adsorption process is mainly controlled by chemical adsorption [32].
Langmuir and Freundlich isotherm models are often used to reveal the interaction mechanism between heavy metal ions and adsorbents. The Langmuir model assumes that metal ions deposit as a monolayer on a uniform interface, whereas the Freundlich isotherm model describes adsorption at a non-uniform interface [32]. The adsorption isotherms of Cr (VI) on the different carbon felt electrodes are shown in Figure 3b. Owing to the lack of effective adsorption sites, the saturated adsorption capacity of O-CF and PAN-CF for Cr (VI) is only 1.99-2.71 mg·g−1, whereas that of the amidoxime-bearing Ami-CF reaches 101.73 mg·g−1, 2.1-5.5 times the saturated adsorption capacity of biochar, oxidized composites and other adsorbents reported in the literature (Table S3) [30,38,39]. The fitting coefficient R2 of the Langmuir model for Cr (VI) adsorption on Ami-CF is 0.992, whereas that of the Freundlich model is 0.968 (Table S2). The Langmuir model is therefore more suitable for describing Cr (VI) adsorption on Ami-CF, indicating that the adsorption behavior is mainly a monolayer adsorption reaction [34]. The dimensionless separation factor RL is commonly used to judge whether the adsorption process is favorable; its expression is given in Equation (7):

RL = 1 / (1 + KL C0) (7)

A value of 0 < RL < 1 indicates favorable adsorption, RL > 1 unfavorable adsorption, RL = 1 linear adsorption and RL = 0 irreversible adsorption [4]. The RL values here range from 0.08 to 0.9, indicating that adsorption of Cr (VI) on Ami-CF is favorable.
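The separation factor of Equation (7) is easy to evaluate. The Langmuir constant below is a hypothetical value chosen so that the computed RL span roughly matches the reported 0.08-0.9 range; the fitted KL of this study is in Table S2.

```python
# Sketch of Equation (7): separation factor RL = 1 / (1 + KL * C0).
kl = 0.25   # L/mg, ASSUMED Langmuir constant for illustration only
for c0 in [0.5, 1, 5, 10, 25, 50]:   # initial Cr(VI) concentrations, mg/L
    rl = 1.0 / (1.0 + kl * c0)
    assert 0 < rl < 1                 # favorable adsorption at every C0
    print(f"C0 = {c0:5.1f} mg/L -> RL = {rl:.2f}")
```

The monotone decrease of RL with C0 is the numerical content of the statement that higher pollutant concentrations make the adsorption more favorable.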
In addition, with increasing initial Cr (VI) concentration, the RL value decreases continuously, indicating that the higher the concentration of pollutants, the more favorable the adsorption.

Factors Influencing the Removal Efficiency of Cr (VI)

In traditional DC-mediated treatment of wastewater containing heavy metals, parameter optimization is limited to varying the applied voltage, whereas in AC-mediated treatment the adjustable parameters generally include the positive and negative bias, frequency and duty ratio. Preliminary experiments showed that when the AC frequency is controlled at 400 Hz, the side reaction of water electrolysis is minimal. Therefore, the square-wave AC frequency was set to 400 Hz, and the influences of positive and negative bias, duty cycle, solution flow rate, initial solution pH, initial Cr (VI) concentration and coexisting ions on Cr (VI) removal by the AC electrochemical system were investigated. Two parallel treatments were run for each group of experiments.

Effects of Bias, Duty Cycle and Flow Velocity

The AC electrochemical system removes Cr (VI) from solution by applying an asymmetric square-wave bias. With the volume of Cr (VI) solution in the inlet bottle (50 mL), solution pH (2 ± 0.05), peristaltic-pump flow rate (0.5 mL·min−1) and duty ratio (20%) kept unchanged and the positive bias fixed at 1 V, the reduction and removal of Cr (VI) by the AC electrochemical system was studied at negative biases of −1.5 V, −2 V, −2.5 V, −3 V and −4 V. As shown in Figure 4a, the removal rate of Cr (VI) increases gradually with increasing negative bias, possibly because the higher applied potential accelerates the mass transfer and electron transfer of Cr (VI) in solution, favoring the reduction of Cr (VI) on the electrode surface.
When the negative bias increases from −1.5 V to −2.5 V, the removal rate of Cr (VI) increases from 95.97% to 99.98%, and the mean residual concentration of Cr (VI) is 8.33 µg·L−1, which meets the Cr (VI) limit of the national sanitary standard for drinking water (<0.05 mg·L−1). When the negative bias is increased further, the residual concentration of Cr (VI) remains below 4 µg·L−1. Therefore, −2.5 V was taken as the negative bias in the subsequent experiments. Similarly, with the negative bias fixed at −2.5 V, the electrochemical removal of Cr (VI) at positive biases of 0.5 V, 1 V, 1.5 V, 2 V and 2.5 V was studied. As shown in Figure 4b, as the positive bias increases from 0.5 V to 2.5 V, the removal rate of Cr (VI) first increases and then decreases. When the positive bias is too low, the Coulomb repulsion effect inhibits the electrochemical reduction of Cr (VI); when it is too high, the reduced Cr (III) may be re-oxidized to Cr (VI), so the removal rate eventually declines. Therefore, a negative bias of −2.5 V and a positive bias of 1 V were selected in the subsequent treatment experiments to ensure the best removal efficiency.
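For intuition, the asymmetric square-wave drive can be summarized numerically. Only the parameter values (400 Hz, +1 V/−2.5 V, 20% duty ratio) come from this section; the bookkeeping below is a sketch, not the waveform-generator configuration itself.

```python
# Numeric sketch of the asymmetric square-wave drive (parameters from the text).
freq = 400.0               # Hz
duty = 0.20                # fraction of each period at positive bias
v_pos, v_neg = 1.0, -2.5   # V

period_ms = 1000.0 / freq
t_pos_ms = duty * period_ms
v_mean = duty * v_pos + (1.0 - duty) * v_neg   # time-averaged potential

print(f"period = {period_ms:.2f} ms, positive-bias window = {t_pos_ms:.2f} ms")
print(f"time-averaged potential = {v_mean:.2f} V")
```

The net-negative average (−1.8 V under these settings) is consistent with reduction dominating each cycle, while the brief 0.5 ms positive window serves the adsorption/site-refresh role discussed in the mechanism section.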
The duty cycle refers to the fraction of each period spent at the high level (positive bias). With the other conditions controlled as in the studies above, duty ratios of 10%, 20%, 30%, 40% and 50% were applied to study the influence on the electrochemical reduction of Cr (VI). As shown in Figure 4c, when the duty ratio increases from 10% to 20%, the removal rate of Cr (VI) increases significantly, but when the duty ratio increases further from 20% to 50%, the removal rate decreases significantly. Too large a duty cycle may inhibit the reduction of Cr (VI), ultimately lowering the removal rate, whereas when the duty cycle is too low, the alternating current resembles a DC process with a stable negative bias, resulting in Coulomb repulsion during the electrocatalytic reduction of Cr (VI) [16]. At a duty cycle of 20%, a good balance is achieved between the capture of Cr (VI) oxyanions by the chelating sites on Ami-CF and the reduction of Cr (VI) to Cr (III), giving the best removal effect [2]. The effect of flow rate on Cr (VI) removal by the penetrating electrochemical treatment device is shown in Figure 4d. At low flow rates (0.1 mL·min−1, 0.3 mL·min−1), Cr (VI) in the solution is removed completely owing to the long residence time between the electrodes and the sufficient contact between Cr (VI) and the Ami-CF surface. At flow rates of 0.5 mL·min−1 and 0.7 mL·min−1, the removal efficiencies of Cr (VI) were about 99.98% and 95.76%, respectively.
When the flow rate is increased further, the shorter residence time weakens the adsorption-reduction process of Cr (VI) on the electrode surface, decreasing the removal efficiency. Therefore, the flow rate of the peristaltic pump was set to 0.5 mL·min−1.

Effects of Initial Cr (VI) Concentration and Solution pH

As shown in Figure 5a, the removal of Cr (VI) at different concentrations and pH values by the asymmetric AC electrochemical system was tested. When the feed pH was 1 ± 0.05, Cr (VI) was completely removed over a large concentration range (0.5-250 mg·L−1). When the pH of the feed solution was adjusted to 2 ± 0.05, the Cr (VI) removal rate decreased slightly but remained high (more than 85.04%) over the experimental concentration range (0.5-250 mg·L−1), and Cr (VI) in the range of 0.5-100 mg·L−1 was almost completely removed; the residual Cr (VI) was lower than 0.05 mg·L−1, meeting the hygienic limit for drinking water. When the pH of the feed solution was adjusted to 3 ± 0.05, the removal rate of Cr (VI) at concentrations of 0.5-50 mg·L−1 still reached more than 93.74%.

Nanomaterials 2021, 11, x FOR PEER REVIEW
However, when the concentration of Cr (VI) in the feed solution is too high, the removal rate decreases markedly. The main reasons for the decrease in the Cr (VI) removal rate with increasing pH are as follows: (1) According to the E (electric potential)-pH predominance diagram of chromium speciation [40], the reduction of Cr (VI) to Cr (III) under acidic conditions is thermodynamically favorable, because its standard potential increases with increasing proton concentration.
When the pH is ≥3, the standard potential of Cr (VI) reduction decreases, reducing the likelihood of the reaction [41]; (2) at pH 1-3, simulation with Visual MINTEQ 3.1 shows that Cr (VI) mainly exists as Cr2O72− and HCrO4− and participates in the reduction from Cr (VI) to Cr (III) [6]; the cathode reactions are given in Equations (8)-(11) [42]:

HCrO4− + 7H+ + 3e− → Cr3+ + 4H2O (8)

Cr2O72− + 14H+ + 6e− → 2Cr3+ + 7H2O (9)

2H2O + 2e− → H2↑ + 2OH− (10)

Cr3+ + 3OH− → Cr(OH)3 (11)

and (3) although the Cr (III) generated by the reaction does not begin to form Cr(OH)3 precipitate until pH > 4 [24], the hydrogen evolution reaction (Equation (10)) under negative potential creates a slightly alkaline local environment at the electrode, increasing the local surface pH [21]. Thus, insoluble Cr(OH)3 films or colloids may form on the electrode surface (Equation (11)) [42], hindering electron transfer and inhibiting further reduction of Cr (VI) [43]. Although the optimal removal efficiency is attained at pH = 1 ± 0.05, the pH of actual chromium-containing wastewater typically ranges from 1 to 3, and economic considerations should also be taken into account [24,44]. Therefore, a pH of 2 ± 0.05 was selected for the remediation of wastewater containing Cr (VI). In general, the removal of Cr (VI) by the AC electrochemical system decreases with increasing initial Cr (VI) concentration and pH, consistent with the literature [9,41]. It should be noted that the calculated flux of the electrochemical treatment device is 300 L h−1 m−2, and the contact time between Cr (VI) in solution and Ami-CF is only 30 s. Therefore, this method achieves rapid and efficient removal of medium- and high-concentration Cr (VI) wastewater at low pH, outperforming adsorption, precipitation, photocatalytic reduction and other methods [11,15].
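The stated flux can be sanity-checked against the operating flow rate. The electrode cross-section is not given in this excerpt, so the calculation below back-infers it; treat the area as an implied quantity, not a reported one.

```python
# Back-of-envelope check of the stated flux (300 L h^-1 m^-2)
# at the 0.5 mL/min operating flow rate.
flow_ml_min = 0.5
flow_l_h = flow_ml_min * 60.0 / 1000.0   # 0.03 L/h
flux = 300.0                              # L h^-1 m^-2, stated in the text

area_m2 = flow_l_h / flux                 # implied effective cross-section
print(f"implied electrode cross-section ~ {area_m2 * 1e4:.1f} cm^2")
```

The numbers are self-consistent for an effective cross-section of about 1 cm², a plausible scale for the carbon felt groove described in the device section.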
Removal of Cr (VI) in Multi-Ion Solutions

Considering the complexity of actual wastewater components [44], Cu2+, Zn2+ and Ca2+ were taken as representative coexisting heavy metal and alkaline-earth cations, and SO42−, CO32− and NO3− as representative anions, and their influence on the removal of Cr (VI) by this method was investigated. The optimized electrochemical and solution parameters were used as the experimental conditions; namely, the AC frequency was 400 Hz, the bias was (−2.5, 1) V, the duty ratio was 20%, the solution pH was 2 ± 0.05 and the peristaltic-pump flow rate was 0.5 mL·min−1. Portions of 50 mL of 100 mg·L−1 Cr (VI) solution containing different concentrations of Cu2+/Zn2+/Ca2+/SO42−/CO32−/NO3− were treated, with two parallel treatments per group. As shown in Figure 5b, under the optimized operating parameters, the AC electrochemical system achieves a 99.97% removal rate for a Cr (VI) solution with an initial concentration of 100 mg·L−1. The removal rate of Cr (VI) was further improved when the solution contained 50 mg·L−1 of coexisting Cu2+/Zn2+/Ca2+, possibly because introducing a suitable amount of Cu2+/Zn2+/Ca2+ increases the ionic strength and conductivity of the solution. The reduction potentials of Zn2+ and Ca2+ are very negative, so they act only as electrolytes in the reaction system and do not participate in chemical reactions. In contrast, Cu2+ is readily reduced to Cu+/Cu0 and acts as a reducing agent, enhancing the reductive removal of Cr (VI) [45]. However, when the concentrations of Cu2+, Zn2+ and Ca2+ were increased further, the removal rate of Cr (VI) began to decrease.
This may be because large numbers of positively charged Cu2+, Zn2+ and Ca2+ ions gather on the cathode surface through electrostatic attraction and compete with Cr (VI) for reaction sites on the electrode surface [46], reducing the removal of Cr (VI). Similar to the conductivity enhancement by the heavy metal and alkaline-earth ions, when the solution contains 50 mg·L−1 of coexisting SO42−/CO32−/NO3−, the removal rate of Cr (VI) does not decrease, although these anions have no promoting effect. There are two possible reasons: (1) compared with the reduction of S6+, C4+ and N5+ in SO42−/CO32−/NO3−, the reduction of Cr6+ to Cr3+ in HCrO4− is more likely [47,48]; and (2) for coexisting cations (Cu2+/Zn2+/Ca2+) and anions (SO42−/CO32−/NO3−) at the same mass concentration, the ionic strength contributed by the anions is smaller than that of the cations, so the conductivity of the solutions containing the coexisting anions is relatively low. When the concentrations of SO42−, CO32− and NO3− were increased further, the removal rate of Cr (VI) decreased slightly. This may be because large numbers of negatively charged SO42−, CO32− and NO3− ions accumulate on the anode surface through electrostatic attraction, inhibiting the reduction of Cr6+ when the electrode subsequently switches to the cathode. In addition, Visual MINTEQ 3.1 was used to simulate the distribution of chromium species in 100 mg·L−1 Cr (VI) solution at pH 2: Cr2O72− and HCrO4− were the dominant species, accounting for 6.4% and 93%, respectively, and the species distribution of Cr (VI) did not change significantly when different concentrations of Cu2+/Zn2+/Ca2+/SO42−/CO32−/NO3− were present (Table S4). It should be noted that the composition of real wastewater is certainly much more complex than that in the current experiment.
For example, various cations and anions, as well as dissolved organic matter, may coexist in real wastewater; further research should therefore be conducted to investigate the removal performance of the AC electrochemical system for real wastewater.

Direct/Alternating Current Electrochemical Method for Removing Cr (VI)

With the solution conditions (pH = 2 ± 0.05, flow rate 0.5 mL·min−1) and negative bias (DC −2.5 V; AC −2.5 V, 1 V) kept consistent, the removal of 0.5-250 mg·L−1 Cr (VI) by the DC and AC electrochemical methods was compared. The results are shown in Figure 6a. For 0.5-50 mg·L−1 Cr (VI), 100% removal was achieved by both the DC and AC methods. However, when the initial concentration of Cr (VI) increased further, the removal efficiency of the DC method decreased significantly: at feed concentrations of 100 mg·L−1 and 250 mg·L−1, it was 77.22% and 64.65%, respectively. In contrast, the AC method maintained a removal rate of 99.97% for 100 mg·L−1 Cr (VI), and even for 250 mg·L−1 Cr (VI) it still reached 85.37%, more than 20% higher than the DC method, further highlighting the superiority of the AC method.

Stability of Functional Electrode and Electrochemical Treatment Device

To test the stability of the functional electrode Ami-CF and the electrochemical treatment device, 50 mg·L−1 Cr (VI) solution was continuously injected into the electrochemical filtration device at 0.5 mL·min−1 by a peristaltic pump, with 50 mL taken as a single dose. The removal efficiency during long-term operation is shown in Figure 6b. Over a total of 10 runs, the removal rate remained stable above 99.9%, and the Cr (VI) concentration after treatment was lower than 0.05 mg·L−1, meeting the safety standard for Cr (VI) in drinking water stipulated by the World Health Organization.
This excellent performance indicates that the functional electrode Ami-CF has excellent recycling performance and that the electrochemical treatment device can maintain excellent removal efficiency over long periods. Meanwhile, Cr (VI) is converted into Cr (III), which can easily be separated and recovered by precipitation with alkali, reducing the overall cost. The long-term stable performance, together with the concept of turning waste into a resource, demonstrates that the proposed method has broad application prospects.

Figure 6. (a) Comparison of Cr (VI) removal by direct current (DC) and alternating current (AC) methods. (b) Cr (VI) removal as a function of treatment cycle by Ami-CF in the long-term experiment.
This excellent performance indicates that the functional electrode Ami-CF has excellent recycling performance and that the electrochemical treatment device can maintain high removal efficiency over long-term operation. Meanwhile, Cr (VI) was converted into Cr (III), which can be easily separated and recovered through precipitation with alkali, consequently reducing the overall cost. Thus, the long-term stable performance, together with the concept of turning waste into a resource, demonstrates that the proposed method has broad application prospects. Removal Mechanism To explore the removal mechanism of Cr (VI) by this method, Ami-CF samples after Cr (VI) adsorption, after DC electrochemical treatment, and after AC electrochemical treatment were characterized by XPS. After carbon correction, there was no characteristic Cr peak on the surface of unused Ami-CF, which served as the control (Figure 7a). In Figure 7b, after Cr (VI) adsorption, Ami-CF shows characteristic peaks of Cr (III) 2p1/2, Cr (VI) 2p1/2, Cr (VI) 2p3/2 and Cr (III) 2p3/2 at 588.8 eV, 586.7 eV, 579.6 eV and 577.3 eV, respectively [49]. This is consistent with the adsorption results of Cr (VI) by Ami-CF (Figure 3a,b), further confirming the strong adsorption capacity of the amidoxime group in Ami-CF for Cr (VI). In addition, the -OH and -NH functional groups in Ami-CF can reduce Cr (VI) to Cr (III), which is then adsorbed on the electrode surface [50,51]. In Figure 7c,d, Ami-CF after DC and AC treatment shows prominent characteristic peaks of Cr (VI) and Cr (III) at 586.7 eV, 586.6 eV, 577.3 eV and 577.2 eV, respectively. Therefore, it is speculated that the reduction and removal of Cr (VI) under AC mediation mainly comprises the following three steps (Figure 7e). In step (1), without applied voltage, the oxygen-containing anions of Cr (VI) are randomly distributed in the aqueous solution.
In step (2), when a positive bias is applied, the oxygen-containing anions of Cr (VI) migrate toward the anode under the applied electric field and are adsorbed by the amidoxime groups on the Ami-CF surface, forming an electric double layer on the electrode. In step (3), the voltage switches and cathodic electron transfer reduces Cr (VI) to Cr (III), which is released into the solution; at the same time, the previously occupied active sites are recovered. In subsequent reactions, new Cr (VI) oxygen-containing anions can again undergo adsorption, fixation, and reduction, maintaining the continuous operation of the whole process. Conclusions In this study, we provide a stable and efficient treatment method for wastewater containing Cr (VI). To improve adsorption and the subsequent electroreduction of Cr (VI) to Cr (III), a facile fabrication of Ami-CF from commercial carbon felt by amidoxime functionalization was developed; the resulting electrode has excellent hydrophilicity and a strong adsorption capacity for Cr (VI), with a saturated adsorption capacity of 101.73 mg·g−1.
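The three-step cycle above can be caricatured as a toy state machine: adsorption under positive bias, then reduction and site regeneration when the bias switches. The per-cycle site capacity and one-pass kinetics below are our own illustrative assumptions, not the authors' model.

```python
# Toy sketch of the AC-mediated adsorption -> reduction/release cycle described
# above. Site capacity and per-cycle behavior are illustrative assumptions only.

def run_ac_cycles(cr6_mg: float, site_capacity_mg: float, n_cycles: int):
    """Return (Cr(VI) left in solution, Cr(III) released) after n AC cycles."""
    cr3_released = 0.0
    for _ in range(n_cycles):
        # Step (2): positive bias -- Cr(VI) anions migrate and are adsorbed
        adsorbed = min(cr6_mg, site_capacity_mg)
        cr6_mg -= adsorbed
        # Step (3): bias switches -- adsorbed Cr(VI) is reduced to Cr(III) and
        # released into solution, freeing the active sites for the next cycle
        cr3_released += adsorbed
    return cr6_mg, cr3_released

print(run_ac_cycles(100.0, 30.0, 4))  # (0.0, 100.0)
```

The point of the sketch is only that site regeneration in step (3) is what lets a finite number of active sites process an unbounded amount of Cr (VI) over repeated cycles.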
Coupled with the Ami-CF electrode, the Coulomb repulsion effect and the water-electrolysis side reaction were suppressed under high-frequency anode-cathode switching (asymmetric AC), and the diffusion mass transfer rate of Cr (VI) in the solution was promoted. Consequently, asymmetric AC electrochemistry based on Ami-CF can rapidly (within 30 s) bring 0.5-100 mg·L−1 Cr (VI) down to the WHO drinking water safety standard at a high flux of 300 L h−1 m−2. Long-term operation over a total of 10 cycles demonstrated high, stable and efficient removal performance. These results indicate that asymmetric AC electrochemistry coupled with Ami-CF exhibits great application potential for treating medium-to-high concentration Cr (VI)-containing wastewater. Further study is needed to scale up the asymmetric AC electrochemical system and to treat real wastewater with a much more complex composition.
v3-fos-license
2020-05-06T14:50:25.376Z
2020-05-06T00:00:00.000
218513195
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmccardiovascdisord.biomedcentral.com/track/pdf/10.1186/s12872-020-01495-0", "pdf_hash": "1db5d191752ee58a33a140431dbc96e633f7ae5d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1437", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "1db5d191752ee58a33a140431dbc96e633f7ae5d", "year": 2020 }
pes2o/s2orc
Identification of foam cell biomarkers by microarray analysis Background Lipid infiltration and inflammatory response run through the occurrence of atherosclerosis. Differentiation into macrophages and foam cell formation are key steps of AS. The aim of this study was to analyze differential gene expression between foam cells and macrophages in order to identify the key links in foam cell formation, explore the pathogenesis of atherosclerosis, and provide targets for the early screening and prevention of coronary artery disease (CAD). Methods The gene expression profiles of GSE9874 were downloaded from Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE9874), performed on the GPL96 [HG-U133A] Affymetrix Human Genome U133 platform. A total of 22,383 genes were analyzed for differentially expressed genes (DEGs) using the Bayes package. GO enrichment analysis and KEGG pathway analysis of the DEGs were performed using KOBAS 3.0 software (Peking University, Beijing, China). STRING software (STRING 10.0; European Molecular Biology Laboratory, Heidelberg, Germany) was used to analyze the protein-protein interactions (PPI) of the DEGs. Results A total of 167 DEGs between macrophages and foam cells were identified. Compared with macrophages, 102 genes were significantly upregulated and 65 genes were significantly downregulated (P < 0.01, fold-change > 1) in foam cells. By GO enrichment analysis, the DEGs were mainly enriched in 'sterol biosynthetic and metabolic process' and 'cholesterol metabolic and biosynthetic process'. KEGG pathway analysis showed that the differential genes are involved in biological processes through 143 KEGG pathways. A PPI network of the DEGs was constructed, and 10 outstanding genes of the PPI network were identified using Cytoscape: HMGCR, SREBF2, LDLR, HMGCS1, FDFT1, LPL, DHCR24, SQLE, ABCA1 and FDPS. Conclusion Lipid metabolism related genes and molecular pathways were the key to the transformation of macrophages into foam cells.
Therefore, lipid metabolism disorder is the key to turning macrophages into foam cells, which plays a major role in CAD. Background With the development of the global economy, metabolic diseases such as hypertension, diabetes and obesity are increasing, and coronary artery disease (CAD) remains one of the major diseases threatening human health in this century. In particular, although the diagnosis and treatment of CAD have developed greatly, its incidence and the trend toward younger onset are still unavoidable, bringing a huge economic and psychological burden. Atherosclerotic plaque accumulation in the epicardial arteries is the main pathological mechanism of CAD [1]. Lipid infiltration and inflammatory response run through the occurrence of atherosclerosis (AS) [2]. Endothelial cell dysfunction, expression of cellular adhesion molecules, lipid retention, monocyte recruitment and differentiation into macrophages, foam cell formation, proteolysis, apoptosis, and angiogenesis are the key steps of AS [3]. Each of these mechanisms and potential diagnostic and therapeutic targets has been extensively studied. However, the mechanism of CAD has not been fully elucidated. Gene expression profile data have increased rapidly in recent years, and bioinformatics is widely used to analyze large volumes of such data, providing new insights into the pathogenesis of CAD and a theoretical basis for early diagnosis, prevention and treatment-target selection [4]. Because foam cells are the characteristic pathological cells of AS, they can be used to probe the underlying mechanisms of CAD by detecting differentially expressed genes. In this study, differential gene expression between foam cells and macrophages was analyzed to identify the key links in foam cell formation, so as to explore the pathogenesis of atherosclerosis and provide targets for the early screening and prevention of CAD.
Microarray data The gene expression profiles of GSE9874 were downloaded from Gene Expression Omnibus (GEO) (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE9874). GSE9874 was performed on GPL96 [HG-U133A] Affymetrix Human Genome U133. The GSE9874 data set contained 60 samples, including 15 non-AS-macrophage samples from subjects without AS, 15 AS-macrophage samples from atherosclerotic tissues, 15 non-AS-foam cell samples from subjects without AS and 15 AS-foam cell samples from atherosclerotic tissues. Macrophages were obtained from human white blood cells of fifteen subjects with atherosclerosis/family history of CAD and from fifteen subjects (sex and age matched) without atherosclerosis/family history of CAD. After collection, all monocyte-derived macrophages from peripheral blood were cultured in the absence or presence (foam cells) of ox-LDL from all subjects (healthy and atherosclerotic). Principal component analysis (PCA) The processed data were downloaded using the R package GEOquery. The mRNA expression levels of targeted patients and controls were extracted from all the samples and transformed to log2 scale before further analysis. PCA was performed, and the results are shown in Fig. 1; it was difficult to distinguish gene expression among the groups. Based on the results in Fig. 1, the 60 samples were divided into two groups: 30 foam cell samples and 30 macrophage samples. The results of this PCA are shown in Fig. 2, which better distinguishes the gene expression of each group. Identification of differentially expressed genes (DEGs) A total of 22,383 genes were analyzed for DEGs using the Bayes package. Gene ontology (GO) enrichment analysis and KEGG pathway analysis GO enrichment analysis and KEGG pathway analysis for differentially expressed genes were performed using KOBAS 3.0 software (Peking University, Beijing, China), which can be accessed at https://kobas.cbi.pku.edu.cn.
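A minimal sketch of the log2-transform-then-PCA step described above, using an SVD-based PCA in NumPy on toy data. The real analysis was done in R; the matrix dimensions and random values here only mirror the study's 60-sample layout.

```python
import numpy as np

# Toy expression matrix: 60 samples x 500 genes (dimensions mirror the study;
# values are random and purely illustrative).
rng = np.random.default_rng(0)
expr = rng.uniform(1.0, 1000.0, size=(60, 500))

log_expr = np.log2(expr + 1.0)                # log2 scale, as in the Methods
centered = log_expr - log_expr.mean(axis=0)   # center each gene across samples

# SVD-based PCA: project the samples onto the first two principal components,
# giving the 2D coordinates one would scatter-plot to compare groups.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U[:, :2] * S[:2]
print(scores.shape)  # (60, 2)
```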
Protein interaction network analysis STRING software (STRING 10.0; European Molecular Biology Laboratory, Heidelberg, Germany) was used to analyze the protein-protein interactions (PPI) of the differentially expressed genes. PPI refers to the formation of protein complexes by two or more protein molecules through non-covalent bonds. STRING can be accessed at https://string-db.org/. Screening of differentially expressed genes A total of 167 differentially expressed genes between macrophages and foam cells were identified from gene chip GSE9874. Compared with macrophages, 102 genes were significantly upregulated and 65 genes were significantly downregulated (P < 0.01, fold-change > 1) in foam cells, which were plotted in the form of volcano plots (Fig. 3). The top 100 genes are listed in the heatmap (Fig. 4). GO enrichment analysis and KEGG pathway analysis The first 10 enrichment processes of the GO enrichment analysis are listed in Fig. 5. Differentially expressed genes were mainly enriched in 'sterol biosynthetic and metabolic process' and 'cholesterol metabolic and biosynthetic process'. The results of KEGG pathway analysis showed that all differential genes are involved in biological processes through 143 KEGG pathways. The first 10 KEGG pathways are listed in Table 1. Discussion The results of this study showed that the differential genes mainly regulated the transformation of macrophages into foam cells by up-regulating and down-regulating the sterol biosynthetic and metabolic process and the cholesterol metabolic and biosynthetic process. This was mainly achieved by acting on different targets of steroid biosynthesis, metabolic pathways, the PPAR signaling pathway, the MAPK signaling pathway, glycerolipid metabolism, cytokine-cytokine receptor interaction, etc. Further analysis of the PPI network showed that HMGCR, SREBF2, HMGCS1, LDLR, FDFT1, LPL, SQLE, DHCR24, ABCA1 and FDPS were the 10 genes that played a core role in the interaction network.
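The DEG screen amounts to a simple threshold filter on P-value and fold-change. The sketch below is ours, not the study's code; in particular, applying the fold-change cutoff symmetrically to the log2 fold-change in each direction is our reading of "fold-change > 1" rather than a detail stated in the text.

```python
# Hedged sketch of the DEG screen: thresholds from the text are P < 0.01 and
# fold-change > 1; applying the cutoff to log2FC in both directions is assumed.

def screen_degs(genes, p_cut=0.01, lfc_cut=1.0):
    """genes: iterable of (name, log2_fold_change, p_value) tuples.

    Returns (upregulated, downregulated) gene name lists.
    """
    up = [g for g, lfc, p in genes if p < p_cut and lfc > lfc_cut]
    down = [g for g, lfc, p in genes if p < p_cut and lfc < -lfc_cut]
    return up, down

# Toy rows, not real GSE9874 values.
toy = [("HMGCR", 2.3, 0.001), ("ABCA1", -1.8, 0.004), ("GAPDH", 0.1, 0.9)]
print(screen_degs(toy))  # (['HMGCR'], ['ABCA1'])
```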
3-Hydroxy-3-methylglutaryl CoA (HMG-CoA) is an important intermediate in cholesterol synthesis. HMGCR and HMGCS1 respectively encode two important enzymes that regulate the synthesis and further transformation of HMG-CoA. HMGCS1 encodes HMG-CoA synthase, which mediates the first step of the pathway, converting acetyl-CoA and acetoacetyl-CoA into HMG-CoA. HMG-CoA is then reduced by HMG-CoA reductase (encoded by HMGCR) into mevalonate in the rate-limiting step of the reaction [5]. As the rate-limiting enzyme of cholesterol synthesis, HMG-CoA reductase is the target of a variety of physiological hormones and drugs that regulate the efficiency of cholesterol synthesis [6]. The sterol-regulatory element binding transcription factors (SREBFs) have been shown to be primarily involved in cellular cholesterol homeostasis; they regulate the expression of low-density lipoprotein (LDL) receptors, enabling hepatocytes to remove cholesterol contained in LDL particles from the bloodstream [7]. The SREBFs comprise three isoforms: SREBF-1a, SREBF-1c, and SREBF-2. The SREBF-2 gene codes for SREBP-2, which is a key regulator of cholesterol. When cells are deprived of cholesterol, proteolytic cleavage releases the NH2-terminal domain of SREBP-2, which binds and activates the promoters of SREBP-2-regulated genes, including the genes encoding the LDL receptor (LDLR), HMG-CoA synthase, and HMG-CoA reductase. Thus, SREBP-2 gene activation leads to enhanced cholesterol uptake and biosynthesis [8]. In addition, SREBF-2 variants were associated with premature CAD [9]. Seung-soon Im et al.
found that SREBP-1a not only activates genes required for lipogenesis in macrophages but also the gene encoding Nlrp1a, a core inflammasome component [10]; SREBP2 forms a complex with SREBP cleavage-activating protein (SCAP), and this SCAP-SREBP2 complex was required for optimal activation of the NLRP3 inflammasome both in vitro and in vivo [11], promoting a local inflammatory response in the arterial wall. LDLR is an integral membrane protein that is most abundantly expressed in the liver, and binds to and removes LDL-C from the circulation by endocytosis [12,13]. SREBP-2 can negatively regulate the expression level of the LDLR gene at the transcriptional level [14]. Additionally, post-translational regulation of LDLR is primarily governed by PCSK9 [15], and post-transcriptional regulation of LDLR is mainly achieved through modulation of its mRNA stability [16]. Farnesyl diphosphate farnesyl transferase 1 (FDFT1) encodes squalene synthase, another key enzyme for the synthesis of sterols, and ultimately cholesterol [17]. The human FDFT1 gene spans over 40 kb on chromosome 8p23 [18] and is ubiquitously expressed in human tissues, with particularly high expression in the hypothalamus and liver [19,20]. The FDFT1 gene has several isoforms, with the most common containing eight exons. The promoter of the gene contains three SRE-like sequences (SRE-1, Inv-SRE-3 and SRE-1), which are located between 198 and 127 bp upstream of the predominant transcription start site [21,22]. SREBPs bind to these SRE-like sequences to regulate transcription of the FDFT1 gene [21,23]. Lipoprotein lipase (LPL), encoded by the LPL gene, hydrolyses triglycerides in circulating chylomicrons, LDL and very low-density lipoproteins (VLDL) to yield non-esterified fatty acids (NEFA) and 2-monoacylglycerol for tissue utilization [24]. The catalytic activity of plasma LPL can reduce the plasma TG level and increase the HDL-C level, and thus appears to be antiatherogenic [25].
Studies have confirmed that activation of the peroxisome proliferator-activated receptor (PPAR) on the nuclear membrane increases the gene expression of LPL [26,27], which is a target for drugs that lower triglycerides. On the other hand, the noncatalytic activity of lipoprotein lipase can enhance atherosclerosis through bridging and selective uptake of CE [28], but the mechanism is more complex. 3β-Hydroxysterol Δ24-reductase (DHCR24) encodes the cholesterol-synthesizing enzyme seladin-1, which catalyzes the final step of the Bloch cholesterol synthetic pathway [29]. Like many cholesterol synthetic genes, DHCR24 is transcriptionally regulated by sterols via SREBF [30]. Due to its critical role in cholesterol synthesis, DHCR24 is a prime candidate control point for the regulation of cholesterol besides HMG-CoA reductase [31]. In addition, independent of cholesterol metabolism, Fei Han et al. found that DHCR24 attenuates cardiac infarction and dysfunction through an antiapoptotic effect [32]. Squalene epoxidase (SQLE) encodes a monooxygenase, the second rate-limiting enzyme in cholesterol biosynthesis, which catalyzes the first oxygenation step in sterol biosynthesis [33]. SQLE exerts this effect through the action of two key downstream metabolites, cholesteryl ester and nicotinamide adenine dinucleotide phosphate (NADP+) [34]. Additionally, SQLE is also a target of SREBP-2, and binding sites for the two transcription factors SP1 and NF-Y have been found in SQLE [35]. Efflux of cholesterol is mediated by cholesterol transport proteins including adipocyte ATP-binding cassette A1 (ABCA1), adipocyte ATP-binding cassette G1 (ABCG1) and the class B scavenger receptor (SR-BI) [36]. It is now well established that ABCA1 plays a critical role in the prevention of macrophage foam cell formation and atherosclerosis by mediating the active transport of intracellular cholesterol and phospholipids to apoA-I, the major lipoprotein in HDL [37].
Farnesyl diphosphate synthase (FDPS) is a branch-point enzyme in the synthesis of sterols and isoprenylated cellular metabolites. FDPS catalyzes the conversion of isopentenyl pyrophosphate and dimethylallyl pyrophosphate to geranyl pyrophosphate and farnesyl pyrophosphate, which are protein prenylation substrates [38]. FDPS is mainly known to mediate immunoregulatory functions [39,40], and its activity and expression have also been documented in human colon cancer [41] and certain other neoplastic disorders. Therefore, it may be a potential target for cancer treatment. In this study, it was found that an interaction network with HMGCR, SREBF-2 and other genes at its core was involved in the formation of foam cells. This is not only a mechanism leading to the formation of atherosclerosis; the further induction of a local inflammatory response in the vascular wall is another major cause of AS formation. However, which of these comes first, and which is more important, remains to be clarified. The cause-and-effect relationship between lipid deposition and the inflammatory response, as well as its core link, needs further study. Conclusion In this study, it was found that lipid metabolism related genes and molecular pathways were the key to the transformation of macrophages into foam cells. Currently widely used and effective lipid-lowering drugs not only reduce lipid levels but also further reduce ASCVD risk through lipid lowering. Although the mechanism of action has been partially clarified, the results of this study further suggest that the ability of monocytes to differentiate into macrophages and further turn into foam cells may be related to premature CAD, and that this mechanism may be mainly related to genes involved in lipid metabolism.
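Identifying the core genes of a PPI network is typically done by ranking nodes on a centrality measure such as degree, as Cytoscape-based analyses commonly do. The sketch below shows that idea on a toy edge list; the edges are invented for illustration and are not the study's STRING network.

```python
from collections import Counter

# Toy sketch: rank hub genes by degree in an undirected PPI edge list.
# The edges below are illustrative only, not data from STRING or this study.

def top_hubs(edges, k):
    """Return the k highest-degree nodes from a list of (a, b) edges."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return [gene for gene, _ in degree.most_common(k)]

edges = [("HMGCR", "SREBF2"), ("HMGCR", "LDLR"),
         ("SREBF2", "LDLR"), ("HMGCR", "SQLE")]
print(top_hubs(edges, 1))  # ['HMGCR']
```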
v3-fos-license
2023-08-28T15:03:41.360Z
2023-08-25T00:00:00.000
261220403
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/tgis.13098", "pdf_hash": "2c968d56f6b9cc469f348fa773f815c13309e48d", "pdf_src": "Wiley", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1438", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "b420414d8101f771d9e4349955035370cdbaf492", "year": 2023 }
pes2o/s2orc
Modelling underground cadastral survey data in CityGML In underground environments, survey elements such as survey points and observations provide the information required to define legal boundaries. These elements are also used to connect underground legal spaces to a geodetic survey network. Due to the issues of current 2D approaches for managing underground cadastral data, prominent 3D data models have been extended to support underground land administration. However, previous studies mostly focused on defining underground legal spaces and boundaries, with less emphasis on survey elements. This research aims to extend CityGML to support underground cadastral survey data. The proposed extension is based on the survey elements elicited from underground cadastral plans, which is then implemented for an underground case study area in Melbourne, Australia. This extension integrates underground survey data with legal and physical data in a 3D digital environment and provides an improved representation of survey elements, facilitating the management and communication of underground cadastral survey data. 
New underground developments such as tunnels require drafting multiple pages of survey plans to show vertically stratified legal spaces in the subsurface areas (Strack, 2021). Current practices use 2D survey plans, cross-sectional diagrams, and textual notations (e.g., elevation information) to define the legal extent of underground assets. The difficulties with interpretation make survey plans barely usable for the public. This has inevitably raised the question: what is the extent of subsurface rights? (Strack, 2021). To answer this question, it is first necessary to define the ownership of underground space clearly and secondly to model it precisely and correctly. The first part is related to the legal aspects of Underground Land Administration (ULA), but the second part is a technical matter (Saeidian et al., 2023). The development of a 3D land administration model is the technical part, which studies the process of adding 3D legal objects to a data model (Hassan & Abdul Rahman, 2011). The 3D models derived from such data models can improve the communication and interpretation of land administration data in underground areas. Consequently, there is a growing trend towards adopting 3D data models in the realm of land administration (Asghari et al., 2021).
Surveying is the main form of capturing underground data for land administration (Aien, 2012). The cadastral surveying process involves defining, identifying, demarcating, measuring, and mapping new/changed legal boundaries (Grant et al., 2020). In current practice, survey plans are used to define the spatial extent of underground legal spaces and boundaries as well as relevant attributes and relationships. In addition to underground legal data (legal spaces and boundaries), the original cadastral surveying data collected in the field is also recorded and communicated using the same 2D survey plans or separate documents. This surveying data is a summary of a cadastral survey work that provides the information required to define legal boundaries and connect underground legal spaces to a geodetic survey network. Survey documents provide various types of survey measurements and their attributes. For example, there is a range of survey points such as control points, traverse points, and boundary points. In addition, there exist several types of survey observations such as traverse, radiation, and boundary observations. Elevation information is also provided in the survey plans and documents. A 3D data model supporting digital land administration should provide entities to define not only underground legal data but also the cadastral survey elements. A 3D data model is the basis for creating a 3D integrated digital model. It can potentially provide an effective approach to managing and communicating ULA data components, including survey elements (survey data), the geometric and semantic information about the physical reality of underground assets such as utilities and tunnels (physical data), as well as legal spaces and boundaries (legal data) in subterranean spaces. However, the existing 3D integrated data models mostly focus on underground physical and legal data, with less emphasis on survey data elements.
Although some data models such as LandInfra and LADM are rich in defining survey elements and legal data, prominent 3D physical data models such as IFC and CityGML also have their use cases and benefits for ULA, as discussed in some studies (Atazadeh et al., 2022; Saeidian et al., 2023). Therefore, these data models have been extended to model underground legal data in order to provide a 3D integrated digital model which defines the physical reality of underground assets and their corresponding legal spaces and boundaries. Since IFC and CityGML are limited in defining survey elements, the proposed integrated data models cannot fully support survey data. In this regard, an IFC-based integrated model was enriched with survey data by Atazadeh et al. (2021). However, there is a knowledge gap in exploring the potential of CityGML to support survey data elements. CityGML is a leading standard for 3D city modelling, which is used widely in the geospatial domain. Several cities apply the CityGML data structure to manage and communicate their 3D city models, which serve a wide range of applications from land use planning to cadastre (Lippold, 2022). In this regard, a range of CityGML Application Domain Extensions (ADEs) has been developed in several domains (Biljecki et al., 2018). Some studies have also investigated and suggested CityGML for land administration purposes (Halim et al., 2021; Nega & Coors, 2022; Saeidian et al., 2023a, 2023b; Siew et al., 2021). However, this data model does not support cadastral survey elements in the current version (CityGML 3.0), and the studies also ignored defining these elements in their proposed extensions. CURRENT STANDARDS FOR MODELLING UNDERGROUND SURVEY DATA This section reviews the current 3D data models which include surveying data elements. The review specifically focuses on assessing the capability of these data models in terms of modelling underground survey measurements. The 3D data models assessed in the study include LADM
(ISO, 2012), ePlan (ICSM, 2010), LandInfra (Scarponcini et al., 2016), as well as those contributions that enriched the IFC (ISO, 2013) and CityGML (Kolbe et al., 2021) standards. Table 1 presents the entities provided by these data models to define different underground cadastral survey elements (the next section explains these survey elements in detail). LADM is a leading international 3D data model in the land administration domain. In recent years, several studies have worked on developing LADM-based country profiles for underground areas (Dželalija & Roić, 2022; Janečka & Bobíková, 2018; Kim & Heo, 2017; Radulović et al., 2019; Ramlakhan et al., 2023; Saeidian et al., 2022; Silva & Carneiro, 2020; Yan et al., 2019, 2021). However, these studies mostly focused on the legal spaces attached to underground assets. In this area, Soffers (2017) used LADM as a template to design a data model in order to link survey elements to legal boundaries in the Netherlands. For modelling cadastral survey elements, Kalogianni et al.
(2021) also proposed a refined survey model for LADM considering the interoperability between the LADM and LandInfra standards. Since LADM focuses on land administration, it provides some feature classes for modelling cadastral survey elements, as presented in Table 1. For example, the LA_Point class from the Surveying and Representation sub-package defines cadastral survey points. The LA_BoundaryFaceString feature class is also provided for defining boundary lines and curves. However, this entity does not define some critical attributes of observations such as bearings and distances. In addition, this data model does not support other survey observations such as radiation observations. In LADM, elevation information can be defined for points, but there is no specific class to define survey surfaces (see the next section for more information about these surfaces). Since the geometries of survey observations and surfaces are lines, curves, or surfaces, it is possible to geometrically represent these elements using the LA_BoundaryFaceString and LA_BoundaryFace feature classes. However, this mapping approach does not consider semantics and relevant attributes. This creates ambiguity for users such as surveyors who want to reuse survey data in a 3D digital model. LADM is also a conceptual data model without any encodings. This standard is limited to the legal aspects and suggests external classes for modelling underground physical objects such as pipelines and cables (Lemmen et al., 2015). Finally, it should be mentioned that the current version of LADM is being revised/extended, and the new version is expected to cover more information related to surveying and data acquisition approaches as well as accuracies, such as new attributes and code lists for surveying techniques and platforms (Kalogianni et al., 2021; Lemmen et al., 2019; Van Oosterom et al., 2019).
ePlan is another cadastral data model, developed in Australia based on LandXML. As seen in Table 1, this data model provides several entities to define different survey elements. For example, the Points and Observation packages provide entities for defining survey points and observations, respectively. However, survey elevation information is limited to points, and elevation surfaces are not defined. In addition, similar to LADM, this data model faces the challenge of addressing physical aspects and does not support physical data. An integrated 3D model needs to include not only underground cadastral survey elements and legal information but also information about the physical reality of underground assets (Saeidian et al., 2022c). Finally, the capabilities of ePlan are limited in terms of supporting the requirements of three-dimensional land administration (Aien, 2012). LandInfra is another standard developed to model both land and infrastructure information. This data model provides the Survey package with three sub-packages (Equipment, SurveyResults, and Observations) for information related to observations, processes and their results gathered during survey works (Scarponcini et al., 2016).
Therefore, LandInfra has several entities that can be used to model survey elements, as presented in Table 1. For example, the SurveyMark class can be used to define survey points. The SurveyObservation class and its subclasses also cover a wide range of survey observations such as angular and distance observations, total station observations, level observations, GNSS observations, point clouds, and image observations. The LevelObservation class can also be used to represent elevation information; the deltaHeight attribute of this class can be used to define the elevation. TABLE 1. The entities provided by some well-known data models to define underground cadastral survey elements. The entities provided by LandInfra for modelling survey data need to be customised based on jurisdictional cadastral surveying requirements (e.g., adding the required attributes). In addition, while LandInfra has been suggested as a potential model for a 3D cadastre (Bydłosz & Bieda, 2020), there are still a limited number of studies that have utilised it for land administration purposes.
LADM, ePlan, and LandInfra are rich in defining survey elements, as presented in Table 1. In particular, LandInfra provides several entities to define these elements. However, these data models need to be customised based on jurisdictional requirements (e.g., the required enumerations and attributes for the elements). In addition, they need to be enriched to fully support all underground cadastral survey elements. LADM is defined at a conceptual level and needs to be encoded in another data structure. Furthermore, LADM and ePlan do not support physical aspects. This research considers a 3D integrated model that supports not only survey elements but also a wide range of underground data components, such as underground physical assets and their corresponding legal spaces and boundaries. In other words, this research aims to develop a data model for modelling survey elements, but this data model should be part of a 3D integrated underground data model that meets the requirements of a 3D digital environment for managing and communicating the physical, legal, and survey elements in underground areas.

Previous studies described the applications and benefits of a 3D integrated model (Aien et al., 2015; Saeidian et al., 2023). Saeidian et al. (2021) considered three approaches for developing a 3D integrated data model.
The first approach is to develop a new 3D data model for all data requirements (components), which is a time-consuming and costly approach. The second approach is to interlink the data models that are rich in modelling specific data component(s). For example, LandInfra is rich in survey data, LADM is a well-known conceptual standard for defining legal data, and IFC and CityGML are prominent data models for 3D modelling of physical objects at the building and city scales, respectively. However, interlinking these standards requires addressing geometric conversion and semantic interoperability (Atazadeh et al., 2017b; Saeidian et al., 2021). For example, some studies worked on the interoperability between LandInfra and LADM (Kalogianni et al., 2021; Lemmen et al., 2021; Stubkjaer et al., 2018), the integration of IFC and LADM (Oldfield et al., 2017; Ramlakhan et al., 2023), CityGML and LADM (Góźdź et al., 2014; Gürsoy Sürmeneli et al., 2022; Li et al., 2016), CityGML and IFC (Hajji et al., 2021; Rashidan et al., 2021), and LADM, IFC, and CityGML (Mi, 2019; Sun et al., 2019) in the land administration domain. In addition, jurisdictions may be reluctant to use more than one data model to store, manage, and communicate all data components, since this would potentially increase the cost and time of acquiring, establishing, and training staff on multiple data models. The third approach is to extend existing data models to cover all data requirements. For example, some studies extended IFC (Atazadeh, Kalantari, Rajabifard, Ho, et al., 2017; Atazadeh, Kalantari, Rajabifard, Ho, & Champion, 2017) and CityGML (Halim et al., 2021; Nega & Coors, 2022; Saeidian et al., 2023a, 2023b; Siew et al., 2021) to create a 3D integrated data model and reported this as a viable approach. Although these studies developed a 3D integrated data environment, they only considered physical and legal data without incorporating survey data elements into the developed integrated model.
Available data models such as CityGML, IFC, LADM, and LandInfra have their use cases in the land administration domain (Atazadeh et al., 2022). For instance, LandInfra focuses on infrastructure; IFC is more appropriate for use cases that require a building-scale model (e.g., condominium registration); and for use cases such as creating 3D digital property maps for an entire jurisdiction and planning and constructing large-scale tunnels and utilities, a city-scale model like CityGML is required. The selection of the approach and the data model(s) to develop a 3D integrated model therefore depends on the use case. The next section describes these elements in detail.

| UNDERGROUND CADASTRAL SURVEY DATA REQUIREMENTS

Underground cadastral survey elements are defined based on the knowledge gained from investigating current practices in several jurisdictions, such as the Netherlands (Soffers, 2017) and Victoria (Atazadeh et al., 2021; Surveyor-General Victoria, 2021, 2022).

| Survey points

The most important survey points are control points, reference points, traverse points, and boundary points. All cadastral surveys need to be connected to at least two control points in Victoria (Surveyor-General Victoria, 2021). GNSS would be used to connect cadastral surveys to the datums (Surveyor-General Victoria, 2021).
However, for underground areas, this technique is not possible because of line-of-sight barriers. Therefore, traversing needs to be done between control points and cadastral surveys through the entrances of underground structures or before burying them. Control, reference, and traverse points are used for the traversing. There are two types of control points in Victoria: permanent marks (PMs) and primary cadastral marks (PCMs).

| Survey observations

The most important survey observations are traverse, radiation, connection, and boundary observations. The traverse observation refers to the traversing between control, reference, and traverse points. The radiation observation connects control, reference, and traverse points to the legal space corners (LandVictoria, 2019). Consequently, the legal spaces are connected to a geodetic survey network. Boundary observations are the measurements used to define legal surveyed boundaries (for more information about surveyed boundaries, see Saeidian et al., 2023a). Figure 3 shows some examples of traverse, radiation, and boundary observations in the AFR of the underground tunnel of Figure 2a.

All legal objects need to be connected to the adjacent lands. A primary parcel needs to be connected to the surrounding road or crown unclosed abuttal parcels. A secondary interest must also be connected to primary parcels by sharing a corner or through a special observation (connection observation) from one of its corners to a primary parcel corner (LandVictoria, 2019). For more information about underground primary parcels and secondary interests, see Saeidian et al. (2023b). Figure 4 shows a few examples of connecting underground primary parcels to roads and connecting underground secondary interests to primary parcels.
| Elevation information

In addition to the survey points and observations, elevation information is also provided in plans and AFRs. This survey information is frequently used in underground plans. Figure 6 shows some examples of the elevation information in the plan of the underground tunnel of Figure 2b. As shown, the reduced level (RL) values of the ground/site and some parts of the legal space (crown allotment) associated with the tunnel are provided. These RLs specify the elevation of certain parts of the legal space and ground/site relative to the AHD. In cross-sectional diagrams, each text specifying the elevation information defines a surface on which all points have the same elevation relative to the AHD. A 3D model can define this survey information as surfaces rather than textual notations.

| THE DEVELOPED CITYGML EXTENSION

CityGML is an OGC standard for the 3D modelling of natural and man-made objects in urban environments. This standard provides entities for the spatio-semantic representation of urban objects and defines topologies, attributes, and appearances. The latest edition of CityGML (version 3.0) is introduced in UML diagrams, making it independent of any platform. Each CityGML ADE module needs to be provided in UML with its namespace (Kolbe et al., 2021). Figure 7 shows the conceptual schema of the SurveyElement module of the VicULA ADE provided in UML. The feature class ElevationSurface defines the elevation information. It should be noted that survey points also have elevation information, as shown in Figure 7. However, the ElevationSurface class defines elevation surfaces as described in the previous section. This is a requirement that is very common in underground AFRs and plans and has not been well covered in the previous data models.

CityGML 3.0 provides various geometries such as primitive geometries, spatial aggregates, and composites.
It uses ISO standards such as "ISO 19107:2003 Spatial Schema" to define the geometries. These geometries are provided for spaces and space boundaries (the AbstractSpace and AbstractThematicSurface classes and their subclasses). However, the survey elements are not physical/logical spaces or space boundaries. Therefore, geometries are used directly for the feature classes of the SurveyElement module in order to spatially represent them. As shown in Figure 7, the primitive geometries defined by the ISO 19107 standard, such as GM_Point, GM_Curve, and GM_Surface, are used in the SurveyElement module.

FIGURE 7 The SurveyElement module of the CityGML VicULA ADE.

CityGML 3.0 defines four levels of detail (LODs) for both interior and exterior objects, from a highly generalised model to a highly detailed model, supporting different use cases/applications. The ADE mechanism also provides the possibility of defining LODs. CityGML 3.0 uses LODs solely for spatial representations, not for semantics (Kolbe et al., 2021). Therefore, using the LOD concept for underground cadastral survey elements (points, lines, curves, and surfaces) is not meaningful. As shown in Figure 7, the geometric representations defined in the SurveyElement module do not have any specific LODs (the same approach has been used in the Relief module of the CityGML CM). Therefore, they can be used regardless of LODs.
| PROTOTYPING

To test the feasibility and viability of the developed data model, it needs to be implemented for an underground case study. The survey data of the case study is stored in a geography markup language (GML) file using the developed schema (the XML encoding), which is compliant with standards. This means that software tools can process this application schema and work directly with the ADE data sets, including writing, reading, visualising, and querying. The implementation uses FME Workbench and FZK Viewer to perform these tasks on the GML file derived from the 3D data model. As a 3D integrated model, this GML file is able to store:

• survey elements: these elements are modelled within the proposed schema of the SurveyElement module of the VicULA ADE, which is the contribution of this study.
• underground legal spaces: these spaces are modelled within the proposed schema of the UndergroundParcel module of the VicULA ADE (Saeidian et al., 2023b).
• underground legal boundaries: these boundaries are modelled within the proposed schema of the UndergroundBoundary module of the VicULA ADE (Saeidian et al., 2023a).
• underground physical objects: these objects are modelled within the current schemas of the CityGML modules (OGC, 2021).

The schema of the SurveyElement module imports all other schemas (Figure 9). Therefore, the GML file derived from this schema is able to define underground legal spaces and boundaries and physical assets along with the survey elements.
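The idea of writing survey elements into a namespaced GML file can be sketched with Python's standard XML tooling. The namespace URIs, tag names, and coordinate values below are illustrative placeholders only; they do not reproduce the actual VicULA ADE schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical namespaces; the real VicULA ADE schema defines its own URIs.
NS = {
    "gml": "http://www.opengis.net/gml/3.2",
    "se": "https://example.org/vicula/surveyelement",  # placeholder
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

def survey_point_xml(name, point_type, x, y, z):
    """Serialise one survey point as a namespaced GML-style fragment."""
    pt = ET.Element(f"{{{NS['se']}}}SurveyPoint")
    ET.SubElement(pt, f"{{{NS['gml']}}}name").text = name
    ET.SubElement(pt, f"{{{NS['se']}}}pointType").text = point_type
    # A gml:pos-style element holding the 3D coordinate tuple.
    ET.SubElement(pt, f"{{{NS['gml']}}}pos").text = f"{x} {y} {z}"
    return ET.tostring(pt, encoding="unicode")

xml_fragment = survey_point_xml("PM123", "controlPoint",
                                320500.0, 5812200.0, 34.2)
```

A tool chain such as FME can then read such fragments against the module's XSD; the sketch only shows that each element carries its name, semantic type, and 3D geometry together.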
To test the feasibility of modelling underground cadastral survey data and legal and physical objects in real-world underground areas using the proposed data model, the cadastral plan of an underground tunnel (shown in Figure 4b) and its AFR (shown in Figures 2a and 3) are used. This case study contains information about different survey points and observations. However, it does not have elevation surfaces. Therefore, a synthetic elevation surface is considered for this case study to test whether the developed data model can define elevation surfaces.

Using the FME Workbench software tool, all survey, legal, and physical data are stored in a single GML file based on the SurveyElement module XSD, which imports all other modules (XSDs) as shown in Figure 9. This tool was able to read the developed schema and write the data according to it, which shows that the developed schema is compliant with standards. Figure 10 shows snippets of this GML file and the information defined by the VicULA ADE (survey elements and legal spaces and boundaries) and CityGML modules (physical objects). In this and the following figures, some elements (above the lines), especially geometries, are hidden due to visualisation limitations.

The created GML file contains the geometries and attributes of different types of survey elements. Figure 11 shows this information for a survey point, a survey observation, and an elevation surface.

In the implemented prototype, all survey, legal, and physical data are converted into a single GML file containing 3D geometries, attributes, semantics, and relationships. The implemented model of the case study (the GML file) was then visualised using the FZK Viewer. The developed schema for the SurveyElement module and the implemented prototype (the GML file) were provided to this viewer. The viewer was able to read the GML file and visualise it, showing the viability and feasibility of the developed integrated model.
Figure 12 shows the visualisation of the 3D integrated underground model of the case study. In this figure, the synthetic elevation surface is not shown due to visualisation transparency. This 3D integrated model represents the tunnel and the legal space associated with it, along with the survey elements. Different survey points and observations are modelled based on the AFR of the tunnel (Figures 2a and 3). Some survey points and observations are located on the ground surface to connect the legal space of the tunnel to the control point (the first point on the right).

In addition to the 3D geometries, the model also provides various attributes of the survey elements. Figure 13 shows some of the modelled survey elements and their attributes, including a survey point (boundary point) located in the corner of the legal space of the tunnel (Figure 13a), a survey observation (radiation observation) connecting this boundary point to a reference point in the survey network (Figure 13b), and an elevation surface (Figure 13c).

The implemented prototype shows that the developed data model can successfully incorporate different underground cadastral survey elements in a 3D integrated model. It is able to define the geometries, attributes, and semantics of these elements. The proposed application schema and the 3D integrated model derived from it have some benefits compared to the previous data models and the current practice, which are discussed in the next section. Previous data models mostly modelled legal spaces and boundaries. A few studies enriched existing data models such as BIM and LADM for modelling survey elements (Atazadeh et al., 2021; Kalogianni et al., 2021; Soffers, 2017).
| DISCUSSION

However, these studies only focused on buildings and land parcels and did not investigate underground cases and requirements. Although most survey elements, such as survey points and observations, are common to above-ground and underground surveys, some elements that are particularly important for underground areas have received little attention in the previous data models. For example, elevation information (elevation levels) is frequently provided for underground areas. Therefore, this research proposed a new element to define elevation surfaces. The new feature class models the geometry and attributes of these surfaces in a 3D digital model instead of textual notations, as shown in Figure 13c. In addition, the investigation of underground cases has revealed some challenges in the current practices, which are discussed in more detail below.

In the current practice, survey data are stored and managed in a separate document (e.g., the AFR in the context of Victoria, Australia). For example, the case study of this research includes an AFR for cadastral survey elements (Figures 2a and 3) and a cadastral plan for the legal space associated with the tunnel (Figure 4b). Since the proposed model is a digital model, it is possible to define validation rules, and users can interact with the model, whereas in 2D AFRs this is impossible since the survey data is provided in a static format.

The integrated 3D model also avoids providing survey data in several 2D sheets. These 2D sheets frequently refer to each other; the user needs to put them together to interpret the survey work. For example, the AFR of this research's case study has two sheets. Figure 14 shows some parts of the first sheet with several references to sheet 2. In contrast, the proposed 3D model is a single integrated model without any references to several sheets (as shown in Figure 12). For complex cases, the number of sheets and references is even greater.
The proposed 3D digital model provides an improved visualisation of survey elements compared to 2D AFRs. As a result, interpreting and communicating this model becomes much easier. In this 3D model, the connection of underground legal spaces to a geodetic survey network is clearly visible, as shown in Figure 12. In contrast, the current practice involves interpreting several 2D pages of AFRs and plans that contain various object symbology and textual notations to understand the survey work. For example, bearings, distances, and other information about observations are written as textual notations in the AFR, making the drawing very crowded and making it difficult to find the relevant notations of observations. In the developed model (as shown in Figure 13), in contrast, all information is defined as attributes, and the user can retrieve this information for any element as needed. Figure 15 shows another example in which the textual notation of a traverse observation clarifies that the observation has been conducted through the tunnel between the reference points on both sides of the tunnel. This textual notation makes the drawing crowded but is necessary to interpret the survey work in the 2D AFR. In the 3D integrated digital model, legal and physical data are provided in addition to the survey elements; therefore, there is no need for such textual notations as shown in Figure 15.

This 3D digital model also enables queries. For example, in the current practice, if a surveyor wants to find a specific survey point or observation, all parts of the AFRs need to be searched manually. In contrast, using the digital model, the user can easily find a specific survey point by executing a query based on names/IDs. A clear model of cadastral survey elements is crucial for surveyors who want to reuse AFRs.

FIGURE 14 The first sheet of the AFR for this research's case study with several references to the second sheet.
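A query by name/ID against a GML file can be illustrated with a minimal, hypothetical fragment. The namespace URIs and tag names are invented for the sketch and do not reproduce the actual ADE schema.

```python
import xml.etree.ElementTree as ET

# A tiny, made-up GML fragment with two survey points.
GML = """
<root xmlns:se="https://example.org/vicula"
      xmlns:gml="http://www.opengis.net/gml/3.2">
  <se:SurveyPoint>
    <gml:name>PM37</gml:name><se:pointType>controlPoint</se:pointType>
  </se:SurveyPoint>
  <se:SurveyPoint>
    <gml:name>T4</gml:name><se:pointType>traversePoint</se:pointType>
  </se:SurveyPoint>
</root>
"""

NS = {"se": "https://example.org/vicula",
      "gml": "http://www.opengis.net/gml/3.2"}

def find_point(tree, name):
    """Return the first survey point whose gml:name matches, else None."""
    for pt in tree.findall(".//se:SurveyPoint", NS):
        if pt.findtext("gml:name", namespaces=NS) == name:
            return pt
    return None

tree = ET.fromstring(GML)
pm37 = find_point(tree, "PM37")
```

The same pattern (an XPath-style search keyed on a name or ID attribute) replaces the manual page-by-page search through AFR sheets.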
Control points are located on the ground surface, and underground legal spaces need to be connected to a geodetic network through these control points. In addition, underground legal spaces are connected to roads located on the ground surface. As underground structures are buried, such traversing and connections (survey observations) must be done before burying the assets, or a traverse must be performed between the points located on the ground surface and underground through the entrances of underground structures. In the AFR of this research's case study, there is no information about how the surveying was done. Also, the elevation of the reference and traverse points is not provided. Therefore, it is impossible to know which points and observations are underground, which is another limitation of the current 2D practices. On the contrary, in a 3D model, the user can easily distinguish this, as all objects have 3D coordinate values (x, y, z). However, it is necessary to capture the elevation of the points. In this study, we assumed that all points and observations are located on the ground surface, except the points on both sides of the tunnel and the observations between them, the tunnel, and the legal space of the tunnel (Figure 16).

The implemented prototype proves that the developed data model can successfully define survey elements such as points, observations, and elevation surfaces in a 3D environment. It shows the viability and feasibility of the model from the data modelling perspective. However, some challenges were faced during implementation. The implementation of elevation surfaces (creating these surfaces) can be challenging for a large-scale area considering the earth's curvature. In this study, since the case study was small, the synthetic elevation surface was modelled as a flat surface without considering this parameter.
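The claim that 3D coordinates make underground points distinguishable can be shown with a short sketch: once every point carries an elevation, a simple comparison against the ground reduced level separates surface and underground points. The point names, coordinates, and ground RL below are illustrative, not case-study values.

```python
def classify_points(points, ground_rl):
    """Split survey points into surface and underground sets by elevation.

    `points` maps point name -> (easting, northing, elevation); a point is
    treated as underground when its elevation (AHD metres) is below the
    ground reduced level at the site.
    """
    surface, underground = {}, {}
    for name, (x, y, z) in points.items():
        (underground if z < ground_rl else surface)[name] = (x, y, z)
    return surface, underground

# Illustrative coordinates (not from the case study).
pts = {
    "PM1": (1000.0, 2000.0, 35.0),   # control point on the surface
    "RP2": (1010.0, 2005.0, 35.2),   # reference point at a tunnel entrance
    "T3":  (1020.0, 2010.0, 12.5),   # traverse point inside the tunnel
}
surface, underground = classify_points(pts, ground_rl=30.0)
```

In a 2D AFR without point elevations, this distinction simply cannot be computed.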
This research focused on modelling the spatial elements and their semantics and attributes in the final document of a survey work (the AFR). Therefore, surveying processes were not investigated. For example, surveyors may measure a survey element several times, or do a traverse between a control point on the ground surface and a point in an underground structure, calculate the direct bearing and distance between them, and provide this direct bearing and distance in the AFR (as a computed bearing and distance), not the original traverse. Another example is Figure 4c, where an underground primary parcel (underground lot) is connected to a road located on the ground surface. There is no information about these processes in AFRs. Future studies could investigate cadastral surveying processes for underground areas in detail and examine whether such information needs to be modelled based on the current regulations and the possible applications of the model.

| CONCLUSIONS

The development of a 3D data model is a fundamental step towards 3D digital land administration of underground areas. This model needs to cover different types of underground data, including cadastral survey elements as well as physical and legal datasets. In our previous studies (Saeidian et al., 2023a, 2023b), the CityGML VicULA ADE was developed to integrate the physical data of underground assets with the legal spaces and boundaries. In the current practice of Victoria, it is also necessary to provide survey information along with legal spaces and boundaries. Therefore, to enrich the CityGML VicULA ADE with survey information, this research developed the SurveyElement module.
The SurveyElement module can import other VicULA ADE modules and CityGML modules in order to integrate underground survey, physical, and legal data in a 3D digital environment. To test and demonstrate the feasibility and viability of the CityGML VicULA ADE, a prototype was implemented for an underground tunnel. This prototype showed that the proposed extension has the capability to model various underground cadastral survey elements, including different types of 3D survey points, observations, and elevation information, in a real-world context.

The approach proposed in this study can be practical for augmenting CityGML with survey data. The developed data model can integrate all 3D ULA objects, such as different types of underground cadastral survey elements and underground legal and physical objects, homogeneously in terms of geometry and semantics. The proposed methodology was applied to Victoria to showcase its feasibility and benefits. However, this methodology can be replicated in other jurisdictions by adjusting the jurisdiction-specific data requirements in underground cadastral surveys. It should be noted that creating an integrated 3D model for ULA requires addressing some challenges, such as data availability, differences in vertical and/or horizontal datums, and differences in formats. In addition to these technical challenges, institutional and legal barriers may be significant obstacles to utilising the proposed extension.

FIGURE 16 The location of survey points.
Survey data elements have not yet been incorporated into CityGML-based integrated data models. Thus, a new study is required to investigate the possibility of modelling survey data elements in CityGML. This research aims to address this knowledge gap by investigating and extending the CityGML standard in order to incorporate underground cadastral survey data into a 3D integrated environment. As shown in Figure 1, this study adopts a case-study-driven research methodology. The first step is selecting a case study. Cadastral surveying highly depends on the regulations and varies in every jurisdiction. This study focuses on the Victorian jurisdiction of Australia. Underground cadastral survey plans in this jurisdiction are investigated to extract survey elements and identify requirements. In the second step, the conceptual model of CityGML 3.0 is extended based on the requirements. The result is a conceptual ADE for CityGML 3.0 provided in a Unified Modelling Language (UML) diagram to model underground cadastral survey elements. This conceptual ADE is then encoded in Extensible Markup Language (XML) in step 3, followed by implementing it for a real-world underground area.

To develop an integrated city-scale model by adopting the third approach (enriching a base data model to support ULA data components), Saeidian et al.
(2023a, 2023b) proposed a new ADE for CityGML 3.0 to represent underground physical assets and legal spaces and boundaries. As discussed in the previous section, in the current practices in different jurisdictions, survey elements also need to be provided along with the legal spaces and boundaries (for example, as the Abstract of Field Records (AFR) file in Victoria). These elements are fundamental for connecting underground legal spaces to a geodetic survey network. This research aims to identify different types of cadastral survey elements in underground survey documents and enrich the CityGML 3.0 data model to support these elements and represent them along with underground legal spaces and boundaries and physical objects in a 3D integrated model. This development also benefits the CityGML standard, since no study has explored the potential of this standard for modelling survey data and developed an ADE in this context. The developed ADE can expand the functionality of CityGML for use cases that require survey data in a 3D city model. In order to accomplish this aim, it is necessary to identify underground cadastral survey elements in the current practice.
Survey points have two/three coordinate values (horizontal and/or vertical coordinates). These points are coordinated relative to the Australian national datums. Horizontal coordinate values are based on the Map Grid of Australia (MGA), which is the Universal Transverse Mercator (UTM) projection of the Geocentric Datum of Australia (GDA), and vertical coordinates (elevations) are based on the Australian Height Datum (AHD), which is the datum of mean sea level as determined by the National Levelling Adjustment (Chief Parliamentary Counsel, 2018). Victoria has established a network of survey control points that covers the whole state, known as PMs. These control points have 3D coordinate values (horizontal coordinates and elevation) aligned to the national datums (Surveyor-General Victoria, 2022). PCMs are also permanent survey marks that can be connected to during a cadastral survey to meet the requirements of regulations (Surveyor-General Victoria, 2021). Some points of the traverse are important for the surveyor (reference points), and the remaining points are traverse points. Finally, boundary points are used to specify legal boundaries. Survey points can be provided in both AFRs and plans. Figure 2 shows some survey points in an AFR and a cadastral plan of underground tunnels. In the field, survey points can be marked by different monuments (e.g., peg, plaque, and survey nail) specified in the AFRs and plans.

Survey points can have different attributes, including name, description, point type, point state, monument type, horizontal datum, vertical datum, easting, northing, and elevation. The point type defines the type of point, including control points (PM or PCM), reference points, traverse points, and boundary points. The point state can be existing, proposed, or destroyed.
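The attribute and enumeration requirements above can be sketched as a simple data structure. This is an illustrative Python rendering of the requirements list, not the ADE's actual classes, and the coordinate values are invented.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Enumerations follow the attribute lists described above; the exact
# codes in the VicULA ADE schema may differ.
class PointType(Enum):
    PM = "permanent mark"
    PCM = "primary cadastral mark"
    REFERENCE = "reference point"
    TRAVERSE = "traverse point"
    BOUNDARY = "boundary point"

class PointState(Enum):
    EXISTING = "existing"
    PROPOSED = "proposed"
    DESTROYED = "destroyed"

@dataclass
class SurveyPoint:
    name: str
    point_type: PointType
    state: PointState
    monument: str                     # e.g. peg, plaque, survey nail
    easting: float                    # MGA metres
    northing: float                   # MGA metres
    elevation: Optional[float] = None # AHD metres; often missing underground

pm = SurveyPoint("PM210", PointType.PM, PointState.EXISTING,
                 "plaque", 320450.12, 5812133.90, 28.64)
```

Making elevation optional mirrors the observation later in the paper that AFRs frequently omit the elevation of reference and traverse points.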
Traverse and radiation observations are straight lines defined by bearing and distance measurements. However, boundary and connection observations can be either straight lines or arcs. For underground cases, arcs are very common, especially for underground tunnels and utility easements (Figure 5). The name, observation type, and start and end points (their IDs/names) are the expected attributes for line and arc survey observations. The observation type can be traverse, radiation, boundary, or connection. In addition to these attributes, other expected attributes for lines are bearing, distance, and the types of bearing and distance. Other expected attributes for arcs are the bearing and length of the chord, the radius, length, and type of the arc, and the direction of rotation from the start to the end (rot). The distance, bearing, and arc types can be adopt dimension, computed, derived, or measured.

FIGURE 2 Examples of survey points: (a) the AFR of an underground tunnel; and (b) the cadastral plan of an underground tunnel.

FIGURE 3 Examples of traverse, radiation, and boundary observations in the AFR of the underground tunnel of Figure 2a.

FIGURE 4 Examples of connecting underground legal objects: (a) connecting an underground secondary interest (easement) to a primary parcel (lot); (b) connecting an underground primary parcel (the underground crown allotment of Figure 2a) to roads (the start and end points of the straight boundary lines highlighted are on the roads); and (c) connecting an underground primary parcel (lot) to a road.
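The geometric meaning of these attributes can be made concrete with standard surveying formulas: a straight observation's end point follows from its bearing and distance, and an arc's length follows from its radius and chord. The numeric values are illustrative, not case-study measurements.

```python
import math

def radiate(easting, northing, bearing_deg, distance):
    """End coordinates of a straight observation (e.g., a radiation line).

    Bearings are measured clockwise from grid north in decimal degrees;
    distances are in metres.
    """
    b = math.radians(bearing_deg)
    return easting + distance * math.sin(b), northing + distance * math.cos(b)

def arc_length(radius, chord):
    """Arc length from the radius and chord length of an arc observation."""
    return 2.0 * radius * math.asin(chord / (2.0 * radius))

# A radiation of 25 m on a bearing of 90 degrees (due east).
e, n = radiate(1000.0, 2000.0, 90.0, 25.0)
```

This is also why an arc needs the rotation direction (rot): radius and chord alone fix the arc's size but not which side of the chord it bows towards.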
There are different kinds of thematic modules (e.g., Building, Tunnel, Construction, Transportation, and LandUse) in the CityGML Conceptual Model (CityGML CM), as well as the Core module that is mandatory to implement for any application (Kolbe et al., 2021). However, CityGML does not have any specific module for modelling underground cadastral information, including underground legal spaces and boundaries as well as the associated survey data elements. CityGML provides the ADE mechanism for enriching this 3D model according to application-specific needs. Saeidian et al. (2023a, 2023b) developed an ADE for CityGML 3.0, named the VicULA ADE, to represent underground legal spaces and boundaries in Victoria. In these studies, two modules (UndergroundParcel and UndergroundBoundary) were developed for modelling underground legal spaces and boundaries. This study enriches the VicULA ADE by developing another module, named SurveyElement, to model the underground cadastral survey elements described in the previous section.
FIGURE 5 Examples of arcs: (a) legal spaces of utilities (easements); and (b) the legal space of a tunnel (crown allotment).

FIGURE 6 Examples of the elevation information in the cadastral plan of the underground tunnel of Figure 2b.

The AbstractSurveyElement class is the top class of the module, with subclasses for ULA survey points, observations, and elevation information. It is a subclass of the AbstractFeature class of the CityGML Core module. Therefore, the AbstractSurveyElement class and its subclasses (all survey elements) inherit the properties of the AbstractFeature class, such as the "description" and "name" attributes. The SurveyElement module also has four enumerations for the type and state of points, the type of observations, and the types of bearings, distances, and arcs, according to the requirements listed in the previous section.

As described in the previous section, survey observations can be lines or arcs. The line and arc observations have some similar and some different properties. Therefore, two feature classes have been created for them to model the different attributes. These classes are subclasses of an abstract class named SurveyObservation, which defines the shared properties, including attributes and the relationship between survey observations and points. As mentioned in the previous section, only boundary and connection observations can be arcs. Therefore, an object of the ArcObservation feature class can only be created if the observation type is boundary or connection. As shown in Figure 7, the Object Constraint Language (OCL) has been used to apply this constraint on the ArcObservation feature class.
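The class hierarchy and the OCL constraint can be paraphrased in code: a shared abstract superclass holds the common properties, and the arc subclass rejects observation types other than boundary and connection. This is a behavioural sketch of the constraint, not the ADE's actual implementation, and the attribute names are simplified.

```python
from enum import Enum

class ObservationType(Enum):
    TRAVERSE = "traverse"
    RADIATION = "radiation"
    BOUNDARY = "boundary"
    CONNECTION = "connection"

class SurveyObservation:
    """Shared properties of line and arc observations (names simplified)."""
    def __init__(self, name, obs_type, start, end):
        self.name, self.obs_type = name, obs_type
        self.start, self.end = start, end  # IDs of the start/end points

class LineObservation(SurveyObservation):
    def __init__(self, name, obs_type, start, end, bearing, distance):
        super().__init__(name, obs_type, start, end)
        self.bearing, self.distance = bearing, distance

class ArcObservation(SurveyObservation):
    # Mirrors the OCL constraint: arcs only for boundary/connection types.
    _ALLOWED = {ObservationType.BOUNDARY, ObservationType.CONNECTION}

    def __init__(self, name, obs_type, start, end, radius, rot):
        if obs_type not in self._ALLOWED:
            raise ValueError("arc observations must be boundary or connection")
        super().__init__(name, obs_type, start, end)
        self.radius, self.rot = radius, rot

line = LineObservation("T1-T2", ObservationType.TRAVERSE, "T1", "T2",
                       bearing=45.0, distance=12.3)
arc = ArcObservation("A1", ObservationType.BOUNDARY, "B1", "B2",
                     radius=50.0, rot="ccw")
```

Enforcing the rule in the constructor plays the same role as the OCL invariant on the UML class: an invalid ArcObservation can never be instantiated.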
is necessary to develop an encoding for the UML conceptual model of any ADE (Kolbe et al., 2021). Similar to the conceptual data model of the latest version of CityGML (version 3.0), the conceptual data model of the developed ADE is created using UML diagrams that can be implemented using any database or file-based schemas (encodings) based on available software tools. This research developed an XML schema for the developed ADE; however, the developed data model is not limited to the XML encoding. The XML schemas of all CityGML modules (OGC, 2021) and the UndergroundParcel (Saeidian et al., 2023b) and UndergroundBoundary (Saeidian et al., 2023a) modules of the VicULA ADE are available. Therefore, this study developed another XML schema for the SurveyElement module as the new module of the VicULA ADE. Figure 8 shows some parts of the developed XML encoding for the SurveyElement module derived from the UML conceptual model shown in Figure 7.

Figure 8: Some parts of the developed XML encoding (XSD) for the SurveyElement module.

This research developed an integrated 3D underground data model by proposing a new CityGML ADE at the conceptual (UML) and encoding (XML schema) levels to model underground cadastral survey data. Previous data

Figure 9: The use of VicULA ADE and CityGML modules in the 3D underground integrated model: (a) at the conceptual level; and (b) at the XML encoding level (importing all schemas of the VicULA ADE modules, CityGML modules, and GML by the SurveyElement module XSD).
However, legal and survey data are strongly linked to each other, since legal boundaries are delineated based on survey measurements. Therefore, in many cases, such as reconstruction of legal boundaries, subdivisions, and consolidations, surveyors must check the integrity of both legal and survey data by accessing silos of documents or data repositories to confirm the validity and reusability of these datasets. In other words, they must consolidate all survey and legal information from these documents or repositories in order to interpret them accurately. Combining all survey and legal data from various files is a cognitively complex challenge. On the other hand, a 3D integrated model can provide a coordinated representation and management of survey and cadastral datasets. The 3D digital model developed in this study provides the capability to integrate survey data elements with underground legal spaces and boundaries in a common 3D data environment, which facilitates the interpretation and communication of underground cadastral survey data (see Figure 12). In a 3D integrated digital model, survey measurements and legal spaces and boundaries can easily be checked by surveyors to identify possible inconsistencies.

Figure 10: Snippets of the GML file of the case study derived from the SurveyElement module XSD that imports all other modules (XSDs).

Figure 11: Some survey elements and their properties in the GML file of the case study.

Figure 12: The visualisation of the 3D integrated underground model of the case study that has survey elements (survey points shown in cyan and survey observations shown in red), underground legal space and boundaries, and physical assets.

Figure 13: Some underground cadastral survey elements and their attributes: (a) a survey point; (b) a survey observation; and (c) an elevation surface.

Figure: Communicating the survey data of a traverse line in 2D AFR by the 3D integrated model and CityGML 3.0.
v3-fos-license
2019-03-11T13:03:57.633Z
2013-12-23T00:00:00.000
73186820
{ "extfieldsofstudy": [ "Environmental Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/archive/2013/418586.pdf", "pdf_hash": "fc9d99aea2b0711aed7864d891dfa730feea8c21", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1439", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "sha1": "32195614c2bbfcfb5e22b7019095d26d903222ca", "year": 2013 }
pes2o/s2orc
Variability of Soil Physical Properties in a Clay-Loam Soil and Its Implication on Soil Management Practices

We assessed the spatial variability of soil physical properties in a clay-loam soil cropped to corn and soybean. The study was conducted at Lincoln University in Jefferson City, Missouri. Soil samples were taken at four depths: 0–10 cm, 10–20, 20–40, and 40–60 cm and were oven dried at 105 °C for 72 hours. Bulk density (BDY), volumetric (VWC) and gravimetric (GWC) water contents, volumetric air content (VAC), total pore space (TPS), air-filled (AFPS) and water-filled (WFPS) pore space, the relative gas diffusion coefficient (DIFF), and the pore tortuosity factor (TORT) were calculated. Results showed that, in comparison to depth 1, means for AFPS, DIFF, TPS, and VAC decreased in depth 2. Opposingly, BDY, TORT, VWC, and WFPS increased in depth 2. Semivariogram analysis showed that GWC, VWC, BDY, and TPS in depth 2 fitted to an exponential variogram model. The range of spatial variability (A

Introduction

Characterizing the spatial variability and distribution of soil properties is important in predicting the rates of ecosystem processes with respect to natural and anthropogenic factors [1] and in understanding how ecosystems and their services work [2]. In agriculture, studies of the effects of land management on soil properties have shown that cultivation generally increases the potential for soil degradation due to the breakdown of soil aggregates and the reduction of soil cohesion, water content, and nutrient-holding capacity [3,4]. Cultivation, especially when accompanied by tillage, has been reported to have significant effects on topsoil structure and thus the ability of soil to fulfill essential soil functions and services in relation to root growth, gas and water transport, and organic matter turnover [5][6][7]. Soil properties vary considerably under different crops, tillage type and intensity, and fertilizer types and application rates. Consequently, the physical
properties of the soil are also affected by many factors that change vertically with depth, laterally across fields, and temporally in response to climate and human activity [8]. Since this variability affects plant growth, nutrient dynamics, and other soil processes, knowledge of the spatial variability of soil physical properties is therefore necessary. To study the spatial distribution of soil properties, techniques such as classical statistics and geostatistics have been widely applied [9][10][11]. Geostatistics provides the basis for the interpolation and interpretation of the spatial variability of soil properties [9,[12][13][14]. Information on the spatial variability of soil properties leads to better management decisions aimed at correcting problems and at least maintaining the productivity and sustainability of the soils, thus increasing the precision of farming practices [1,15]. A better understanding of the spatial variability of soil properties would enable refining agricultural management practices by identifying sites where remediation and management are needed. This promotes sustainable soil and land use and also provides a valuable base against which subsequent future measurements can be proposed [14]. Despite the importance of this topic in agriculture, the literature is not abundant on the variability of soil physical properties in Central Missouri. Furthermore, existing studies on the spatial variability of soil properties have focused on the topsoil (0-20 cm) with few or no studies at deeper soil depths (30-100 cm). The objective of this study was therefore to assess the spatial variability of soil physical properties at various depths (0-10 cm, 10-20, 20-40, and 40-60 cm) in a clay-loam soil cropped to corn and soybean, and to determine how knowledge of this variability can affect soil management practices.

and 4, respectively. Cylinders of 10 cm height were used for soil sample collection at depths 1 and 2, while the 20 cm height
cylinders were used for sampling at depths 3 and 4. A total of 576 soil samples were collected as follows: 48 plots × 4 depths × 3 replicates (at the middle of each plot). Collected samples were taken to the laboratory, where they were weighed (fresh weight of sample; FWS) and then oven dried at 105 °C for 72 h. The weight was taken again after oven drying (dry weight of soil; DWS). Soil physical properties were calculated as follows: soil bulk density (BDY, g⋅cm⁻³) = DWS/V, where DWS is the dry weight of soil and V is the volume of the cylinder (total volume of soil); volumetric water content (VWC, cm³⋅cm⁻³) = (FWS − DWS)/V, with FWS being the fresh weight of soil; gravimetric water content (GWC, g⋅g⁻¹) = (FWS − DWS)/DWS; total pore space (TPS, cm³⋅cm⁻³) = 1 − (BDY/PDY), where PDY is the soil particle density (taken as 2.65 g⋅cm⁻³); volumetric air content (VAC, cm³⋅cm⁻³) = TPS − VWC; water-filled pore space (WFPS, %) = 100 × (VWC/TPS); air-filled pore space (AFPS, %) = 100 × (VAC/TPS); relative gas diffusion coefficient (DIFF, cm²⋅s⁻¹⋅cm⁻²⋅s) = (VAC)²; pore space tortuosity (TORT, m⋅m⁻¹) = 1/VAC [16].
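The chain of calculations above can be sketched as a small function; this is an illustrative implementation of the formulas as listed, not the authors' code, and the dictionary keys simply mirror the paper's abbreviations:

```python
PDY = 2.65  # soil particle density, g/cm^3 (value used in the paper)

def soil_properties(fws: float, dws: float, volume: float) -> dict:
    """Compute the soil physical properties defined in the Methods section.

    fws: fresh weight of sample (g); dws: dry weight of soil (g);
    volume: cylinder (total soil) volume (cm^3).
    """
    bdy = dws / volume           # bulk density, g/cm^3
    vwc = (fws - dws) / volume   # volumetric water content, cm^3/cm^3
    gwc = (fws - dws) / dws      # gravimetric water content, g/g
    tps = 1 - bdy / PDY          # total pore space, cm^3/cm^3
    vac = tps - vwc              # volumetric air content, cm^3/cm^3
    wfps = 100 * vwc / tps       # water-filled pore space, %
    afps = 100 * vac / tps       # air-filled pore space, %
    diff = vac ** 2              # relative gas diffusion coefficient
    tort = 1 / vac               # pore tortuosity factor
    return {"BDY": bdy, "VWC": vwc, "GWC": gwc, "TPS": tps, "VAC": vac,
            "WFPS": wfps, "AFPS": afps, "DIFF": diff, "TORT": tort}
```

Note that WFPS and AFPS partition the pore space, so they always sum to 100% by construction.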
Statistical and Geospatial Analysis. After calculation, data on soil physical properties were first transferred to Statistix 9.0 to compute summaries of simple statistics, then to GS+ (Geostatistics for Environmental Science) 7.0 for semivariogram analysis. A semivariogram (a measure of the strength of statistical correlation as a function of distance) is defined by the following equation [17]:

γ(h) = (1 / 2N(h)) Σᵢ [z(xᵢ) − z(xᵢ + h)]²,   (1)

where γ(h) is the experimental semivariogram value at a distance interval h, N(h) is the number of sample value pairs within the distance interval h, and z(xᵢ) and z(xᵢ + h) are sample values at two points separated by the distance h. Exponential and spherical models were fitted to the empirical semivariograms. The stationary models, that is, the exponential (2) and spherical (3) models fitted to the experimental semivariograms, were defined by the following equations [18]:

γ(h) = C₀ + C₁[1 − exp(−h/A)],   (2)

γ(h) = C₀ + C₁[1.5(h/A) − 0.5(h/A)³] for h ≤ A, and γ(h) = C₀ + C₁ for h > A,   (3)

where C₀ is the nugget, C₁ is the partial sill, and A is the range of spatial dependence to reach the sill (C₀ + C₁). The ratio C₀/(C₀ + C₁) and the range are the parameters that characterize the spatial structure of a soil property. The C₀/(C₀ + C₁) ratio is the nugget proportion in the dependence zone, and the range defines the distance over which the soil property values are correlated with each other [19]. A low value of the C₀/(C₀ + C₁) ratio and a high range generally indicate that high precision of the property can be obtained [19]. The classification proposed by Cambardella et al. [14], which considers the degree of spatial dependence (DSD = C₀/(C₀ + C₁) × 100) as strong when DSD ≤ 25%, moderate when 25% < DSD ≤ 75%, and weak when DSD > 75%, was used in this study to classify the degree of spatial dependence of each soil property.
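As an illustrative sketch (not the GS+ implementation), the experimental semivariogram of Eq. (1), the exponential model of Eq. (2), and the Cambardella et al. DSD classification could be computed as follows; the lag-binning tolerance is an assumption of this sketch:

```python
import numpy as np

def experimental_semivariogram(coords, values, lags, tol):
    """Eq. (1): half the mean squared difference of sample pairs per lag bin."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    # Pairwise distances; keep each pair once via the upper triangle.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)
    dist = d[iu]
    sqdiff = (values[iu[0]] - values[iu[1]]) ** 2
    gamma = []
    for h in lags:
        mask = np.abs(dist - h) <= tol
        gamma.append(sqdiff[mask].mean() / 2 if mask.any() else np.nan)
    return np.array(gamma)

def exponential_model(h, c0, c1, a):
    """Eq. (2): nugget c0, partial sill c1, range parameter a."""
    return c0 + c1 * (1 - np.exp(-np.asarray(h, dtype=float) / a))

def dsd_class(c0, c1):
    """Cambardella et al. [14] degree-of-spatial-dependence classes."""
    dsd = 100 * c0 / (c0 + c1)
    if dsd <= 25:
        return "strong"
    return "moderate" if dsd <= 75 else "weak"
```

Fitting c0, c1, and a to the experimental values (as GS+ does) would then be an ordinary least-squares problem over the lag bins.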
(Table 1). The highly skewed soil parameters included soil bulk density (BDY), diffusivity (DIFF), and volumetric water content (VWC), whereas total pore space (TPS) was moderately skewed. Air-filled pore space (AFPS) had a low skewness. Highly skewed parameters indicate that these properties have a local distribution; that is, high values were found for these properties at some points, but most values were low [20]. The other soil physical properties were approximately normally distributed on the field. The underlying reason for soil properties being normally or non-normally distributed may be associated with differences in management practices, land use, vegetation cover, and topographic effects on the variability of soil erosion across the landscape of the field. These factors can be the sources of large or very small variation of soil properties in some of the samples, which leads to the non-normal distribution [21]. A wide range of spatial variability was observed for soil physical properties (Table 1). For instance, soil bulk density (BDY) ranged from 1.01 to 1.23 g⋅cm⁻³ for depth 1, 1.15 to 1.46 g⋅cm⁻³ for depth 2, and 0.96 to 1.19 g⋅cm⁻³ and 1.04 to 1.18 g⋅cm⁻³ for depths 3 and 4, respectively (Figure 2). Soil bulk density was also significantly higher in the second depth (1.4 g⋅cm⁻³) than in all the other three depths, where it varied between 1.18 g⋅cm⁻³ and 1.24 g⋅cm⁻³. The mean value of AFPS was significantly lower in the second depth (26.5 cm³⋅cm⁻³) than in all other three depths, where it varied from 39.34 to 45.7 cm³⋅cm⁻³. The soil pore tortuosity factor (TORT) and water-filled pore space (WFPS) were also significantly higher in the second depth (12.46 cm⋅cm⁻¹ and 73.46%, resp.). However, the relative gas diffusion coefficient (DIFF), gravimetric water content (GWC), total pore space (TPS), and volumetric air content (VAC) were significantly lower in the second depth (0.02 m²⋅s⁻¹⋅m⁻²⋅s, 0.21 g⋅g⁻¹, 0.42 cm³⋅cm⁻³, and 0.12 cm³⋅cm⁻³, resp.)
(Table 1). The variability in soil physical properties is understandable since the soil of this site has a smectite layer (claypan) at 10-20 cm, which corresponded to our second sampling depth. This layer of smectite is hard and compact, with very low pore space, a high mass-volume ratio (bulk density), and a high water retention capability (because of the large surface area of the clay). As a consequence of the presence of this smectite layer in depth 2, the mean water-filled pore space (WFPS) was slightly lower in the first depth (54%) than in the other depths. In fact, air predominates in the pore space of the first depth, and cultivation loosened the soil, thereby allowing the water trapped in the pore space to evaporate. Higher GWC, VWC, and TPS at the lower depths (20-60 cm) mean that crops (especially the corn and soybean grown in the field) were able to access water and dissolved nutrients through their roots. In fact, despite the claypan layer (10-20 cm), it has been reported by various researchers that crop roots were able to penetrate into and through this layer of smectitic clay [22][23][24] and that root growth may increase within the claypan layer [23] as a result of plant adaptation to water-limited soil layers. In general, the use of the coefficient of variation (CV) is a common procedure to assess variability in soil properties since it allows comparison among properties with different units of measurement. Overall, the coefficient of variation for all soil physical properties, in the four depths, ranged from 4.83 to 91.61% (Table 1). The pore tortuosity factor (TORT) showed the highest variation, while soil bulk density (BDY) showed the least variation. The CV indicated that there was a strong spatial variability of the soil properties investigated. However, to have a better assessment of such spatial variability across the entire field, a geostatistical analysis was used (Table 2). In general, for all depths, model fit was not very strong, with the exception of gravimetric water
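Since the CV is used here to compare properties measured in different units, a minimal helper (hypothetical, not from the paper) illustrates why it is unit-free: both the standard deviation and the mean carry the property's unit, so the ratio cancels it.

```python
import statistics

def coefficient_of_variation(samples):
    """CV (%) = 100 * sample standard deviation / sample mean.

    Dimensionless, so a CV for bulk density (g/cm^3) can be compared
    directly with a CV for tortuosity (m/m)."""
    return 100 * statistics.stdev(samples) / statistics.mean(samples)
```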
content and bulk density in the second depth. Overall, the exponential model provided the best fit, with about 65% of the physical properties fitting this model. In geostatistical theory, the range of the spatial variability of the semivariogram is the distance between correlated measurements (the minimum lateral distance between two points before a change in the property is noticed) and can be an effective criterion for the evaluation of sampling design and mapping of soil properties. The value that the semivariogram model attains at the range (the value on the y-axis) is called the sill. The partial sill is the sill minus the nugget [25,26]. Theoretically, at zero separation distance (lag = 0), the semivariogram value is zero. However, at an infinitesimally small separation distance, the semivariogram often exhibits a nugget effect (the apparent discontinuity at the beginning of many semivariogram graphs), which is some value greater than zero. The nugget effect can be attributed to measurement errors or spatial sources of variation at distances smaller than the sampling interval (or both). Measurement error occurs because of the error inherent in devices. To reduce this error, multiple samples were taken from each sampling point. Natural phenomena can vary spatially over a range of scales. Variation at microscales smaller than the sampling distances will appear as part of the nugget effect. Table 2 shows that the spatial correlation (range) of soil properties varied widely, from 1 m for volumetric water content (VWC) in depth 4 to 64 m for gravimetric water content (GWC) in depth 2. However, for the first and second depths (which are agriculturally more important), the range of spatial correlation varied from 3 m for volumetric air content (VAC) in depth 2 to 64 m for GWC in depth 2.
Beyond these ranges, there is no spatial dependence (autocorrelation). The spatial dependence can indicate the level of similarity or disturbance of the soil condition. According to López-Granados et al. [27] and Ayoubi et al. [17], a large range indicates that the measured soil property value is influenced by natural and anthropogenic factors over greater distances than parameters which have smaller ranges. Thus, a range of about 64 m for GWC in this study indicates that the measured GWC values can be influenced in the soil over greater distances as compared to the soil parameters having a smaller range (Table 2). This means that soil variables with a smaller range, such as VWC and VAC, are good indicators of the more disturbed soils (the more disturbed a soil is, the more variable some soil properties become). The more variable properties have a shorter range of correlation. The different ranges of spatial dependence among the soil properties may be attributed to differences in response to the erosion-deposition factors, land use-cover, parent material, and human interferences in the study area. The nugget, which is an indication of microvariability, was significantly higher for water-filled pore space (WFPS) and air-filled pore space (AFPS) when compared to the others. This can be explained by our sampling distance, which could not capture their spatial dependence well. The lowest nugget was for GWC (Table 2). This indicates that GWC had low spatial variability within small distances. Knowledge of the range of influence for various soil properties allows one to construct independent, accurate datasets for similar areas in future soil sampling designs to perform statistical analysis [17]. This aids in determining where to resample, if necessary, and in designing future field experiments that avoid spatial dependence. Therefore, for future studies aimed at characterizing the spatial dependency of soil properties in the study area and/or a similar area, it is recommended that the soil
properties be sampled at distances shorter than the range found in this study. Cambardella et al. [14] established that a degree of spatial dependence (DSD) between adjacent observations of a soil property greater than 75% corresponds to weak spatial structure. In this study, the semivariograms indicated strong spatial dependence (DSD ≤ 25%) for soil physical properties such as bulk density, gravimetric water content, volumetric water content, total pore space, and diffusivity. The rest of the measured soil physical properties (water-filled pore space, air-filled pore space, and tortuosity) exhibited very weak spatial dependence (DSD > 75%) (Table 2). The strong spatial dependence of the soil properties may be controlled by intrinsic variations in soil characteristics such as texture and mineralogy, whereas extrinsic variations such as tillage and other soil and water management practices may also control the variability of the weakly spatially dependent parameters [14].

Spatial Distribution of Soil Properties across the Field.
Interpolated maps portraying the distribution of soil physical properties at various depths are shown in Figure 3 for soil gravimetric (GWC) and volumetric (VWC) water contents and water-filled pore space (WFPS). Gravimetric water content showed a good spatial distribution across the field, with the highest values located around the southwestern portion of the field. Volumetric water content also showed a good spatial distribution across the field, with high values located in the northern, central, and southwestern portions of the field. Water-filled pore space has a distribution similar to that of volumetric water content. The other soil properties, however, showed a very poor spatial distribution in the field. This is most probably due to their poor sill (C₀ + C₁), model fit, and coefficient of determination (R²). Even though the spatial variability was not very pronounced, there were areas in the field that had slightly higher values of these physical properties than the rest of the field. In general, bulk density, total pore space, volumetric air content, air-filled pore space, diffusivity, and tortuosity were very high in the field even though they did not exhibit very distinguishable variability. This lack of visible spatial variability is explained by the sampling distance (range), which is 26 m for these properties.

Implications of Spatial Variability of Soil Physical Properties on Soil Management.

Results of this study indicated that the spatial variability of soil water content (GWC and VWC) was high. This can be explained, among many other reasons, by the soil type (clay-loam), which was able to hold more water. But with intensive tillage, this soil water content could be adversely affected. Studies have shown that tillage practices can alter soil physical properties and consequently the hydrological behavior of agricultural fields, especially when a similar tillage system has been practiced for a long period [15,[28][29][30][31].
Tillage intensity also has considerable effects on the spatial structure and spatial variability of soil properties [15,30]. Therefore, this study can help determine site-specific soil management and decision making. To do so, the spatial variability of soil properties developed through kriging will be an important tool. Different ranges of spatial dependence were noticed in the field. The different ranges of spatial dependence among the soil properties may be attributed to differences in response to the erosion-deposition factors, land use-cover, parent material, and human interferences in the study area. The different ranges can also be used in future studies to determine the sampling distance of different soil physical properties in the field. Also, the sill (C₀ + C₁) can help determine where the variability or change in a soil property stops. This will be useful especially for irrigation purposes. Generally, with farmers facing the decision of whether or not to till, and the intensity of tillage, a spatial variability study can help in this decision making. Maps produced in this study can also be used for irrigation purposes, as they can clearly indicate which portion of the field needs irrigation (soil water content). To do this, soil water content information can be collected and analyzed geospatially to produce field maps. The process can be repeated frequently to obtain up-to-date soil water content information. To avoid frequent destructive sampling for water content analysis, equipment that allows in situ measurements, such as TDR methods and watermark sensors, can be used. Since a different range of spatial dependence among soil properties shows differences in response to human interference and land use-cover, this will help reduce human activities that increase soil bulk density and cause soil compaction, like the use of heavy equipment. It can also serve as a reference for the type of crop to be grown (cover crops for erosion-susceptible areas).
Conclusion

We assessed the spatial variability of soil physical properties in a clay-loam soil cropped to corn and soybean. Results showed that soil physical properties either decreased or increased sharply in the second depth (due to the presence of a smectite layer) before leveling off or dropping back, but without reaching the first-depth value in either case. In addition, depending on the soil physical property, maps produced by kriging showed either good or poor spatial distribution. The semivariogram analysis showed the presence of strong (≤25%) to weak (>75%) spatial dependence of soil properties. Our understanding of the behavior of soil properties in this study provides new insights for site-specific soil management in addressing issues such as "where to place the proper interventions" (tillage, irrigation, and crop type to be grown).

Figure 1: Study area (Lincoln University's Freeman farm) showing the plots.

Figure 2: Variation of soil bulk density, gravimetric water content, total pore space, and volumetric water content with depth.

Figure 3: Spatial distribution of gravimetric (GWC) and volumetric (VWC) water contents in the 0-10 cm depth and that of water-filled pore space (WFPS) in the 20 cm depth.

Table 1: Descriptive statistics for soil physical properties at four depths in a clay-loam soil.

Table 2: Variogram parameters for soil physical properties at four depths in a clay-loam soil.
v3-fos-license
2018-12-11T15:25:49.691Z
2013-06-12T00:00:00.000
55705319
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.intechopen.com/citation-pdf-url/43469", "pdf_hash": "32871ffc95406cb39a6229bbd27529e7b83cf9f7", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1440", "s2fieldsofstudy": [ "Medicine" ], "sha1": "6d8255c28f20f612e90830a049ef68b930502103", "year": 2013 }
pes2o/s2orc
Classification and Clinical Features of AMD

The disease was first described as "symmetrical central choroido-retinal disease occurring in senile persons" in 1874 by Hutchinson. About 25 years ago, the term "age-related maculopathy" was accepted, and the end stage of the disease was acknowledged as age-related macular degeneration [1]. AMD is the leading cause of blindness worldwide in the older patient population. The highest risk of developing AMD is in the population older than 65 years. With the ageing of the population in many countries, more than 20% might have the disease [2]. Advanced forms of AMD are associated with progressive visual impairment. The visual acuity of these subjects decreases to practical blindness. This has a big socioeconomic impact.

Introduction

AMD is a progressive chronic disease located in the central retinal area (macula lutea, the yellow spot) [2]. Most visual loss occurs in the late stages of AMD. There are two categories: wet AMD and geographic atrophy. In wet AMD, choroidal neovascularization breaks through the neuroretina. Leaking vessels, hemorrhages, and lipid deposits lead to a scarring process in the macular area. All retinal structures, including the photoreceptors, are destroyed.
In geographic atrophy, progressive atrophy of the retinal pigment epithelium and, secondarily, of the photoreceptors occurs. Until the end of the 20th century, AMD was practically untreatable. However, new pharmaceuticals based on the suppression of vascular endothelial growth factor (VEGF) have completely changed the treatment of the disease [2]. Nearly 95% of patients can be prevented from visual loss, and nearly 40% of them improve vision [2].

age-related macular degeneration was estimated to be 6.8% and late age-related macular degeneration 1.5% [3]. Results from the Baltimore Eye Study reported epidemiological data from other ethnic groups. Late AMD was nine to ten times more prevalent in white participants than in black ones [4]. The age-specific prevalence of late age-related macular degeneration in Asians is largely similar to that in white people [5]. In Asian populations, the disease often has specific features. Many of these patients have polypoidal dilatation of the choroidal vasculature. Polypoidal choroidal vasculopathy can account for 50% of wet AMD cases in Asians, but only 8-13% in white people [5]. Another variant of AMD is retinal angiomatous proliferation (RAP), which accounts for 12-15% of neovascular age-related macular degeneration [6]. RAP usually does not respond to the standard management of wet AMD. There are few incidence studies on AMD. The Beaver Dam Eye Study in the USA reported a 14.3% 15-year cumulative incidence for early AMD and 3.1% for late AMD in adults aged 43-86 years [7].

Risk factors of AMD

The major risk factor for AMD is older age. More than 10% of people older than 80 years have late AMD. Female sex has been inconsistently reported as a risk factor as well [3]. The major systemic risk factors include cigarette smoking [8]. Cigarette smoking in particular is a strong and consistent risk factor for AMD; smoking 20 cigarettes a day doubles the risk. Obesity is a further systemic risk factor, connected with the systemic risks of obesity.
These patients are more likely to have hypertension and diabetes mellitus, which are further risk factors [9,10]. People with AMD are also at increased risk of stroke [11]. Ocular risk factors for age-related macular degeneration include darker iris pigmentation, previous cataract surgery, and hyperopic refraction. A meta-analysis suggested previous cataract surgery was a strong risk factor for age-related macular degeneration, but this association was not shown in a randomized clinical trial [12].

Genetic factors

In the last ten years, several genes have been associated with a role in the pathogenesis of AMD [2]. AMD is a disease tightly connected with the inflammatory reaction; inflammatory and immunologic processes play a major role in its pathogenesis. For this reason, the complement factor H gene (CFH) was identified. Other confirmed genes in the complement pathway include C2, CFB, C3, and CFI [29-31]. On the basis of large genome-wide association studies, HDL cholesterol pathway genes have been implicated, including LIPC and CETP, and possibly ABCA1 and LPL [32-34]. APOE in the LDL pathway might also be related to AMD [13]. The collagen matrix pathway genes COL10A1 and COL8A1 and the extracellular matrix pathway gene TIMP3 have also been linked to age-related macular degeneration [14]. Finally, genes in the angiogenesis pathway (VEGFA) have also been associated with age-related macular degeneration in a meta-analysis of two AMD genome-wide association studies [14]. Genes modifying several biological pathways are involved in AMD: complement and immune processes, HDL cholesterol, and mechanisms involving collagen, extracellular matrix, and angiogenesis pathways are associated with the onset, progression, and bilateral involvement of AMD [2]. It should be noted, however, that genetic susceptibility can be modified by environmental factors. Genetic variations can also influence differential responses to treatments for age-related macular degeneration, an emerging research area [2].
Table 2 summarizes the major genes associated with the onset and progression of AMD.

Clinical manifestations of the process of natural retinal aging

Aging is a physiological process involving all body organs and tissues. This process also affects the eye. It is a physiological process, not a manifestation of any disease. Each body cell has a planned life cycle from its inception to apoptosis (cell death). Body tissues in which there is no mitotic restoration of lost cells (nerve tissue, retina) have a high incidence of manifestations of aging, especially after the 75th year of life. The clinical manifestation of retinal aging is mainly visible as a loss of the foveal reflex. Its background is the loss of cells from the inner retinal layers around the foveola and the extension of the foveal avascular zone [15]. In the macular zone, small hard drusen are usually present, which are not yet a manifestation of AMD [16]. Tiger-like irregularities in pigmentation also occur in the macula. Visual acuity remains at a physiological level, unlike in subjects affected by AMD. Doppler velocimetry demonstrates a decrease of blood flow to the macular area [17]. Further detectable are a reduction of perifoveolar arterioles and venules together with an enlargement of the foveal avascular zone [18] and a reduction in the number of retinal ganglion cells [19]. A decrease in other visual functions can also be diagnosed in connection with the process of aging, especially dark adaptation, contrast sensitivity, color vision, and the ability of stereopsis [20].

Classification and clinical features of age-related macular degeneration

Age-related macular degeneration can be divided into two categories: the dry form (non-exudative) and the wet form (exudative). The dry form is very prevalent and affects about 85 to 90% of patients. The wet form occurs in the remaining 10 to 15%. Impairment of central visual acuity is much higher in the wet form of AMD than in the dry form. The wet form is responsible for 85% of severe vision loss.
Dry form of AMD

The dry form of AMD occurs independently of the choroidal neovascular membrane (CNV). It is associated with chorioretinal atrophy with no obvious defects in Bruch's membrane. Clinical studies show a decrease in chorioretinal blood flow [21]. Chorioretinal atrophy leads to subsequent degeneration of the retinal pigment epithelium (RPE) cells and is associated with involution of photoreceptors in the affected area [22]. The dry form of AMD includes atrophy of the outer part of the hematoretinal barrier (HRB) without appreciable leakage; the barrier function appears to be maintained, and the area of atrophy remains dry. Both forms of AMD present with painless loss of central vision. Individuals with dry AMD will typically complain of blurred vision as well as difficulty seeing fine details clearly. In the advanced stages, atrophic macular areas often coalesce, creating central scotomas, or blind spots, in the central visual field. This central visual loss compromises an individual's ability to perform basic tasks such as recognizing faces, reading signs, and other activities of daily living. Individuals with wet AMD will commonly present with visual distortion in which straight lines appear deformed. A hallmark of conversion from dry to wet AMD is a sudden and profound loss or distortion of central vision. These visual changes occur as a result of the acute degenerative changes occurring in the macula, most notably subretinal and intraretinal hemorrhages from a choroidal neovascular membrane. Individuals typically have preserved peripheral vision in both processes [23]. Dry AMD, the more common variety of the disease, results from degeneration of outer retinal cells (RPE cells) with subsequent profound retinal dysfunction (damage to photoreceptors and retinal neurons). The dry form of the disease is usually asymptomatic.
Progression to the wet form may be indicated by sudden, severe vision loss or new onset of visual distortion (metamorphopsia). The dry form of the disease is characterized by macular drusen; alterations in the RPE are also visible. Intermediate to severe cases of the dry form are characterized by larger drusen and geographic atrophy of the RPE layer, which can cause severe vision loss [24]. Regular examinations are important to determine whether patients may benefit from certain interventions. For patients over age 55 with no risk factors, a comprehensive eye exam every one to two years is recommended. Patients with early-stage disease or a family history of the condition may require closer follow-up. Those with an intermediate or advanced case of the dry form of the disease should be advised to take the particular combination multivitamin recommended in the Age-Related Eye Disease Study. These supplements reduce the risk of progression to the wet form of the disease by 25%. However, patients with early-stage disease may not benefit from such supplementation. Smoking cessation is associated with a substantial reduction in the risk of progression to late-stage disease [24]. Self-monitoring with an Amsler grid (available online at www.macula.org/amsler-grid) is critical and can help detect disease progression as early as possible. New onset of visual distortion noted on an Amsler grid, or any other sudden change in vision, may indicate progression from the dry to the wet form of AMD. In some cases, timely treatment can reduce the risk of permanent loss of vision [24]. Patients who describe a sudden change in vision should be referred for urgent ophthalmic evaluation [24].

Drusen

In early dry AMD, various lipid- and protein-rich extracellular deposits accumulate under the RPE [25]. Clinically, the deposits of AMD are classified by fundoscopic features of morphology and size. Drusen are a marker of age-related macular degeneration (AMD).
Lesions similar to drusen, both in histology and in clinical appearance, are also seen in choroidal tumours and in chronic inflammatory and degenerative conditions of the eye. Drusen are yellowish-white deposits of extracellular material located between the retinal pigment epithelium (RPE) and the inner collagenous zone of Bruch's membrane. They are the result of ageing, and drusen seen in these varied conditions have a similar clinical and histological appearance [26]. As seen through the ophthalmoscope, drusen are dots ranging in color from white to yellow, sometimes with a crystalline, glittering aspect. The origin of drusen has remained unresolved for more than a century. Moreover, there is no agreement as to whether drusen in the absence of other ocular abnormalities always point to early age-related macular degeneration [1]. Within Bruch's membrane, several biochemical and anatomical changes can be differentiated with aging, including collagenous thickening, calcification, and lipid infiltration, in the absence of apparent retinal dysfunction. The accumulation of specific deposits under the RPE is the hallmark histopathological feature of eyes with early AMD, when visual function is still not irreversibly impaired. Histopathological examination defines three main types of sub-RPE deposits on the basis of location, thickness, and content: basal laminar deposits (BLamD), basal linear deposits (BLinD), and nodular drusen. BLamD is seen as amorphous material of intermediate electron density between the plasma membrane and the basement membrane of the RPE, often containing banded structures (wide-spaced collagen), patches of electron-dense fibrillar or granular material, and occasionally membranous debris [27]. These deposits are distributed throughout the retina, including the periphery as well as the macula, underlying not only cones but rods as well.
BLinD are diffuse, amorphous accumulations within the inner collagenous zone of Bruch's membrane (BrM), external to the RPE basement membrane, with similar content variations [Green]. BLinD are characterized by coated and non-coated vesicles as well as some membranous and empty profiles [28]. Biochemically, the deposits contain phospholipids, triglycerides, cholesterol, cholesterol esters, unsaturated fatty acids, peroxidized lipids, and apolipoproteins [29]. In contrast to BLamD and BLinD, nodular drusen are discrete, dome-shaped deposits within the inner collagenous zone of BrM (i.e., external to the RPE basal lamina). Due to their location, nodular drusen are often contiguous with BLinD and can be difficult to distinguish from BLinD without electron microscopy [25]. The differences between BLamD and BLinD are shown in Figure 1. A key factor influencing the classification of drusen is their size and shape. A simple aid for determining drusen size is the widest diameter of the venous branches at the edge of the disc, which measures 125 microns. Drusen are classified according to their appearance and size into two basic categories.

Hard drusen

Their size is smaller than 63 microns. Ophthalmoscopic examination shows small and well-demarcated yellow deposits (Figure 2). This type of drusen is associated with a very low risk of progression to the late forms of AMD. However, the occurrence of more than 8 hard drusen is associated with an increased risk of developing soft drusen. The occurrence of drusen is not a static phenomenon; their presence is characterized by dynamic changes. Hard drusen can grow and change into soft drusen. Soft drusen can grow and coalesce into large confluent bodies, which leads to detachment of the RPE. Another change that can be seen is calcification; cholesterol crystals are visible inside drusen. Drusen usually increase in number with advancing age.
The presence of soft drusen in both eyes is an important risk factor for the development of advanced forms of AMD (geographic atrophy of the RPE and CNV). Hard drusen, by contrast, are frequently associated with the occurrence of dry AMD [30].

Changes in retinal pigment epithelium

Irregularities in the RPE are associated with all stages of AMD. Focal hyperpigmentation arises from changes at the level of the RPE: hyperpigmentation of RPE cells, and proliferation or migration of RPE cells into the subretinal space (Figure 4). Focal hyperpigmentation is commonly associated with chorioretinal anastomosis. Focal hypopigmentation is associated with areas of drusen, which lead to thinning of the RPE cell layer and reduction of melanin content. Low melanin content is associated with a high risk of transition to the wet form of AMD.

Geographic atrophy of RPE cells

Geographic atrophy (GA) of the RPE is end-stage dry AMD. GA is characterized by a well-circumscribed area of RPE atrophy, which allows good visualization of the choroid and, in the end stage of the disease, the sclera (Figure 5, Figure 6). The term geographic atrophy is not an entirely accurate name for this stage, because it involves not only RPE atrophy but also atrophy of the choriocapillaris and retina. These three layers are inseparably joined together; atrophy of one of them leads to irreversible atrophy of the other two. GA can occur either as a primary form of AMD, or as a secondary form after absorption of soft drusen, after flattening of an RPE detachment, as a consequence of CNV regression, or after rupture of the RPE. GA of the RPE causes severe loss of visual acuity in 20% of AMD patients; the remaining 80% of severe visual acuity loss in AMD is caused by CNV. Patients with primary GA are on average older than patients with wet AMD. Based on these circumstances, it has been suggested that the GA process occurs as a reaction to changes in Bruch's membrane in those eyes that have not developed the wet form of AMD.
Patients with GA of the RPE have problems with near vision in particular, even if the central subfoveal RPE area is retained. These problems are caused by paracentral scotomas, impaired dark adaptation that reduces visual acuity under dimmed lighting, and deterioration of contrast sensitivity [31]. Magnifying aids paradoxically do not bring much benefit, because they carry the magnified image into the paracentral absolute scotomas. The patient's vision during the day varies depending on the ability to find a central area of functioning retina within the zone of GA [32]. GA of the RPE occurs bilaterally; the second eye is affected in about 50% of cases, and the area of GA in the second eye is around 20% smaller. With the development of GA in one eye, the risk of CNV (i.e., wet AMD) decreases in both eyes [31]. Research based on the RPE injury hypothesis postulates that the pathogenesis and progression of dry macular degeneration proceed in three distinct stages:

1. Initial RPE oxidant injury causes extrusion of cell membrane debris, together with decreased activity of matrix metalloproteinases (MMPs), under the RPE as BLamD.

2. RPE cells are subsequently stimulated to increase synthesis of MMPs and other molecules responsible for extracellular matrix removal, affecting both the RPE basement membrane and BrM [35]. This process leads to progression of BLamD into BLinD and drusen by admixture of blebs into BrM, followed by the formation of new basement membrane under the RPE to trap these deposits within BrM [36].

3. Macrophages are recruited to sites of RPE injury and deposit formation. Macrophage recruitment may be beneficial or harmful depending upon their activation status at the time of recruitment [37]. Nonactivated or scavenging macrophages may remove deposits without further injury.
Activated or reparative macrophages, through the release of inflammatory mediators, growth factors, or other substances, may promote complications and progression to the late forms of the disease [37].

Wet form of AMD

Wet AMD occurs less commonly but is far more aggressive than dry AMD. Wet AMD results from the development of neovascularization, or new blood vessel growth, beneath the retina. These abnormal blood vessels may break into the retinal cell layers. The leakage of fluid and proteins from these vessels causes scar formation throughout the macula, which ultimately results in deterioration of central vision. Wet AMD tends to be far more severe than dry AMD. The wet form of AMD is characterized by the occurrence of RPE detachment, choroidal neovascular membrane (CNV), and subretinal hemorrhage in the macula. The terminal stage of wet AMD is the disciform scar (Figure 7). In the last decade, 2 additional clinical units have been distinguished within the wet form of AMD: retinal angiomatous proliferation (RAP) and polypoidal choroidal vasculopathy (PCV) (see below).

Retinal pigment epithelium detachment

The prognosis of RPE detachment is not good if the central part of the fovea is affected [38]. RPE detachment is generally characterized by elevation of the RPE layer from Bruch's membrane and is divided into 4 categories. Drusen RPE detachment is formed in the later stages by multiple connecting soft drusen, which elevate the RPE layer from Bruch's membrane. Drusen RPE detachment carries a high risk of the development of CNV [38]. On fluorescein angiography (FA), hyperfluorescence of soft drusen can be seen in the early phase, which does not widen until the late stages. Serous RPE detachment is a roughly bounded elevation of the RPE cells containing serous fluid that is usually clear but may be turbid. On FA we see early, sharply bounded hyperfluorescence without noticeable leakage. Fibrovascular RPE detachment (Figure 12)
Hemorrhagic and vascularized RPE detachments are closely related, because both contain CNV. They differ from each other, in principle, only in the extent of bleeding, which is greater in hemorrhagic RPE detachment. The angiographic picture of hemorrhagic RPE detachment differs from that of the vascularized form because hemoglobin blocks fluorescence, so the extent of the CNV is not completely well defined. In unclear cases, it is possible to use indocyanine green angiography (ICGA), which can display the vascular structure of the retina and choroid despite the hemoglobin. The clinical course of RPE detachment may be as follows:

Persistent RPE detachment
Persistent RPE detachment can stabilize without the presence of CNV. Over time, it may slowly progress in size [38].

Flattened RPE detachment
Flattening of the RPE detachment is uncommon; when it occurs, geographic atrophy of the RPE usually develops in the affected area [39].

Rupture of RPE
RPE rupture is a very unfavorable state accompanying the development of RPE detachment [40]. It occurs mostly at the edge of the detachment, at the transition between attached and detached RPE. The RPE retracts away from the location of the rupture toward the center of the detachment. If the subfoveal area is affected, a rapid decrease in visual acuity is detected; in this case, the photoreceptors have lost contact with the RPE cells, and there is an absolute central scotoma. Subretinal bleeding usually occurs in the course of rupture. Less frequently, a CNV develops, which is very aggressive and rapidly progresses to a disciform scar [40].

Development of CNV
The most common complication of RPE detachment, however, is the appearance of CNV. Increasing age is the basic risk factor for the development of CNV in subjects with RPE detachment. CNV formation is rare in patients under 56 years of age, occurs in 29% of those aged 56-75 years, and affects 62.5% of subjects in the group over 75 years.
Another study showed that elderly patients have larger RPE detachments with more fluid than younger patients, and more often develop CNV [41].

Choroidal neovascular membrane

CNV occurs when Bruch's membrane ruptures. Newly formed blood vessels from the choroid grow first into the space under the RPE and later into the subretinal space. The extent of neuroretinal edema is a sign of CNV activity. An attempt to unify the classification of CNV became a necessity; on this basis, the terms classic and occult CNV were defined. A typical picture of CNV includes a subretinally localized grayish lesion, which can vary in size, location, and thickness. If the membrane has a classic character, the lesion is usually well defined and its edges are lined with subretinal hemorrhages (Figure 13). On FA, a well-demarcated lesion can be seen from the early stages that does not increase in size toward the late stages of the FA (Figure 14, Figure 15). The size of occult membranes is most evident on biomicroscopy. Changes are visible at the level of the RPE (movement of RPE cells, RPE detachment), subretinal hemorrhages may occur, and edema of the neuroretina is noticeable (Figure 16). Based on the findings on fluorescein angiography (FA), two basic types of CNV are distinguished: classic and occult. Occult CNV (type I according to the Gass classification) is characterized by the development of the neovascular complex between the RPE and the choriocapillaris; this CNV complex is characteristic of the beginning stages of wet AMD (Figures 16-18). Classic CNV (type II according to the Gass classification) involves the spread of the CNV complex into the space between the RPE and the neuroretina. We can say that classic CNV arises from occult CNV through a breach in the continuity of Bruch's membrane (Figures 13-15).

Classification of CNV according to the center of the fovea

The localization of the entire CNV complex with respect to the center of the fovea plays a crucial role in deciding on the method of subsequent therapy.
Localization is possible only with high-quality FA. Depending on the position of the CNV relative to the center of the fovea, we can diagnose 3 forms of CNV. The most common form is subfoveal localization, in which the CNV complex is located beneath the center of the fovea. Another form is juxtafoveal localization, in which the CNV complex is located at a distance of 1 to 199 microns from the center of the fovea. The least frequent is extrafoveal localization, in which the distance of the CNV complex from the center of the fovea is larger than 200 microns.

Special clinical units within the wet form of AMD

In the last decades, the classification of wet AMD has undergone further development. Two new clinical entities were distinguished from the model of classic and occult CNV: retinal angiomatous proliferation (RAP) and polypoidal choroidal vasculopathy (PCV).

Retinal Angiomatous Proliferation (RAP)

Yannuzzi created this term in order to describe the basic characteristics of a clinical entity in which the formation of neovascularization begins within the retina [43]. RAP represents about 10-15% of newly diagnosed cases of wet AMD [44]. It occurs more frequently in elderly patients [43] and most commonly in Caucasians, in contrast to PCV, which is more common in pigmented races [45]. The disease is divided into 3 clinical stages.

Stage I - intraretinal neovascularization: New RAP lesions typically develop outside the foveal avascular zone, i.e., extrafoveally. The course is initially asymptomatic. Intraretinal neovascularization (IRN) begins in the deep capillary plexus outside the center of the fovea. During development, it most typically spreads in the vertical direction, i.e., between the external and the internal limiting membranes. IRN spreading sideways is not typical in the initial stages. Biomicroscopically, capillary dilation can be observed with a large network of nourishing blood vessels and intraretinal haemorrhages.
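The fovea-distance classification of CNV described earlier (subfoveal, juxtafoveal at 1-199 microns, extrafoveal beyond that) is a simple threshold rule. A minimal illustrative sketch follows; the function name is ours, and the handling of the exact 200-micron boundary is an assumption, since the text only specifies "larger than 200 microns" for the extrafoveal form:

```python
def classify_cnv_localization(distance_um: float) -> str:
    """Classify a CNV complex by its distance (in microns) from the
    center of the fovea, following the thresholds given in the text."""
    if distance_um < 1:
        # CNV complex located beneath the center of the fovea
        return "subfoveal"
    elif distance_um < 200:
        # 1 to 199 microns from the center of the fovea
        return "juxtafoveal"
    else:
        # text says "larger than 200 microns"; >= 200 is assumed here
        return "extrafoveal"
```

For example, `classify_cnv_localization(150)` returns `"juxtafoveal"`, the form whose treatment decisions differ from those for subfoveal membranes.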
Haemorrhages are usually very discreet compared to the subretinal hemorrhages accompanying classic and occult CNV, and especially PCV [45]. The pathophysiological mechanism of RAP development has not yet been explained in detail. A contribution of VEGF produced by RPE cells is assumed [46]. Neovascularization thus begins intraretinally and later extends subretinally; secondarily, an RPE detachment with occult CNV is created [47]. A reduction of Bruch's membrane permeability for VEGF may increase its intraretinal concentration, and this situation is considered the main cause of intraretinal neovascularization [48]. Another theory holds that oxidative stress leads to migration of RPE cells, both subretinally and intraretinally, which leads to the production of VEGF and stimulation of neovascularization in an atypical location [49].

Diagnostics

The basic diagnostic modality, besides biomicroscopy, is FA examination. In stage I, leakage occurs in the region of intraretinal neovascularization. In this area, edema with an accumulation of vascular loops is also demonstrated biomicroscopically, with leakage of dye on FA. At this stage, RAP can be erroneously mistaken for another microangiopathy, such as incipient diabetic maculopathy. RAP stages I and II may be misdiagnosed as classic CNV. In stage III, the FA finding is very close to that of a vascularized RPE detachment; it is therefore often diagnosed as occult CNV [43]. ICGA usually brings enough light to uncertain cases. In stage I, focal hyperfluorescence is observed in the retinal circulation, which has the character of IRN [44]. Retino-retinal anastomoses can often be diagnosed. In stage II, IRN is visible inside and under the retina. To locate the IRN, a pseudo-stereo view has to be used to establish the position of the neovascularization in the vertical axis. The hot spot of RAP must be distinguished from other hot spots, e.g., inside the choroid. In stage III, a connection of choroidal and retinal neovascularization is visible (Figures 19-21).
This creates a neovascular complex which has the character of a vascularized RPE detachment. In some cases, a chorioretinal anastomosis can be traced.

Polypoidal Choroidal Vasculopathy (PCV)

This clinical entity was described in detail and classified by Yannuzzi in 1990 as a peculiar hemorrhagic disorder of the macula, characterized by recurrent sub-retinal and sub-retinal pigment epithelium bleeding in middle-aged black women [50]. The pathogenesis of the disease is not completely understood. The primary pathological changes are sac-like extensions of choroidal vessels, which have a sacculated, polypoidal nature. Clinically, PCV is manifested by multiple hemorrhagic and serous RPE detachments accompanied by retinal edema [50]. PCV is a special type of CNV in wet AMD [45]. PCV usually occurs in pigmented races between 50 and 65 years of age. Originally it was thought that the disease affected only black women; according to published data, the disease also occurs in men, with a ratio of affected women to men of 4.7:1 [51]. Prevalence varies between 4-10% in subjects with newly diagnosed wet AMD. The basic clinical picture of PCV is characterized by the absence of drusen, accompanied by haemorrhagic or serous RPE detachment. Other symptoms are minimal signs of scarring, vitreous hemorrhage, and signs of intraocular inflammation. The disease usually occurs bilaterally [50], although one-sided occurrence has been described [52]. The main factor contributing to the development of PCV seems to be long-term chronic hypoxia caused by RPE detachment, together with the destructive effect of hard exudates. The vascular structure of PCV is located in the choroid. By size, we distinguish small, medium, and large PCVs. PCV lesions reach a larger size if larger choroidal vessels are affected; when medium choroidal vessels are affected, the lesions are smaller, and their diagnosis is more difficult because they do not have a characteristic image like larger lesions [50].
PCV is located mostly around the optic disc, although some works also show localization in the central periphery or in the central macular area [53]. PCV may be present as a single lesion or may be multiple. Topographically, the lesions are localized to the area under Bruch's membrane; the results of these studies are documented on OCT [54].

Natural course of the disease

PCV has the character of a chronic disease that manifests as serosanguinolent RPE detachment, often near the optic nerve. The disease comes in multiple relapses, and patients maintain good visual acuity for a long time. Chronic RPE detachment usually results in the creation of a scarring plaque beneath the RPE, which is hardly distinguishable from the classical disciform scar that develops as the terminal stage of the wet form of AMD. Polyps can have a very specific course: they often occlude spontaneously and after some time are perfused again. If polyps are located in the central subfoveal area, RPE atrophy and chronic cystoid retinal changes can occur in the terminal stage of the disease. Rarely, massive subretinal and intravitreal hemorrhage may arise, which is usually devastating to vision, and the final visual acuity is poor [55].

Diagnostics

Blood vessels occurring with PCV have a characteristic shape: they form bag-like aneurysms, and the RPE over them has a characteristic red-orange color (Figure 22, Figure 23). In contrast, blood vessels in other types of CNV are made of very fine vascular knitting and are usually gray-green in color. The thickness of the choroid is smaller in other types of CNV; in PCV the choroid is thicker. In the initial phase of the angiogram, polyps are usually smaller than in the late phase. This corresponds to the red-orange lesions, which are detectable by biomicroscopy. In the late stage, there is a reverse phenomenon: the center of the lesion becomes hypofluorescent, and hyperfluorescence occurs around the polyp. At a very late stage of the angiogram, washout of the dye can occur.
This phenomenon is only seen in lesions without leakage; leaking lesions remain hyperfluorescent [56]. OCT examination demonstrates elevation of the RPE layer, which corresponds to the red-orange lesions detected during biomicroscopic examination. Compared with serous RPE detachment, PCV is manifested by a greater prominence of the RPE layer [54].

Differential diagnosis

The differential diagnosis distinguishes PCV from other vascular abnormalities, inflammatory conditions of the retina and choroid, other types of CNV, and choroidal tumors. Improved diagnostic methods and clarification of the pathophysiological mechanisms lead to the correct diagnosis of PCV more often than before. Both FA and ICGA help in the diagnosis. CNV in PCV leaks already in the early phase, as does CNV of different origin. In the late stage, the lesion in PCV may show washout of the dye [56]. If bag-like aneurysms leak, late dye leakage is evident in their neighborhood (Figure 26, Figure 27).
The role of education in the association between race/ethnicity/nativity, cognitive impairment, and dementia among older adults in the United States

BACKGROUND: Older Black and Hispanic adults are more likely to be cognitively impaired than older White adults. Disadvantages in educational achievement for minority and immigrant populations may contribute to disparities in cognitive impairment.

OBJECTIVE: Examine the role of education in racial/ethnic and nativity differences in cognitive impairment/no dementia (CIND) and dementia among older US adults.

METHODS: Data come from the 2012 Health and Retirement Study. A total of 19,099 participants aged ≥50 were included in the analysis. Participants were categorized as having normal cognition, CIND, or dementia based on the Telephone Interview for Cognitive Status (TICS) or questions from a proxy interview. We document age and educational differences in cognitive status among White, Black, US-born Hispanic, and foreign-born Hispanic adults by sex. Logistic regression is used to quantify the association between race/ethnicity/nativity, education, and cognitive status by sex.

RESULTS: Among women, foreign-born Hispanics have higher odds of CIND and dementia than Whites. For men, Blacks have higher odds for CIND and dementia compared to Whites. The higher odds for CIND and dementia across race/ethnic and nativity groups were reduced after controlling for years of education but remained statistically significant for older Black and US-born Hispanic adults. Controlling for education reduces the odds for CIND (women and men) and dementia (men) among foreign-born Hispanics to nonsignificance.

CONTRIBUTION: These results highlight the importance of education in CIND and dementia, particularly among foreign-born Hispanics. Addressing inequalities in education can contribute to reducing racial/ethnic/nativity disparities in CIND and dementia for older adults.
Introduction

Extensive research has been conducted on racial and ethnic disparities in cognitive functioning in the United States. These findings indicate that older non-Hispanic Blacks (hereafter, Blacks) and Hispanics have lower cognitive performance than non-Hispanic Whites (hereafter, Whites) on measures of memory (Masel and Peek 2009), executive functioning (Early et al. 2013), and global cognition (Díaz-Venegas et al. 2016). Blacks and Hispanics are also more likely to have cognitive impairment, and spend a larger proportion of their remaining years after age 50 with cognitive impairment/no dementia (CIND) and dementia than Whites (Alzheimer's Association 2010; Garcia et al. 2017a; Langa et al. 2017). These racial/ethnic disparities in cognitive functioning have been attributed to several factors, including a higher prevalence of chronic health conditions associated with an increased risk for dementia (Mayeda et al. 2015), poor educational quality in childhood (Crowe et al. 2013), and low literacy (Mehta et al. 2004). Prior research has consistently found that higher educational attainment is associated with better performance on measures of cognitive functioning (Alley, Suthers, and Crimmins 2007) and decreased risk for dementia (Caamaño-Isorna et al. 2006). Hispanics, in particular, have lower educational attainment compared to Whites. In 2014, nearly 90% of Whites aged 55 and older reported having at least a high school level of education, compared to 80% of Blacks and 59% of Hispanics of comparable age (United States Census Bureau 2014). Racial and ethnic disparities in cognitive functioning may also be attributed to disadvantages in educational achievement for minority and immigrant populations. Controlling for differences in education has been shown to reduce disparities in cognition between Whites, Blacks, and Hispanics (Schwartz et al. 2004; Sisco et al. 2015).
For example, Yaffe and colleagues showed that the increased risk for dementia among older Blacks compared to Whites was no longer statistically significant after controlling for a composite measure of socioeconomic status that included educational attainment, income, financial adequacy, and literacy level (Yaffe et al. 2013). Despite considerable research into racial/ethnic disparities in cognitive functioning among older adults and the role of education in explaining these disparities, less scholarship has examined whether the likelihood of CIND and dementia among older Hispanics varies by nativity status (i.e., US-born vs. foreign-born) compared to Whites. Prior research documents that the risk for cognitive impairment, rates of cognitive decline, and proportion of years after age 65 lived with cognitive impairment vary by nativity status among older Hispanics (Downer et al. 2017; Garcia et al. 2017b; Garcia et al. forthcoming). However, these analyses only included US-born and foreign-born Hispanics of Mexican origin residing in the Southwest United States, which prevented racial/ethnic and nativity comparisons in cognitive status with older White and Black adults. One analysis did examine differences among foreign-born Mexican Americans in the incidence of cognitive impairment compared to Whites, independent of individual and neighborhood characteristics. However, that analysis did not differentiate by sex or between CIND and dementia when defining cognitive impairment (Weden et al. 2017). The relationship between nativity, cognitive functioning, cognitive impairment, and cognitive life expectancies among older Hispanics has been shown to differ between men and women (Downer et al. 2017; Garcia et al. 2017a; Garcia et al. 2017b; Garcia et al. forthcoming). Differentiating between CIND and dementia is important because CIND is a less severe stage of cognitive impairment. The present analysis examines the role of education in racial/ethnic and nativity differences for CIND and dementia.
This analysis extends previous research by (1) distinguishing different cognitive statuses, (2) differentiating between Hispanics by nativity, and (3) stratifying by sex. Data and methods This analysis is based on data from the 2012 Health and Retirement Study (HRS 2011). We use the harmonized version of the RAND HRS Version O Data File (RAND 2015) to assess the association between education and the odds for CIND and dementia among White, Black, and US-born and foreign-born Hispanic adults ages 50 and older in the United States. Respondents missing information on education and who identified as 'other' are omitted from the analysis. The final analytic sample includes 12,762 Whites, 3,715 Blacks, 992 US-born Hispanics, and 1,630 foreign-born Hispanics, for a total of 19,099 participants. The cognitive functioning of HRS participants able to complete a direct interview is evaluated using a modified version of the Telephone Interview for Cognitive Status (TICS-M) (Brandt, Spencer, and Folstein 1988). The TICS-M assesses cognitive functioning in learning (immediate recall of 10-word list, 10 points), memory (delayed recall of 10-word list, 10 points), working memory (serial seven subtraction, 5 points), and attention (counting backward from 20-11, 2 points) (Crimmins et al. 2011). A total score for the TICS-M is obtained by calculating the sum of the individual domains. The range of possible scores is 0-27 points, with higher scores indicating better cognitive functioning. We used the TICS-M as HRS participants younger than 65 years of age are not given the orientation or naming items that are included in the full cognitive assessment (Crimmins et al. 2011). Following previous research, we used cutoffs created to classify participants as having normal cognition (12-27 points), CIND (7-11 points), and dementia (0-6 points) (Crimmins et al. 2011).
These cutoffs were created by HRS investigators so the frequency of cognitive states in the HRS matched what was estimated in the Aging, Demographics, and Memory Study (Crimmins et al. 2011), a substudy of the HRS in which participants received an in-depth, in-home neuropsychological exam. The cognitive status of HRS participants who were unable to complete a direct interview is categorized using questions from a proxy interview (Crimmins et al. 2011): (1) proxy-reported memory ability (0 points [excellent] to 4 points [poor]); (2) number of limitations in five instrumental activities of daily living (managing money, taking medication, preparing meals, using a telephone, and shopping for groceries; score 0-5 points); and (3) interviewer assessment of difficulty completing the interview due to the respondent's cognitive limitations (0-2 points). The overall score was used to classify proxy respondents with dementia (6 points or higher), CIND (3-5 points), or normal cognition (0-2 points). Sociodemographic variables in the analysis include race/ethnicity, nativity, sex, age, and education. Race/ethnicity and nativity are self-reported. We include Whites, Blacks, and US-born and foreign-born Hispanics. Sex corresponds to whether the respondent identifies as female or male. Age is a continuous variable. We measure education as completed years of formal education. In the descriptive analysis, comparisons across age, education, and cognitive status were conducted using χ² and t-tests to assess race/ethnicity and nativity differentials by sex. For the multivariate models, we used logistic regression to quantify the association between race/ethnicity/nativity, education, and cognitive status by sex. All models were fit separately for males and females to account for well-known sex differences in aging, including a higher lifetime risk for dementia and lower levels of education among women compared to men (Chêne et al. 2015; Ott et al. 1998).
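The two scoring rules above (the direct TICS-M interview and the proxy interview) can be written as a small classification helper. This is a sketch: the function names are illustrative, but the cutoffs are those reported in the text (Crimmins et al. 2011).

```python
def classify_tics(score):
    """Classify cognitive status from a total TICS-M score (0-27 points).

    Cutoffs from the text: normal 12-27, CIND 7-11, dementia 0-6.
    """
    if not 0 <= score <= 27:
        raise ValueError("TICS-M total must be between 0 and 27")
    if score >= 12:
        return "normal"
    if score >= 7:
        return "CIND"
    return "dementia"


def classify_proxy(memory, iadl_limitations, interviewer_difficulty):
    """Classify cognitive status from a proxy interview.

    Inputs: proxy-rated memory (0 = excellent to 4 = poor), number of
    IADL limitations (0-5), and interviewer-assessed difficulty (0-2).
    Summed score: 0-2 = normal, 3-5 = CIND, 6 or higher = dementia.
    """
    total = memory + iadl_limitations + interviewer_difficulty
    if total >= 6:
        return "dementia"
    if total >= 3:
        return "CIND"
    return "normal"
```

For example, a respondent with a TICS-M total of 10 falls in the CIND range, while a proxy score of 4 + 1 + 1 = 6 is classified as dementia.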
We use sampling weights provided by the HRS to adjust for nonresponse and the complex design of the survey. Results Table 1 presents sociodemographic characteristics for the study sample. White respondents are significantly older, more educated, and less likely to be classified as CIND or dementia than minority and immigrant groups, regardless of sex. In addition, females are older than their male counterparts and report fewer years of education across race/ethnicity and nativity (with the exception of education for Blacks). Table 2 presents cognitive status characteristics by race/ethnicity/nativity and sex. Overall, White adults were more likely to have normal cognitive status at older ages than minority and immigrant groups. In addition, the mean age for CIND and dementia is significantly lower among older Whites than Blacks and Hispanics. Furthermore, older Whites have significantly more years of education across all cognitive categories compared to Blacks and Hispanics, regardless of sex. Table 3 presents results from separate logistic regression models for CIND and dementia. For each regression model, the reference category is cognitively normal. Models 1 and 3 are base models that examine race/ethnicity/nativity differentials controlling for age. Models 2 and 4 add education. The results in Panel A, for females, show that race/ethnicity/nativity and older age are associated with CIND and dementia. In Model 1, Black, US-born Hispanic, and foreign-born Hispanic women have higher odds (4.8, 4.1, and 5.4, respectively) of being classified as CIND than White women. In Model 3, Black, US-born Hispanic, and foreign-born Hispanic women have higher odds (5.9, 6.4, and 8.9, respectively) of being classified as having dementia than White women, independent of age.
Although adding education attenuates disparities in CIND (Model 2) and dementia (Model 4) among women, the higher odds of CIND and dementia for minority and immigrant groups remain statistically significant (with the exception of CIND for foreign-born Hispanic women). Among males (Panel B), a different pattern emerges. All minority and immigrant men have higher odds of CIND and dementia than White men, but Blacks exhibit 4.1 times higher odds while both US-born Hispanics and foreign-born Hispanics exhibit 3.1 times higher odds for CIND (Model 1). Similarly, in Model 3, Black, US-born Hispanic, and foreign-born Hispanic men have higher odds (8.3, 6.4, and 3.4, respectively) of being classified as having dementia compared to White men. When we include education, the odds for CIND (Model 2) and dementia (Model 4) are reduced among Blacks and US-born Hispanics; for foreign-born Hispanic males the difference for CIND and dementia is no longer significant. Models 2 and 4 for both sexes illustrate the impact of lower levels of education on cognitive status for older Blacks and Hispanics relative to Whites. Educational disadvantages among minority and immigrant groups contribute to the higher odds for CIND and dementia. These findings suggest that the increased odds for CIND and dementia among minority and immigrant populations relative to Whites would be reduced, but still present for some groups, if years of education were equal across populations. Discussion and conclusion Our findings call attention to the importance of race/ethnicity/nativity and education when assessing the odds for CIND and dementia among older adults in the United States. First, we provide evidence that years of education accounts for a large proportion of the association between race/ethnicity/nativity, CIND, and dementia among Blacks, US-born Hispanics, and foreign-born Hispanics.
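To illustrate how the reported odds ratios are read, here is a minimal sketch computing an unadjusted odds ratio from a 2x2 table. The counts are hypothetical, not HRS data; the paper's estimates come from fitted logistic regression models, where the analogous quantity is exp(beta) for a group indicator, and adding covariates such as education adjusts the estimate.

```python
def odds_ratio(group_cases, group_noncases, ref_cases, ref_noncases):
    """Unadjusted odds ratio of an outcome (e.g. CIND vs. normal cognition)
    for one group relative to a reference group, from a 2x2 table."""
    odds_group = group_cases / group_noncases
    odds_ref = ref_cases / ref_noncases
    return odds_group / odds_ref

# Hypothetical counts: 120 CIND / 400 normal in one group versus
# 150 CIND / 2000 normal in the reference group.
or_estimate = odds_ratio(120, 400, 150, 2000)  # (0.3 / 0.075) = 4.0
```

An odds ratio of 4.0 here means the group's odds of the outcome are four times the reference group's odds, which is how the Model 1-4 coefficients in Table 3 are interpreted.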
Second, the role of education appears to be stronger for foreign-born Hispanics compared to Blacks and US-born Hispanics. It is important to interpret our findings in the context of previous research using the Health and Retirement Study. For example, analyses of data from the 2006 HRS have shown that among adults 55 years and older, Blacks were two to three times, and Hispanics two times, more likely to be cognitively impaired than Whites (Alzheimer's Association 2010). These disparities varied by age group, however, with larger racial/ethnic differences among adults aged 55-64 (four times more likely for Blacks and three times more likely for Hispanics) than among adults 85 and older (two times more likely for Blacks and 1.6 times more likely for Hispanics) (Alzheimer's Association 2010). Our findings for the odds of CIND and dementia for Blacks and Hispanics after adjusting for educational attainment are similar to those reported by the Alzheimer's Association (2010). However, our odds ratios for dementia, particularly for Blacks, are considerably higher than what was reported in a recent study using 2000 and 2012 HRS data (Langa et al. 2017). These differences may be due to several factors. First, Langa and colleagues (2017) combined the cognitively normal and CIND categories to create a dichotomous variable. Thus, participants with CIND were included in the reference category. Second, the authors control for net worth as an additional measure of socioeconomic status, which may have reduced the association between race/ethnicity and dementia. Finally, Langa and colleagues (2017) did not stratify by nativity or sex, which may also have contributed to the different findings. We acknowledge that net worth may contribute to disparities in CIND and dementia. However, previous findings suggest the increased risk for dementia associated with low income is reduced and no longer statistically significant after controlling for education (Evans et al. 1997).
Thus, controlling for income may not have a considerable impact on our results. In addition, the racial/ethnic and nativity differences reported above may be due in part to temporal trends. Using longitudinal data can help tease out age and cohort effects by comparing data from different time points (Yang and Land 2013). However, recent longitudinal findings in the HRS show no significant differences in the increased odds for prevalent cognitive impairment among US-born and foreign-born Mexican Americans independent of individual social and economic factors (Weden et al. 2017), which are consistent with our findings. Furthermore, this study documents an immigrant advantage in the incidence of cognitive impairment among foreign-born Mexican Americans relative to Whites, consistent with the healthy immigrant hypothesis (Weden et al. 2017). Positive health selection that contributes to longer life expectancies and lower mortality may also contribute to a reduced risk for cognitive impairment and slower cognitive decline among foreign-born Hispanics. For instance, prior research found that mid-life (age 20-49 years) immigrants from Mexico had higher levels of cognitive functioning and slower rates of cognitive decline than their US-born co-ethnics. Furthermore, this research documented that midlife migrant males were able to maintain higher cognitive function for a longer period of time compared to midlife migrant females, which is also consistent with the healthy immigrant effect. Further research is needed to shed light on the cognitive benefits of formal education among different groups who most likely received vastly different qualities of formal education. Furthermore, it is possible that different effects of formal years of education for particular groups imply that informal education plays an important supplementary role. Moreover, additional research is needed to deepen our understanding of racial/ethnic and nativity disparities in cognition beyond educational attainment.
The CIND and dementia differences we documented above have important policy implications. Educational attainment of Whites remains considerably higher than that of Blacks and Hispanics due to social and economic disadvantages experienced by minority populations (Gamoran 2001). Social policy specifically aimed at increasing educational attainment and quality for minority and immigrant populations can potentially have major impacts on reducing or eliminating future disparities in adult CIND and dementia. Our analysis should help advance a social policy agenda aimed toward the eventual closing of cognitive disparities among minority and immigrant groups of older adults in the United States.
A novel method of growing fungi for DNA extraction
Keith A. Seifert, Centre for Land and Biological Resources Research, Agriculture Canada, Research Branch, Ottawa, Ontario K1A 0C6 Canada
Creative Commons License: This work is licensed under a Creative Commons Attribution-Share Alike 4.0 License. This regular paper is available in Fungal Genetics Reports: http://newprairiepress.org/fgr/vol41/iss1/24
Preparation of fungi for DNA extraction typically involves growing cultures in liquid culture in Erlenmeyer flasks, Roux bottles or even microfuge tubes (Cenis 1992 Nucl. Acids Res. 20:2380). Growing fungal cultures in liquid may require formulating new media or determining aeration requirements, and there are no rapid means of confirming the identification of the resulting mycelium. Fungi are grown routinely on agar media for identification, but agar complicates DNA extraction. Solutions of 'reverse agar' (BASF pluronic polyol F-127), a block polymer of polyoxypropylene and polyoxyethylene, are solid at normal room temperatures but liquid at 4C. When the compound is used as a replacement for agar in solid media, its unusual properties allow the separation of mycelium and medium by simply placing a mature culture in a refrigerator. The compound has been employed for isolation of heat-sensitive antagonistic microorganisms (Gardner and Jones 1984, J. Gen. Microbiol. 130:731-733; Olson and Lange 1989 Opera Bot.
100:197-199), isolation of enzymes associated with basidiome formation in Coprinus (Choi and Ross 1988 Exp. Mycol. 12:80-83), and for isolating mycelium from Neurospora race tubes (Munkres 1990, Fungal Genet. Newsl. 37:26). In this note, the possibility of extracting DNA suitable for PCR amplification from mycelium grown on reverse agar media is documented. 'Reverse Malt Agar' (RMA) medium was prepared containing 30% BASF pluronic polyol F-127 substituted for agar in the Malt Extract Agar medium of Pitt (1979 The Genus Penicillium, Academic Press). The liquid was poured over the F-127 granules and the resulting suspension left in the refrigerator overnight until dissolved. The solution was then autoclaved, resulting in a thick, sludge-like substance, which was again refrigerated until a homogeneous, liquid solution resulted. The liquid was then poured into petri dishes, where it solidified as it warmed to room temperature. The resulting medium is not a true solid, but rather a dense gel. Cultures of Penicillium spinulosum Thom. (DAOM 216698), Aspergillus japonicus Saito var. japonicus (DAOM 216695), Gliocladium roseum Bainier (Doyle SB-03a, not saved) and Trichoderma harzianum Rifai (DAOM 216501) were grown on RMA for 7 days at 25C in 6 cm petri dishes. Growth rates were slightly slower than on the same medium made with 2% agar, but the resulting colonies produced microscopically typical sporulating structures. For DNA extraction, the petri dishes were placed in the refrigerator for approximately 1 hour until the medium had liquefied. Subsequent handling was done on ice. Mycelium was lifted from the medium using an autoclaved pipette tip, placed on the inverted, slanted lid, and allowed to drain for 30-60 seconds. The mycelium was then cut into smaller pieces using a sterile scalpel blade, and transferred into autoclaved 1.5 ml microfuge tubes. The tubes were then spun in a cold microfuge for 5-10 minutes and the excess medium removed.
The mycelium was washed twice with 750 uL cold, autoclaved distilled water followed by cold centrifugation, and then used directly for DNA extraction. The DNA miniprep method of Edwards et al. (1991 Nucl. Acids Res. 19:1349), modified by the addition of a cold 70% ethanol wash of the final pellet, was used. The resulting DNA was treated with RNase A for 1 hour at 37C. PCR amplification of the ITS1-ITS4 region of the ribosomal DNA was performed using the primers and conditions given by White et al. (pp. 282-287 in: PCR Protocols, Innis et al., eds., Academic Press). The resulting products were digested for 2 hrs at 37C using HinfI in the buffer supplied with the enzyme. The DNA yields obtained from mycelium grown on RMA were similar to those obtained from mycelium grown in liquid culture. Washing away excess reverse agar with cold water significantly improved yields, but DNA also was isolated from unwashed mycelium. The resulting DNA performed normally in the ITS PCR amplification and subsequent restriction digests (Figure 1). Figure 1. Miniprep DNA (b-d), ITS 1-4 amplification (e-g) and HinfI restriction digests (h-j) from fungi grown on malt extract reverse agar. b, e, h Penicillium spinulosum. c, f, i Aspergillus japonicus var. japonicus. d, g, j Trichoderma harzianum. Lane a is the marker. Use of reverse agar for cultivation of fungi for DNA extraction may be convenient for certain studies. For fungi that do not produce characteristic structures in liquid broth, reverse agar provides a means of ensuring the correct identity of the mycelium before DNA extraction proceeds. Certain population genetics studies, for example, require the manipulation of a large number of cultures that must be cloned (e.g., single-spore isolations) before genetic analysis can proceed. The use of reverse agar could eliminate one round of culture transfers, resulting in significant labour savings.
Reverse agar solutions can be stored for some time at 4C and plates poured when required. Experience has shown that poured plates do not keep indefinitely at room temperature. After 4-6 weeks, the medium no longer liquefies, presumably because of higher polymer concentrations resulting from water evaporation. Also, because the polymer itself is slightly inhibitory to fungi, it does not seem to be suitable for weak nutrient media. Our trials with the Fusarium medium SNA (Nirenberg 1981 Can. J. Bot. 59:1599-1609), for example, resulted in sparse growth that could not be harvested following liquefaction of the medium. Acknowledgements: I am grateful to Dr. John Speakman (BASF, Limburghof, Germany) and BASF Performance Chemicals, Parsippany, NJ for providing samples of pluronic polyol F-127. http://newprairiepress.org/fgr/vol41/iss1/24 DOI: 10.4148/1941-4765.1386 Note from FGSC: Dr. K.D. Munkres donated a large quantity of pluronic F-127 to the stock center. We will gladly make samples (100-200 g) available at no cost to interested researchers. Published by New Prairie Press, 2017
The Landscape of Autophagy-Related (ATG) Genes and Functional Characterization of TaVAMP727 to Autophagy in Wheat Autophagy is an indispensable biological process and plays crucial roles in plant growth and plant responses to both biotic and abiotic stresses. This study systematically identified autophagy-related proteins (ATGs) in wheat and its diploid and tetraploid progenitors and investigated their genomic organization, structure characteristics, expression patterns, genetic variation, and regulation network. We identified a total of 77, 51, 29, and 30 ATGs in wheat, wild emmer, T. urartu and A. tauschii, respectively, and grouped them into 19 subfamilies. We found that these autophagy-related genes (ATGs) suffered various degrees of selection during wheat’s domestication and breeding processes. The genetic variations in the promoter region of Ta2A_ATG8a were associated with differences in seed size, which might have been artificially selected for during the domestication process of tetraploid wheat. Overexpression of TaVAMP727 improved the cold, drought, and salt stress resistance of transgenic Arabidopsis and wheat. It also promoted wheat heading by regulating the expression of most ATGs. Our findings demonstrate how ATGs regulate wheat plant development and improve abiotic stress resistance. The results presented here provide the basis for wheat breeding programs for selecting higher-yielding varieties capable of growing in colder, drier, and saltier areas. Introduction Autophagy is an evolutionarily conserved intracellular vacuolar process that controls the recycling of cellular contents and organelles to promote cell survival and redistribute nutrients. As a highly conserved intracellular degradation system, autophagy is believed to be responsible for the self-defense and protection of plants from biotic and abiotic stress. Three types of autophagy, microautophagy, macroautophagy, and mega-autophagy, have been identified in plants [1].
The role of autophagy in plants is paradoxical. On the one hand, it may respond to either stress conditions or nutrient starvation to enable cell survival. On the other hand, autophagy may be associated with programmed cell death (PCD) through extensive degradation of cell components [2]. Autophagy is a complex process that involves many autophagy-related proteins (ATGs). It is initiated by the phosphorylation and dephosphorylation of the ATG1/ATG13 complex [3]. The ATG9 membrane delivery complex and the class III phosphatidylinositol-3-kinase (PI3K) complex are necessary for autophagosome formation [4,5]. Furthermore, the ATG8-PE (phosphatidylethanolamine) and ATG5-ATG12 conjugation systems contribute to phagophore expansion and closure [6]. Functions of autophagy in plants might result from the synergy of all autophagy protein complexes, and autophagy in plants might be studied as a whole. Many factors regulate the autophagy of plants. The target of rapamycin (TOR) complex and sucrose non-fermenting 1-related kinase 1 (SnRK1) are crucial regulators of abiotic stress-induced autophagy in plants [7]. Furthermore, there are considerable overlaps and signaling crosstalks between different cell death pathways and how they are regulated [8]. The soluble N-ethyl-maleimide sensitive factor attachment protein receptor (SNARE) complex is a crucial regulator of vesicular traffic that mediates specific membrane fusion between transport vesicles and target membranes and is essential to autophagy [9]. SNARE proteins can participate in the autophagy process by interacting with many ATGs. In yeast, SNARE proteins mediate the homotypic fusion of Atg9 vesicles; they also form bundles to regulate autophagosome-vacuole fusion, which is controlled by Atg1 kinase [10,11]. In mammals, both Atg8 and Atg14 proteins can interact with SNAREs to regulate lysosome and autolysosome biogenesis [12,13]. SNAREs likely mediate homotypic membrane fusion between vesicles and vacuoles in Arabidopsis [14].
Vesicle-associated membrane protein 727 (VAMP727) is a seed plant-specific R-SNARE that mediates vacuolar transport, plant growth, and seed maturation [14,15]. This protein has been reported to play a crucial role in autophagy [16]. However, further studies are needed on the specific functions of VAMP727 in autophagy. An increasing number of studies have been conducted to reveal the functions of autophagy in both fungi and animals. Similar to yeast, one of the significant physiological roles of autophagy in plants is cellular adaptation to both nitrogen and carbon starvation [17]. However, it is challenging to apply definitions developed in animal and fungal systems indiscriminately, since plants may have specific autophagy mechanisms. During the lifespan of plants, autophagy is crucial in plants' resistance to various biotic and abiotic stresses [18]. Autophagy activation can represent one of the significant responses of plants to deal with these stress conditions. Although several functions of specific autophagy-related genes (ATGs) have been studied, the mechanisms by which autophagy regulates plant development or plant responses to biotic and abiotic stresses are largely unknown. Wheat (Triticum aestivum L.) is the most widely cultivated crop worldwide, contributing to about a fifth of humans' total calorie requirements [19]. Genetically, it is a newly formed allohexaploid species (2n = 6x = 42, AABBDD) that originates from a combination of genomes of three diploid donor species through two natural interspecific hybridization events [20]. Thus, wheat is an ideal model species for studying chromosome interaction and polyploidization in plants. In recent years, with the completion of genome sequencing and assembly of three wheat-related species, T. turgidum ssp. dicoccoides (wild emmer), T. urartu and A. tauschii [19,21-23], large-scale wheat resequencing efforts have been carried out to explain the origins and domestication of wheat [24,25].
The ATGs in wheat such as ATG4, ATG8 and ATG6 have been identified as associated with biotic and abiotic stress responses [26,27]. Until now, genome-wide identification and characterization of ATGs have been investigated in various plant species. However, the role of ATGs in wheat's abiotic stress resistance and its evolution has not been systematically determined. This study aimed to investigate the function and evolution of the ATG gene family in wheat and its diploid and tetraploid progenitors and to explore the role of TaVAMP727 in autophagy-related plant growth, development, and abiotic stress responses. Identification of ATGs from Wheat and Its Diploid and Tetraploid Progenitors We identified 77, 51, 29, and 30 ATGs in wheat, wild emmer, T. urartu, and A. tauschii, respectively. These ATGs were grouped into 19 subfamilies according to the conserved motif and phylogenetic relations. ATG8, ATG13, ATG14, and ATG18 were the four largest subfamilies (Figure 1). On the whole, the number of ATGs across the species was consistent with the genome ploidy. The number of ATGs did not strictly follow the ploidy ratio (3:2:1) (Table S1: Additional file 1), possibly due to gene loss in hexaploid wheat and intrachromosomal gene replication in diploid wheat. For example, the ATG8 subfamily had a significant expansion in tetraploid wheat species, in which it had seven members and was expanded to six chromosomes (1B, 2A, 2B, 5A, 5B, and 6A). However, ATG8 subfamily members in the 1B chromosome were lost in hexaploid wheat during the natural hybridization between wild emmer wheat and A. tauschii. The loss may be caused by an imbalance of gene number proportion in the A, B, and D subgenomes. In addition, the number of ATGs in plant species might correspond to the ploidy number of the genome (Table 1) [28-31].
At the same time, the ATG8 and ATG18 subfamilies expanded dramatically in all examined plant species, unconstrained by genome ploidy, indicating their durable and indispensable functions.

Figure 1. Phylogenetic tree of ATGs from wheat (T. aestivum), wild emmer (T. turgidum ssp. dicoccoides), T. urartu and A. tauschii. IQ-TREE software was used to construct the phylogenetic tree of ATGs with bootstrap value estimation based on 1000 iterations; the tree was visualized by iTOL.
Names with a solid red circle are wheat ATGs, dark turquoise squares are wild emmer ATGs, yellow triangles are T. urartu ATGs and purple stars are A. tauschii ATGs. Branch colors from green to red represent low to high bootstrap values. All identified ATGs were grouped into 19 clusters (subfamilies).

Physical positions of ATGs across the four species are presented in Figure S1 and the gene duplication events in Figure S2. A total of 72 gene duplication events were identified in wheat, of which 25, 20 and 21 TaATGs belonged to the A, B, and D subgenomes, respectively. Most TaATGs had three homologous copies, except for the following five: Ta4A_ATG1a, Ta6A_ATG2a, Ta7A_ATG2c, Ta2A_ATG10a and Ta2A_ATG101a. Members of the ATG8 subfamily tended to form intrachromosomal gene duplications. For instance, the duplicated genes Ta2A_ATG8a and Ta2A_ATG8b were found on chromosome 2A, and Ta2B_ATG8d and Ta2B_ATG8c on chromosome 2B (Figure S2A). There were 14 gene duplication events in wild emmer. The duplication between Td1B_ATG8a and Td6B_ATG8g, which lie on different chromosomes, might have been caused by chromosome segment exchange within the B genome (Figure S2B). Furthermore, two and three intrachromosomal gene duplications were detected in T. urartu and A. tauschii, respectively.

Analyses of physical properties, subcellular localization, gene structure, and conserved motifs suggest significant conservation within the ATG subfamilies (Table S1: Additional file 2, Figure S3). For example, the characteristics of ATG1, ATG3, and ATG11 subfamily members were similar within each subfamily. In other subfamilies, however, gene structure and motif composition varied markedly, indicating potential functional differentiation and variation. The ATG18 subfamily members could be divided into three distinct clades, and the physical properties, subcellular localization, gene structure and conserved motifs of this subfamily were diverse among different chromosomes.
In addition, exon four of Ta3D_ATG22d was split into three smaller exons.

A total of 260 putative cis-elements were found in the 1.5-kb promoter regions of ATGs from the four species. Of these, 59 were associated with plant growth and development, 37 with biotic and abiotic stress response, and 51 with plant hormone response (Table S2). Cis-elements related to water response (ACGTATERD1, MYCCONSENSUSAT and MYB-CORE), copper response (CURECORECR), plant hormone response (WRKY71OS, DPBF-COREDCDC3 and LTRECOREATCOR15) and plant growth and development (CAATBOX1, CACTFTPPCA1, DOFCOREZM, EBOXBNNAPA, RAV1AAT, GTGANTG10, SORLIP1AT and POLLEN1LELAT52) were abundant in the promoter regions of ATGs.

Functional Annotation and Enrichment Analysis of Autophagy-Related Proteins

The Gene Ontology (GO) enrichment results, based on annotation analysis of the 187 ATGs, are shown in Figure 2A. As expected, different subfamilies hold diverse functions and play essential roles in various stages of cellular life. The ATG2, ATG5, ATG6, ATG8, ATG10, and ATG11 subfamilies were mainly enriched in plant development, biotic and abiotic stress response, protein modification, transport and metabolic processes. Some autophagy-related subfamilies might work together on a particular biological function. For instance, the ATG2, ATG5 and ATG7 subfamilies were found to regulate leaf development and senescence, and the ATG2 and ATG10 subfamilies might be responsible for plant resistance against bacteria. The functions of the ATG4 and ATG6 subfamilies were also similar to some extent.
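At its core, a PLACE-style promoter survey such as the cis-element tally above is consensus matching over IUPAC-degenerate motifs. The sketch below shows the principle only: the element names echo those in the text, but the consensus strings and the toy promoter are illustrative assumptions, not the actual PLACE definitions.

```python
import re

# Minimal PLACE-style promoter scan. The element/consensus pairs below are
# illustrative assumptions, not the exact PLACE database entries.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "W": "[AT]", "N": "[ACGT]"}

def count_element(promoter: str, consensus: str) -> int:
    """Count non-overlapping occurrences of a degenerate consensus."""
    pattern = "".join(IUPAC[base] for base in consensus)
    return len(re.findall(pattern, promoter.upper()))

elements = {"CAATBOX1": "CAAT", "MYBCORE": "CNGTTR", "EBOX": "CANNTG"}

promoter = "TTCAATGGCAGTTAGACATGTGCAATCCGTTA"  # toy 32-bp promoter fragment

counts = {name: count_element(promoter, cons) for name, cons in elements.items()}
print(counts)
```

For the real analysis one would extract the 1.5-kb upstream sequence of each ATG and sum hits per functional category, as the text describes.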
Figure 2. (A) GO enrichment of ATGs. The top ten significantly enriched GO terms are displayed. The horizontal axis represents the number of ATGs annotated to the displayed GO terms; the vertical axis gives descriptive information on the enriched GO terms. Colors correspond to the p.adjust value, turning from blue to red as p.adjust changes from low to high. (B) KEGG enrichment of ATGs amongst different subfamilies. The horizontal axis represents autophagy subfamilies; the vertical axis gives descriptive information on the annotated KEGG pathways. Fuchsia denotes that the corresponding subfamily members were annotated to the KEGG pathway and yellow that they were not. KEGG pathways highlighted in bold red font are those found in plants.

Kyoto Encyclopedia of Genes and Genomes (KEGG) annotation implied that almost all autophagy-related subfamilies were annotated to the autophagy pathway except the ATG22 subfamily, for which no pathway was annotated. Moreover, most ATG subfamilies participated in specific cellular processes. The ATG5 and ATG8 subfamilies had the largest number of annotated pathways, indicating that these two subfamilies are crucial in the autophagy process.
Our results implied that both the ATG5 and ATG7 subfamilies were annotated to ferroptosis, which is of potential interest for future studies on a simultaneous function of autophagy and ferroptosis (Figure 2B).

Protein-Protein Interaction Network of Autophagy-Related Proteins

The protein-protein interaction relationships of ATGs in wheat can be divided into four clusters (Figure 3). In cluster I (cyan), six ATG8 proteins and two ATG22 proteins interacted with protein phosphatase 2C (PP2C) (TraesCS6B02G231400.1), a cyclic nucleotide-binding/kinase domain-containing protein. Three ATG4 proteins sat at nodes of the network: they not only interacted with cyclase-associated protein 1, PP2C-associated genes and a ubiquitin-conjugating enzyme E2 4-like isoform, providing indirect interaction with the ATG22 and ATG8 subfamily members, but also interacted directly with Ta4A_ATG1a, Ta4D_ATG1b, Ta6B_ATG2b, Ta7D_ATG2d and Ta6B_ATG18d. Our results implied that the protein SPIRRIG could interact with Ta6B_ATG2b, Ta7D_ATG2d and Ta1D_ATG18a, indicating that it may participate in the autophagy process. In addition, Ta6B_ATG18d and Ta1D_ATG18a served as bridges between ATG4 and the ATG7 or ATG9 subfamily members, respectively, and three ATG9 subfamily members were closely co-expressed with ATG13-related proteins.

Synteny Events and Selection Pressure Analysis of ATGs in Wheat and Its Diploid and Tetraploid Progenitors

Synteny events were used as markers for the relationship between ATG gene expansion and wheat polyploidization (Figure 4A,B). The number of synteny pairs was largest between wheat and wild emmer (28) and smallest between A. tauschii and T. urartu (5). Some syntenic ATGs belonging to the same subfamily showed high homology throughout wheat evolution. For example, Ta2A_ATG13a, Td2A_ATG13a, Tu2A_ATG13a, and Aet2D_ATG13a were collinear, indicating that these genes are evolutionarily conserved and necessary.
Furthermore, our results also revealed synteny events involving intrachromosomal repeat sequences. Tu6A_ATG5a and Tu6A_ATG5b were both collinear with Ta6A_ATG5b; this gene duplication event may have occurred in T. urartu, with one copy lost during the wheat hexaploidization process (Table S3).

Since a high dS value may imply potential sequence saturation and misalignment [32,33], orthologous ATG gene pairs with dS > 0.3 were discarded and the remaining genes were further analyzed (Table S4). The highest average dN/dS value was recorded between wheat and wild emmer (0.35) and the lowest between wild emmer and T. urartu (0.26). The ATG subfamilies might be under various degrees of selection pressure during the evolution of wheat (Figure 4C,D). The ATG1, 3, 8, 18, and 101 subfamilies experienced stronger purifying selection pressure (dN/dS < 0.2) during the wheat evolution process. In addition, the selection pressure on the ATG10 subfamily between wheat and wild emmer was distinctly different, indicating that nonsynonymous mutations of the ATG10 subfamily in wild emmer wheat tended to be retained in the hexaploid wheat species. Intense purifying selection was observed between wild emmer and T. urartu for the ATG16 subfamily. Fifteen ATGs were strongly negatively selected (dN = 0; dS ≠ 0) between wild emmer and wheat and eight ATGs were strongly negatively selected between A. tauschii and wheat. Moreover, only one strongly negatively selected ATG gene was detected in each of the following combinations: wheat vs. T. urartu (Ta1A_ATG14a vs. Tu1A_ATG14a), wild emmer vs. A. tauschii (Td2B_ATG8c vs. Aet2D_ATG8a) and wild emmer vs. T. urartu (Td1A_ATG3a vs. Tu1A_ATG3a).

Figure 3. Protein-protein association networks of ATGs. The identified ATGs were submitted to the STRING database to construct the protein-protein interaction networks based on the Triticum aestivum dataset.
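The screening logic used for the selection-pressure analysis (discard pairs with dS > 0.3, then summarize dN/dS per subfamily and flag strong purifying selection at dN/dS < 0.2) can be sketched as below. The (subfamily, dN, dS) tuples are invented for illustration; in practice they would be parsed from PAML yn00 output.

```python
# Sketch of the dS filter and per-subfamily dN/dS summary described in the
# text. The orthologous pairs below are invented example values.
pairs = [
    ("ATG8",  0.01, 0.10),   # (subfamily, dN, dS)
    ("ATG8",  0.02, 0.15),
    ("ATG10", 0.08, 0.20),
    ("ATG10", 0.15, 0.45),   # dS > 0.3 -> discarded as possibly saturated
]

# Discard pairs with dS > 0.3 (saturation/misalignment risk).
kept = [(fam, dn, ds) for fam, dn, ds in pairs if ds <= 0.3]

# Collect dN/dS ratios per subfamily.
ratios = {}
for fam, dn, ds in kept:
    ratios.setdefault(fam, []).append(dn / ds)

# Flag strong purifying selection at mean dN/dS < 0.2.
for fam, vals in sorted(ratios.items()):
    mean = sum(vals) / len(vals)
    label = "strong purifying" if mean < 0.2 else "relaxed/neutral"
    print(f"{fam}: mean dN/dS = {mean:.2f} ({label})")
```

With these toy numbers, ATG8 falls under strong purifying selection and ATG10 does not, mirroring the kind of subfamily contrast reported in Figure 4C,D.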
The network could be clustered into four parts, shown in different colors. The circles are the ATGs identified in this study, and the squares represent the co-expressed proteins found in the database.

The Single Nucleotide Polymorphism (SNP) Analysis of TaATGs

The single nucleotide polymorphisms (SNPs) of TaATGs were analyzed across the 93 sequenced accessions of wheat and its diploid and tetraploid progenitors (Table S5). No SNPs were detected in the promoter, exon, or intron regions of seven TaATGs, indicating indispensable roles for these genes. It is worth mentioning that SNP frequency within TaATGs varied amongst the A, B, and D subgenomes; ATGs located in the B subgenome held the largest number of SNPs.
Furthermore, the π, fst and π ratio values of all TaATGs were combined to analyze the evolution of ATGs during wheat's domestication and improvement (Figure 5, Figures S4-S6). In the A subgenome, six domestication-related and three improvement-related candidate TaATGs were found. For example, Ta7A_ATG2c was identified as a candidate improvement-related gene, and its π value in landraces was higher than in other varieties, indicating that this gene might have been selected during the improvement of hexaploid wheat. Six domestication-related and ten improvement-related candidate ATGs were found in the B subgenome; Ta2B_ATG8d, Ta4B_ATG13e, Ta6B_ATG12b, and Ta6B_ATG18d stood out as both domestication- and improvement-related genes. In addition, five domestication-related and four improvement-related TaATGs were located in the D subgenome. Nonsynonymous mutations of ATGs might have been essential targets of selection during wheat's domestication and improvement; as shown in Figures S7 and S8, nonsynonymous mutations were also found in ATGs located in the A, B, and D subgenomes during the evolution of wheat.
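A common way to combine these statistics, sketched below, is to call a gene a selection candidate when both its π ratio (diversity in the wild group over the domesticated group) and its Fst fall in the upper tail. The gene names echo the text, but all values and the two cutoffs are invented assumptions; the actual study used empirical-tail thresholds from the genome-wide distributions.

```python
# Illustrative domestication scan: flag genes whose pi ratio
# (wild / domesticated) and Fst both exceed assumed tail cutoffs.
# All numeric values are invented for this sketch.
genes = {
    "Ta7A_ATG2c":  {"pi_wild": 0.012, "pi_dom": 0.0020, "fst": 0.62},
    "Ta2B_ATG8d":  {"pi_wild": 0.010, "pi_dom": 0.0015, "fst": 0.70},
    "Ta1A_ATG14a": {"pi_wild": 0.008, "pi_dom": 0.0070, "fst": 0.05},
}

def pi_ratio(g):
    # Reduced diversity in the domesticated pool inflates this ratio.
    return g["pi_wild"] / g["pi_dom"]

FST_CUT, RATIO_CUT = 0.5, 4.0  # assumed upper-tail thresholds

candidates = sorted(
    name for name, g in genes.items()
    if g["fst"] > FST_CUT and pi_ratio(g) > RATIO_CUT
)
print(candidates)
```

With these toy numbers the first two genes are flagged and the low-Fst, low-ratio gene is not, which is the qualitative pattern behind Figure 5.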
Correlation Analysis of the Variations at the Ta2A_ATG8a Locus with the Seed Size of Tetraploid Wheat

DNA polymorphism assays showed that the tetraploid wheat accessions could be separated into three haplotypes (Hap-CGC, Hap-CGA, and Hap-TAA) based on three SNPs in the promoter region of Ta2A_ATG8a that alter cis-acting elements (Figure 6A-C). Four cis-acting elements, GAGAC, GRWAAW, GATAA, and GATA, were specific to the Hap-TAA accessions. The AGAAA and TTATTT cis-acting elements were common to the Hap-CGA and Hap-TAA accessions. Furthermore, three cis-acting elements, TGTCA, TGAC and CAAT, were specific to the Hap-CGC accessions (Figure 6G). Haplotype distributions differed between wild emmer and durum: Hap-CGC was dominant in wild emmer and Hap-TAA in durum, while Hap-CGA was shared between them in similar proportions. The tetraploid wheat accessions with Hap-TAA had the highest thousand kernel weight (TKW) and grain width but the shortest grain length. The TKW of the Hap-CGA accessions was significantly higher than that of Hap-CGC. These results indicate that Hap-CGA represents a transitional stage in the tetraploid wheat domestication process.
The Hap-CGA accessions with high TKW could have been preserved during the evolution from wild emmer to durum (Figure 6D-F). However, this association did not fully apply to hexaploid wheat, indicating that the seed size of common wheat might be determined by the synergistic effect of multiple genes in the A, B, and D subgenomes. Moreover, the TKW was significantly higher in hexaploid cultivars than in landraces (Figure S9), suggesting that this parameter is a critical trait strongly selected for during the improvement of hexaploid wheat.
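The haplotype-trait association above amounts to grouping accessions by their alleles at the three promoter SNPs and comparing mean TKW per group. A minimal sketch, with accession IDs and TKW values invented for illustration:

```python
# Group tetraploid accessions into Hap-CGC / Hap-CGA / Hap-TAA by the three
# promoter SNPs of Ta2A_ATG8a and compare mean thousand kernel weight (TKW).
# Accessions and TKW values below are invented example data.
accessions = [
    ("acc01", ("C", "G", "C"), 32.1),  # (id, SNP alleles, TKW in g)
    ("acc02", ("C", "G", "A"), 38.4),
    ("acc03", ("T", "A", "A"), 45.0),
    ("acc04", ("T", "A", "A"), 47.2),
    ("acc05", ("C", "G", "C"), 30.7),
]

groups = {}
for name, snps, tkw in accessions:
    hap = "Hap-" + "".join(snps)
    groups.setdefault(hap, []).append(tkw)

means = {hap: sum(v) / len(v) for hap, v in groups.items()}
for hap, m in sorted(means.items(), key=lambda kv: kv[1]):
    print(f"{hap}: mean TKW = {m:.1f} g (n={len(groups[hap])})")
```

The toy data reproduce the ordering reported in the text (Hap-TAA > Hap-CGA > Hap-CGC); the real analysis would of course add a significance test across the 93 accessions.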
Effects of Overexpression of TaVAMP727 in Arabidopsis

VAMP727 is a seed plant-specific R-SNARE that may be crucial in autophagy [15,16,34]. TaVAMP727 was cloned from wheat and overexpressed in Arabidopsis to investigate its effects on autophagy and abiotic stress resistance. Seed germination assays showed that overexpressing TaVAMP727 significantly improved the germination rate of Arabidopsis under both drought and salt conditions (Figure 7A,B). Reverse transcription-quantitative real-time PCR (RT-qPCR) analysis revealed that TaVAMP727 expression was significantly up-regulated under salt (Na+) stress but down-regulated under drought (D-mannitol) conditions in the transgenic line. As shown in Figure 7C, the expression profile of 11 AtATGs was significantly regulated by the overexpression of TaVAMP727 under salt and drought stress.

Effects of Overexpression of TaVAMP727 in Wheat

The results obtained in Arabidopsis might not fully apply to wheat. Therefore, TaVAMP727 was also overexpressed in wheat to ascertain whether it can regulate the expression levels of ATGs and further improve the abiotic stress resistance of wheat. RT-qPCR analysis showed that TaVAMP727 was significantly overexpressed in the transgenic lines under normal conditions.
Under various abiotic stresses, including cold, drought and salt, however, it was significantly down-regulated, although it was still expressed at much higher levels than in Fielder under the same conditions (Figure 8B). In the non-transgenic wheat, the expression level of TaVAMP727 was stable both under normal conditions and under abiotic stresses (Figure 8A). The transgenic wheat had a higher survival rate under drought stress than Fielder, and its heading date was at least six days earlier than that of Fielder (Figure 8C,D). The expression levels of TaATGs were analyzed in the transgenic wheat to explore the influence of TaVAMP727 on autophagy. The results showed that the overexpression of TaVAMP727 regulated the expression of most TaATGs in response to cold, drought and salt stresses. Most TaATGs in the TaVAMP727 transgenic wheat were up-regulated under cold and drought stresses, while the expression pattern of TaATGs under salt stress differed from that under cold and drought (Figure 8E).

Figure 7 (caption, partial): ... 0.84; 180 mM NaCl, p = 0.0018; n = 3). (C) The expression levels of overexpressed TaVAMP727 and 15 Arabidopsis ATGs were analyzed via reverse transcription-quantitative real-time PCR (RT-qPCR) in both the wild type and TaVAMP727 transgenic line #19 under salt and drought stress. Ubiquitin 5 (UBQ5) (At3G62250) was used as the housekeeping gene. Error bars represent the SD. * p < 0.05; ** p < 0.01; n = 3.
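Relative expression in RT-qPCR comparisons like these is conventionally computed with the 2^(-ddCt) method against the housekeeping gene (UBQ5 in the Arabidopsis experiments). A minimal sketch, assuming this standard method and using invented Ct values:

```python
# Relative expression by the 2^(-ddCt) method, normalized to a housekeeping
# gene (e.g., UBQ5). Ct values below are invented for illustration.
def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_treat = ct_target - ct_ref           # normalize treated sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # normalize control sample
    dd_ct = d_ct_treat - d_ct_ctrl            # treated relative to control
    return 2 ** (-dd_ct)

# Hypothetical TaVAMP727 Ct values: salt-stressed vs. untreated control.
fold = rel_expression(ct_target=24.0, ct_ref=20.0,
                      ct_target_ctrl=26.0, ct_ref_ctrl=20.0)
print(f"fold change: {fold:.1f}")  # 4.0 -> up-regulated under treatment
```

A fold change above 1 indicates up-regulation relative to the control condition, below 1 down-regulation; this assumes roughly equal amplification efficiencies for target and reference genes.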
Discussion

An increasing number of studies have revealed that autophagy is crucial in plant development, stress responses, senescence, and programmed cell death [5]. Genome polyploidization is accompanied by large chromosome segment losses, insertions, and rearrangements, and these changes may lead to the expression, silencing, or loss of genes [35]. Our study showed that the two natural interspecific hybridization events among three diploid donor species during wheat polyploidization produced the unique landscape of the autophagy gene family among wheat and its diploid and tetraploid progenitors (Table S1: Additional file 1). The number of ATGs differed among subfamilies. The gene structures of ATGs were subfamily-specific, but within subfamilies there was significant structural and functional differentiation (Figure S3).
Selection pressures during wheat's evolution may have caused this within-subfamily differentiation, since genes tend to evolve diverse structures to fulfill multiple functions [36]. Prevalent structural divergence in duplicated genes can lead to the differentiation of functionally distinct paralogs [37]. Previous studies have shown that the high rate of wheat-specific inter- and intra-chromosomal gene duplication is a potential source of the variability required for plant adaptation [38]. Gene rearrangement events, including differential gene duplication and deletion within the A, B, and D subgenomes, imply rapid evolution of gene subfamilies after the separation of these three wheat subgenomes [39]. Our results support the view that gene duplication of ATGs arose from homologous recombination, intrachromosomal replication and nonhomologous chromosome exchanges during the hybridization process. This process can promote the environmental adaptation of wheat, although some homologous genes may be lost over the long evolutionary process.

Autophagy is a complex process that recruits a series of ATGs to perform different functions. In addition to their indispensable roles in autophagy, ATGs may also participate in other biological processes (Figure 2). Ferroptosis is a newly described form of cell death, first characterized in tumor cells, and plant ferroptosis shares the main features of the process described in other systems [40,41]. Our results implied that both the ATG5 and ATG7 subfamilies were annotated to ferroptosis; these two subfamilies may therefore be related to iron ion transport in wheat and are potential candidates for wheat quality breeding. In addition, ATG3, ATG5, ATG6, ATG7, and ATG8 subfamily members have been reported to affect plant immunity [42,43]. In this study, GO enrichment analysis showed that the ATG2 and ATG10 subfamilies play essential roles in the response to bacterial infection (Figure 2).
Furthermore, we identified critical functional roles of ATGs in plant stress resistance and growth. ATG1, ATG2, ATG5, and ATG7 proteins were significantly annotated to plant growth and development functions. A previous study characterized the crucial role of TdATG8 in the drought and osmotic stress response of wild emmer wheat [44]. Our results also revealed that the ATG8 subfamily members of wheat and its Triticeae progenitors are indispensable for the response to nitrogen starvation and abiotic stresses. The interplay between phytohormones and multiple stresses is ubiquitous in plants [45]. Numerous cis-elements responsive to abscisic acid (ABA), auxin (IAA), gibberellin (GA), ethylene (ET), jasmonic acid (JA), salicylic acid (SA) and cytokinin (CTK) were found concentrated in the promoter regions of the identified ATGs.

Our results indicated that ATG4 can interact with ATG8 (Figure 3), consistent with previous studies in Arabidopsis [46]. In yeast, ATG4 proteolytic activity can be inhibited by ATG1 phosphorylation [47]; the interaction between TaATG4 and TaATG1 found in the present study suggests that this inhibition may also exist in plants. In Arabidopsis, the protein SPIRRIG is essential for salt stress tolerance and endosomal transport routes [48,49]. Our interaction network showed that SPIRRIG interacted with the ATG2-ATG18 complex and, together with ATG9, may deliver lipids to the expanding phagophore, possibly improving the salt tolerance of wheat. In addition, close interactions between ATG9 and ATG13 were identified in our network; as part of the ATG1/ATG13 kinase complex, dephosphorylated ATG13 interacts with ATG9 to stimulate lipid delivery [3]. PP2C has been reported to be crucial in autophagy initiation and multiple abiotic stress responses [50,51]. Our results suggest that PP2C may also be crucial in the crosstalk among TaATG4, Ta6B_ATG18d, TaATG7, and TaATG22 (Figure 3).
Single nucleotide mutations play an important role in environmental adaptation [52]. In this study, we identified nonsynonymous mutations of ATGs that may have been selected during the evolution of wheat (Figures S7 and S8). In Arabidopsis, overexpression of ATG8 can increase nitrogen remobilization efficiency and significantly improve grain filling [53]. In our study, the cis-element variation resulting from DNA polymorphism in the promoter region of Ta2A_ATG8a was closely associated with the TKW of tetraploid wheat (Figure 6). The GAGAC cis-acting element has been reported to confer the sulfur (S) deficiency response in Arabidopsis roots, and S limitation can decrease wheat grain size [54,55]. Variation in the GAGAC cis-acting element in the promoter region of Ta2A_ATG8a may therefore be responsible for the difference in grain size between wild emmer and durum wheat. In addition, the TTATTT cis-acting element has been reported to be associated with the glutamine synthetase gene, which is crucial both for seed germination and for seed yield structure in Arabidopsis [56,57]. Thus, these two seed size-related cis-elements are prime candidates for selection in breeding tetraploid wheat.

Autophagy is an intracellular material circulation pathway that delivers intracellular material to the plant vacuole. The final step in this process is the fusion of autophagosomes with vacuoles, which requires SNARE proteins [10]. Exploring the function of SNARE complexes in autophagy is central to understanding fusion processes and autophagy regulation. VAMP727 is a seed plant-specific R-SNARE that is vital for plant growth, development and defense [15,58,59]. We overexpressed TaVAMP727 in Arabidopsis and wheat to explore its roles in autophagy and plant growth (Figures 7 and 8). The results showed that overexpression of TaVAMP727 can improve the abiotic stress resistance of plants by regulating the expression levels of ATGs, and it promotes heading in wheat.
The overexpression of TaVAMP727 may promote SNARE-mediated fusion of autophagosomes with the tonoplast downstream in the autophagy process, which would accelerate the circulation of intracellular material and energy. Furthermore, autophagic homeostasis in cells might be disrupted by promoting autophagy at this later stage, and ATGs may change their expression levels to maintain intracellular homeostasis, especially under stress conditions.

In summary, we systematically identified ATGs in wheat and its diploid and tetraploid progenitors and characterized the landscape of the autophagy gene family. We also showed that overexpression of TaVAMP727 improved the cold, drought and salt stress resistance of the transgenic wheat. Our findings demonstrate how autophagy genes regulate wheat development and improve its resistance to abiotic stresses, opening the route for transgenic wheat to potentially expand its range into colder, drier, and saltier areas.

Identification of ATGs in Wheat and Its Diploid and Tetraploid Progenitors

Newly published protein sequences of wheat (T. aestivum L.) and wild emmer (T. turgidum ssp. dicoccoides) were downloaded from the Ensembl Plants database (http://ftp.ensemblgenomes.org/pub/plants/ (accessed on 1 January 2021)); the T. urartu and A. tauschii genomes were downloaded from the NCBI (https://www.ncbi.nlm.nih.gov/genome/ (accessed on 1 January 2021)) and used to construct a local protein database. ATGs from different plant species were collected from NCBI (http://www.ncbi.nlm.nih.gov/ (accessed on 1 January 2021)) and earlier studies [16,28-31], and the subfamilies of ATGs were then merged to construct an HMM profile using the 'hmmbuild' tool embedded in the HMMER3.0 web server (http://hmmer.org/download.html (accessed on 1 January 2021)). The 'hmmsearch' tool was further used to search for ATGs with 1 × 10−5 as the threshold.
In addition, all of the downloaded sequences were merged as a query to perform a local BLASTP search against the genome sequences with e-value < 1 × 10^-5 and identity > 60% as the thresholds. Proteins recovered by both the HMM and the BLASTP searches were considered putative ATGs. Finally, redundant sequences were manually removed and one splice variant of each putative ATG was retained for further analysis. ATG domains were confirmed using the PFAM (http://pfam.xfam.org/ (accessed on 6 January 2021)), NCBI Batch CD-Search (http://www.ncbi.nlm.nih.gov/Structure/bwrpsb/bwrpsb.cgi (accessed on 6 January 2021)) and InterProScan (http://www.ebi.ac.uk/interpro/ (accessed on 6 January 2021)) databases. For nomenclature, the prefixes 'Ta', 'Td', 'Tu' and 'Aet' were used for wheat, wild emmer, T. urartu and A. tauschii respectively, attached with chromosome information and followed by 'ATG'. The serial number for each identified ATG member was assigned according to its motif information. The online ProtParam tool (http://web.expasy.org/compute_pi/ (accessed on 6 January 2021)) was used to compute the grand average of hydropathicity (GRAVY), theoretical isoelectric point (pI) and molecular weight (Mw). The subcellular localization of each ATG protein was predicted using the online tool CELLO v.2.5 (http://cello.life.nctu.edu.tw/ (accessed on 6 January 2021)).

Bioinformatics Analysis

ATG sequences were aligned to infer an unrooted phylogenetic tree using the IQ-TREE software, with bootstrap values estimated from 1000 iterations, and visualized with iTOL [60,61]. Chromosome locations of the identified ATGs were visualized using Map Gene2Chromosome v2.0 (http://mg2c.iask.in/mg2c_v2.0/ (accessed on 10 January 2021)). The exon-intron structures were obtained from the annotation files and displayed using the Gene Structure Display Server website (http://gsds.cbi.pku.edu.cn/ (accessed on 10 January 2021)).
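The two-track screen above (HMM profile search intersected with BLASTP) can be sketched as follows. This is an illustration, not the authors' pipeline code: the parsers assume the standard tabular outputs of `hmmsearch --tblout` (column 5 = full-sequence E-value) and `blastp -outfmt 6` (column 3 = percent identity, column 11 = E-value), and the record contents are made up.

```python
# Keep a candidate ATG only if both searches recover it, mirroring the
# intersection criterion described in the text.

def hmm_hits(tblout_lines, evalue_cutoff=1e-5):
    """Target IDs from hmmsearch --tblout records passing the E-value cutoff."""
    hits = set()
    for line in tblout_lines:
        if line.startswith("#") or not line.strip():
            continue  # skip comments and blank lines
        fields = line.split()
        if float(fields[4]) < evalue_cutoff:  # full-sequence E-value
            hits.add(fields[0])
    return hits

def blast_hits(outfmt6_lines, evalue_cutoff=1e-5, min_identity=60.0):
    """Subject IDs from blastp -outfmt 6 records passing both thresholds."""
    hits = set()
    for line in outfmt6_lines:
        f = line.rstrip("\n").split("\t")
        if float(f[2]) > min_identity and float(f[10]) < evalue_cutoff:
            hits.add(f[1])
    return hits

def putative_atgs(tblout_lines, outfmt6_lines):
    """Putative ATGs = intersection of the two hit sets."""
    return hmm_hits(tblout_lines) & blast_hits(outfmt6_lines)
```

A gene is reported only when both searches recover it; in practice the retained set would then go through the manual redundancy and splice-variant filtering described above.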
Gene duplication events of ATGs were investigated and visualized using the Circos software [62,63]. The 1.5-kb genomic DNA sequences upstream of the ATGs, extracted from the genome data, were then submitted to the online PLACE database (https://www.dna.affrc.go.jp/PLACE/?action=newplace (accessed on 10 January 2021)) to investigate putative cis-acting regulatory elements [64]. Conserved protein motifs were predicted by the MEME Suite (http://meme-suite.org/tools/meme (accessed on 15 January 2021)) using default parameters and visualized with the TBtools software [65]. The interaction network of the identified ATGs from the four species was constructed using the STRING (v11) database (http://www.string-db.org (accessed on 20 January 2021)) based on the orthologous genes of wheat, with identity > 50 as the threshold. All identified ATGs were annotated against the eggNOG database using the officially provided software 'eggnog-mapper' with the DIAMOND algorithm [66]. The PAML_Yn00 tool was used to estimate the nonsynonymous (dN) and synonymous (dS) substitution rates of the ATG subfamilies among the four species [32]. The JCVI (v0.7.5) software was used to identify syntenic genes based on the coding sequences (CDS) of the four species [67].

The Single Nucleotide Polymorphism (SNP) Analysis of TaATGs Based on the Resequencing Data

SNP information of TaATGs was obtained from 93 whole-genome resequencing datasets, including wheat and its diploid and tetraploid progenitors, adopted from Cheng et al. [25]. SNPs found in the promoter, mRNA, intron and exon regions of TaATGs were extracted according to the location data using the 'bcftools' software from the Samtools package [68]. Additionally, π and Fst values were calculated using the 'vcftools' software [69]. Seeds of the sequenced taxa were from our lab collection. Seed grain characteristics were measured using a Wanshen SC-G seed test instrument (Wanshen Testing Technology Co., Ltd., Hangzhou, China).
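The per-site diversity statistic computed above with vcftools (π) can be illustrated with a minimal calculation. This is a sketch under the assumption of biallelic sites, not the vcftools implementation: π at a site is the proportion of pairwise differences, 2ab / (n(n−1)), where a and b are the two allele counts among the n sampled chromosomes.

```python
# Per-site nucleotide diversity for a biallelic SNP and its window average.
def site_pi(ref_count, alt_count):
    """pi = 2ab / (n(n-1)): pairwise differences over pairs of chromosomes."""
    n = ref_count + alt_count
    if n < 2:
        return 0.0
    return 2.0 * ref_count * alt_count / (n * (n - 1))

def window_pi(allele_counts, window_bp):
    """Mean pairwise diversity over a window of window_bp bases
    (monomorphic sites contribute zero and need not be listed)."""
    return sum(site_pi(a, b) for a, b in allele_counts) / window_bp
```

With counts tabulated per SNP from the VCF, comparing window π (and Fst) between wild emmer and domesticated accessions is what reveals the selected regions discussed in the text.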
Genetic Transformation of Arabidopsis and Wheat

The coding region of TaVAMP727 (TraesCS7A02G279100.1) was PCR-amplified using the primers F: 5′-CGCGGATCCATGAACGGTGGTAGCAAGC-3′ and R: 5′-GGGGTACCCTAGCACTTGAAGCCCCTG-3′ from first-strand wheat cDNA and subsequently verified by Sanger sequencing. It was digested with BamHI and KpnI and ligated between the ED35S promoter and NOS-Ter (nopaline synthase terminator) in the binary vector pWR306 (modified from pCAMBIA1303) [70]. This construct was introduced into Agrobacterium tumefaciens GV3101 and then transformed into Arabidopsis thaliana (Columbia) using the floral dip method [71]. Seeds of homozygous T3 lines and the wild type (WT) were surface-sterilized and germinated on Murashige and Skoog (MS) medium, or on MS medium supplemented with 400 mM D-mannitol or 180 mM NaCl, for stress resistance analysis. Seeds of WT and TaVAMP727 transgenic lines were then placed on MS medium, or MS medium enriched with 120 mM NaCl or 200 mM D-mannitol, for RT-qPCR analysis. All Petri dishes were put into a growth chamber under 16-h-light/8-h-dark cycles with 22 °C/19 °C day/night temperatures, and the plants were grown for three weeks. Seedlings from each treatment were collected, immediately frozen in liquid nitrogen and stored at −80 °C for further use. Total RNA was isolated using the Plant RNA Kit (Omega Biotek, Norcross, GA, USA) and its integrity was checked on 1% agarose gels stained with ethidium bromide. RNA amount and purity were quantified using a NanoDrop ND-1000 instrument (NanoDrop, Wilmington, DE, USA). First-strand cDNAs were synthesized using the Evo M-MLV Mix Kit with gDNA Clean for qPCR (AG11728, Accurate Biotechnology (Hunan) Co., Ltd., Hunan, China) with random primers. The RT-qPCR analysis was conducted using a StepOnePlus™ Real-Time PCR System (ABI, Carlsbad, CA, USA) with the SYBR® Green Premix Pro Taq HS qPCR Kit (Rox Plus) (AG11718, Accurate Biotechnology (Hunan) Co., Ltd., Hunan, China).
Three replicates and three technical repetitions were used for each RT-qPCR experiment, and the expression levels of 15 Arabidopsis ATGs were analyzed. Ubiquitin 5 (UBQ5, At3G62250) was used as the housekeeping gene [72]. Primers used in this analysis are presented in Table S6: Additional file 1. The expression level was calculated according to the 2^−ΔΔCT method [73]. The sequence-verified coding region of TaVAMP727 was digested with HindIII and EcoRI and ligated into the plant binary expression vector pCAMBIA3301. This construct was transformed into Fielder using the Agrobacterium tumefaciens-mediated transformation method [74]. The specific detection primer pair (F: 5′-TCGATGCTCACCCTGTTGTTTG-3′, R: 5′-TGTATAATTGCGGGACTCTAATC-3′) was used to validate the presence of TaVAMP727 in the transgenic wheat. Seeds of T3 transgenic lines and Fielder were soaked in water, germinated at 25 °C for two days, and then transferred to half-strength Hoagland's liquid medium. Seedlings grown in the liquid medium were placed in a growth chamber under controlled conditions (25 ± 1 °C, 16-h-light/8-h-dark cycles). Tri-leaf-stage TaVAMP727 transgenic seedlings and Fielder seedlings were treated with cold (4 °C) for 12 h, air-drying for 6 h, or salt (150 mM NaCl solution) for 24 h, or kept under normal conditions. Seedlings of Fielder under normal conditions were used as a control to detect the response of TaVAMP727 to cold, drought and salt in non-transgenic wheat. In addition, Fielder seedlings under normal, cold, drought and salt stresses were considered as controls for the transgenic seedlings under the corresponding conditions. All seedlings were collected at the stated time points and were immediately frozen in liquid nitrogen and stored at −80 °C for further use. RT-qPCR analysis was used to detect the influence of TaVAMP727 on 23 TaATGs, and glyceraldehyde-3-phosphate dehydrogenase (GA3PD) was used as the housekeeping gene [75].
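The relative-expression conversion used above is the Livak 2^−ΔΔCT calculation; a minimal sketch follows. The Ct values in the assertions are invented for illustration, not data from this study.

```python
# Fold change of a target gene in a treated sample relative to a control,
# each normalized to a housekeeping gene (UBQ5 or GA3PD in the text).
def ddct_fold_change(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    delta_ct_treat = ct_target_treat - ct_ref_treat  # normalize treated sample
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl     # normalize control sample
    delta_delta_ct = delta_ct_treat - delta_ct_ctrl
    return 2.0 ** (-delta_delta_ct)
```

For example, ΔCt of 4 in the treated sample against 6 in the control gives ΔΔCt = −2, i.e. a 4-fold up-regulation relative to the control.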
Methods of total RNA extraction, first-strand cDNA synthesis, RT-qPCR and relative expression level calculation were the same as those described above. The primers used here are listed in Table S6: Additional file 2. In addition, Fielder and TaVAMP727 transgenic wheat seeds were germinated in soil and grown under normal conditions. When the seedlings reached the tri-leaf stage, watering was stopped until wilting and the plants were then rehydrated to assess drought resistance.

Statistical Analysis

Statistical analysis was conducted in SAS (version 9.2) using analysis of variance (ANOVA). Means were compared by Duncan's multiple range test at the 0.05 and 0.01 levels.
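The ANOVA used in the statistical analysis reduces to the familiar F ratio; a self-contained sketch follows (this is not SAS, Duncan's post-hoc test is not included, and the group values in the assertions are hypothetical).

```python
# One-way ANOVA F statistic: between-group vs within-group mean squares.
def one_way_anova_f(groups):
    """F = MS_between / MS_within for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Sum of squares between groups, weighted by group size.
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Sum of squared deviations within each group.
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The F value is then compared against the F distribution with (k−1, n−k) degrees of freedom at the chosen significance level.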
The Unsteady Flow of a Fluid of Finite Depth with an Oscillating Bottom

In this paper, the unsteady flow of a fluid of finite depth with an oscillating bottom is examined. The flow is assumed to take place in the absence of viscous dissipation. The governing equations of the flow are decoupled in the velocity and temperature fields. The velocity and temperature fields have been obtained analytically. The effects of various material parameters on these fields have been discussed with the help of graphical illustrations. It is noticed that the upward thrust (ρ f_y) vanishes when the Reiner-Rivlin coefficient of viscosity (μ_c) is zero, and that the transverse force (ρ f_z) perpendicular to the flow direction vanishes when the thermo-viscosity coefficient (α_8) is zero. The generation of external forces perpendicular to the flow direction is a special feature of thermo-viscous fluids when compared with other types of fluids.

Introduction

Considerable interest has been evinced in recent years in the study of viscous flows because of their natural occurrence and importance in industrial, geophysical and medical applications. Some practical problems involving such studies include the percolation of water through solids, the drainage of water for irrigation, the aquifer considered by ground water hydrologists, the reserve bed used for filtering drinking water, the seepage through slurries in drains considered by the sanitary engineer, the flow of liquids through ion-exchange beds, the cleaning of oil-spills, etc. In the physical world, the investigation of thermo-viscous flows has become an important topic due to the recovery of crude oil from the pores of reservoir rocks; the extraction and filtration of oil from wells, the oil reservoir treated by the reservoir engineer and the extraction of energy from geo-thermal regions are some of the areas in which thermo-viscous flows have been noticed.
The concept of thermo-viscous fluids, which reflects the interaction between thermal and mechanical responses in fluids in motion due to external influences, was introduced by Koh and Eringen in 1963. For such a class of fluids, the stress tensor t and the heat flux bivector h are postulated as polynomial functions of the kinematic tensor, viz., the rate of deformation tensor d, with the constitutive parameters α_i, β_i being polynomials in the invariants of d and b, in which the coefficients depend on density (ρ) and temperature (θ) only. The fluid is Stokesian when the stress tensor depends only on the rate of deformation tensor, and Fourier-heat-conducting when the heat flux bivector depends only on the temperature-gradient vector; the constitutive coefficients α_1 and α_3 may be identified as the fluid pressure and the coefficient of viscosity respectively, and α_5 as that of cross-viscosity. The flow of incompressible homogeneous thermo-viscous fluids satisfies the usual conservation equations: the equation of continuity together with the momentum and energy balances, written in terms of the components of the stress tensor and of the rate of deformation tensor. The interaction between thermal and mechanical responses of fluids in motion due to external influences was primarily observed by Koh and Eringen [6] in 1963. A systematic rational approach for such a class of fluids was developed by Green and Naghdi [2] in 1965. Kelly [5] in 1965 examined some simple shear flows of second-order thermo-viscous fluids. Later, in 1979, Nageswara Rao and Pattabhi Ramacharyulu [12] studied some steady-state problems dealing with certain flows of thermo-viscous fluids. Some more problems of thermo-viscous flows were studied by Anuradha [1] in plane, cylindrical and spherical geometries in 2006. Muthuraj and Srinivas [10] studied the problem of flow of a thermo-viscous fluid through an annular tube with constriction in 2006. Srinivas et al.
[19] studied the problem of slow steady motion of a thermo-viscous fluid between two parallel plates with constant pressure and temperature gradients in 2013. Pothanna, Srinivas et al. [18] examined the problem of linearization of a thermo-viscous fluid in a porous slab bounded between two fixed permeable horizontal parallel plates, in the absence of the thermo-mechanical interaction coefficient, in 2014. Pothanna et al. [17,18] examined some steady and unsteady state problems dealing with certain flows of thermo-viscous fluids between parallel plates under various assumptions. Motsa and Animasaun [8] studied a paired quasi-linearization analysis of heat transfer in unsteady mixed convection nanofluid containing both nanoparticles and gyrotactic microorganisms due to impulsive motion. Motsa and Animasaun [9] examined unsteady boundary layer flow over a vertical surface due to impulsive motion and buoyancy, in the presence of thermal-diffusion and diffusion-thermo effects, using the bivariate spectral relaxation method. Koriko et al. [7] presented a boundary layer analysis of exothermic and endothermic kinds of chemical reaction in the flow of a non-Darcian unsteady micropolar fluid along an infinite vertical surface. Animasaun [4] studied the dynamics of unsteady MHD convective flow with thermophoresis of particles and variable thermo-physical properties past a vertical surface moving through a binary mixture. Keeping in mind the relevance and growing importance of thermo-viscous fluids in geophysical fluid dynamics, chemical technology and industry, the present paper attempts to study the variations of the velocity and temperature fields in the unsteady flow of a thermo-viscous fluid over a flat plate with an oscillating bottom for various material parameters.

II. Mathematical Formulation and Solution

Consider the unsteady flow of a thermo-viscous fluid of finite depth bounded below by an oscillating bottom. Let the velocity distribution be assumed in the form (7). Substituting (7) in (1) and using the boundary conditions (5), the velocity distribution is obtained analytically.
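In the Newtonian, deep-fluid limit the oscillating-bottom problem reduces to Stokes' second problem, whose classical solution gives a feel for the velocity field obtained above. The sketch below is that textbook solution, not the paper's finite-depth thermo-viscous result.

```python
import math

# Stokes' second problem: a bottom oscillating as U cos(omega t) beneath a
# semi-infinite Newtonian fluid of kinematic viscosity nu drives
#   u(y, t) = U exp(-k y) cos(omega t - k y),  k = sqrt(omega / (2 nu)).
def stokes_layer_velocity(y, t, U=1.0, omega=1.0, nu=1.0):
    """Velocity at height y above the oscillating bottom at time t."""
    k = math.sqrt(omega / (2.0 * nu))
    return U * math.exp(-k * y) * math.cos(omega * t - k * y)
```

The disturbance decays like exp(−k y), so it is confined to a layer of thickness of order sqrt(2 ν / ω) above the bottom; a finite-depth fluid, as in the paper, adds a reflected contribution from the free surface.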
IV. Conclusion

The present investigation deals with an unsteady flow of a thermo-viscous incompressible fluid of finite depth with an oscillating bottom. The following conclusions are drawn from the present study.
• It is noticed that the upward thrust (ρ f_y) vanishes when the Reiner-Rivlin coefficient of viscosity (μ_c) is zero, and the transverse force (ρ f_z) perpendicular to the flow direction vanishes when the thermo-viscosity coefficient (α_8) is zero.
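For reference, the Koh-Eringen polynomial constitutive relations underlying this analysis (their displayed form was lost from the text above) are commonly quoted in this literature as follows. This reconstruction is an assumption on the exact coefficient set, guided by the identifications given in the introduction (α_1 fluid pressure, α_3 viscosity, α_5 cross-viscosity, α_8 thermo-mechanical interaction):

```latex
% Stress tensor and heat-flux bivector as second-order polynomials in the
% rate-of-deformation tensor d and the thermal-gradient bivector b:
t = \alpha_1 I + \alpha_3 d + \alpha_5 d^2 + \alpha_6 b^2 + \alpha_8 (d\,b + b\,d),
\qquad
h = \beta_1 b + \beta_3 (d\,b - b\,d).
```

The α_8 coupling term is what generates the transverse force ρ f_z noted in the conclusion, since it survives even when the flow itself is unidirectional.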
One versus two anterior miniscrews for correcting upper incisor overbite and angulation: a retrospective comparative study

Background: Miniscrews are effective devices for performing upper incisor intrusion. Different mechanics can be applied depending on the treatment objectives. This study aimed to evaluate the efficacy of one or two anterior miniscrews for upper incisor correction in cases of overbite and angulation in adult patients.
Methods: Forty-four adults with deep overbite were divided into two groups: group 1 was treated with one miniscrew between the upper central incisors and group 2 with two miniscrews between the upper lateral incisors and canines. Incisor intrusion and length were measured from lateral cephalograms before treatment, after treatment and at least 12 months into retention (T0, T1 and T2). Forces (90 g) were applied from the miniscrews to the archwire using elastomeric chains. ANOVA analysis was used to determine whether differences between evaluation times were statistically significant.
Results: Mean root resorption was 2.15 ± 0.85 mm, and it ceased after active treatment. Mean overbite correction was −3.23 ± 1.73 mm with no statistically significant relapse. Overbite correction and incisor intrusion were significantly greater in group 2 (−3.80 ± 1.43 versus −2.75 ± 1.63 for overbite and 8.19 ± 3.66 versus 5.69 ± 2.66 for intrusion). Resorption and overbite correction were positively related. No counterclockwise rotation of the mandibular plane was observed.
Conclusions: Overbite correction can be performed by means of upper incisor intrusion without rotation of the mandibular plane. Correction of upper incisor intrusion and overbite is greater in patients treated with two miniscrews. The increase in upper incisor buccal angulation is greater with one miniscrew. Root resorption is positively related to the extent of intrusion. Stability is satisfactory regardless of whether one or two miniscrews are used.
Introduction

Vertical malocclusions with deep overbite can be treated with orthodontics alone or in combination with orthognathic surgery. The choice of one approach or the other will depend on the etiology and severity of the problem, as well as other individual factors such as the extent of gummy smile [1]. When surgical treatment is not an option because of the patient's refusal to undergo surgery, or because no maxillary vertical excess is present, the use of miniscrews is a treatment option that offers an effective method for attaining maxillary incisor intrusion and correcting the gummy smile [2]. Miniscrews offer the advantages of immediate loading, a range of possible placement sites, relatively simple placement and removal, and low economic cost [3,4]. Intrusion of upper and lower incisors, reducing overbite, can be easily achieved by placing miniscrews in anterior interradicular areas and applying the appropriate orthodontic mechanics. One or two miniscrews may be placed between central incisors [5,6], central and lateral incisors [7], or lateral incisors and canines [8][9][10][11][12] and, providing the miniscrew (or screws) are located correctly [13], a good outcome with minimal incisor protrusion can be obtained. Other auxiliary methods can be used to intrude upper incisors. Most of them use posterior teeth for anchorage, although this may produce unwanted reciprocal effects. An intrusion archwire is often used for overbite correction [14][15][16][17]. Comparing intrusion archwires with miniscrews, some authors have reported significantly more incisor proclination when using intrusion archwires [15], while others have found significantly more intrusion and generally better results using miniscrews [17]. Although the efficacy of anteriorly vs.
posteriorly located miniscrew-assisted intrusion mechanics has been investigated, together with the resorptive root damage derived from miniscrew placement in different locations [19][20][21][22], no clinical trials have compared the effects (including root resorption) of treatment with 1 or 2 miniscrews placed in the anterior area. As both the forces applied and the vector position are different depending on whether one or two miniscrews are used, differences in the displacement pattern may occur, which could affect root resorption and treatment stability. Therefore, the purpose of this study was to evaluate the results of orthodontic movement produced by one and two anterior miniscrews for upper incisor correction of overbite and angulation in adult patients.

Materials and methods

This retrospective comparative human study was designed following STROBE guidelines and complied with the Helsinki Declaration for research involving human subjects. The study protocol was approved by the University of Valencia Ethics Committee for Human Research (Reg. No. 1069224). All patients whose records were used in the study received detailed information about its purpose and gave their informed consent to take part.

Patients

Data from 90 patients attending a private dental clinic between January 2013 and December 2015 were used in the study; all these patients had been diagnosed with overbite and gummy smile. Inclusion criteria were as follows:
- Non-growing patients. Lateral cephalograms of the patients were analyzed to assess skeletal growth using the cervical vertebral maturation method [23].
- Gummy smile of 3 mm or greater, diagnosed by examining the patient directly.
- Patients with incisor inclination smaller than 110° (U1-PP).
- Increased overbite diagnosed from lateral cephalograms by measuring the distance between upper and lower incisors' incisal edges along a line perpendicular to the occlusal plane.
- Patients treated without extractions.
- Patients with good quality lateral cephalograms taken before treatment, just after treatment, and 12 or more months later during the retention period.
- Skeletal class I (ANB 2° ± 1).
- No periodontal surgery required in the incisor area as part of treatment.
- Patients treated with one or two anterior miniscrews.

Exclusion criteria were as follows:
- Patients with a history of any kind of trauma or endodontic treatment of the maxillary incisors.
- Patients presenting systemic disease or taking periodic medication.
- Patients exhibiting poor oral hygiene.

Method

All patients were treated using fixed Tip-Edge Plus® (TP Orthodontics Inc.) bracket appliances (metallic or ceramic) and miniscrews in the upper anterior area. 0.014-in superelastic nickel-titanium (SE NT) archwires were applied to level and align maxillary and mandibular arches, together with an upper 0.016-in A.J. Wilcock Australian stainless steel wire (G&H Orthodontics®, Franklin, USA), followed by 0.016 × 0.025-in SE NT archwires to define the arch shape and level the occlusal plane. Stainless steel 0.021 × 0.028-in archwires combined with 0.016-in SE NT archwires, introduced through the auxiliary slot, were placed to perform correct torque and tipping. At this point, intermaxillary elastics were used if needed to make final occlusion adjustments. Finally, 0.016-in SE NT archwires were placed for optimal interdigitation. Lastly, appliances were removed and upper and lower canine-to-canine fixed lingual retainers were bonded. Upper and lower clear removable retainers were delivered to the patients to be used at night, adjusted to avoid anterior occlusal contact.

Miniscrew mechanics

Miniscrews were placed in the upper incisor area to obtain intrusion of the upper incisors and to correct the gummy smile (length 8 mm; diameter 1.6 mm; head 2.3 mm; Dual Top, Jeil Medical Corporation, Seoul, South Korea) during the first treatment stage, when brackets were bonded.
The screws were inserted in the interradicular areas under local anesthesia, perpendicular to the teeth in order to withstand the intrusion forces. Each miniscrew was used as a direct anchorage unit, applying a 90 g force after placing the 0.014-in SE NT and Australian stainless steel archwires. Australian wire was used at this point so that the intrusion forces applied from the miniscrews would be distributed more evenly between the six anterior teeth. Traction from the miniscrews was reactivated monthly. Intrusion forces from the miniscrews were applied until overbite correction was achieved. All miniscrews were placed by the same experienced operator (AVH) using a straight screwdriver. Patients were divided into two groups depending on the number of miniscrews placed and their location (Fig. 1). The decision of whether to place one miniscrew or two depended on the root inclination and the position of the labial frenum. In group 1 (Figs. 1a and 2), a single miniscrew was placed in the upper incisor area between the upper central incisors, located anterior to the center of resistance (CR), aiming to achieve less intrusion and more labial tipping of the incisors. In group 2 (Figs. 1b and 2), two miniscrews were placed between the upper lateral incisors and canines. Since both miniscrews were placed more posteriorly (the distance to the CR being shorter), more intrusion with less labial tipping was expected [13].

Cephalometric analysis

Three lateral cephalometric radiographs were obtained for each patient: before treatment (T0), after treatment (T1), and during the retention period (T2). Eight cephalometric landmarks were identified on each radiograph: S, N, Gn, Go, Me, ANS, PNS, and CR (Fig. 3a), and eight skeletal and dental measurements were taken (Table 1 and Fig. 3b) by a single observer who had been fully trained and calibrated (LGZ). The same set of measurements was repeated by a second calibrated observer (FLL).
All cephalometric measurements were taken using Nemoceph® 11.3.1 software.

Outcomes

The following parameters were evaluated:
- Results of orthodontic movement by one or two anterior miniscrews for upper incisor correction of overbite and angulation in adult patients.

Statistical analysis

Power analysis showed that a sample size of at least 40 patients would provide an 80% probability of detecting a medium effect (f = 0.2) between time-points, using an ANOVA model at a confidence level of 95%, and assuming a correlation among repeated measurements of 0.5. Intraobserver and interobserver error was calculated by coefficients of variation (CV = SD × 100/mean, expressed as percentages) and by the Dahlberg formula. All lateral radiographs (132) were traced and measured again one week later by the principal observer (LGZ) and by a second calibrated observer (FLL). Data obtained from cephalometric measurements were entered on a spreadsheet using the Microsoft® Excel 2011® program. Study variables were the dental measurements (both linear and angular) taken at T0, T1, and T2. Descriptive statistics were calculated for each parameter, as well as the differences between times (T1-T0; T2-T1; T2-T0). Differences between times represented the effect of treatment (T1-T0), stability (T2-T1), and long-term overall effect (T2-T0). The normality of the measurement differences was checked by the Kolmogorov-Smirnov test, obtaining a confirmatory result (p > 0.05) for all parameters. A linear model repeated measures ANOVA was used to evaluate the effects of treatment at different times. Pearson's correlation coefficient was applied to evaluate different parameters between T1 and T0. The level of significance established was 5% (p = 0.05). (Fig. 3: a, cephalometric landmarks described in Table 1; b, cephalometric skeletal and dental measurements used in the study, described in Table 2.)

Vela-Hernández et al., Progress in Orthodontics (2020) 21:34

Results

After applying inclusion and exclusion criteria, 46 patients were included in the study. Two patients in group 1 were excluded due to miniscrew loosening, so the final patient sample consisted of 44 patients: 24 (54.54%) women and 20 (45.45%) men, a homogeneous distribution. Mean patient age was 36.6 ± 4.9 years. Ten female and six male patients with a mean age of 35.6 ± 6.3 years comprised the group with one miniscrew, while the group treated with two miniscrews was made up of 14 female and 14 male subjects with a mean age of 34.6 ± 3.48. Mean total treatment duration was 23.3 ± 7.7 months, and miniscrews were used for a mean period of 6.1 ± 1.2 months. After the intrusion period, the miniscrews were kept in the mouth for a couple of months with a stainless steel ligature. The mean orthodontic retention period after treatment was 31.1 ± 7.1 months. Intra- and inter-observer error was appropriate: Dahlberg's d was under 0.28 and CVs were below 2.55% in all cases. The measurements taken at the three evaluation times (T0, T1, and T2) are shown in Table 2; ANOVA analysis was used to determine whether differences between times were statistically significant (Table 3). Firstly, measurements from all patients were assessed together without separating the groups (one or two miniscrews); secondly, measurements were assessed at the three times by group (one or two miniscrews). All values except for the mandibular plane underwent statistically significant changes as a result of treatment. Differences in cephalometric measurements between groups for T0, T1, and T2 are shown in Table 4. Upper incisor resorption after treatment was 2.15 ± 0.85 mm (9.9% of the initial length of the tooth), being 2.20 ± 0.88 for group 1 and 2.11 ± 0.82 for group 2. Tooth length remained stable after treatment. Pearson's correlation coefficient was used to determine whether upper incisor resorption was related to variations in other parameters.
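The method-error statistics reported above (Dahlberg's d and the coefficient of variation, CV = SD × 100/mean) can be sketched as follows; the duplicate series in the assertions are hypothetical, not the study's tracings.

```python
import math
import statistics

def dahlberg_error(first, second):
    """Dahlberg's method error between duplicate measurement series:
    d = sqrt(sum((x1 - x2)^2) / (2 n))."""
    n = len(first)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(first, second)) / (2 * n))

def coefficient_of_variation(values):
    """CV (%) = sample SD * 100 / mean, as defined in the text."""
    return statistics.stdev(values) * 100.0 / statistics.mean(values)
```

In the study both statistics were computed over the 132 radiographs traced twice, yielding d < 0.28 and CV < 2.55%.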
Table 5 shows the correlation coefficients and statistical significance for each pair. Upper incisor resorption was significantly related to overbite correction. A simple linear regression model was used to assess this correlation and a beta coefficient value of 0.193 ± 0.051 was obtained, meaning that for each millimeter of overbite reduction, 0.19 mm of root resorption was produced. The ANOVA model concluded that there were no statistically significant differences in root resorption between the two groups. Mean overbite decrease was −3.23 ± 1.73 mm, and it relapsed by just 0.09 ± 0.29 mm. Overbite correction was achieved by upper incisor intrusion and lower incisor inclination, but no counterclockwise rotation of the mandible was produced, since the mandibular plane angle did not undergo any statistically significant change as a result of treatment. The ANOVA model concluded that there was more overbite reduction in the group treated with two miniscrews located between the lateral incisors and canines (−3.80 ± 1.43 versus −2.75 ± 1.63). Upper incisor intrusion, indicated by the two measurements CR-SN and IE-SN, was observed in all patients; these measurements did not undergo any significant relapse. Intrusion was greater in group 2 (two miniscrews), this difference being statistically significant. Regarding upper incisor angulation (U1-PP), an increase was achieved with no statistically significant relapse. Unlike intrusion, angulation was greater in group 1 (14.3 ± 9.99) than in group 2 (11.58 ± 8.03), a statistically significant difference (p = 0.048). A smaller increase in lower incisor angulation (IMPA) was observed, with no statistically significant relapse; unlike upper incisor angulation, no significant differences between groups were found. Due to the variations in upper and lower incisor angulation, the interincisal angle underwent a significant decrease with no relapse, although no significant differences between groups were found.
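The beta coefficient reported above is an ordinary least-squares slope (root resorption regressed on overbite reduction, both in mm); a minimal sketch of the computation, with hypothetical data points in the assertions:

```python
# Least-squares slope of y on x: beta = cov(x, y) / var(x).
def ols_slope(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den
```

A slope of 0.193, as reported, predicts about 0.97 mm of root resorption for 5 mm of overbite reduction.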
Discussion

Incisor intrusion assisted by miniscrews has gained popularity in recent years, as miniscrews reduce the need for complicated mechanics and avoid the side effects of more conventional methods [19]. The present study analyzed the changes produced during miniscrew-assisted orthodontic treatment, focusing on the intrusion pattern; the other factors assessed were a consequence of this intrusion. As shown in the present study, deep overbite can be corrected within a short period of time. Understanding the mechanisms, cephalometric changes and adverse effects related to overbite reduction using different treatment approaches can help clinicians make treatment planning more precise. In the present study, patients presenting maxillary incisors with a history of any kind of trauma or endodontic treatment, or patients with any systemic disease or periodic medication, were excluded, since there is a relationship between these disorders and root resorption [24][25][26][27]. The study showed that significant changes occurred as a consequence of orthodontic treatment assisted by miniscrews. It should be noted that these changes were the result of the combination of intrusion by vertical force from the miniscrews and the effects of bracket wires. With regard to the present study's outcomes, the mandibular plane was the only measurement that did not undergo significant changes as a result of treatment. No counterclockwise rotation of the mandible was produced by this type of treatment, which concurs with the findings of previous research [12]. This indicates that deep overbite correction by means of miniscrews produces more genuine incisor intrusion and less molar extrusion, and so does not produce significant counterclockwise rotation of the mandibular plane [11,28]. All the measurements were taken from lateral cephalograms, following the methodology established in most other studies of similar design [28].
Although a few authors have measured root resorption from CBCTs [14,19], this being a more accurate method, we did not consider taking CBCT scans justifiable in the context of the present study. Vertical incisor movement was measured using two different reference points (incisal edge and CR), making it possible to compare the results with a wider range of studies. Several authors [15] have used these two landmarks to assess incisor intrusion. The CR was set at 40% of the distance from the alveolar crest to the root apex [29]. The CR is a more reliable point since it is not affected by incisor inclination, unlike the incisal edge or root apex [30]. Unlike studies that have used the palatal plane as the reference for these measurements (ANS-PNS) [12,14], the present study used the SN plane, as it is considered more reliable for studies of intrusion since the palatal plane has been shown to move slightly after intrusion [11]. Patients were allocated to one of two groups depending on root inclination and frenum. In group 1, one miniscrew was placed in the interradicular space between the two central incisors, this location being anterior to the CR. In this way, the force applied produced less intrusion but more buccal tipping. In group 2, two miniscrews were inserted between the roots of the canines and lateral incisors. In this way, force was applied more posteriorly but still anterior to the CR, producing less labial tipping but more intrusion. These effects have already been described by Lindauer and Isaacson [13], who demonstrated that the different effects obtained during intrusion and extrusion movements depend on the point where force is applied in relation to the CR of the anterior teeth. 
Although the buccal tipping produced by miniscrew mechanics could be considered an undesirable effect, this is often not the case, as many of the patients presenting overbite and gummy smile may present retroclination of the upper incisors, making buccal inclination a favorable effect leading to better and more stable outcomes. It should be noted that in group 2 the total force applied from the miniscrews was greater than that applied in group 1 (180 g and 90 g respectively), which could alter the velocity of movement and the amount of root resorption. Since no comparative clinical studies on the effects of miniscrews in relation to the incisor area where they are inserted have been published, one of the aims of the present study was to assess the overall root resorption produced by incisor intrusion when using miniscrews, and to analyze the differences in root resorption between one and two miniscrews located in different areas. Our results showed that overall root resorption was 2.15 ± 0.85 mm, with no statistically significant differences between the two groups. Other studies of incisor intrusion have obtained lower root resorption values when using miniscrews [7,19,31] or conventional intrusion archwires [14,32,33]. These differences may be due to the amount of intrusion produced, since there is a positive correlation between intrusion and resorption rates, as the present study demonstrates; the amount of intrusion found in the present study (3.84 ± 2.96 mm) was higher than the amounts reported in other studies. Dermaut et al. [34] found higher resorption rates (2.5 mm) when using the Burstone intrusion technique. Some authors have found that lingual root torque was a strong predictor of external root resorption [35]. In this regard, our results show significantly greater incisor buccal inclination in group 1, root resorption also being higher in this group. 
It should be noted that several additional patient-based factors can affect root resorption rates, such as a long and narrow root shape, a deviated root, or proximity to the cortical plates [27]. Intrusion values were found to be higher in the present study than those reported by other authors using conventional methods, such as utility arches or Burstone intrusion arches [7, 32-34, 36, 37]. Our results show that the amount of upper incisor resorption depends on the amount of intrusion, in agreement with other studies [20][21][22]34], even though the methods used by other authors were different from those in the present study: intrusive forces applied to premolars rather than incisors, forces applied by means of appliances other than miniscrews, or forces applied directly to teeth rather than to archwires. Although differences between groups were found for all the factors analyzed, most of them did not show statistical significance despite the major differences in force vectors. This may be due to other factors affecting orthodontic movement, such as the level of crowding present or archwire effects. The results of the present study showed that the use of miniscrews for incisor intrusion provided good stability for all measurements in both groups. However, the stability results cannot be compared with any other studies, since none of the published works on incisor intrusion with miniscrews have reported these data, as noted in the single systematic review conducted to date [28]. Although resorption occurred in all teeth, the degree of root resorption recorded can be considered clinically irrelevant and in any case ceased when treatment came to an end. Besides, when resorption percentages were considered, length losses were relatively small. This study had several limitations. 
Firstly, a two-dimensional method was used to measure root resorption but, as resorption constitutes a volume loss, a three-dimensional quantitative method such as CBCT would be much more precise [19]. However, the patients did not have CBCTs, and taking CBCTs just for the purposes of the study was not considered justifiable. Secondly, lateral incisor root resorption was not considered, although some authors have found no differences in resorption between lateral and central incisors [34]. Thirdly, variations in the type (continuous or transient) and magnitude of force, the duration of intrusion, and measurement methods using conventional radiographs made it difficult to compare the present results with previous studies. Lastly, the groups could not be randomized, since allocation was based on the position of the roots and the labial frenum. Conclusions According to the results of the present study, it may be concluded that: --Overbite correction may be achieved successfully by a combination of upper incisor intrusion and lower incisor proclination with no rotation of the mandibular plane using one or two miniscrews. Upper incisor buccal angulation increase is greater in patients treated with one miniscrew, while upper incisor intrusion and overbite correction are greater in patients treated with two. --Root resorption is slightly over 2 mm, being positively related to the amount of intrusion with no significant differences between cases treated with one or two miniscrews; it ceases at the end of active treatment. --Stability is satisfactory when using either one or two miniscrews. Abbreviations ANOVA: Analysis of variance; CBCT: Cone beam computed tomography; STROBE: Strengthening the Reporting of Observational Studies in Epidemiology; SE-NT: Superelastic nickel-titanium; CR: Center of resistance; CV: Coefficient of variation; SD: Standard deviation. 
VPG, JLGF, and VGS performed data synthesis, carried out the statistical analysis, and prepared the manuscript. All the authors read, approved, and revised the manuscript. Funding This study received no funding. Availability of data and materials The datasets used and/or analyzed in the course of this study are available from the corresponding author on reasonable request. Ethics approval and consent to participate The present study was approved by the Ethics Committee of the University of Valencia for Human Research (Valencia, Spain), code 1069224
v3-fos-license
2018-10-12T16:34:06.250Z
2013-01-01T00:00:00.000
53140845
{ "extfieldsofstudy": [ "Mathematics" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "http://www.m-hikari.com/imf/imf-2013/33-36-2013/jakimczukIMF33-36-2013.pdf", "pdf_hash": "8303ffd88284248ff31b631b5217499bd81f5f5c", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1450", "s2fieldsofstudy": [ "Mathematics" ], "sha1": "8303ffd88284248ff31b631b5217499bd81f5f5c", "year": 2013 }
pes2o/s2orc
Asymptotic Formulas for Composite Numbers III Let k ≥ 1 and h ≥ 1 be arbitrary but fixed positive integers. Let us consider the numbers such that in their prime factorization there are k primes with exponent h and the remaining primes have exponent greater than h. Let P_{k,h}(x) be the number of these numbers not exceeding x. We prove the formula P_{k,h}(x) ∼ A_{h+1} h x^{1/h} (log log x)^{k−1} / ((k − 1)! log x), where A_{h+1} is a constant defined in this article. Let k ≥ 1, h ≥ 1 and t ≥ 1 be arbitrary but fixed positive integers. Let us consider the numbers such that in their prime factorization there are k primes with exponent h and the t remaining primes have exponent greater than h. Let A_{k,h,t}(x) be the number of these numbers not exceeding x. We prove the formula A_{k,h,t}(x) ∼ A_{t,h+1} h x^{1/h} (log log x)^{k−1} / ((k − 1)! log x), where A_{t,h+1} is a constant defined in this article. Let E_{t,h}(x) be the number of h-ful numbers with exactly t distinct prime factors in their prime factorization. We prove the asymptotic formula E_{t,h}(x) ∼ h x^{1/h} (log log x)^{t−1} / ((t − 1)! log x). In particular, if h = 1 we obtain the following well-known theorem of Landau: E_{t,1}(x) ∼ x (log log x)^{t−1} / ((t − 1)! log x), where E_{t,1}(x) is the number of numbers not exceeding x with exactly t distinct prime factors in their prime factorization. 
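Landau's theorem quoted above (the h = 1 case) can be made concrete with a short counting sketch. This is an illustration added for the reader, not part of the paper; note that convergence to the main term is very slow, so for moderate x the empirical count differs substantially from the asymptotic value.

```python
import math

def count_omega_t(x, t):
    """Count the integers 2..x with exactly t distinct prime factors,
    using a sieve that accumulates omega(n) for every n <= x."""
    omega = [0] * (x + 1)
    for p in range(2, x + 1):
        if omega[p] == 0:  # no smaller prime divides p, so p is prime
            for m in range(p, x + 1, p):
                omega[m] += 1
    return sum(1 for n in range(2, x + 1) if omega[n] == t)

def landau_main_term(x, t):
    """Main term x (log log x)^(t-1) / ((t-1)! log x) of Landau's theorem (h = 1)."""
    return x * math.log(math.log(x)) ** (t - 1) / (math.factorial(t - 1) * math.log(x))
```

For example, count_omega_t(100, 2) counts the integers up to 100 built from exactly two distinct primes, the quantity E_{2,1}(100) in the paper's notation.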
Introduction, Notation and Lemmas Let n be a number such that its prime factorization is of the form n = p_1^{a_1} p_2^{a_2} ⋯ p_t^{a_t}, where a_i ≥ h + 1 (i = 1, 2, ..., t), (h ≥ 1) is fixed and p_1, p_2, ..., p_t (t ≥ 1) are the different primes in the factorization. Note that the a_i (i = 1, 2, ..., t) and t are variable. These numbers are well known; they are called (h + 1)-ful numbers. There exist various studies on the distribution of these numbers using non-elementary methods (see [1]). Let C_n be the sequence of (h + 1)-ful numbers and let C_{h+1}(x) be the number of (h + 1)-ful numbers that do not exceed x. It is well known (see [2] for an elementary proof) that (1) holds, where b_{h+1} and c_{h+1} are positive constants. Note that C_n depends on h + 1. For the sake of simplicity we use this notation. In this article C denotes a (h + 1)-ful number. From (1) we can obtain without difficulty the following lemma. Lemma 1.1 The following series are convergent. Let us consider the sequence P_n of the numbers whose prime factorization is of the form described above, with exactly k primes of exponent h while the remaining primes have exponent greater than h. The number of these numbers not exceeding x we shall denote P_{k,h}(x). In this article we prove the asymptotic formula P_{k,h}(x) ∼ A_{h+1} h x^{1/h} (log log x)^{k−1} / ((k − 1)! log x). Let us consider the sequence E_n of the (h + 1)-ful numbers with t different prime factors, where t ≥ 1 is a fixed positive integer. Note that the sequence E_n depends on t and h + 1. For the sake of simplicity we use this notation. We shall denote these numbers in the compact form E. The number of these numbers not exceeding x we shall denote E_{t,h+1}(x). Let us consider the sequence A_n of the numbers whose prime factorization contains k primes with exponent h and t primes with exponent greater than h, where k, h and t are fixed and p_1, p_2, ..., p_{t+k} are the different primes in the factorization. Note that the sequence A_n depends on k, h and t. For the sake of simplicity we use this notation. 
We shall denote these numbers in the compact form Ep. The number of these numbers not exceeding x we shall denote A_{k,h,t}(x). Since in this case the E numbers are (h + 1)-ful numbers, Lemma 1.1 implies that the following series are convergent. In this article we prove the asymptotic formula A_{k,h,t}(x) ∼ A_{t,h+1} h x^{1/h} (log log x)^{k−1} / ((k − 1)! log x). On the other hand, (2) implies that the corresponding inequality holds from a certain value of x. Let π(x) be the number of primes not exceeding x. We shall need the prime number theorem, which we shall use as a lemma. Lemma 1.2 The following formula holds: π(x) ∼ x / log x. Let us consider the numbers whose prime factorization is of the form p_1 p_2 ⋯ p_k, where k ≥ 2 is fixed and p_1, p_2, ..., p_k are different primes. Let B_k(x) be the number of these numbers not exceeding x. We have the following theorem (Landau's theorem), which we shall use as a lemma (see [1]). Lemma 1.3 The following asymptotic formula holds: B_k(x) ∼ x (log log x)^{k−1} / ((k − 1)! log x). We shall also need the following two lemmas, whose proofs are simple. Lemma 1.5 The function f(x) (c > 1) is increasing from a certain value of x. Note that f(x) depends on c. In this article we also prove the asymptotic formula (see (4)) E_{t,h}(x) ∼ h x^{1/h} (log log x)^{t−1} / ((t − 1)! log x). In particular, if h = 1 we obtain the well-known theorem of Landau, where E_{t,1}(x) is the number of numbers not exceeding x with exactly t distinct prime factors in their prime factorization. Main Lemmas The method of proof in the following Lemma 2.1 is similar to the method used in [4]. For the sake of completeness we give the proof. Note that the meaning of E is different here. The method of proof in the following Lemma 2.2 is similar to the method used in [5]. For the sake of completeness we give the proof. Note that the meaning of E is different here. Lemma 2.1 and Lemma 2.2 can be united in the following lemma. Lemma 2.3 Let ε > 0. There exists x_ε such that if x ≥ x_ε then we have the following inequality. Lemma 2.4 Let ε > 0. 
There exists x_ε such that if x ≥ x_ε then we have the following inequality. Main Results Theorem 3.1 We have the following asymptotic formula. Proof. In the sums (see (6) and (33)) undesirable numbers are generated. The number of these undesirable numbers not exceeding x is F_k(x) (k ≥ 1). Let us consider the number (we take h = 1) in (71). This number is undesirable when some primes p_i appear in the prime factorization of the 2-ful number (73). In particular, if k > t we have the corresponding bound. Equations (73), (74), (75), (67) and (4) give the result. The theorem is proved. If h = 1 then we obtain, as a corollary of Theorem 3.3, the following well-known result of Landau. The numbers considered are of the form p_1^{a_1} ⋯ p_{t+k}^{a_{t+k}}, where the a_i (i = 1, 2, ..., t) are variable, (h ≥ 1) is fixed, (t ≥ 1) is variable, (k ≥ 1) is fixed and p_1, p_2, ..., p_{t+k} are the different primes in the factorization. Note that the sequence P_n depends on k and h. For the sake of simplicity we use this notation. We shall denote these numbers in the compact form Cp^h. Proof. The proof is the same as for Lemma 2.1 and Lemma 2.2. In the proofs of Lemma 2.1 and Lemma 2.2 we replace A_{k,h,t}(x) by P_{k,h}(x), E by C, E_i by C_i, E_n by C_n, A_{t,h+1} by A_{h+1}, E_{n+1} by C_{n+1} and E_{t,h+1}(x) by C_{h+1}(x). The lemma is proved. Clearly we can withdraw from (71) two primes with exponent greater than 2; otherwise we do not obtain a 2-ful number. The number of possible ways is then bounded by 8². Therefore if k ≤ t we have a 2-ful number with t different prime factors E. 
Equations (68) and (78) give (77), since ε is arbitrarily small. The theorem is proved. Let us consider the h-ful numbers with exactly t distinct primes in their prime factorization. If h = 1 we obtain the numbers with exactly t distinct primes in their prime factorization. The number of these numbers not exceeding x is (see the introduction) E_{t,h}(x). Theorem 3.3 The following asymptotic formula holds: E_{t,h}(x) ∼ h x^{1/h} (log log x)^{t−1} / ((t − 1)! log x). (79) Proof. Let us consider the numbers whose prime factorization is of the form p_1^h p_2^h ⋯ p_t^h, where t ≥ 1 and h ≥ 1 are fixed and p_1, p_2, ..., p_t are different primes. Let B_{t,h}(x) be the number of these numbers not exceeding x. We have the asymptotic formula B_{t,h}(x) ∼ h x^{1/h} (log log x)^{t−1} / ((t − 1)! log x). The proof of this formula is an immediate consequence of Lemma 1.2 and Lemma 1.3. We have (see (80), (67) and (4)) E_{t,h}(x) = B_{t,h}(x) + A_{t−1,h,1}(x) + A_{t−2,h,2}(x) + ⋯ + A_{1,h,t−1}(x) + E_{t,h+1}(x).
v3-fos-license
2014-10-01T00:00:00.000Z
2010-01-18T00:00:00.000
15642690
{ "extfieldsofstudy": [ "Biology", "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/1471-2105-11-S1-S19", "pdf_hash": "8719ae0a564bd233b7bcdff1108ed614aec636e5", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1451", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "sha1": "8719ae0a564bd233b7bcdff1108ed614aec636e5", "year": 2010 }
pes2o/s2orc
AntiBP2: improved version of antibacterial peptide prediction Background Antibacterial peptides are one of the effector molecules of the innate immune system. Over the last few decades several antibacterial peptides have been successfully approved as drugs by the FDA, which has prompted interest in these peptides. In our recent study we analyzed 999 antibacterial peptides, which were collected from the Antibacterial Peptide Database (APD). We have also developed methods to predict and classify these antibacterial peptides using a Support Vector Machine (SVM). Results During the analysis we observed that certain residues are preferred over others in antibacterial peptides, particularly at the N and C termini. These observations and the increased antibacterial peptide data in APD encouraged us to develop a new and more robust method for predicting antibacterial peptides in proteins from their amino acid sequence, or for determining whether a given peptide has antibacterial properties or not. First, the binary patterns of the 15 N-terminal residues were used for predicting antibacterial peptides using SVM, achieving an accuracy of 85.46% with a 0.705 Matthews correlation coefficient (MCC). Then we used the binary pattern of the 15 C-terminal residues and achieved an accuracy of 85.05% with 0.701 MCC; later on we developed a prediction method combining the N and C termini and achieved an accuracy of 91.64% with 0.831 MCC. Finally we developed an SVM-based model using the amino acid composition of the whole peptide and achieved 92.14% accuracy with an MCC of 0.843. In this study we used the five-fold cross-validation technique to develop all these models and tested their performance on an independent dataset. We further classified antibacterial peptides according to their sources and achieved an overall accuracy of 98.95%. We also classified antibacterial peptides into their respective families and obtained satisfactory results. 
Conclusion Among antibacterial peptides, there is a preference for certain residues at the N and C termini, which helps to discriminate them from non-antibacterial peptides. The amino acid composition of antibacterial peptides helps to demarcate them from non-antibacterial peptides and allows their further classification by source and family. AntiBP2 will be helpful in discovering efficacious antibacterial peptides, which we hope will be helpful against antibiotic-resistant bacteria. We also developed a user-friendly web server for the biological community. 
Background In the past few decades, a large number of bacterial strains have evolved ways to adapt or become resistant to the currently available antibiotics [1]. The widespread resistance of bacterial pathogens to conventional antibiotics has prompted renewed interest in the use of alternative natural microbial inhibitors such as antimicrobial peptides. Antimicrobial peptides (AMPs) are a family of host-defense peptides, most of which are gene-encoded and produced by living organisms of all types [2][3][4][5][6][7][8]. AMPs are small-molecular-weight proteins with broad-spectrum antimicrobial activity against bacteria, viruses, and fungi [3,10]. These evolutionarily conserved peptides are usually positively charged and have both a hydrophobic and a hydrophilic side, which enables the molecule to be soluble in aqueous environments yet also enter lipid-rich membranes. Once in a target microbial membrane, the peptide kills target cells through diverse mechanisms [5]. Antimicrobial peptides have a broad spectrum of activity and can act as antibacterial, antifungal, antiviral and sometimes even as anticancer peptides [10]. These peptides have other properties as well, such as mitogenic activity, or act as signaling molecules, including pathogen-lytic activities [10]. 
Extensive work has been done in the field of antibacterial peptides, describing their identification, characterization, mechanism of action, etc., keeping in mind their numerous biotechnological applications [11][12][13]. A lot of work has also been done to collect and compile these peptides in the form of databases [14][15][16][17]. These antibacterial peptides have very low sequence homology despite their common function [18]. Previously we developed a very robust method, AntiBP [19], for predicting antibacterial peptides using SVM, a quantitative matrix (QM) and an artificial neural network (ANN). The growth of antibacterial peptides in the APD database over the last 2 years motivated us to develop a prediction method based on the newer and larger (almost double) dataset. We once again analyzed the antibacterial peptides and developed SVM-based models to predict them, because our previous study showed that SVM outperformed the other methods. In AntiBP2 we also extracted a clean dataset of antibacterial peptide families from Swiss-Prot and developed classification models for them. In the following text, we first discuss the method developed to distinguish antibacterial peptides from non-antibacterial peptides (prediction part) and then describe the method for classifying these peptides on the basis of source and class (classification part). Analysis of the antibacterial peptides The analysis of antibacterial peptides in AntiBP [19] had shown a preference for certain residues over others at both termini. By drawing the pLOGOs [20] it was also seen that there seems to be a residue preference at different positions of antibacterial peptides. As the dataset in AntiBP2 was almost double the size of the dataset used in the previous method AntiBP, we decided to analyze the antibacterial peptides again and look for any change or shift in the preference trend. 
We again generated sequence logos of the 15 N-terminal and C-terminal residues using the pLOGO program (Figures 1 and 2). Figure caption: Sequence logo of the last fifteen residues (C-terminus) of antibacterial peptides, where the size of each residue is proportional to its propensity. It was seen that the pLOGOs drawn in AntiBP2 showed a trend similar to that shown in the AntiBP method [19]. Here also, in the N-terminus dataset G, F, V and R predominated at the first position, and L, I, W and F were frequently present at the second position. Similarly, certain residues are preferred at the C-terminus; for example, residues K, G, C and R are preferred at most of the positions. Though both the N and C termini have a higher proportion of positively charged residues, in the AntiBP2 analysis we could also notice a higher frequency of positively charged residues at the C-terminus as compared to the N-terminus (Figures 3 and 4). This may be because it is the C-terminus that first interacts with the negatively charged membrane of the bacteria and penetrates it [21]. The N-terminus later helps to hamper crucial bacterial metabolic functions by interacting with intracellular components such as DNA and RNA [22]. Antibacterial peptides also have a high propensity for the residue Cys, which is normally not preferred in most proteins. An overall amino acid composition comparison of antibacterial and non-antibacterial peptides shows that the positively charged Lys is prominent in antibacterial peptides (Figure 5). Similarly, the propensity of Gly and Ile is also high in antibacterial peptides. Prediction The performances of the NT15, CT15, NTCT15 and whole-peptide-based prediction methods for antibacterial peptides are given below in Table 1. The accuracies achieved by the NTCT15 model and the whole-peptide-based model were almost equal (~91%) and the highest among all the models. The performance of the NT15 model was better than that of the CT15 model. 
Performance on independent or blind dataset The prediction models developed in this study were evaluated on an independent dataset of 466 sequences (Table 2). The antibacterial peptides in the independent dataset were not used for developing the above models, either in training or in testing. The results of the classification of frog antibacterial peptides and mammalian antibacterial peptides into their respective families (5 each) are given in detail in Table 5. Discussion A great deal of interest is shown nowadays in antibacterial peptides, the so-called "nature's antibiotics", which seem promising for overcoming the growing problem of antibiotic resistance [23][24][25]. The design of novel peptides with antimicrobial activity requires the development of methods for narrowing down the candidate peptides so as to enable rational experimentation by wet-lab scientists. Attempts have been made to develop methods and strategies for designing effective antimicrobial peptides [26,27]. AntiBP is one such method meant to discover efficacious antibacterial peptides that we hope could prove to be a boon to combat the dreadful antibiotic-resistant bacteria. The enormous growth of antibacterial peptide data in the databases motivated us to develop an improved version of AntiBP using the same strategy. The new version was named AntiBP2. The N- and C-terminus sequence logos of the AntiBP2 dataset were almost identical to those of the previous method AntiBP. This indicates that although there seems to be an absence of great homology or conservation among antibacterial peptides, the pattern of positional preference of certain residues remains constant. We once again developed the prediction method to distinguish antibacterial peptides from non-antibacterial peptides, but this time the method was developed using a training dataset double the size of the one previously used. 
We developed both whole-peptide-based compositional models as well as binary-pattern-based terminus approaches. This time we retained the whole-peptide-based method as well, because it is difficult to predict peptides that are less than 15 residues in length with the binary-pattern-based terminal models. With this method we again achieved impressive results with all the above approaches, but the best performers were the NTCT15 and whole-peptide-based prediction models (achieving 91% accuracy). This was followed by the NT15-based prediction model, with the CT15-based model being the poorest performer of all. This trend is similar to what was seen in AntiBP. The performance evaluation of the prediction models on the independent dataset followed the trend shown during the development of the prediction models (in sync with the trend followed by the AntiBP method). The NTCT15 model performed best, followed by the NT15 and CT15 models in that order. In AntiBP2 we have also developed models that can classify antibacterial peptides further into families with high accuracy. First we successfully made an attempt to develop classification models that could assign the source of origin to predicted antibacterial peptides. Classification models to classify the antibacterial peptides further into their corresponding families were also developed. The results attained with all the classification methods clearly indicate that although antibacterial peptides do not show great conservation or homology overall, they become more and more similar as we go down to the level of a particular family. This is evident from the high accuracies achieved for each family in the various classification models. Therefore, AntiBP2 is an efficient method that can predict and classify antibacterial peptides. 
We hope that our method will help wet-lab scientists to design improved and efficacious antibacterial peptides in the future. Conclusion There is rapid growth in the field of antibacterial peptide research in response to the demand for novel antibacterial agents. AntiBP2 is one such efficient method that can predict and classify antibacterial peptides and help to find newer antibacterial peptides more speedily and conveniently. We hope that our method will promote research to design improved and efficacious antibacterial peptides in the future. Main dataset The positive dataset for this method was once again fetched from the antimicrobial peptide database APD [17]. We retrieved a total of 999 unique antibacterial peptides from this database. We used this dataset to build the whole-peptide-composition-based SVM models to predict antibacterial peptides of any length. Negative dataset against the whole-peptide dataset As there is no source of experimentally proven non-antibacterial peptides, we adopted the same strategy that was used to generate the negative dataset in AntiBP. We chose to extract random peptides from proteins belonging to all intracellular locations, excluding secretory proteins (because antibacterial peptides are mostly secreted outside the cell). Though some of these randomly selected peptides could be antibacterial in nature, the possibility is remote. To do this we used the data that were used in MitPred [28]. The MitPred dataset had proteins belonging to various intracellular locations (nucleus, cytoplasm, ER, Golgi complex, mitochondria). These proteins were then mixed and shuffled thoroughly so that the negative dataset did not have an over-representation of proteins belonging to any particular location. We then selected those proteins that were >100 amino acids in length. This was done because many of the antibacterial peptides in the positive dataset have >90 residues in length. 
For each peptide in the positive dataset, we calculated its length and cut a random peptide of the corresponding length from a negative-dataset protein, yielding 999 negative peptides.

NT15, CT15 and NTCT15 datasets

We created the NT15 and CT15 datasets by taking the first fifteen and last fifteen residues, respectively, from the antibacterial peptides, as done in AntiBP [19]. For the NTCT15 dataset we concatenated the CT15 peptides with their corresponding NT15 counterparts. To reduce redundancy in the positive dataset, duplicates were removed, leaving 782 NT15 peptides, 786 CT15 peptides and 861 NTCT15 peptides.

Negative dataset against NT15, CT15 and NTCT15 datasets

The strategy used to generate the negative datasets for the NT15, CT15 and NTCT15 datasets was the same as in AntiBP. Once again, the dataset of thoroughly mixed and shuffled proteins from various subcellular locations was used. For the NT15 and CT15 negative datasets, peptides 15 residues long were cut randomly from this dataset; from these we selected 786 peptides to serve as the negative dataset for both the NT15 and CT15 datasets. The negative dataset for the NTCT15 dataset was created by extracting 861 random peptides (30 residues in length) from the non-secretory protein dataset.

Independent dataset

We took 466 peptides from the family classification dataset (fetched from Swiss-Prot) that were not present in our main dataset (taken from the APD database). This dataset was not used for training or testing the method; these peptides served as the independent dataset for evaluating the performance of the prediction models.

Techniques used

As the SVM based technique performed best in AntiBP [19], we again exploited SVM to develop the prediction method. In this study, all SVM models have been developed using the freely available program SVM_Light [29]. This program allows users to run SVM with various kernels and parameters.
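The length-matched extraction of negative peptides described above can be sketched as follows. This is a minimal illustration under our own assumptions (function names and the toy protein are ours), not the original AntiBP2 code:

```python
import random

def cut_random_peptide(protein: str, length: int) -> str:
    """Cut a random subsequence of the given length from a protein."""
    start = random.randrange(len(protein) - length + 1)
    return protein[start:start + length]

def build_negative_set(positives, proteins):
    """For each positive peptide, cut a length-matched random peptide
    from a randomly chosen non-secretory protein (>100 aa)."""
    pool = [p for p in proteins if len(p) > 100]
    negatives = []
    for pep in positives:
        protein = random.choice(pool)
        negatives.append(cut_random_peptide(protein, len(pep)))
    return negatives

random.seed(0)
proteins = ["M" + "ACDEFGHIKLMNPQRSTVWY" * 10]  # toy protein, >100 aa
negs = build_negative_set(["GIGKFLHSAK", "KWKLFKK"], proteins)
assert [len(n) for n in negs] == [10, 7]  # lengths match the positives
```

The key property preserved from the paper's procedure is that each negative peptide matches the length of one positive peptide, so the two classes have identical length distributions.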
In this study, the accuracy was computed at the cut-off score where sensitivity and specificity are nearly equal.

Evaluation of parameters

The five-fold cross-validation technique was used to evaluate the performance of all the models developed in this study. In five-fold cross-validation, a dataset is randomly divided into five sets, each consisting of a nearly equal number of antibacterial and non-antibacterial peptides. Four sets are used for training and the remaining set for testing; this process is repeated five times so that each set is used once for testing. The performance of a method is its average performance over the five sets. The following parameters were used to assess the performance of a method:

Sn = TP / (TP + FN) × 100
Sp = TN / (TN + FP) × 100
Ac = (TP + TN) / (TP + TN + FP + FN) × 100

where TP and TN are correctly predicted antibacterial and non-antibacterial peptides, respectively, and FP and FN are wrongly predicted antibacterial and non-antibacterial peptides, respectively. Sensitivity (Sn), or percent coverage of antibacterial peptides, is the percentage of antibacterial peptides predicted as antibacterial; specificity (Sp), or percent coverage of non-antibacterial peptides, is the percentage of non-antibacterial peptides predicted as non-antibacterial; overall accuracy (Ac) is the percentage of correctly predicted antibacterial and non-antibacterial peptides. The five-fold cross-validation technique was used for the evaluation of all three methods.

Prediction of antibacterial peptides

Whole peptide based approach

Though the terminus approaches are useful for scanning for antibacterial peptides within a larger protein sequence, it is difficult to predict peptides that are less than 15 residues long. Therefore, a whole-peptide based SVM model was also developed in order to predict antibacterial peptides of any length. The amino acid composition of the peptides was fed to train the SVM.
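The evaluation protocol (five-fold splitting plus the Sn/Sp/Ac measures) can be sketched compactly. This is illustrative code written by us, not the AntiBP2 implementation:

```python
import random

def five_fold_splits(items, k=5, seed=0):
    """Randomly divide items into k nearly equal sets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    return [items[i::k] for i in range(k)]

def metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and overall accuracy, as percentages."""
    sn = 100.0 * tp / (tp + fn)                   # coverage of antibacterial peptides
    sp = 100.0 * tn / (tn + fp)                   # coverage of non-antibacterial peptides
    ac = 100.0 * (tp + tn) / (tp + tn + fp + fn)  # overall accuracy
    return sn, sp, ac

folds = five_fold_splits(range(999))  # 999 peptides, as in the main dataset
assert sum(len(f) for f in folds) == 999
assert max(len(f) for f in folds) - min(len(f) for f in folds) <= 1

sn, sp, ac = metrics(tp=90, tn=92, fp=8, fn=10)
assert (round(sn), round(sp), round(ac)) == (90, 92, 91)
```

In the real protocol each fold would in turn serve as the test set while the other four train the SVM, and the three measures would be averaged over the five runs.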
NT15, CT15 and NTCT15 approach

Again, the binary patterns of the NT15, CT15 and NTCT15 datasets were used to develop prediction methods, as described in AntiBP. The performance was evaluated using the five-fold cross-validation technique.

Classification of antibacterial peptides

Multiclass SVM was exploited to develop the classification models: models were developed to classify antibacterial peptides belonging to different sources, e.g. bacteria, insects, frogs, mammals and plants. N SVM models were constructed for N-class classification. For antibacterial peptide source classification the number of classes was five, so five 1-v-r SVM models were constructed. The ith SVM was trained with all samples of the ith class labelled positive and all other samples labelled negative. An unknown example was classified into the class corresponding to the SVM with the highest output score. The results for the family prediction are given in Table 2. Antibacterial peptides belonging to various sources were further classified into families. Classification models were developed for peptides belonging to insects, frogs and mammals. To classify insect antibacterial peptides into families, five 1-vs-r SVMs were developed; in a similar way, five 1-vs-r SVM models were developed to classify frog and mammalian antibacterial peptides into their respective families. The detailed results of the classification of insect, frog and mammalian peptides are given in the results section (Tables 3, 4 and 5).

Availability and requirements

We developed a web server, AntiBP2 [30], freely available for predicting and classifying antibacterial peptides using the models developed in this study. The web server was developed on a SUN server (model T-1000) under the Solaris environment using the PERL programming language.
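The 1-v-r (one-versus-rest) decision rule described above, i.e. train one binary scorer per class and assign an unknown example to the class whose scorer gives the highest output, can be sketched in miniature. A nearest-centroid scorer stands in here for the SVMs, so this is an illustration of the scheme only, not of SVM_Light:

```python
# Minimal pure-Python sketch of the 1-v-r classification scheme.
# One "scorer" per class; an unknown example is classified into the
# class with the highest score. (Our illustration; the paper trains
# one SVM per class instead of a centroid model.)

def train_one_vs_rest(samples, labels, classes):
    """Return one centroid scorer per class (positive = that class)."""
    centroids = {}
    for c in classes:
        members = [s for s, l in zip(samples, labels) if l == c]
        dim = len(samples[0])
        centroids[c] = [sum(v[i] for v in members) / len(members)
                        for i in range(dim)]
    return centroids

def predict(centroids, x):
    """Assign x to the class whose scorer gives the highest output."""
    def score(c):  # negative squared distance to the class centroid
        return -sum((a - b) ** 2 for a, b in zip(x, centroids[c]))
    return max(centroids, key=score)

samples = [(0, 0), (0, 1), (5, 5), (5, 6), (9, 0), (9, 1)]
labels = ["frog", "frog", "insect", "insect", "mammal", "mammal"]
model = train_one_vs_rest(samples, labels, ["frog", "insect", "mammal"])
assert predict(model, (0, 0.5)) == "frog"
assert predict(model, (5, 5.5)) == "insect"
```

The same max-score rule generalizes directly to five source classes or five families per source, as used in AntiBP2.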
Chemical Composition of Aspidosperma ulei Markgr. and Antiplasmodial Activity of Selected Indole Alkaloids

A new indole alkaloid, 12-hydroxy-N-acetyl-21(N)-dehydroplumeran-18-oic acid (13), and 11 known indole alkaloids: 3,4,5,6-tetradehydro-β-yohimbine (3), 19(E)-hunteracine (4), β-yohimbine (5), yohimbine (6), 19,20-dehydro-17-α-yohimbine (7), uleine (10), 20-epi-dasycarpidone (11), olivacine (8), 20-epi-N-nor-dasycarpidone (14), N-demethyluleine (15) and 20(E)-nor-subincanadine E (12), as well as a boonein δ-lactone (9), ursolic acid (1) and 1D,1O-methyl-chiro-inositol (2), were isolated from the EtOH extracts of different parts of Aspidosperma ulei Markgr. (Apocynaceae). Identification and structural elucidation were based on IR, MS, 1H- and 13C-NMR spectral data and comparison to literature data. The antiplasmodial and antimalarial activity of 1, 5, 6, 8, 10 and 15 has been previously evaluated, and 1 and 10 have important in vitro and in vivo antimalarial properties according to patent and/or scientific literature. With the aim of discovering new antiplasmodial indole alkaloids, 3, 4, 11, 12 and 13 were evaluated for in vitro inhibition against the multi-drug resistant K1 strain of the human malaria parasite Plasmodium falciparum. IC50 values of 14.0 (39.9), 4.5 (16.7) and 14.5 (54.3) μg/mL (μM) were determined for 3, 11 and 12, respectively. Inhibitory activity of 3, 4, 11, 12 and 13 was evaluated against NIH3T3 murine fibroblasts; none of these compounds exhibited toxicity to fibroblasts (IC50 > 50 μg/mL). Of the five compounds screened for in vitro antiplasmodial activity, only 11 was active.

Introduction

Malaria continues to be a disease that afflicts the whole world, especially the African continent. However, data from 99 countries reveal that, based on the overall number of deaths, malaria is in decline [1].
The main antimalarials available today are the quinolines, structural mimics of the plant-derived natural product quinine, and the semi-synthetic derivatives of another plant-derived natural product, artemisinin. Resistance of the malaria parasites to these drugs is an issue of concern, and it is important to discover new compounds that may be developed into the next generation of antimalarial drugs [2]. The Aspidosperma spp. (Apocynaceae) comprise trees distributed in Central and South America. Aspidosperma spp. extracts exhibit antimalarial activity, and remedies prepared from the bark are used in traditional medicine for the treatment of malaria [3]. Screening of bark extracts representing six Aspidosperma spp. for in vitro inhibition against the chloroquine-resistant W2 and chloroquine-sensitive 3D7 strains of the human malaria parasite Plasmodium falciparum revealed good activity (IC50 = 5.0-65.0 μg/mL). Thus, A. ulei (syn. A. parvifolium) trunk bark EtOH extracts were found to be active, as were the extracts of two other Aspidosperma spp. [4]. The aim of the present work was to perform a compositional study on the extracts of A. ulei and isolate indole alkaloids from this traditionally used antimalarial plant. Several isolated indole alkaloids were evaluated for in vitro antiplasmodial activity and cytotoxicity against fibroblasts as a means to discover new antiplasmodial compounds from this species.

Analysis of Spectral and Physical Data for Isolated Compounds

From the leaf EtOH extract (LEE) of A. ulei, ursolic acid (1), a white solid, m.p. 296.5-297.6 °C, [α]20D = +26.0° (c. 0.33, MeOH), was isolated for the first time from this species [18,19]. The stem bark EtOH extract (SBEE) exhibited a precipitate, methyl-chiro-inositol (2), an amorphous solid, m.p. 150.3-152.2 °C, that was identified based on comparison of its spectral data with that of the literature [20].
Based on acquired spectral data and comparison with data in the literature [21], one of the substances was identified as (+)-3,4,5,6-tetradehydro-β-yohimbine (3). N-demethyluleine (15) was identified from its data [22], together with data for β-yohimbine (5) [22]. Alkaloids 5 and 6 were evaluated for in vitro antiplasmodial activity against the chloroquine-resistant FcM29-Cameroon strain of Plasmodium falciparum and found to present IC50 values > 1 μg/mL [23]. Several alkaloids, including 6, were cited in a patent on new antimicrobial agents that included antimalarials [24]. The alkaloid 19,20-dehydro-17α-yohimbine (7, 4.0 mg) was identified by comparison of its NMR data to literature data [25]. Comparison of the data for uleine (10) and 20-epi-dasycarpidone (11) provided evidence for the difference between the normal series and the epi series. In the piperidine ring of uleine the ethyl side chain is in the equatorial position, whereas in 20-epi-uleine this side chain is in the axial position. In the spectra of 11, this difference is evidenced by a 1,3-diaxial γ-effect of the ethyl group on the axial H of C-14, resulting in steric compression of C-14, C-20 and, to a lesser extent, C-18 and C-19. These C-atoms are more shielded than in the normal series. Olivacine (8, 5.0 mg) was isolated from the root bark through precipitation from the root bark EtOH extract acidic fraction (RBEEAF) and exhibited 1H and 13C-NMR data consistent with those found in the literature [35]. 20(E)-nor-subincanadine E (12, 36.0 mg) was isolated from the stem bark of A. ulei and its spectral data were similar to those found in the literature [36]. It has been reported as an intermediate in syntheses of Strychnos alkaloids [37,38]. The new indole alkaloid 12-hydroxy-N-acetyl-21(N)-dehydroplumeran-18-oic acid (13, 4.4 mg) was isolated as a resin from the root wood EtOH extract (RWEE) of A. ulei.
The IR spectrum exhibited overlapped broad O-H and N-H stretching bands at 3,440 cm−1 and characteristic C=O bands of a conjugated acid and an amide at 1,683 and 1,631 cm−1, respectively. In the 1H and 13C spectra, only three aromatic H signals and three aromatic CH signals were observed. Through long-distance couplings evidenced in the HMBC spectrum, it was concluded that the OH group was at the C-12 (δC 149.2) position, thus confirming the monosubstitution of the aromatic ring. Analyses of the HMBC spectrum also confirmed the presence of a quaternary N-atom and the C-atom of the iminium (C=N+) group (Figure 2). A structural similarity search allowed models to be obtained for comparison of data [39], together with 1H and 13C data from the literature [40].

General Procedures

Melting points were determined on a Digital Microdetermination apparatus (Mettler Toledo) equipped with an FP82HT heating plate and an FP90 processing unit. Determinations were performed at a heating rate of 2 °C/min and were not corrected. IR spectra were acquired on a Perkin-Elmer Spectrum 100 FT-IR spectrometer using a Universal Attenuated Total Reflectance accessory (UATR) in the range of 400 to 4,000 cm−1. HPLC analysis of calibration solutions and of extracts and fractions of A. ulei was performed on a Waters modular chromatograph controlled by Empower software. The system consisted of a Waters-1525 binary pump and a model 2996 photodiode array detector (PDA). HPLC separations were performed on a Phenomenex RP-18 column (4.6 × 250 mm, 5 μm) and a Phenomenex RP-18 column (10 × 250 mm, 10 μm). The samples were eluted with ACN, MeOH and a solution containing ultrapure H2O (Milli-Q, Millipore) and trifluoroacetic acid (TFA, 0.1-0.3%).
High-resolution mass spectra (ESI-HRMS) were obtained by dissolving samples in suitable solvents and infusing the resulting solutions directly into the electrospray ionizer of a Shimadzu LCMS-IT-TOF (225-07100-34) mass spectrometer. 1D and 2D 1H and 13C-NMR spectra, including COSY, HSQC, HMBC and NOESY, were obtained on a Bruker Avance DRX500 instrument.

Collection, Botanic Identification and Processing of Plant Materials

Aspidosperma ulei is commonly known as pitiá or piquiá. It was collected in Garapa in the city of Acarape in Ceará State, Brazil. Voucher specimens (registry numbers 30823, 32630 and 34813) were deposited in the Prisco Bezerra Herbarium of the University of Ceará. Botanic identification was performed by Prof. Edson P. Nunes of the Department of Biology of the Federal University of Ceará, Fortaleza, Ceará. Leaves, stem bark, heartwood, root bark and root wood were separately dried and milled. Powdered plant materials were weighed and then extracted as described below.

Preparation of Extracts of A. ulei and Isolation Procedures

Extraction of the dry, powdered plant materials was carried out by maceration in EtOH at r.t. for 72 h. Each plant material was extracted a total of three times (3 × 10 L). The EtOH solutions obtained from each extraction were rotary evaporated under reduced pressure and combined to provide each extract (Table 2).

Acid-Base Fractionation of EtOH Extracts

Heartwood EtOH extract (HWEE), RWEE and RBEE (20 g of each) were separately dissolved in 2 M HCl (200 mL) with stirring (30 min). Each resulting solution was extracted with DCM (3 × 300 mL). The combined organic phases were dried over anhydrous Na2SO4 and evaporated to dryness, giving the acidic alkaloid fractions of the heartwood, root wood and root bark EtOH extracts (HWEEAF (255 mg), RWEEAF (287 mg) and RBEEAF (384 mg), respectively). Conc. NH4OH was added dropwise to each acid fraction until each was at pH 9 (Merck 0-14 indicator paper).
Each fraction was then extracted with DCM (3 × 200 mL). The organic layers were combined, dried over anhydrous Na2SO4, filtered and evaporated to dryness to yield the basic alkaloid fractions of the heartwood, root wood and root bark EtOH extracts (HWEEBF (363 mg), RWEEBF (302 mg) and RBEEBF (792 mg), respectively).

Isolation of Chemical Components from Acidic Fractions

RBEEAF was subjected to normal-phase CC (10 g silica gel, ϕ = 2.5 cm) using a gradient of increasing polarity of MeOH in DCM as eluent, yielding 12 chromatographic fractions. Chromatographic fractions 4-9 (331 mg) were combined. The alkaloid olivacine (8, 5.0 mg) was obtained by precipitation from the combined fraction. The combined fraction was further separated by HPLC using a reverse-phase, semi-preparative column (10.0 × 250 mm, 5 μm) eluted with 0.1% aq. TFA and MeOH (45:55). The run time was 15 min at a flow rate of 4.5 mL/min. Six fractions were collected using a detector wavelength of 323 nm. This procedure yielded a boonein lactone (9, 10.0 mg) and the alkaloids uleine (10, 40.0 mg) and 20-epi-dasycarpidone (11, 26.0 mg). The fraction HWEEAF was separated by reverse-phase, semi-preparative HPLC (4.6 × 250 mm, 5 μm) using 0.1% aq. TFA and MeOH (70:30) at a flow rate of 3.0 mL/min, a total run time of 20 min and a detector wavelength of 300 nm. Four fractions were collected; fraction 4 (43.0 mg) was sufficiently pure for full spectrometric characterization by 1D and 2D 1H and 13C-NMR techniques, and its structure proved to be that of an indole alkaloid, 20(E)-nor-subincanadine E (12), derived from the stemmadenine skeleton.

In Vitro Culture of Plasmodium falciparum and in Vitro Antiplasmodial Assay

The multi-drug resistant K1 strain of P. falciparum (Thailand, MRA-159, MR4-ATCC) was maintained in continuous culture [41]. The in vitro antiplasmodial test was performed as previously described [14].
Briefly, the substances were diluted in DMSO to a stock concentration of 5 mg/mL and subsequently diluted in complete culture medium to obtain sample solutions with concentrations in the range 100-0.14 µg/mL. Sample solutions were applied to the wells of 96-well test plates containing red blood cell suspension with an initial parasitemia of 1.5%. Each sample concentration was tested in triplicate and each test plate was incubated for 48 h at 37 °C. After incubation, the contents of the wells were evaluated by optical microscopy, and the inhibition of the growth of parasites (IGP%) was evaluated as a percentage by comparison with controls.

Cell Culture and Cytotoxicity Test Using the Alamar Blue™ Assay

The NIH3T3 mouse fibroblast cell line was grown in DMEM medium supplemented with 10% fetal bovine serum, 2 mM glutamine, 100 µg/mL streptomycin and 100 U/mL penicillin, and incubated at 37 °C under a 5% CO2 atmosphere. For assays, the cells were plated in 96-well plates (10⁴ cells per well) and the Alamar Blue™ assay was performed using previously described procedures [42,43]. Briefly, after 24 h, the compounds were dissolved in DMSO and added to each well to give final concentrations of 50 µg/mL. Plates were incubated for 48 h. Control groups had final well concentrations of 0.1% DMSO. Two hours before the end of the incubations, 10 µL of Alamar Blue™ was added to each well. The fluorescent signal was monitored with a multiplate reader using 530-560 nm excitation and 590 nm emission wavelengths.

Conclusions

This work represents a significant contribution to the knowledge of the chemical composition of A. ulei, including the structural elucidation of a new indole alkaloid, the identification of two indole alkaloids not previously reported in Aspidosperma spp. and the identification of seven known compounds for the first time in A. ulei.
The isolated indole alkaloid 20-epi-dasycarpidone (11) was shown to exhibit moderate inhibitory activity against the K1 strain of P. falciparum. Furthermore, the presence of the highly active antimalarial indole alkaloids olivacine and uleine in A. ulei extracts was confirmed in the present study, as was the absence of in vitro cytotoxicity of several isolated compounds. Taken together, these results lend further support to earlier reports regarding the antimalarial potential of botanicals prepared from A. ulei and its isolated antiplasmodial and antimalarial components.
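The growth-inhibition readout and the IC50 values reported in this work can be illustrated with a short sketch. The IGP% formula and the linear-interpolation IC50 below are conventional choices assumed by us, not quoted from the paper:

```python
# Hedged sketch: percent growth inhibition relative to untreated
# controls, and an IC50 read off by linear interpolation between
# the two concentrations bracketing 50% inhibition.
# (Our illustration of the conventional calculations.)

def igp_percent(parasitemia_treated: float, parasitemia_control: float) -> float:
    """Inhibition of growth of parasites (%), relative to controls."""
    return 100.0 * (1.0 - parasitemia_treated / parasitemia_control)

def ic50(concs, inhibitions):
    """Interpolate the concentration giving 50% inhibition.
    concs ascending; inhibitions are the matching IGP% values."""
    pairs = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 <= 50.0 <= i2:
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition not bracketed by the data")

assert igp_percent(0.75, 1.5) == 50.0             # half the control parasitemia
assert ic50([1.0, 3.0, 10.0], [20.0, 40.0, 80.0]) == 4.75
```

A concentration series such as the 100-0.14 µg/mL range used in the assay would supply the `concs`/`inhibitions` inputs; curve-fitting (e.g. a sigmoidal model) is the more rigorous alternative to simple interpolation.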
Brain Ciliary Neurotrophic Factor (CNTF) and hypothalamic control of energy homeostasis

Abstract: Cytokines play an important role in energy-balance regulation. Notably leptin, an adipocyte-secreted cytokine, regulates the activity of hypothalamic neurons that are involved in the modulation of appetite. Leptin decreases appetite and stimulates weight loss in rodents. Unfortunately, numerous forms of obesity in humans seem to be resistant to leptin action. The ciliary neurotrophic factor (CNTF) is a neurocytokine that belongs to the same family as leptin and that was originally characterized as a neurotrophic factor that promotes the survival of a broad spectrum of neuronal cell types and that enhances neurogenesis in adult rodents. It presents the advantage of stimulating weight loss in humans, despite the leptin resistance. Moreover, the weight loss persists several weeks after the cessation of treatment. Hence, CNTF has been considered as a promising therapeutic tool for the treatment of obesity and has prompted intense research aimed at identifying the cellular and molecular mechanisms underlying its potent anorexigenic properties. It has been found that CNTF shares signaling pathways with leptin and is expressed in the arcuate nucleus (ARC), a key hypothalamic region controlling food intake. Endogenous CNTF may also participate in the control of energy balance. Indeed, its expression in the ARC is inversely correlated to body weight in rats fed a high-sucrose diet. Thus hypothalamic CNTF may act, in some individuals, as a protective factor against weight gain during a hypercaloric diet and could account for individual differences in the susceptibility to obesity.
Obesity is a chronic metabolic disease with complex, multiple causes, leading to an imbalance between energy intake and expenditure and to the accumulation of large amounts of body fat. It is caused by inherited as well as acquired factors, including excessive food intake, a sedentary lifestyle and unhealthy eating habits. During the past 20 years, obesity among adults has risen significantly with urbanization, economic development and market globalization. According to World Health Organization (WHO) statements, more than one billion people worldwide are overweight or suffer from obesity, and the number of affected children has more than doubled since 1980 in the USA and Europe. In France, the latest data from Roche show that overweight and obesity affect, respectively, more than 30% and 14.5% of adults (ObEpi-Roche, 2009). Far more worrying are the increase and acceleration of this problem in developing countries; based on current trends, it is predicted that levels of obesity will continue to rise unless action is taken now (McLellan, 2002). The consequences of obesity for adults are well known. Obesity contributes to the development of many diseases, including diabetes, hypertension, dyslipidemia (for instance, high total cholesterol or high levels of triglycerides), stroke, cardiovascular disease, and some cancers (Abelson and Kennedy, 2004). As a result, the obesity epidemic has prompted important efforts to develop safe and potent therapies. However, currently approved drugs for obesity, such as appetite suppressants, have limited efficacy and act acutely, with patients rapidly regaining weight after the cessation of treatment. The neurocytokine ciliary neurotrophic factor (CNTF) seems to deviate from this paradigm, since its administration to rodents or patients maintains lowered body weight several weeks after terminating treatment (Lambert et al., 2001). CNTF is a 200-amino acid cytokine that belongs to the IL-6 family.
It is expressed in both the peripheral and the central nervous systems by neuronal and glial cells. Originally, CNTF was shown to promote the survival of ciliary ganglion neurons (Barbin et al., 1984; Helfand et al., 1976) and to play a major role in the adult nervous system's early response to lesions.
Today, we know that its spectrum of functions is much broader, since it includes the differentiation and/or survival of a variety of nervous cells such as motor neurons, oligodendrocytes and astrocytes (Hughes et al., 1988; Mayer et al., 1994; Sendtner et al., 1992). In an initial clinical trial designed to test the efficacy of a CNTF analogue (Axokine, Regeneron Pharmaceuticals, Tarrytown, NY) in the treatment of amyotrophic lateral sclerosis, a degenerative motor neuron disease, some patients suffered a substantial weight loss (Miller et al., 1996a; Miller et al., 1996b). Since then, the mechanisms by which CNTF induces weight loss have been deciphered using animal models: CNTF mimics the ability of leptin to reduce food intake and to induce fat loss. Indeed, similar to leptin, an adipocyte-secreted cytokine well known for its role in the long-term homeostasis of body weight, CNTF reduces appetite and body fat by providing a signal of energy intake and energy stores in the body to the arcuate nucleus (ARC) of the hypothalamus, a nucleus involved in hunger control (Markus, 2005). Adjacent to the third ventricle and to the median eminence, the ARC is ideally located to be a putative brain sensor of factors circulating in the blood and the cerebrospinal fluid. Notably, the ARC integrates changes in circulating levels of nutrients and hormones such as leptin and insulin to respond to the body's energy requirements (Schwartz, 2000). The ARC contains two main neuronal populations that exert opposite effects on energy balance: neuropeptide Y (NPY)-producing neurons stimulate appetite, while pro-opiomelanocortin (POMC)-synthesizing neurons inhibit it. In rats, the anorexigenic action of exogenous CNTF has been associated with a decrease in NPY gene expression (Xu et al., 1998) and with an increase in POMC transcription (Ambati et al., 2007).
Interestingly, the chronic administration of CNTF causes a decrease in food intake and body weight without inducing the rebound effect at the cessation of treatment that is usually observed after a sustained reduction in caloric intake. This effect has been attributed to a resensitization of the ARC to leptin due to CNTF-induced neurogenesis (Kokoeva et al., 2005). Efforts to understand the mechanisms of action of CNTF in the nervous system have led to the identification of a three-component receptor complex for this cytokine. CNTF first binds to its specific CNTF receptor (CNTFRα), which does not play a direct role in signal transduction (Davis et al., 1993a). CNTFRα exists in two forms, membrane-bound and soluble: the glycosylphosphatidylinositol linkage of CNTFRα to the cell membrane can be cleaved by phospholipases, releasing CNTFRα to act as a soluble protein (Taga et al., 1989). Binding of CNTF to the membrane-bound or soluble CNTFRα then induces heterodimerisation of the β components of the receptor complex, gp130 and LIF receptor β (LIFRβ), which trigger intracellular signaling cascades (Davis et al., 1993b). The β components of the CNTF receptor complex are preassociated in an inactive state with the cytoplasmic Jak/Tyk tyrosine kinases. The β component dimerisation initiates the activation of mitogen-activated protein kinase/extracellular signal-regulated kinase (MAPK/ERK) and Jak/Tyk kinases, which, in turn, phosphorylate the signal transducer and activator of transcription 3 (STAT3). In this condition, phospho-STAT3 forms a dimer that translocates to the nucleus, where it activates the transcription of target genes (Stahl and Yancopoulos, 1993).

Figure 1. CNTF levels determined in the hypothalamus by Western blot negatively correlate with body weight (g) in high-sucrose diet fed rats (R² = 0.7382, p < 0.001) but not in control diet fed rats (R² = 0.0618).
The activation of this signaling pathway by CNTF is negatively modulated by the suppressor of cytokine signaling (SOCS) family of proteins (Bjorbaek et al., 1999). Thus, in rodents, CNTF shares signaling cascades with leptin in the ARC. More interesting is the fact that CNTF, which signals through leptin-like pathways, has been shown to bypass leptin resistance in the diet-induced obesity model, a more representative model of human obesity (Gloaguen et al., 1997; Munzberg et al., 2005). We have shown that leptin, but not CNTF, is able to induce protein-tyrosine phosphatase-1B (PTP-1B) expression. In addition, and contrary to leptin, CNTF signaling was not affected by PTP-1B over-expression, suggesting that PTP-1B is a key divergent element between the CNTF and leptin signaling pathways. This may at least partially explain the efficacy of CNTF administration in reducing food intake and body weight in the leptin-resistant state (Benomar et al., 2009). It is noteworthy that CNTF is highly expressed both in neurons and astrocytes of the hypothalamic nuclei that regulate energy balance, including the POMC anorexigenic neurons located in the ARC. To test the hypothesis of a relationship between the hypothalamic expression of CNTF and the control of energy homeostasis, the influence of a 6-week high-sucrose diet on CNTF levels in the hypothalamus and the ARC was studied in rats (Vacher et al., 2008). The high-sucrose diet induces a 2-fold increase in hypothalamic CNTF levels compared to control. Interestingly, while no association is observed between hypothalamic CNTF levels and body weight in control animals, a significant inverse correlation appears in rats fed the high-sucrose diet (figure 1). Indeed, in these conditions, animals with lower body weight exhibit higher amounts of CNTF in the hypothalamus. The variations in protein contents parallel those of mRNA levels. Moreover, the increase in CNTF expression is specific to the ARC, as evidenced by immunohistochemical analysis.
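The correlation analysis behind this observation can be illustrated with a minimal sketch: the coefficient of determination (R²) of hypothalamic CNTF level against body weight. The numbers below are toy values of ours, not the study's data:

```python
# Hedged illustration (toy numbers): R-squared of a simple linear
# relationship between body weight and hypothalamic CNTF level,
# the statistic reported for Figure 1 of the study.

def r_squared(xs, ys):
    """Coefficient of determination for a simple linear fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

weights = [380, 400, 420, 440, 460]   # body weight (g), toy values
cntf = [10.0, 8.9, 8.1, 7.2, 6.0]     # relative CNTF level, toy values
r2 = r_squared(weights, cntf)
assert 0.9 < r2 <= 1.0  # strong (inverse) linear relationship in the toy data
```

Note that R² alone does not indicate the direction of the relationship; the negative slope (higher CNTF in lighter animals) is what makes the correlation inverse.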
Thus, CNTF may be considered as an endogenous modulator of energy homeostasis in the ARC that possibly contributes to the protection of some individuals against diet-induced weight gain. CNTF could account for individual differences in the susceptibility to obesity. Genetic polymorphism studies corroborate the involvement of endogenous CNTF in the control of body weight. Indeed, it has been found that a null mutation in the CNTF gene is associated with a significant increase in body mass in humans (Heidema et al., 2010; O'Dell et al., 2002), and that variants in the CNTF or CNTFRα gene in humans are associated with a lower age at onset of eating disorders (Gratacos et al., 2010). The anorexigenic properties of exogenous and endogenous CNTF have conferred on this cytokine a promising therapeutic potential in the treatment of obesity. However, the comprehension of the physiological significance of neural CNTF action is still incomplete because CNTF lacks a signal peptide (Sendtner et al., 1994), and thus may not be secreted by the classical exocytosis pathways. We have previously shown that CNTF distribution shares similarities with that of its receptor subunits in the rat ARC. Indeed, a majority of neurons and astrocytes express both CNTF and CNTFRα, and both β components of the receptor are ubiquitous in the rat ARC (Figure 2) (Vacher et al., 2008). Thus, as previously envisaged in cell culture (Monville et al., 2002), a direct intracellular action may constitute a plausible mechanism of CNTF action. The involvement of such a process in the protective action of endogenous CNTF against diet-induced weight gain deserves further investigation. Nevertheless, these data could influence future drug discovery efforts for the development of new therapeutic targets against obesity.
[Figure 2. Immunohistochemical detection of LIFR (green; panels A, B, C, with merge) in the rat arcuate nucleus, counterstained with ethidium homodimer-2 (red). Confocal Z-stacks of three 0.5 µm-thick focal planes. 3V, third ventricle; ARC, arcuate nucleus; EtH-2, ethidium homodimer-2; LIFR, LIF receptor; ME, median eminence. Scale bars = 50 µm.]
Negative regulation of NF-κB signaling in T lymphocytes by the ubiquitin-specific protease USP34 Background NF-κB is a master gene regulator involved in a plethora of biological processes, including lymphocyte activation and proliferation. Reversible ubiquitinylation of key adaptors is required to convey the optimal activation of NF-κB. However, the deubiquitinylases (DUBs), which catalyze the removal of these post-translational modifications and participate in resetting the system to basal levels following T-cell receptor (TCR) engagement, continue to be elucidated. Findings Here, we performed an unbiased siRNA library screen targeting the DUBs encoded by the human genome to uncover new regulators of TCR-mediated NF-κB activation. We present evidence that knockdown of Ubiquitin-Specific Protease 34 (USP34) selectively enhanced NF-κB activation driven by TCR engagement, similarly to siRNA against the well-characterized DUB cylindromatosis (CYLD). From a molecular standpoint, USP34 silencing spared upstream signaling but led to a more pronounced degradation of the NF-κB inhibitor IκBα, and culminated in an increased DNA-binding activity of the transcription factor. Conclusions Collectively, our data unveil USP34 as a new player involved in the fine-tuning of NF-κB upon TCR stimulation. Findings Nuclear factor-κB (NF-κB) transcription factors initiate the transcription of genes essential for mounting an adequate immune response [1]. Ubiquitously expressed NF-κB heterodimers of Rel family proteins are normally sequestered in the cytosol of the cells by Inhibitor of NF-κB (IκB) proteins [2]. In lymphocytes, the ligation of antigen receptors assembles the so-called CBM complex, which consists of the scaffold CARMA1 and the heterodimer BCL10/MALT1 [3]. The CBM microenvironment drives oligomerized BCL10 and MALT1 to undergo K63-linked non-degradative ubiquitinylation [4][5][6][7].
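The hit-calling step of such a reporter screen can be sketched numerically. The following is a hypothetical illustration only (invented fold-change values and a robust z-score cutoff; the paper does not describe its scoring method): each siRNA's luciferase signal is expressed as fold-change over the non-targeting control, and outliers such as CYLD and USP34 siRNAs are flagged.

```python
import numpy as np

# Hypothetical sketch of hit-calling in a 98-DUB siRNA reporter screen.
# Values are invented for illustration, not the paper's raw data.
rng = np.random.default_rng(1)

dubs = [f"DUB_{i:02d}" for i in range(96)] + ["CYLD", "USP34"]
fold_change = np.concatenate([
    rng.normal(1.0, 0.15, 96),   # most DUB knockdowns: no effect on NF-kB reporter
    [2.1, 2.0],                  # known (CYLD) and putative (USP34) negative regulators
])

# Robust z-score: median/MAD is less sensitive to the hits themselves.
median = np.median(fold_change)
mad = np.median(np.abs(fold_change - median))
z = 0.6745 * (fold_change - median) / mad

hits = [name for name, score in zip(dubs, z) if score > 3.0]
print(hits)
```

Knockdowns whose reporter signal rises far above the plate median are called as candidate negative regulators, which is consistent with how CYLD served as the positive control here.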
This authorizes the recruitment and activation of the IκB kinase (IKK) complex, which comprises two catalytic subunits (IKKα and IKKβ) and a regulatory subunit (NEMO, also called IKKγ) [8]. IKK phosphorylation of IκBs precipitates their K48-linked ubiquitinylation and proteasomal elimination, and thereby allows NF-κB to translocate to the nucleus, where it binds DNA and initiates transcription [8]. NF-κB-dependent neosynthesis of IκBs subsequently drives NF-κB to shuttle back to the cytosol [1]. Although reversible ubiquitinylation processes are central for T-cell receptor (TCR)-mediated NF-κB activation, the deubiquitinylases (DUBs) in charge of trimming these poly-ubiquitin chains to ensure optimal signaling, as well as to reset the system to basal levels, remain poorly defined [9]. Thus far, two DUBs, namely cylindromatosis (CYLD) and A20 (also known as TNFAIP3), have been demonstrated to negatively regulate antigen receptor signaling [9,10]. Herein, we provide evidence that Ubiquitin-Specific Protease 34 (USP34) also contributes to the fine-tuning of NF-κB upon TCR engagement. To identify additional negative regulators of TCR-mediated NF-κB activation, we conducted a siRNA library screen against 98 DUBs through a gene reporter luciferase assay in Jurkat T cells stimulated with either anti-CD3 and anti-CD28 antibodies or PMA plus ionomycin to mimic TCR engagement (Figure 1A and Additional files 1 and 2). As expected, CYLD silencing led to an enhanced NF-κB activity upon TCR stimulation (Figure 1A). Furthermore, this screening also uncovered siRNA sequences specific for USP34 that potentiated NF-κB activation with a magnitude similar to CYLD siRNA (Figure 1A). USP34 encompasses a 404 kDa protein with a central catalytic domain [11]. However, little is known about this DUB, although it was previously linked to the Wnt developmental signaling pathway [12].
Subcellular fractionation experiments showed that USP34 was essentially distributed in the cytosol of cells regardless of TCR stimulation, and was notably absent from the nucleus and organelles (Figure 1B and Additional file 3A). We next verified by immunoblot that CYLD and USP34 endogenous levels were efficiently decreased by their respective siRNA sequences (Figure 1C). Of note, an additional siRNA duplex specific for USP34 was also included to reinforce our initial findings (named sequence 3). Consistent with the primary screening, NF-κB reporter activity was similarly boosted upon TCR stimulation in USP34- and CYLD-silenced Jurkat cells when compared to control non-targeting siRNA-transfected cells (Figure 1D and E). As a consequence, the levels of the NF-κB targets NFKBIA (IκBα), interleukin-2 (IL-2) and TNFα, as measured by RT-PCR, were increased in USP34-knockdown cells (Figure 1F). Accordingly, downstream IL-2 secretion was enhanced in supernatants of USP34-silenced cells (Figure 1G). Finally, ectopic expression of a plasmid encoding the catalytic domain of USP34 (USP34-CD [13]) markedly dampened TCR-mediated NF-κB activity (Figure 1H). Because USP34-CD is a large segment (383 amino acids), it is possible that, in addition to the catalytic domain, it also comprises a domain required for binding to its partners to regulate NF-κB in lymphocytes. Collectively, our data suggest that USP34 is a cytosolic protein that functions as a negative regulator of NF-κB upon TCR engagement. In addition to NF-κB, TCR ligation kindles various signaling pathways, including Nuclear factor of activated T-cells (NFAT) and the Mitogen-activated protein kinase (MAPK) Extracellular signal-regulated kinase (ERK) [14]. Gene reporter assays showed only a modest increase in NFAT activation in USP34-silenced cells when compared to control cells (Figure 2A). Furthermore, ERK phosphorylation occurred normally without USP34 (Figure 2B).
Keeping with this, no overt change in the general pattern of tyrosine phosphorylation was observed upon TCR stimulation, further arguing against a general impairment of TCR signaling in the absence of USP34 (Figure 2C). We next investigated whether USP34 also curtailed NF-κB activity emanating from TCR-autonomous signaling triggers. To this end, USP34-silenced Jurkat cells were stimulated with the cytokine TNFα or with the genotoxic stress agent etoposide, which functions via an unconventional ATM/PIASy/sumoylated-NEMO axis [15]. Paralleling the situation with TCR, knocking down USP34 markedly increased NF-κB in cells treated with TNFα or etoposide (Figure 2D). Supporting previous studies with CYLD-deficient cells [10,16], CYLD silencing in Jurkat cells also increased TNFα- and etoposide-mediated NF-κB activation (Additional file 4). Combined, these results indicate that USP34 shares some functional similarities with CYLD and selectively targets the NF-κB signaling pathway. To gain insight into the signaling basis for the exacerbated NF-κB activity in USP34-depleted cells, we first examined the BCL10 and MALT1 ubiquitinylation status, since it governs the strength of TCR-mediated NF-κB activation [4][5][6]. BCL10 ubiquitinylation, which can be assessed in fractions enriched with heavy membranes [17], remained unchanged without USP34 (Additional file 3A). Moreover, pulldown of CK1α to precipitate the CBM complex [7,17] showed similar amounts of ubiquitinylated MALT1 bound in both control- and USP34-siRNA-transfected cells (Additional file 3B). Keeping with this, BCL10 association with CARMA1 occurred normally without USP34 (Additional file 3C). We finally evaluated the impact of USP34 on the phosphorylation of IKK, which reflects its activation [8]. IKK phosphorylation was not exacerbated in those cells, and rather appeared slightly decreased (Figure 3A and Additional file 3D). Although puzzling, this might result from a feedback loop triggered by enhanced NF-κB.
We next directly assessed NF-κB DNA-binding activity. Consistent with the gene reporter assays, more active NF-κB-DNA complexes were detected in nuclear extracts from TCR-stimulated cells when USP34 was silenced (Figure 3B). Although no obvious difference in the translocation of the NF-κB subunit p65 into the nucleus was detected, the degradation of the primary NF-κB inhibitor IκBα was more pronounced in cytosolic fractions from USP34-silenced cells when compared to control cells (Figure 3C). Accordingly, IκBα degradation was also prolonged or more dramatic in lysates from cells transfected with USP34 siRNA, even at longer time points up to 3 hours (Figure 3D). Hence, our results suggest that USP34 likely functions downstream of the CBM-IKK nexus to enhance NF-κB activation. Almost 100 DUBs have been identified in the human genome, and yet only a few have been ascribed a function [11]. As for TCR signaling, the well-studied A20 and CYLD thwart NF-κB at different levels [10]. CYLD targets and inhibits the ubiquitin-dependent IKKβ kinase TAK1 and therefore prevents aberrant lymphocyte activation [18,19], while A20 dampens NF-κB activity by trimming K63-ubiquitin chains attached to MALT1 [20,21]. Our study now unveils USP34 as an additional negative regulator of NF-κB in lymphocytes. How USP34 tempers NF-κB activity remains unclear. In contrast to CYLD and A20, which target apical signaling [18,20], USP34 rather seems to function downstream of the IKK complex. Given that USP34 does not bind to the NF-κB core components (our unpublished results), we favor a model in which USP34 impacts the activity of a cytosolic co-activator to ensure IκBα fine-tuning [22,23]. Alternatively, USP34 might also intervene at other checkpoints controlling NF-κB signal outcome and intensity, such as post-translational modifications, nuclear shift, or DNA three-dimensional structure [22,24,25].
Nevertheless, our data illustrate how various layers of control cooperate to ensure the fine-tuning of NF-κB following engagement of the TCR.
Additional files
Additional file 1: Methods description.
Additional file 2: Design of the siRNA library screen.
Morphological variation in Diospyros mespiliformis (Ebenaceae) among different habitats in Benin, West Africa Interesting morphological traits in tree organs are essential for selecting the best plant germplasm. Variation in morphometric traits of leaf, fruit and seed of the multipurpose tree Diospyros mespiliformis was studied in two climatic zones in Benin using 735 trees from 4 major habitats (provenances) in each climatic zone. Morphological trait measurements were combined with architectural parameters and analyzed using two-way ANOVA, principal component analysis and hierarchical clustering. Results indicated significant differences in leaf, fruit and seed morphological traits between climatic zones and habitats across the study area. Compared to the other three habitats (low dimensions), leaves from woodland showed large dimensions (µL_leaf = 130 mm, µl_leaf = 50 mm). The Soudano-Guinean zone recorded the highest fruit morphological trait values (µd_fruit = 36 mm, µl_fruit = 30 mm) while the Soudanian zone had the lowest (µd_fruit = 12 mm, µl_fruit = 8 mm). More seeds per fruit were recorded in woodlands and parklands. Hierarchical analysis grouped the ebony morphotypes into four clusters. There was no significant correlation between the number of seeds per fruit and other fruit traits. However, significant and strong positive correlations were found between morphometric characters and bearing and architectural parameters (R = 0.96). Provenance significantly affected variation in organ-related traits. Correlation relationships suggested morphotypes for breeding improvement. Interesting and desired characteristics delineating individuals and populations can guide future selection of targeted ebony trees, with the aim of improving this high-value species in an agroforestry domestication program.
INTRODUCTION Removal of forest flora contributes to crucial global environmental concerns such as climate change and loss of biodiversity, in addition to regional and local problems (Romeiras et al., 2018; Okpanachi et al., 2019; Wade et al., 2018; Tiokeng et al., 2019).
*Corresponding author: E-mail: laurentgnonlonfin@gmail.com. Tel: +22997113964.
Local conservation agencies must contend with the loss of genetic resources and associated traditional knowledge (Catarino et al., 2019). Forests are subject to haphazard modification following anthropogenic pressures, including tree felling for agriculture. Over time, human activities have led to the destruction of economically and culturally important tree species (Kimpouni et al., 2019). In order to reduce the risk of extinction, urgent actions are needed to develop protection, domestication, propagation and valorization programs for food and woody tree species. All forms of domestication begin with the exploitation of the existing natural variability while selecting, with the help of local populations, the individuals with the most interesting phenotypes for the considered criteria, designated as 'plus trees' (Dicko et al., 2019). The study of morphological variability is appropriate for overall genetic improvement and tree varietal selection activities. It enables the identification of attractive morphological descriptors: traits linked to the origin of seed sources and possible genetic groups. Indeed, the phenotypic variation observed in plants is generally a response to differences in climatic conditions that reflects either adaptive evolution or phenotypic plasticity, or a combination of both.
In the Republic of Benin, the variability of woody savannah and indigenous fruit species is poorly studied, even as the ecosystems that support them are highly threatened and native species are lost, along with their gene pools. Diospyros mespiliformis (Ebenaceae) is an evergreen tree found in the tropical forests of Sub-Saharan Africa, from Ethiopia to Swaziland (Wallnöfer, 2001). It is also found in Angola, Nigeria, South Africa, Tanzania, Uganda, Yemen, Zambia and Zimbabwe (Arbonnier, 2002). D. mespiliformis, commonly known as the African ebony tree, is an indigenous fruit tree species widespread in the Soudano-Guinean and Soudanian zones of Benin. The fruits of the species are edible, while the bark, leaves, stem and roots are employed for various purposes (Gnonlonfin et al., 2018). Wide environmental and geographical variation often occurs within the natural range of plant species (Akoegninou et al., 2006). Adaptation of a species to this variation may produce different morphological and physiological characteristics, resulting in the development of ecotypes. Therefore, the conservation of a forest species requires knowledge of morphological variability to differentiate individuals and target interesting morphotypes. With an increasing demand for products and by-products of ebony trees, their supply and diversity are threatened by increasing deforestation. Therefore, the necessity for domestication (which involves selection of elite trees for multiplication) is paramount. As the species is affected by habitat fragmentation and other anthropogenic factors, Beninese ebony diversity is declining (Abasse et al., 2011). Therefore, enriching quantitative databases and monitoring programs for conservation is of great significance. Moreover, ebony species play a central role in many ecosystems. There is a paucity of studies on ebony morphology in Benin.
Based on a literature search, there is evidence indicating the existence of a number of local types differing in habitat, vigor, size, fruit quality and the vitamin content of leaves, fruits and seeds (Mkwezalamba, 2015). However, research gaps still exist in eliciting different morphological traits, check-listing plus trees, and cataloging and monitoring ebonies. Consequently, it is essential to evaluate the species in detail at the morphological level, with the aim of providing knowledge on which to base plus-tree selection for tree quality improvement and conservation. This is with the view to conserving the germplasm of indigenous plant species and containing biodiversity loss that may lead to extinction. Specifically, this study aims to: (1) identify different morphological groups within D. mespiliformis based on morphological traits of interest; and (2) analyze the influence of habitat type and climatic zone on the morphological variability of D. mespiliformis. Study area This study was carried out in the two northern climatic zones of the Republic of Benin, where the species is predominant. This zone lies between latitudes 7°30' N and 12°40' N, and longitudes 1°6' E and 3°45' E. The population of the study area is estimated at 2,941,180 inhabitants (INSAE, 2015). Livelihood activities carried out by the inhabitants include subsistence agriculture, ranching, fishing and hunting, trade and craft (Dicko, 2016). Characteristics of D. mespiliformis D. mespiliformis is one of the most important native wild species introduced into agroforestry systems in Sahelian Africa (Gnonlonfin et al., 2018). Adult trees can grow up to 20-50 m in height and 150-300 cm in diameter at breast height. Leaves are alternate, shiny green above and paler beneath, 3.5-19 cm long, 1.5-7.5 cm wide, and oblong-elliptic. Flowers are pentamerous, white and fragrant.
Flowering starts in April-May during the rainy season and fruits reach maturity in November-January during the dry season. Fruits are usually globose, fleshy, up to 3 cm in diameter, greenish and pubescent when young, yellowish to orange-yellow and glabrous when ripe, with dark brown seeds (Arbonnier, 2002). Sampling population and data collection All structural and morphological characteristics were recorded using the KoBo toolbox for smartphones according to Olajide (2019). A multi-stage sampling procedure involving purposive sampling was applied. Data were collected from areas where D. mespiliformis occurred, from April 2017 to October 2018. A total of 22 villages were selected based on the presence of at least one adult D. mespiliformis individual (DBH > 10 cm). Within the selected villages and for each habitat (woodlands with large-DBH trees and savannahs with small-DBH trees in natural or protected areas, parklands in agricultural systems or non-protected areas, and inundated forests) (Arbonnier, 2002), four circular or rectangular plots of 1000 m² were randomly installed. This resulted in 16 plots of 1000 m² per village. In total, 263 circular and 88 rectangular plots were installed across the study area. In each plot, the vegetation type and species other than D. mespiliformis (adult individuals and regeneration) (Arbonnier, 2002; Akoegninou et al., 2006) were recorded using identification keys or on-site recognition based on field experience. All D. mespiliformis trees were inventoried and registered via a Global Positioning System (GPS). The recorded geographical coordinates of the central tree in each plot were charted onto a map of Benin (Figure 1). Morphological data of D.
mespiliformis populations. For morphological characterization, the following variables were collected according to Ouinsavi (2010): organ descriptive parameters (leaf length from the stalk to the apex, leaf width at the center of the leaf, mature fruit diameter (globose form of the fruit) and fruit length using a Vernier caliper, and the number of seeds per fruit by counting) combined with bearing and architectural parameters (stem diameter at breast height, total height, bole height, and crown diameter). Bearing and architectural parameters of the sampled trees were measured, while organ descriptive parameters were measured on five leaves and five fruits each, randomly collected from adult trees. For each fruit, the number of seeds was recorded. A total of 1755 leaves and 1585 fruits were measured (Ouinsavi, 2010; Dadegnon et al., 2015). The Soudanian climatic zone is a woodland and savannah region with more ferruginous soils. The mean annual temperature in this zone is 35°C, while mean annual rainfall varies between 900-1100 mm. The Soudano-Guinean climatic zone is a transitional zone between the sub-humid Guinean and Soudanian zones. This zone is characterized by a vegetation mosaic of forest islands, gallery forests and savannahs. Data treatment and analysis For organ descriptive parameter (leaf and fruit dimension) measurements, data were averaged for individual trees before undertaking the series of multivariate analyses using appropriate procedures. A generalized linear model was then used to test the effect of climatic zones and habitats on tree organ descriptive parameters (leaf and fruit dimensions). For the number of seeds per fruit, the study used a generalized linear model of the Poisson family in R. Two-way ANOVA was used to analyze the effect of climatic zone and habitat on morphological variation. Five significance codes were used: '***' 0.001, '**' 0.01, '*' 0.05, '.' 0.1, ' ' 1.
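The two-way ANOVA step above can be sketched numerically. This is a hedged illustration only (numpy, synthetic leaf-length data with invented cell means; the study itself used R): for a balanced zone × habitat design, the total sum of squares decomposes into zone, habitat, interaction and error components.

```python
import numpy as np

# Balanced two-way ANOVA sketch (numpy only) for a zone x habitat design.
# Data are synthetic stand-ins for leaf-length measurements in mm.
rng = np.random.default_rng(2)

zones, habitats, n = 2, 4, 10          # 2 climatic zones, 4 habitats, 10 trees/cell
cell_means = np.array([[130, 90, 100, 95],
                       [110, 80, 85, 90]], dtype=float)   # invented
y = cell_means[:, :, None] + rng.normal(0, 8, (zones, habitats, n))

grand = y.mean()
zone_m = y.mean(axis=(1, 2))           # per-zone means
hab_m = y.mean(axis=(0, 2))            # per-habitat means
cell = y.mean(axis=2)                  # per-cell means

ss_zone = habitats * n * ((zone_m - grand) ** 2).sum()
ss_hab = zones * n * ((hab_m - grand) ** 2).sum()
ss_inter = n * ((cell - zone_m[:, None] - hab_m[None, :] + grand) ** 2).sum()
ss_err = ((y - cell[:, :, None]) ** 2).sum()

# F statistic for the habitat main effect
f_hab = (ss_hab / (habitats - 1)) / (ss_err / (zones * habitats * (n - 1)))
print(f"F(habitat) = {f_hab:.1f}")
```

A Poisson-family GLM for the seed counts would follow the same model formula (count ~ zone + habitat) but with a log link, as the authors describe for R.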
Principal component analysis (PCA) was then performed on the untransformed morphometric data using the correlation matrix (Kouyaté, 2005; Rindyastuti, 2021). A correlogram and dendrogram were finally generated to explain the degree of relationship between bearing and architectural parameters and tree organ descriptive parameters (Juma et al., 2020). Morphological parameters and occurrence of ebony trees in Benin The largest values for leaf length (220 mm) and width (80 mm), fruit diameter (30 mm) and width (60 mm), and number of seeds per fruit (6) were found in woodlands and parklands of the Soudano-Guinean transition zone, while the lowest (60, 25, 15, 10 and 20 mm, and 2, respectively) were obtained in inundated forests of the Soudanian zone (Table 1 and Figure 1). Effect of climatic zones and habitats on leaf, fruit and seed variation Leaf characteristics The pairwise comparison of leaf dimensions in the ANOVA output indicated a high degree of variability in leaf shape at tree level. Five leaf shapes, that is, elliptic, oblong, oblong-elliptic, oblanceolate-elliptic and lanceolate-elliptic, were recorded across the 735 trees studied in the two climatic zones and four habitats (Figure 2). About 74% of the trees exhibited leaf shapes commonly listed in the description of D. mespiliformis (Figure 3). Table 2 indicates significant effects on leaf dimensions; between the Soudano-Guinean and Soudanian climatic zones, the habitat-associated probability is low (P < 0.0013). Significant differences observed among climatic zones and habitats for leaf length and width indicated high variation in leaf form across climatic zones and habitats. Indeed, field observations revealed that leaves from the Soudano-Guinean transition zone were larger than those harvested in the Soudanian zone. Likewise, leaves from swampy forests were larger than those from other habitats.
This can be explained by the fact that the Soudano-Guinean transition zone receives more rainfall than the Soudanian zone. This suggests that D. mespiliformis is a wide-ranging tree and several leaf traits can be selected from any climatic zone or habitat: there is a potential for genetic selection among individuals based on leaf morphology (Table 2 and Figure 1). Boxplots of the mean values with standard errors associated with each habitat are also shown (Figure 1). For instance, this figure shows that three habitats (inundated forests, parkland and savannah) were not significantly different in terms of plant part dimensions. Likewise, plant organ dimensions did not differ significantly among woodland, savannah and inundated forests. Plant part dimensions (leaves, fruits) in parkland differed significantly from those in woodland. Fruit characteristics An analysis of the fruit characteristics of ebony trees revealed 4 fruit shapes. Data on the distribution of the four most common fruit shapes are shown in Figure 4. Results indicated significant effects on fruit dimensions. Significant differences observed among climatic zones and habitats for fruit diameter and width indicated high variation in fruit form across climatic zones and habitats. Indeed, field observations revealed that fruits from the Soudano-Guinean transition zone were larger than those harvested in the Soudanian zone. Likewise, fruits from swampy forests were larger than those from other habitats. This can be explained by the fact that the Soudano-Guinean transition zone receives more rainfall than the Soudanian zone. This suggests that D. mespiliformis is a wide-ranging tree and several fruit traits can be selected in any climatic zone or habitat: there is a potential for genetic selection among individuals based on fruit form (Table 3 and Figure 2).
Table 4 shows the results of the analysis of variance of the number of seeds as influenced by climatic zones and habitats. The number of seeds differed significantly with climatic zone and habitat. In total, significant differences for a variety of traits of economic importance (mainly fruit form and number of seeds per fruit) provide a basis for genetic selection programs (genetic research), as does the significant variation observed for the morphological parameters (leaf length and width, fruit diameter and length). Morphological variations among individuals of D. mespiliformis In order to confirm the previous association between quantitative variables, principal component analysis (PCA) was performed on bearing and architectural parameters as well as tree organ descriptive parameters. The principal component analysis showed that most of the morphological variation (83.8%) was explained by the first two principal axes (Figure 5). Morphological trait coefficients (that is, eigenvectors) indicated that DBH, total height, bole height, crown diameter and crown height were the variables loading positively on the first axis, while leaf dimensions (length and width), fruit dimensions (diameter and length) and number of seeds were the variables loading negatively on the second principal axis (Figure 5). Along principal component axis 1, most individuals from savannah and parkland occupied the right side, whereas the mixed group (individuals from all habitats) occupied the center as well as the left side (Figure 5). The populations on the first axis were differentiated on the basis of tree height and leaf width. On the second principal axis, most individuals from inundated forest and woodland occupied the middle lower part, while the mixed populations occupied the middle around the central point. It is concluded that there is a strong link between certain bearing parameters and tree organ descriptive parameters.
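The PCA step, where the first two axes capture 83.8% of the variation, can be reproduced in principle via an SVD of the centered data matrix. A minimal sketch, assuming synthetic data (the 8 columns stand in for the bearing/architectural and organ descriptors; none of the study's measurements are used):

```python
import numpy as np

# PCA explained-variance sketch via SVD of the centered trait matrix.
rng = np.random.default_rng(3)

n_trees, n_traits = 200, 8
latent = rng.normal(size=(n_trees, 2))           # two dominant gradients (invented)
loadings = rng.normal(size=(2, n_traits))
X = latent @ loadings + rng.normal(0, 0.3, (n_trees, n_traits))

Xc = X - X.mean(axis=0)                          # center each trait
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()              # explained-variance ratios

print(f"PC1+PC2 explain {100 * explained[:2].sum():.1f}% of the variance")
```

For a correlation-matrix PCA, as the authors describe, each column would additionally be divided by its standard deviation before the SVD.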
Scatter plot matrix of all bearing, architectural and morphological parameters To classify individuals into homogeneous groups with similar characteristics, the database of bearing and architectural characteristics and tree organ descriptive parameters was used to build a correlation matrix. This correlation matrix was subjected to pairwise analysis to determine the correlation coefficients between the sixteen variables and their significance levels using the PerformanceAnalytics package (Figure 6). This graph reveals that variables such as DBH, total height, bole height, crown diameter and crown height are highly positively correlated with one another, and that leaf length is highly correlated with leaf width, and likewise for fruit dimensions. The absence of a strong link between bearing and architectural parameters and tree organ descriptive parameters points to the possibility of forming homogeneous groups of D. mespiliformis individuals from different habitats. Hierarchical clustering and principal component analysis The dendrogram resulting from the Ward D2 method sorted the 735 ebony samples originating from the two climatic zones into four major groups (G1, G2, G3 and G4), with each group containing samples from at least three habitats (Figure 7). The largest group was G1, followed by G4 and G2, and lastly G3; all 735 ebony samples were sorted into these groups.
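The pairwise correlation step behind the scatter plot matrix amounts to a single trait-by-trait Pearson matrix. A hedged sketch with invented stand-ins for three of the sixteen variables (the study used R's PerformanceAnalytics; this numpy version only illustrates the computation):

```python
import numpy as np

# Pairwise Pearson correlations among hypothetical tree variables:
# two architectural traits that track each other, one independent organ trait.
rng = np.random.default_rng(4)

dbh = rng.normal(40, 10, 100)                       # cm, invented
total_height = 0.5 * dbh + rng.normal(0, 2, 100)    # architectural: tracks DBH
leaf_length = rng.normal(130, 15, 100)              # organ trait: independent

data = np.column_stack([dbh, total_height, leaf_length])
corr = np.corrcoef(data, rowvar=False)              # 3 x 3 correlation matrix

print(np.round(corr, 2))
```

The pattern mirrors the paper's finding: strong correlations within the architectural block, weak correlations between that block and the organ descriptors.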
Sorting individuals into the groups was based on tree organ descriptors, which unveiled four homogeneous groups. The first cluster comprised mainly individuals from all habitats with medium-sized fruits and leaves; the 2nd cluster contained mainly individuals from parkland and inundated forests with large fruits and leaves; the 3rd group comprised mainly individuals from inundated forests and savannah with small and medium-sized fruits and leaves; and the 4th group is composed of individuals on riparian soil from all habitats, but located only in the north east (Natitingou-Tanguieta-Cobbly-Toukountouna), with small fruits and leaves. Individuals from all habitats composing group 1 have the largest organ dimensions (leaf means 22-8, fruit 3-3.3) and come from both Banikoara parkland and Alafiarou woodland (Figures 6 to 8). In particular, the lack of clear grouping of the 735 samples in relation to their habitats of origin indicates the exchange of seeds between several provenances (here swampy forests, woodlands, savannahs and parklands) (Figures 7 and 8). The PCA on the ebony qualitative and quantitative morphological traits revealed extensive variation among the sampled trees at the habitat level, with four clear groupings of samples (Figure 6). The first two axes explained 91.6% of the total variation, corresponding to 77.0 and 14.6% for the first and second axes, respectively. Leaf and fruit shape (qualitative) variation across provenances Knowledge of the intraspecific diversity of ebony trees is fundamental in order to meet the demands of subsistence as well as the wellbeing of farmers (Assogbadjo et al., 2005). Qualitative and quantitative approaches were used to determine the relationship between morphological structure and the considered climatic zones and habitats.
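The Ward D2 clustering step that yields the four morphotype groups can be sketched with SciPy's hierarchical-clustering routines. This is an illustrative sketch only, using invented per-tree trait means (two traits, four well-separated morphotypes), not the study's dataset:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Ward-linkage clustering sketch: cut the dendrogram into four groups,
# mirroring the four morphotypes reported in the study.
rng = np.random.default_rng(5)

# Invented morphotype centers: (leaf length mm, leaf width mm)
centers = np.array([[130, 50], [80, 30], [100, 40], [60, 20]], dtype=float)
X = np.vstack([c + rng.normal(0, 2, (30, 2)) for c in centers])  # 120 trees

Z = linkage(X, method="ward")                     # Ward D2 on Euclidean distances
groups = fcluster(Z, t=4, criterion="maxclust")   # cut tree into 4 clusters

print(f"{len(set(groups))} clusters of sizes "
      f"{[int((groups == g).sum()) for g in sorted(set(groups))]}")
```

In practice the input matrix would hold the per-tree averaged organ descriptors, standardized before clustering so that traits on different scales contribute comparably.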
Regarding the qualitative approach, several shapes of ebony tree organs (leaves and fruits) were identified across climatic zones and habitats in the study area. This diversity of leaf shapes could be due to phenotypic plasticity influenced by climatic zone diversity and within-site conditions. The results of this study help fill the gap in information on African ebony phenotypic diversity in leaf and fruit characteristics for use in domestication and tree improvement processes (Maroyi, 2018a, b). The present results indicated that leaf and fruit morphological traits were strongly influenced by environmental factors.

Leaf and fruit measurement (quantitative) variation among environmental factors

The substantial variability between climatic zones and habitats observed for some characters, such as leaf and fruit measurements, could be linked more to environmental conditions than to genotypic factors. This interpretation rests on the fact that an interrelationship exists between the quantitative morphological features and the genetic data of individuals of the species (IPGRI, 2016). Micro-variations of soil characteristics and, to a certain extent, anthropic effects and parasitic attacks that can stunt the plant can be added to this main factor. The weak intra- and inter-sub-population variability observed for the morphological fruit descriptors (diameter and length), with the exception of the number of seeds per fruit, could be explained by cross-fertilization, since D. mespiliformis is a dioecious species. Thus, the fruit and leaf dimension variability observed among individuals is probably due to the effects of the environment (Kouyaté, 2005). Concerning the four morphotypes (sub-populations) of D.
mespiliformis obtained by the ascending hierarchical classification (Ward D2), strong intra- and inter-morphotype (sub-population) variability was observed for most of the studied morphological descriptors. Moreover, the fruit and leaf variability indicates that these four groups of D. mespiliformis show phenotypic differences, suggesting strong genetic diversity within the species. These results are in agreement with the studies of Boukary et al. (2010) on D. microcarpum, Assogbadjo et al. (2005) on Adansonia digitata, Mkwezalamba (2015) on the fruits of Sclerocarya birrea and Abasse et al. (2011) on Balanites aegyptiaca, but not with the study of Gbemavo et al. (2015) on the fruits and seeds of Jatropha curcas. This strong variability at the fruit level may also be linked to the phenotypic characters considered in the survey. Indeed, Gbemavo et al. (2015) showed that other factors, such as the flowering period, the type and number of inflorescences, and the colour and texture of the leaves, also contribute to variation in the plant. These characters were not considered in this survey. To these factors can be added weak influences of the environment and of genotype × environment interaction on the studied characters. Based on these results, and for a better valorization of the species in order to obtain better yields of fruits and pulp (the only edible part of the fruit, with interesting qualities), it is necessary to proceed with selection within group 1 of Diospyros mespiliformis individuals from gallery (inundated or swampy) forests and parklands of Dunkassa, Bembérèkè, Kérou, Banikoara, Kouandé and Alafiarou, where the fruits are thicker (diameter > 3 cm) with an important pulp mass (Boukary et al., 2010). The largest and most important component of the ebony fruit is its pulp.
Ebony fruit pulp has several chemical constituents and consequently many dietetic attributes (Kalinganire et al., 2007; Vinceti et al., 2013). At present, the product is not fully utilized and is usually considered a waste by-product of juice making. Once promoted, ebony pulp juice could be improved through selection that may significantly improve the livelihoods of many rural people. Quantitative traits thus require classical breeding to achieve high genetic gains (Voss-Fels et al., 2019). The diversity of fruit sizes found in this study unveils the high polymorphism existing in ebony populations. In general, all the populations possess fruit sizes of economic value that can be used for domestication purposes (Juma et al., 2020). However, further research is required to assess the frequencies of different fruit types between and within provenances. In general, the sale of fruits is based on size (weight, length, width) (Yimer, 2015); bigger fruits fetch higher prices (field observation). Tree breeding may therefore target trees with bigger fruits. However, there seems to be no relationship between the taste and the size of the fruit, which complicates the selection criteria. Katsvanga et al. (2007) reported that high fruit diversity among sites could be attributed to climatic, edaphic, genetic and cultural factors. In the case of our sampling sites, there were large differences in environmental factors that may be linked to the differences observed in fruit weight-related parameters. The domestication process involves moving genotypes from one site to another, and it is presently unknown how genotypes would respond once planted in an exotic habitat. Here, a quantitative approach was used to establish the relationship between morphological structure and the climatic zones and habitats considered. Several shapes of ebony tree organs (leaves and fruits) were identified across climatic zones and habitats in the study area.
They were regrouped into four large groups whose fruit diameter and length measurements vary from one size group to another. Nearly all the forms of ebony fruits described by Wallnöfer (2001) and Arbonnier (2002), such as globose, bluntly ovoid, sharply ovoid, large ovoid, shallow sulcate and oblong-cylindrical, exist within the country. This implies a rich diversity that does not require an infusion of external genetic material for immediate domestication and genetic improvement programs. The patterns of fruit sizes presently found cannot be used to classify populations as ecotypes, owing to overlap in the multiple comparison tests. Will fruit quality (shape, size, seed number, pulp colour, nutrition and weight) be consistent when seed is moved from one ecological zone to another? Provenance and family evaluation are therefore prerequisites for successful large-scale domestication programs. Among the questions an evaluation program should answer is whether altitude, latitude, rainfall, temperature, relative humidity, stress period and edaphic factors are related to the fruit attributes (Katsvanga et al., 2007).

Overall morphological variation in D. mespiliformis

Without showing any clear geographical trend (except the altitudinal range for the Atacora chain population), our findings indicate that ebony morphological traits are influenced by climatic zones and habitats (environmental factors). In addition, a substantial positive correlation was observed between the diameter and height of D. mespiliformis and the tree organ dimensions. The morphological variation of D. mespiliformis was studied on 701 natural individuals spread throughout Benin, a country located in West Africa. The morphological traits studied revealed relatively high variation in bearing and architectural parameters on the one hand, and in tree organ descriptors on the other.
This indicates that a fairly significant level of trait exchange occurs between individuals of populations from different habitats and that a significant part of the trait diversity of the species is of intra-population origin. The matrices of mean distances among parameters show very high values (ranging between 0.32 and 0.95), which indicates that the populations display morphological dissimilarity and suggests that they belong to several groups (to be confirmed with genetic/molecular analysis). This level of diversity and this structure might essentially be due to differences in the habitats/climatic zones of the species, without any bottleneck or slowing down of the reproductive biology (incidence of synchronism and increased fertilization rate), and to the actions of humans, who have developed African ebony parklands over time and helped gene flow between them. Although D. mespiliformis prefers inundated forests because of the permanence of water, individuals with plant parts of large dimensions are not found in this favourable (hospitable) habitat. Since D. mespiliformis is a highly light-demanding plant, the absence of high light in inundated forests partly explains the small dimensions of tree organs there.

Conclusion

The results showed that the study localities shelter individuals of interesting phenotypes (qualities), with well geo-referenced plus trees. This study highlights that the adaptation of the species to environmental and geographical variation produces different morphological characteristics in its leaves and fruits. The significant differences among provenances (climatic zones and habitats) provide opportunities to select and conserve interesting material from these locations. As findings from this more in-depth study show, D.
mespiliformis plant resources are used for a variety of purposes such as food, wood, charcoal, furniture, housing material and medicine, to mention just a few, and the plant must be domesticated for its sustainable conservation. This research is the first report to assess diversity within and among populations of D. mespiliformis in Benin using morphological descriptors. Our results should be considered in plant breeding and genetic resource conservation programs. Further investigations should be undertaken to determine the biochemical characteristics of each morphotype of the species.
Motor vehicle driving after binge drinking, Brazil, 2006 to 2009

The present study aimed to analyze the proportion of adults who drive under the influence of alcohol in the Brazilian capitals and in the Federal District after Law 11,705 was established. Data from the Vigilância de Fatores de Risco e Proteção para Doenças Crônicas por Inquérito Telefônico System (VIGITEL – Surveillance System of Risk and Protective Factors for Chronic Diseases by Telephone Interview) were analyzed. In 2008, 1.5% of individuals interviewed reported having driven a motor vehicle after binge drinking on at least one occasion. The frequency of adults who drove after binge drinking remained between 1.8% and 2.2% in the eight months preceding the Law, decreased in the month following its establishment, and increased again two months later, reaching a maximum of 2.6% by the end of 2008 and returning to the initial levels in the first months of 2009.

DESCRIPTORS: Alcohol Drinking, legislation & jurisprudence. Automobile Driving. Accidents, Traffic, prevention & control. Risk Factors. Chronic Disease, prevention & control. Health Surveys.

INTRODUCTION

During the 60th World Health Assembly, held in 2007 and representing 193 WHO member countries, binge drinking was shown to be responsible for 3.7% of deaths and associated with 4.4% of diseases in the world. In the Americas, 8.7% of mortality in men is due to chronic alcoholism.a The literature shows an association between binge drinking and occupational accidents, and between episodes of violence and traffic accidents.
Studies show that different blood alcohol concentrations cause several neuromotor changes: 0.3 dcg/l, which corresponds to one serving of alcoholic beverage with 14 g of alcohol, causes loss of attention, false perception of speed, euphoria and difficulty in discriminating lighting conditions in space. Concentrations of 0.6 dcg/l cause an increase in reaction time and sleepiness, and 0.8 dcg/l may lead to loss of peripheral vision, decreased discernment of lighting conditions and worse performance of routine activities. These pieces of evidence influenced the Brazilian Congress to implement Law 11,705 in 2008, which reduces the blood alcohol level allowed to zero, increases the administrative penalty and criminalizes drivers who drive with 0.6 dcg of alcohol or more per liter of blood.a Thus, binge drinking constitutes a public health problem and monitoring it is essential to understand consumption patterns and the more vulnerable population segments, crucial aspects to support public policies of health promotion and prevention of risk behavior. In 2006, the Brazilian Ministry of Health implemented the Vigilância de Fatores de Risco e Proteção para Doenças Crônicas por Inquérito Telefônico System (VIGITEL) in the 26 state capitals and in the Federal District. VIGITEL included binge drinking monitoring, and the question on driving a motor vehicle after binge drinking was incorporated into this system in 2007. The present study aimed to analyze the proportion of adults who drive under the influence of alcohol in Brazilian capitals and in the Federal District after Law 11,705 was established.

METHODS

Every year, a little over 54,000 individuals aged 18 years or older are interviewed by VIGITEL, with a minimum of 2,000 interviews conducted per city.
One of the interview questions was about alcohol consumption. Consumption of more than five servings of alcoholic beverages for men and more than four for women, on a single occasion and in the last 30 days, was considered excessive. Those who reported excessive consumption were asked whether they had driven a motor vehicle after drinking.

RESULTS

Data from 2008 for the entire adult population (≥ 18 years of age) of the 27 cities studied showed that 1.5% (n = 815) of individuals reported having driven a motor vehicle after excessive alcohol consumption, on at least one occasion in the last 30 days. This proportion was higher (p < 0.05) in men (3.0%) than in women (0.3%). The practice of driving after excessive alcohol consumption was most frequent in the 25-to-34-year age group (4.0% in men and 0.7% in women) and at the "more than 11 years of schooling" education level (5.6% in men and 0.9% in women). In 2007, these frequencies were 2.0% in the general population, 4.0% in men and 0.3% in women. The Figure shows that the frequency of adults who drove after excessive alcohol consumption remained between 1.8% and 2.2% in the months preceding the establishment of Law 11,705, subsequently decreasing to 1.3% in July, the month following the establishment of the Law. The lowest frequency recorded was 0.9%, in August 2008; the frequency subsequently increased in September and October, reached 2.6% in December, and decreased again at the beginning of 2009. However, in May of the following year, this frequency increased again, reaching a maximum of 2.8%.
DISCUSSION

A national population-based study,1 performed in 2005/2006, shows that daily alcohol consumption reaches more than 7% of the population above 34 years of age and that 22% of young adults between 18 and 24 years of age drink alcohol one to four times a week, more frequently among men. Between 18 and 24 years of age, 40% report binge drinking in the last 12 months; this figure is 37% between 25 and 34 years of age, 28% between 35 and 44 years, and 20% between 45 and 54 years, finally decreasing to 10% among those aged 60 years or older. In addition, this study shows that 8.2% of men report frequently driving after alcohol consumption. According to the WHO, daily alcohol consumption varies from 1.4% in India to 31.8% in Colombia, with riskier and more frequent consumption patterns found in low- and average-income countries, reaching between 4% and 69% of drivers, 18% to 90% of pedestrians, and 10% to 28% of motorcyclists involved in traffic accidents. Studies performed in several countries emphasize the evidence that the adoption of legal measures regulating blood alcohol level and driving is effective in reducing traffic accidents. In Australia, there was a reduction of almost 50% in alcohol consumption as a cause of accidents from 1981 to 2001. These data show the importance of establishing legal measures and policies to restrict alcohol consumption and motor vehicle driving, control alcoholic beverage advertisements, prohibit purchases by minors, restrict the hours when alcoholic beverages are sold, and maintain continuous inspection measures, aiming to reduce the risk of exposure to accidents resulting from binge drinking.
In Brazil, the extensive promotion of Law 11,705 in the media and the population's great adherence to the measure led to an immediate reduction in motor vehicle driving after excessive alcohol consumption in the first months following the establishment of the Law. In the present study, VIGITEL data show an initial decrease right after the implementation of the Law, followed by an increase in November and December 2008. This may have occurred because the population was more mindful at first, since the promotion of the Law in the media emphasized that this act could result in punishment, after which people reverted to their behavior before the Law. Approximately 1,000 cities, including capitals, are responsible for local traffic management, including inspection. However, there are no unified data on the inspection conducted by cities, which prevents assessment of inspection before and after the establishment of Law 11,705. Since the present study was performed with people who have home landline telephones, weighting factors were used to extend the estimated frequencies of the factors studied to the adult population in the group of cities analyzed, thus compensating for possible bias. Still, the possibility of under-reporting must be considered for both the excessive consumption of alcoholic beverages (socially discriminated) and motor vehicle driving after drinking (legally prohibited), even with the guarantee of respondent confidentiality and anonymity. The data show the importance of Law 11,705, in addition to the need for greater, continuous population awareness and systematic maintenance of inspection measures, since the VIGITEL data are concerning. Moreover, VIGITEL was found to be an important instrument to follow the "drinking and driving" behavior, a monitoring system that enables the assessment of the impact of policies and interventions on public health.
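The weighting step mentioned above, which extends estimates from a landline-only sample to the full adult population, can be illustrated with a minimal sketch. The responses and weights below are invented for illustration only (they are not VIGITEL data, and the actual post-stratification scheme is not described here).

```python
def weighted_prevalence(responses, weights):
    """Weighted proportion of positive responses (1 = drove after binge drinking).

    `weights` are hypothetical post-stratification factors that re-balance a
    landline-only sample toward the full adult population.
    """
    total_w = sum(weights)
    positive_w = sum(w for r, w in zip(responses, weights) if r == 1)
    return positive_w / total_w

# Toy sample: 2 of 10 respondents report the behavior; larger weights stand
# in for strata under-represented among landline owners.
responses = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
weights   = [0.8, 1.0, 1.2, 1.0, 2.0, 0.9, 1.1, 1.0, 1.0, 1.0]

print(f"{100 * weighted_prevalence(responses, weights):.1f}%")  # → 25.5%
```

The unweighted proportion here would be 20%; the weighted estimate differs because one of the positive respondents carries a large weight, which is exactly the correction the survey design requires.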
Further studies are necessary to assess the impact of the Law on changing behavior towards binge drinking and driving and on reducing traffic accidents.

a Brazil. Law 11,705 of June 19th, 2008. It changes Law 9,503 of September 23rd, 1997, which establishes the Código de Trânsito Brasileiro (Brazilian Traffic Code), and Law 9,294 of July 15th, 1996, which deals with restrictions on the use and advertisement of smoking products, alcoholic beverages, medications, therapies and agrochemicals, following paragraph 4 of article 220 of the Federal Constitution, to reduce alcoholic beverage consumption among motor vehicle drivers, in addition to other measures. Diario Oficial Uniao. 20 jun 2008; Seção 1:1.

Figure. Frequency of adults who reported driving after binge drinking, before and after the establishment of Law 11,705. Brazil, 2007-2009.

In 2007, VIGITEL began data collection in July; in 2008, in April; and in 2009, in January. In 2007 and 2008, collection ended in December. Overall, 54,251 individuals were interviewed in 2007; 54,353 in 2008; and 22,009 in 2009, up to May of that year.
An Analysis on Wind Speed Forecasting Results with the Elman Recurrent Neural Network Method

Weather factors in the archipelago play an important role in sea transportation. Weather factors, especially wind speed and wave height, are determinants of sailing permits, alongside transportation availability, routes, and fuel. Wind speed is also a potential source of renewable energy in the archipelago. Accurate wind speed forecasting is therefore very useful both for marine transportation and for the development of wind power technology. One method from the artificial neural network field, the Elman Recurrent Neural Network (ERNN), is used in this study to forecast wind speed. Wind speed data for 2019 from measurements at the Badan Meteorologi Klimatologi dan Geofisika (BMKG) Hang Nadim Batam station were used in the training and testing process. The forecasting results showed an accuracy rate of 88.28% on training data and 71.38% on testing data. The wide data range, together with the randomness and uncertainty of wind speed, is the cause of the low accuracy. The data set was divided into a training set and a testing set under several ratio schemes, and this division is considered to have contributed to the MAPE value. The observations and the data division span different seasons, with varying types of wind cycles; therefore, the forecasting results obtained in the training process are 17% better than on the testing data.

*Corresponding author: mbettiza@umrah.ac.id
© The Authors, published by EDP Sciences. This is an open access article distributed under the terms of the Creative Commons Attribution License 4.0 (http://creativecommons.org/licenses/by/4.0/). E3S Web of Conferences 324, 05002 (2021) https://doi.org/10.1051/e3sconf/202132405002 MaCiFIC 2021

Introduction

Indonesia is a maritime country in Southeast Asia located between two continents, Asia and Australia, and two oceans, the Pacific Ocean and the Indian Ocean.
Indonesia's land area is 1,922,570 km² and its water area is 3,257,483 km². In addition, Indonesia is an archipelagic country with the largest number of islands in the world. Based on data from the Central Statistics Agency (BPS) published in 2017, Indonesia has 17,504 islands spread across 34 provinces [1]. Indonesia's ocean area is much wider than its land area, which further strengthens Indonesia's position as the largest archipelagic country in the world. Riau Islands Province (Kepri) is an area whose sea makes up 96% of its total area of 241,215 km². Riau Islands Province has the largest number of islands, with a total of 2,408 islands [1]. In archipelagic areas, solving transportation problems is a solution for improving logistics distribution networks and shipping marine products from certain regions. Sea transportation is influenced not only by the availability of means of transportation, routes, and fuel requirements, but also by weather factors such as wind speed and wave height. The energy sources widely used today, oil and coal, are derived from fossils, whose reserves are shrinking. There have been many studies on other energy sources, classified as new and renewable energy, such as the use of water, solar radiation, wind, and ocean energy. Riau Islands is an area that is rich in potential renewable energy. In an archipelagic country, studying the possibilities of renewable energy, such as wind speed, becomes very important. The study of wind speed will support further analyses on transportation issues and studies of new and renewable energy potential. Several algorithms have been applied in artificial neural networks to forecast wind speed and produce training models with a certain accuracy. In this study, the Elman Recurrent Neural Network (ERNN) method is used to forecast wind speed. Zhang [6] developed an adaptive hybrid model for short-term wind speed forecasting because the accuracy of single models needed to improve.
An adaptive hybrid model based on Variational Mode Decomposition (VMD), the Fruit Fly Optimization Algorithm (FOA), the Autoregressive Integrated Moving Average (ARIMA) model and a Deep Belief Network (DBN) was proposed [6]. This adaptive hybrid makes the accuracy of the proposed model better than that of the other models. Xiao [7] also proposed a combined model based on ARIMA-GARCH and the Elman method to improve accuracy in short-term wind speed prediction, and the model significantly improved accuracy. Zhang [8] likewise studied short-term wind speed prediction using a combined model to obtain better accuracy.

Data

This study aims to predict short-term wind speed using data over a certain period of observations from the Hang Nadim BMKG station, Batam. Data were recorded every 3 (three) hours for 365 days in 2019, and not all information is filled in or complete. To meet data processing rules, data imputation was carried out using the mean, median, and mode values of the original data. Comprehensive data are needed in the prediction process to ensure the accuracy of the results. An application was built applying the Elman Recurrent Neural Network algorithm for wind speed forecasting. This process used 2,916 wind speed data points from 2019. The reliability of the model was assessed with variations in the distribution of the dataset. The dataset was divided into a training set and a testing set according to the following schemes: 60%:40%, 70%:30%, and 80%:20%. The learning rate parameter was tested with values from 0.1 to 0.9, with an error tolerance of 0.001 and 50 and 100 iterations.

MAPE

The robustness of the forecasting results of the three models used in this study is evaluated using the MAPE value as an error indicator. Intuitively, MAPE represents the mean absolute percentage error between the prediction results and the actual data; it therefore makes it easy to evaluate the results and calculate the accuracy.
Elman Recurrent Neural Network

The Elman Recurrent Neural Network (ERNN) was first developed by Jeff Elman in 1990. ERNN has advantages in predicting time series data. The output from the hidden layer in ERNN is fed back to itself through a recurrent layer known as the context layer. This process strengthens the network's ability to learn, recognize and generate trained data patterns. ERNN consists of four layers: input layer, hidden layer, context layer, and output layer. The figure below shows the ERNN network architecture.

1. Initialize the weights between the input and hidden layers and the weights between the hidden and output layers randomly. Specify the values for the maximum epoch, learning rate and error tolerance.
2. The input data are sent to all hidden layer units by multiplying the input values by the weights, combining them with the weights from the context layer, and adding the bias.
3. The binary sigmoid function is used as the activation function.
4. The output of the hidden layer is multiplied by the output weights and added to the bias; the network output is calculated using the activation function.

Results

The following is the output of the data training process using the ERNN algorithm, under the dataset division schemes of 60%, 70%, and 80% training data and an error tolerance of 0.001. Experiments with learning rates from 0.1 to 0.8, error tolerance 0.001, and 50 and 100 epochs obtained the best training results, with the smallest MAPE error value of 11.751% and the highest accuracy of 88.249%, in the 60% training data experiment with learning rate 0.8 and error tolerance 0.001, the process stopping at the 94th iteration. The parameters obtained from the training run with the smallest MAPE value (highest accuracy) were then used to predict wind speed. The following is the output of the data testing process using the ERNN algorithm.
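Steps 2-4 above (the forward pass) can be sketched in plain Python. This is a minimal illustration with tiny, randomly initialized weights; training via backpropagation with the learning-rate and error-tolerance settings described in the paper is deliberately omitted, and the dimensions are assumptions for the sketch.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class ElmanSketch:
    """Minimal Elman forward pass: input -> hidden (+ context feedback) -> output."""

    def __init__(self, n_in, n_hidden):
        rnd = random.Random(42)  # fixed seed so the sketch is reproducible
        self.w_in = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
        self.w_ctx = [[rnd.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_hidden)]
        self.w_out = [rnd.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.b_h = [0.0] * n_hidden
        self.b_o = 0.0
        self.context = [0.0] * n_hidden  # copy of the previous hidden state

    def step(self, x):
        hidden = []
        for j in range(len(self.b_h)):
            net = self.b_h[j]
            net += sum(w * xi for w, xi in zip(self.w_in[j], x))       # input contribution
            net += sum(w * c for w, c in zip(self.w_ctx[j], self.context))  # context feedback
            hidden.append(sigmoid(net))                                 # binary sigmoid activation
        self.context = hidden[:]  # hidden state becomes next step's context
        return sigmoid(self.b_o + sum(w * h for w, h in zip(self.w_out, hidden)))

net = ElmanSketch(n_in=3, n_hidden=4)
# Three consecutive normalized wind-speed inputs (three input variables, as in the study):
for x in ([0.2, 0.3, 0.4], [0.3, 0.4, 0.5], [0.4, 0.5, 0.6]):
    y = net.step(x)
print(0.0 < y < 1.0)  # sigmoid output stays in (0, 1)
```

The copy of the hidden state into `self.context` is what distinguishes an Elman network from a plain feed-forward network: the same input vector can yield a different output depending on the preceding time steps.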
Conclusions

The best accuracy result of 71.38% is considered not good enough for this prediction process. The distribution of the data over a fairly wide range is analyzed to be the cause. Wind speed in archipelagic areas, such as the data collection area, varies greatly depending on the season, so the use of a fairly long period of one year of data affects the results obtained. The split of data between the training process and the test process is considered to have contributed to the magnitude of the MAPE score. The observations and the data division span different seasons, with varying types of wind cycles, so the parameters obtained in the training process give an accuracy difference of up to approximately 17% in the test process. The results would be better if the data were first clustered according to season, with the training and testing process carried out on data from the same season. The number of input variables used is also considered to affect the results; this research uses time series data with 3 (three) input variables in the prediction process.
Gas Chromatography–Mass Spectrometry Detection of Thymoquinone in Oil and Serum for Clinical Pharmacokinetic Studies

Thymoquinone (TQ) is the primary component of Nigella sativa L. (NS) oil, which is renowned for its potent hepatoprotective effects attributed to its antioxidant, anti-fibrotic, anti-inflammatory, anti-carcinogenic, and both anti- and pro-apoptotic properties. The aim of this work was to establish a method of measuring TQ in serum in order to investigate the pharmacokinetics of TQ prior to a targeted therapeutic application. In the first step, a gas chromatography–mass spectrometry method for the detection and quantification of TQ in an oily matrix was established and validated according to European Medicines Agency (EMA) criteria. For the assessment of the clinical application, TQ concentrations in 19 oil preparations were determined. Second, two serum samples were spiked with TQ to determine the TQ concentration after deproteinization using toluene. Third, one healthy volunteer ingested 1 g and another one 3 g of a highly concentrated NS oil 30 and 60 min prior to blood sampling for the determination of serum TQ level. After the successful establishment and validation of the measurement method, the highest concentration of TQ (36.56 g/L) was found for a bottled NS oil product (No. 1). Since a capsule is more suitable for oral administration, the product with the third highest TQ concentration (No. 3: 24.39 g/L) was used for all further tests. In the serum samples spiked with TQ, the TQ concentration was reliably detectable in a range between 5 and 10 µg/mL. After oral intake of NS oil (No. 3), however, TQ and/or its derivatives were not detectable in human serum. This discrepancy in detecting TQ after spiking serum or following oral ingestion may be attributed to the instability of TQ in biomatrices as well as its strong protein binding properties. A pharmacokinetics study was therefore not viable.
Studies on isotopically labeled TQ in an animal model are necessary to study the pharmacokinetics of TQ using alternative modalities.

Introduction

Nigella sativa L. (NS) belongs to a family of medicinal plants that are safe alternatives to allopathic drugs with fewer side effects [1,2]. As a part of an overall holistic approach to health, it has long been used for traditional purposes in the Middle East, being native to Southern Europe, North Africa and Southwest Asia [3,4]. The essential oil of NS is obtained from the seeds through cold pressing, and thymoquinone (TQ, IUPAC name: 2-methyl-5-(propan-2-yl)cyclohexa-2,5-diene-1,4-dione) constitutes up to 50% of the main active components in the oil [2,5]. In our systematic review published on this topic, we pointed out that TQ has a wide therapeutic window with strong antioxidant, anti-inflammatory, anti-fibrotic, anti-/pro-apoptotic and anti-carcinogenic effects [4]. This characterizes TQ as a promising drug candidate for various inflammatory and neoplastic diseases. There have been many studies published on the pharmacology of TQ, but there is sparse information regarding its pharmacokinetics [6]. However, for proper use in a clinical study, the determination of pharmacokinetics is essential.

Hence, our aim was to develop a method and evaluate the TQ content in commercially available NS oil preparations through gas chromatography-mass spectrometry (GC-MS). Subsequently, we sought to assess the method's applicability to a protein-rich aqueous matrix like human serum, as a crucial step for conducting pharmacokinetic studies involving healthy volunteers.
Method Development and Validation Results

Linearity was established between 0.03 and 40 g/L. The linear regression coefficient was r = 0.99 (Figure 1). The intra-assay precision was 7.3% (relative standard deviation, RSD). For the inter-assay precision, 7.2% (RSD) was calculated for 4.8 g/L and 7.6% (RSD) for 24.4 g/L. The lower limit of quantification (LLOQ) was 0.03 g/L (RSD 4%). The concentration of TQ remained stable under the mentioned storing conditions (Table 1). The RSD of the slopes in the addition curves after spiking six different oil samples with defined TQ concentrations was 6.6%. The accuracy of recovery was 0.5 to 7.2%. TQ showed a carry-over of 0.001 g/L, corresponding to 3.3% of the LLOQ. The accuracy value for dilution integrity at 2-fold dilution was 4.6%. Figure 2 shows the extracted ion chromatogram with the peak for TQ detected in NS oil No. 3 and IS-M (internal standard for method evaluation).

Quantification of TQ in Different NS Oil Products

With the validated method, 19 different oil products were analyzed for their TQ concentration (No. 1-19, Table 2). The highest concentration was found in oil No. 17 (c = 36.56 g/L), followed by No. 19 (c = 27.92 g/L) and No. 3 (c = 24.39 g/L). All three products were produced from Ethiopian NS seeds. Oil No. 3 is a capsulated product and was therefore chosen for the test with human samples from healthy volunteers.

Quantification of TQ in Serum Samples and Detection of TQ Metabolites in Urine: Preliminary Results

As TQ quantification should be applied for pharmacokinetic studies, preliminary results for TQ in biomatrices were obtained. Therefore, TQ was spiked in human serum (at concentrations of 5 µg/mL and 10 µg/mL) and successfully detected via GC-MS (Figure 3). However, TQ was not detected in authentic human serum samples after oral intake of two (Figure 4, normal dose) and six No. 3 capsules (high dose), corresponding to 0.024 g and 0.072 g TQ per person. In addition, TQ and/or its metabolites were not detected in urine via untargeted GC-MS and LC-MS/MS analysis [7,8] after application of a high-dose TQ. Furthermore, TQ showed significant matrix-dependent decay of up to 80% within 240 min after spiking whole blood with 10 mg/mL TQ and within 30 min in serum.

Discussion

In this study, we developed and validated a GC-MS-based method to measure the concentration of TQ accurately and precisely in an oily matrix. With this method, we determined the TQ concentration in 19 oil products in order to select one with a high TQ concentration for further clinical pharmacokinetic studies in humans.
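Quantification here rests on analyte/IS peak-area ratios with single-point calibration (as laid out later in the GC-MS Analysis section). A minimal sketch of that arithmetic, with invented peak areas rather than measured ones:

```python
def quantify_single_point(area_analyte, area_is, area_cal_analyte, area_cal_is, c_cal):
    """Single-point internal calibration: derive a response factor from the
    calibrant, then convert a sample's area ratio into a concentration."""
    response_factor = (area_cal_analyte / area_cal_is) / c_cal  # ratio per (g/L)
    return (area_analyte / area_is) / response_factor

# Illustrative values only (not figures from the paper):
c = quantify_single_point(area_analyte=5.0e5, area_is=2.0e5,
                          area_cal_analyte=8.0e5, area_cal_is=2.0e5, c_cal=40.0)
print(round(c, 2))  # → 25.0 (g/L)
```

Using the area ratio rather than the raw analyte area is what makes the result robust against injection-volume and detector drift, which is the stated purpose of the internal standard.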
High-performance liquid chromatography (HPLC), high-performance thin-layer chromatography, differential pulse polarographic and GC-MS methods have been reported for TQ quantification in black seed oil [9-13]. It is desirable to optimize these methods for the detection of TQ from blood/serum in order to define its pharmacokinetic profile and to assess its properties in a preclinical setting. However, due to their high reactivity as fast redox-cycling compounds as well as their easy adduction to electron-rich nucleophiles, the analysis of quinones is challenging [14,15].

Interestingly, we were able to detect and quantify TQ in spiked serum. However, TQ was not detected in authentic serum samples, even after the application of higher doses. One reason for this might be the observed instability of TQ in biomatrices, as was also described by Alkharfy et al. [16]. In rabbits treated with 5 mg/kg TQ iv, the plasma concentration versus time curve showed a bi-exponential decline process [16]. Another reason is probably low bioavailability due to the high affinity of TQ for proteins, as described/confirmed by El-Najjar et al. [6]. In this study, the average recovery of TQ from serum was 2.5% at 10 µg/mL TQ and 72% at 100 µg/mL [6]. The authors stated that HPLC does not appear suitable for pharmacokinetic studies at low TQ concentrations because the extent of protein binding is high, and thus the concentration of free TQ might be below the detection limit [6].
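The protein-binding argument can be made tangible with a back-of-the-envelope estimate; the ~99% bound fraction is the order of magnitude reported in the cited studies, and the rest of the numbers are illustrative:

```python
def free_concentration(total_ug_per_ml, bound_fraction):
    """Free (unbound) analyte concentration given the protein-bound fraction."""
    return total_ug_per_ml * (1.0 - bound_fraction)

# With ~99% of TQ protein-bound, a 10 µg/mL total concentration leaves
# only ~0.1 µg/mL free -- easily below a method's working range.
print(round(free_concentration(10.0, 0.99), 3))  # → 0.1 (µg/mL free)
```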
Based on the results of El-Najjar et al., TQ may bind covalently and non-covalently to serum components, limiting the use of conventional analytical methods for its detection and quantification in plasma and the analysis of its bioavailability [6].

Alkharfy et al. [16] developed an HPLC assay to determine low concentrations of TQ in rabbit plasma. After intravenous (iv) application of TQ at a dose of 5 mg/kg in rabbits, quantification of TQ was possible. The limit of quantification was indicated as 0.408 µg/mL. The elimination half-life (T1/2) of TQ was 99.71 ± 22.41 min based on a two-compartment pharmacokinetic model. In a consecutive work, Alkharfy et al. tested the iv and oral bioavailability of TQ, also in rabbits [17]. Following an iv dose of 5 mg/kg, the T1/2 was similar to that reported before (89.69 ± 12.82 min in a two-compartment model). The oral administration showed a slower absorption characteristic at a dose of 20 mg/kg (T1/2 225.61 ± 9.08 min, one-compartment model). All in all, TQ was quickly eliminated from the plasma. The bioavailability of TQ was estimated as 58%. Similar to El-Najjar et al., Alkharfy et al. specified the percentage of TQ protein binding in rabbit and human plasma to be 99.19 ± 0.29 and 98.99 ± 0.32, respectively. Iqbal et al. [18] presented similar results in layer chickens. In their experiments, the limit of quantification was even indicated as 0.05 µg/mL. The elimination half-life after the application of 5 mg/kg TQ iv (non-compartmental pharmacokinetics) was 0.978 ± 0.205 h.
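The reported half-lives translate into first-order elimination constants via k = ln 2 / T1/2. A small sketch, assuming simple mono-exponential decay (a simplification of the compartment models used in the cited work):

```python
import math

def elimination_constant(t_half_min):
    """First-order elimination rate constant (1/min) from the half-life."""
    return math.log(2) / t_half_min

def fraction_remaining(t_min, t_half_min):
    """Fraction of the initial plasma concentration left after t minutes,
    assuming mono-exponential decay C(t) = C0 * exp(-k*t)."""
    return math.exp(-elimination_constant(t_half_min) * t_min)

# With T1/2 of roughly 100 min (iv data in rabbits, Alkharfy et al.),
# only about one eighth of the initial concentration remains after 300 min:
print(round(fraction_remaining(300, 100), 3))  # → 0.125
```

This rapid decline, together with the protein binding discussed above, illustrates why sampling 30-60 min after a low oral dose can easily miss the analyte.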
Considering these studies, however, it is essential to state that the non-detectability of TQ in our in vivo study might also have been due to the very low dosage we used compared to the described studies. The relative dose of ingested TQ per volunteer (70 kg) in our study was 0.3 mg/kg for the normal dose (0.024 g, dosing per intake) and 1 mg/kg for the high dose (0.072 g, recommended dose of two capsules three times a day). This is a very low fraction of the dosages used in the cited studies and seems to be insufficient considering the complex pharmacokinetic behavior of TQ in plasma described by El-Najjar et al. [6].

Moreover, intravenous administration of TQ circumvents the metabolic processes, rendering it more susceptible to detection in contrast to the oral route with subsequent metabolism. Additionally, the challenge of suboptimal absorption from the gastrointestinal tract into systemic circulation may hinder its efficacy when administered orally. A noteworthy publication authored by Ansar et al. proposes an innovative approach involving the formulation of TQ with nanostructured lipid carriers to enhance the bioavailability of oral preparations [19]. This strategic advance appears to offer a promising solution to the challenges associated with the oral route of TQ administration.

Even though metabolites such as glutathione conjugates have been described in the literature [20], corresponding TQ metabolites could not be detected in the preliminary experiments via untargeted GC-MS in serum and untargeted LC-MS/MS in urine. These results strengthen the hypothesis that the detection of TQ after oral administration is complicated by compound instability in biomatrices and/or additional unknown metabolic processes. Furthermore, the limit of quantification we achieved during our method validation process was much higher than the one achieved by Alkharfy et al.
in plasma [16]. The difference in the values is probably due to the fact that the measurements were performed in different matrices (oil and plasma, respectively) and with different methods.

Following these results, subsequent investigations involve studies utilizing isotopically labeled TQ in an animal model to explore TQ pharmacokinetics in vivo, employing alternative modalities such as computed tomography.

Experimental Design

The protocol for the experimental design was adopted from Johnson-Ajinwo et al., 2014 [9] and Alkharfy et al., 2013 [16]. First, GC-MS was established and validated for the detection of TQ in an oily matrix according to European Medicines Agency (EMA) criteria. With this method, the TQ concentration (c) in 19 commercially available and randomly chosen Egyptian and Ethiopian oil preparations was quantified. Then, after measuring TQ concentrations in spiked serum samples, the determination of TQ was carried out in two healthy volunteers. One was treated with 1 g and the other with 3 g of a highly concentrated NS oil, chosen according to the measurement results, 30 and 60 min prior to blood sampling.

Chemicals and Reagents

TQ, thymol (TM) and toluene (T) were purchased from Sigma Aldrich (Steinheim, Germany). N-hexane and chloroform for gas chromatography and 4-nitrophenol were obtained from Merck (Darmstadt, Germany). Gradient-grade methanol was purchased from Carl Roth (Karlsruhe, Germany).

In order to improve analytical robustness, quantitative results were obtained after calibration using an internal standard (IS). For method evaluation, 25 mg of 4-nitrophenol was dissolved in 2.5 mL of chloroform and diluted with 200 mL of n-hexane (IS-M). For TQ quantification in human serum, TM diluted with T at a concentration of 0.1 mg/L proved to be suitable as IS (IS-S). All reagents used in the experiments were of analytical grade.
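The IS-M preparation implies a simple dilution calculation; as a sketch (the resulting concentration is our arithmetic, not a value stated in the text, and additive volumes are assumed):

```python
def concentration_mg_per_l(mass_mg, volume_ml):
    """Mass concentration of a solution in mg/L."""
    return mass_mg / (volume_ml / 1000.0)

# IS-M: 25 mg of 4-nitrophenol dissolved in 2.5 mL chloroform and
# diluted with 200 mL n-hexane, i.e. ~202.5 mL total (ignoring mixing effects):
print(round(concentration_mg_per_l(25.0, 202.5), 1))  # → 123.5 (mg/L)
```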
Human Serum Samples

Human serum-gel samples (collection tubes with a gel separator) were obtained from two native healthy volunteers 30 and 60 minutes (min) after oral intake of NS seed oil capsules (No. 3). For the serum samples spiked with 5 mg/L and 10 mg/L TQ, a dilution was prepared using the stock solution (see Section 4.4.2) and serum. Serum was separated via centrifugation at 2500× g for 10 min. Written informed consent was obtained from both participants. The study was approved by the local ethics committee (No. 2020-2033-BO).

For the calibration solution, a mixture was prepared by vortexing 50 µL of linseed oil, 50 µL of the TQ stock solution (40 g/L), and 900 µL of n-hexane in a labeled Eppendorf reaction vessel. Subsequently, 100 µL of this mixture was vortexed with 900 µL of the internal standard (IS-M). The preparation of the quality control samples using two oil samples (oil No. 2 and No. 3) was conducted in accordance with the description provided in Section 4.6.3.

Clinical Application

A standard TQ stock solution in n-hexane was prepared at a concentration of 100 mg/L TQ and stored at −4 °C for a maximum of five weeks. The calibration solution was prepared by diluting the stock solution with serum to achieve a concentration of 10 mg/L of TQ. Deproteinization was carried out using further dilution with IS-S (1:2). All working solutions were freshly prepared for each day of experimentation.

Clinical Application

Serum samples were deproteinized by adding 100 µL of the IS-S containing the deproteinization solution T (1:2). Samples were vortexed for one minute and the tubes were then centrifuged at 11,000× g for 5 min at room temperature (RT). Approximately 100 µL of the supernatant was transferred to 2 mL screw-top glass vials with 200 µL glass inserts and silicon septa caps (Agilent, Santa Clara, CA, USA), and 5 µL was injected for analysis.
GC-MS Analysis

GC-MS analysis was conducted using a Shimadzu QP 2010 GC-MS system (Shimadzu, Kyoto, Japan) equipped with a Zebron capillary column (30 m × 0.32 mm × 1 µm; Phenomenex, Torrance, CA, USA). Helium (>99.99%) with a linear velocity of 41.7 cm/s was employed as the carrier gas. The injector was configured in the split injection mode (ratio 1:5) and maintained at a temperature of 280 °C, while the column flow rate was set at 1.27 mL/min. Chromatographic separation was successfully achieved within 25.4 min. The ion source was held at a temperature of 200 °C, and ionization was carried out in the electron impact mode with an ionization energy of 70 eV. Detection was performed in the selected ion monitoring (SIM) mode. The injection volume was standardized at 1 µL. TQ and IS were identified via reference spectra matching. Quantification of TQ, with a retention time of 8.5 min, was achieved by monitoring 164 m/z, while IS, with a retention time of 12 min, was monitored at 139 m/z. Analyte/IS peak area ratios were employed for internal calibration. For method evaluation, single-point calibration was applied using TQ at a concentration of 40 mg/mL.

Method Validation

The method validation was performed according to the current European Medicines Agency (EMA) guidelines [21]. The validation of the developed method included linearity, precision, lower limit of quantification (LLOQ), stability, matrix effect, carry-over and dilution integrity.

Precision

The precision of the method was repeatedly determined using two concentrations of quality control samples (4.8 g/L and 24.4 g/L), which were processed freshly on each day of experimentation. For intra-assay precision, one concentration (24.4 g/L) was measured 5 times within a run, and for inter-assay precision, three single series per day were analyzed on two different days for both concentrations. The relative standard deviation (RSD) for the precision should not exceed 15% (20% for LLOQ).
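The acceptance rules quoted in this and the following subsections (RSD ≤ 15%, 20% at the LLOQ, carry-over below 20% of the LLOQ) are easy to encode; a sketch, checked against the figures reported in the Results:

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (sample SD / mean) in percent."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

def precision_ok(rsd, at_lloq=False):
    """EMA criterion: RSD must not exceed 15% (20% at the LLOQ)."""
    return rsd <= (20.0 if at_lloq else 15.0)

def carryover_ok(blank_signal, lloq):
    """EMA criterion: the blank after the highest standard must stay
    below 20% of the LLOQ."""
    return blank_signal <= 0.2 * lloq

# Figures reported in the Results section:
print(precision_ok(7.3))          # intra-assay RSD → True
print(carryover_ok(0.001, 0.03))  # 0.001 g/L vs. LLOQ of 0.03 g/L → True
```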
LLOQ

The LLOQ was defined as the lowest concentration of TQ in an oil sample (precision of ≤20% (RSD)).

Stability

Analyses of the stability of three samples of working solution were performed. The stability of the samples was tested at RT (25 ± 1 °C), after freezing (−20 °C) and after storage in the refrigerator (4-8 °C). The concentrations of analytes in solutions stored for 15 and 30 days in Eppendorf tubes were compared with the concentration in fresh samples.

Matrix Effect

The matrix effect was determined by spiking six different TQ-free oil samples with three different concentrations (5, 10 and 20 g/L) of the compound. The RSD of the slopes in the addition curves must not exceed 15%.

Carry-Over

The carry-over was measured by injecting a blank sample after the highest calibration standard (40 g/L). It should be less than 20% of the LLOQ for the analyte.

Dilution Integrity

The dilution integrity was tested to quantify concentrations greater than the calibration interval. A sample was diluted 2-fold with a blank matrix (TQ-free linseed oil). The accuracy should be within ≤15%.

Conclusions

Based on our research, no previous effort has been made to quantify TQ in human serum following the oral administration of NS oil. Although a clinical pharmacokinetics study was not viable, our findings enhance the overall comprehension of TQ in a clinical context.

Figure 1. Linearity curve for concentrations between 0.03 g/L and 40 g/L with regression line.

Figure 2. Method validation: GC-MS, extracted ion chromatogram, with the peak for TQ detected in NS oil No. 3 after 8.55 min and IS-M after 12.05 min.

Figure 4. GC-MS, extracted ion chromatogram: TQ was not detected in serum after oral application (two capsules of NS oil No. 3). Peak indicates IS-S (internal standard for serum analyses) after 9.14 min.

4.4. Calibration Standards and Quality Control Samples

4.4.1. Method Validation

A standard stock solution for TQ in n-hexane was prepared at a concentration of 40 g/L. This solution was stored at −4 °C for a maximum of five weeks.

Table 1. Stability of NS oil No. 3 in the refrigerator (4-8 °C) and after freezing (−20 °C): demonstration of the deviation from the room temperature (RT) under storing conditions in percent (%).

Table 2. Mean TQ concentrations in g/L and relative standard deviation (RSD) in % in 19 different Nigella sativa (NS) oil products.
v3-fos-license
2020-12-10T09:05:22.958Z
2020-12-03T00:00:00.000
229425104
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2079-6412/10/12/1182/pdf", "pdf_hash": "2659ce5a270ff87499d8502a895455fe0c30e6cf", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1464", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "c7336eeb5e78f893fd28a3d9f624ebc8e2adfcea", "year": 2020 }
pes2o/s2orc
ESCA as a Tool for Exploration of Metals' Surface

The main principles and development of electron spectroscopy for chemical analysis (ESCA) are briefly reviewed. The role of ESCA techniques (X-ray photoelectron spectroscopy and Auger electron spectroscopy) in the investigation of metallic surfaces is discussed, evidencing their importance and analytical potential. An overview is given of a series of recent experimental cases of ESCA application for the characterization of different metals and metallic alloys, illustrating the main results and various phenomena, such as the formation of impurity defects, corrosion, migration of constituent elements in various alloys, clustering in liquid alloys, etc., that can occur on the surface and the interface of investigated materials. These materials comprise the collection coins of noble metals, some metal alloys and Ni-based superalloys, nitride coatings on stainless steel, composite material with TiAlV alloy, treated austenitic steels, and graphene interfaces with polycrystalline metal foils. The present review could be particularly recommended for newcomers to the research field of surface analysis and its application to various metals, their treatments, and possible modifications in operating conditions.

Introduction

From the very beginning of surface science around 1960, coinciding with the discovery of electron spectroscopy for chemical analysis (ESCA) [1], a great part of the first surface analysis studies was dedicated to various metals [2]. This is easily understandable because metals are stable in ultrahigh vacuum (UHV), and their surface is relatively clean (or can be easily cleaned) and is not modified under soft X-rays or an electron beam.
Therefore, during the initial boom of surface analysis, namely when the main experimental techniques were developed, the main principles were established, and spectroscopic catalogues were created, great attention in this research was given to the surface of metals, in particular the transition and noble metals. In this period, new scientific journals dedicated to surface analysis were also born, such as the Journal of Electron Spectroscopy and Related Phenomena. Classical examples of metals' surface studies can be found already in the first volume of the journal just mentioned [3,4]. Even later, when surface analysis became a common tool in materials characterization labs and when the ESCA handbooks of all chemical elements were available [5,6], the application to the surface of metals remained an important research field, as was emphasized in the first textbooks on surface analysis [7,8]. Currently, when a great variety of surface-sensitive electron spectroscopies (see [9]) are available everywhere in the world, the most used ones remain X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES), which were born under the common term ESCA. Of course, at the present time, surface analyses are aimed at more sophisticated materials than elemental metals, but even for elemental metals these techniques are widely used for the simple and reliable control of surface purity in the fields of materials research and technological applications. It is namely for these reasons that we decided to prepare a short review, illustrating the importance and analytical capabilities of surface analysis for the exploration of metals, including also their modifications induced by operating conditions. In this review, we present the most interesting cases of experimental research carried out in our lab during the last few decades.
These cases comprise the surface defects on noble metals (collection coins), some metal alloys and superalloys, nitride coatings on steel, composite material with TiAlV alloy, treatments of austenitic steel, and graphene growth on polycrystalline metals. The common denominator of all these cases is the application of ESCA, i.e., XPS and AES techniques, for materials' characterization.

The main working equations of ESCA are very simple, and they are based on the principle of energy conservation. In the case of XPS, this equation is

BE = hν − KE − WF,

where BE is the binding energy of the elemental core level, hν is the photon energy of X-rays, KE is the kinetic energy of the emitted photoelectron, and WF is the work function of the spectrometer. In the Auger effect, the electrons from three different atomic levels are involved, resulting in the finally excited Auger electron with kinetic energy equal to the following:

KE = EL1 − EL2 − EL3,

where KE is the kinetic energy of the emitted Auger electron; EL1 is the binding energy of the first atomic level, where the hole is created (by X-rays or electron beam); EL2 is the energy of the second level from which the electron is falling down to the lower level EL1; and EL3 is the energy of the third atomic level from which the Auger electron is ejected. Of course, in some chemical elements the last two levels of the Auger process can be located in the valence band, where a core-valence-valence (CVV) Auger peak is then observed. Typical examples of such elements with broad Auger CVV peaks are carbon and silicon. A schematic diagram of the final state in photoemission and Auger excitations is illustrated in Figure 1.

Coatings 2020, 10, 1182
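The two ESCA working equations described above are plain energy bookkeeping, which a few lines of code make explicit; the numerical example is illustrative (energies in eV):

```python
def xps_binding_energy(h_nu, kinetic_energy, work_function):
    """XPS energy conservation: BE = hv - KE - WF (all in eV)."""
    return h_nu - kinetic_energy - work_function

def auger_kinetic_energy(e_l1, e_l2, e_l3):
    """Auger process: KE = E_L1 - E_L2 - E_L3, with E_L1 the level where
    the initial core hole is created (binding energies in eV)."""
    return e_l1 - e_l2 - e_l3

# Illustrative: Al K-alpha excitation (1486.6 eV), a photoelectron detected
# at 1197.0 eV kinetic energy, and a 4.5 eV spectrometer work function:
print(round(xps_binding_energy(1486.6, 1197.0, 4.5), 1))  # → 285.1 (eV)
```

Note that the Auger kinetic energy depends only on the atomic levels involved, not on the excitation energy, which is why Auger peaks sit at fixed kinetic (rather than fixed binding) energy.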
From the photoemission and/or Auger spectra, it is possible to identify the chemical elements because the energy of these peaks is characteristic for every element.
In the case of superposition of some peaks from different elements, other peaks of the same elements can be used for identification. In most cases, the XPS also permits the chemical state of the constituent elements to be identified due to the chemical shift of the photoemission peaks [10]. Both techniques are surface sensitive, because their information depth is limited by the mean free path of the electrons in the solid, which depends on kinetic energy and is typically from 1 to about 10 nm. The detailed description of the XPS and AES techniques can be found in numerous textbooks (e.g., [7][8][9]) and even online (e.g., [11]). In the case of AES, the primary electron beam can be easily focused as in electron microscopy, and therefore it is possible not only to achieve a high lateral resolution in spectroscopy but also to acquire chemical maps of the surface. This mode of operation is called scanning Auger microscopy (SAM). The first two generations of XPS spectrometers were equipped with standard soft X-ray sources (typically with Al and Mg anodes); because of this, the focusing of X-rays was impossible, and the lateral resolution of these instruments was limited to about 0.1-1 mm. Later on, with the introduction of monochromatized X-ray sources and electromagnetic input lenses, the lateral resolution of XPS was improved to about 1-3 microns, allowing also for operation in XPS imaging mode, i.e., to acquire surface chemical maps [12,13].
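The quoted surface sensitivity follows from the exponential attenuation of the electron signal with depth, I(d) ∝ exp(−d/λ): about 95% of the detected intensity originates within 3λ of the surface. A small sketch, with an assumed mean free path inside the 1-10 nm range cited above:

```python
import math

def signal_fraction_from_depth(d, imfp):
    """Fraction of the detected electron signal originating within depth d,
    assuming exponential attenuation with mean free path imfp (same units)."""
    return 1.0 - math.exp(-d / imfp)

# Assumed mean free path of 2 nm (illustrative value).
imfp = 2.0
print(round(signal_fraction_from_depth(3 * imfp, imfp), 2))  # 0.95
```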
However, quite often this resolution can be too low for the investigation of submicrometric features or patterns on the sample surface. A much higher lateral resolution of photoemission spectroscopy and imaging can be achieved by using dedicated beamlines of synchrotron radiation. This technique, which is called scanning photoelectron microscopy (SPEM), enables us to investigate the surface chemical composition at a lateral resolution of about 100 nm [14,15]. It was successfully employed also for some of our experimental cases by using the ESCA microscopy beamline at the synchrotron Elettra in Trieste, Italy. The main features of the XPS, AES, and SPEM techniques are summarized in Table 1, including also the benefits and weak points of their practical applications.

Experimental Techniques

Two different spectrometers were used for the XPS characterization of the investigated materials: an aged Escalab MkII (VG Scientific Ltd., East Grinstead, UK) and a modern one, an Escalab 250Xi (Thermo Fisher Scientific Ltd., East Grinstead, UK). In both instruments, the spectroscopy was carried out by concentric hemispherical analyzers operating in a constant pass energy (20 or 40 eV) mode. The first one was equipped with a double-anode (Al/Mg Kα) X-ray source and an electrostatic input lens, collecting the signal from a sample area of about 10 mm (large-area mode), variable down to about 0.3 mm (small-area mode). The photoemission signals were registered by a 5-channeltron detector. The second apparatus was equipped with a monochromatized Al Kα source and a combined system of electrostatic/electromagnetic input lenses. In the spectroscopy mode, this system allowed the diameter of the analyzed sample area to vary from 900 to 20 µm, and the photoemission signals were registered by a 6-channeltron detector. In the imaging XPS mode, the best lateral resolution of the chemical maps was about 3 µm, and the signals were registered by a multichannel plate with 128 channels.
The charging of insulating samples was suppressed by using a combination of two neutralizing floods: low-energy electrons from an in-lens gun and low-energy Ar+ ions from an external gun. For the sample surface cleaning and XPS depth profiling in both Escalabs, rastered Ar+ ion guns were used, i.e., the EX05 model in the MkII and the EX06 in the 250Xi. The base UHV pressure in the analysis chambers of both spectrometers was always kept below 10−9 mbar. The AES/SAM experiments were carried out by using a LEG200 electron gun installed on the analysis chamber of the Escalab MkII. This excitation source provided a primary beam of electrons with an energy up to 10 keV and a minimum beam diameter of 200 nm. For all the samples, the current of the electron beam was kept very low (4-10 nA) in order to avoid any sample surface damage by the electron beam. Seeking to increase the signal-to-noise ratio, all the Auger spectra and chemical maps were acquired in a constant retard ratio (1:2) mode of the analyzer. All experimental data were processed by the software Avantage v.5 (Thermo Fisher Scientific Ltd.). The peak fitting of the photoemission spectra was performed by using a Shirley background, a Voigt peak shape (mixed Gaussian-Lorentzian with variable ratio), and linked full widths at half maximum (FWHMs) for the same core level. Final calibration of the BE scale was done by fixing the main component of the C 1s peak (aliphatic carbon) at 285.0 eV and verifying that the Fermi level in the valence band was positioned at BE = 0.0 eV. High-resolution SPEM experiments were performed at the ESCA microscopy beamline of the Elettra synchrotron [14,15]. By using Fresnel zone plate optics, the X-ray beam from the synchrotron source was focused to a microprobe with a diameter of about 150 nm on the sample, which was raster-scanned with respect to the microprobe. Photoelectrons were collected by a SPECS-PHOIBOS 100 hemispherical analyzer and registered by a 48-channel electron detector.
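The Shirley background mentioned above is defined self-consistently: at each energy, the background is proportional to the integrated peak intensity between that point and the end of the fitted range. The following is an illustrative iterative sketch of that idea, not the algorithm implemented in Avantage:

```python
import numpy as np

def shirley_background(y, max_iter=50, tol=1e-6):
    """Iterative Shirley background: at each point the background is
    proportional to the integrated, background-subtracted intensity
    between that point and the end of the range. Endpoints of y are
    assumed to be pure background."""
    y = np.asarray(y, dtype=float)
    y0, y1 = y[0], y[-1]
    bg = np.full_like(y, y1)
    for _ in range(max_iter):
        subtracted = y - bg
        # integral of the subtracted signal from each point to the end
        cum = np.cumsum(subtracted[::-1])[::-1]
        new_bg = y1 + (y0 - y1) * cum / cum[0]
        if np.max(np.abs(new_bg - bg)) < tol:
            bg = new_bg
            break
        bg = new_bg
    return bg
```

By construction, the background matches the spectrum at both endpoints and rises under the peak toward the high-intensity side, which is the behavior expected of a Shirley baseline.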
All the samples were investigated in both imaging and spectroscopy modes with a 0.2 eV energy resolution by using 500-700 eV photon energy. The overall lateral resolution was below 50 nm. Before the measurements, the samples were cleaned by Ar+ ion sputtering at 2.0 keV energy. After the acquisition, the chemical maps were processed by the Igor v.6.3 software.

"Gold Corrosion" in Collection Coins

Can "gold corrosion" occur in gold coins? This question arose approximately two decades ago, when some owners of precious coins unexpectedly found numerous stains on their gold coins. After many studies, even using the Pourbaix diagram, this enigma was successfully disclosed thanks to the application of surface analysis techniques. The chemical composition of these defects was determined, and their source was established. The surface analysis study was performed on gold and silver collection coins supplied by the Kunsthistorisches Museum in Vienna (a historical Austrian Ducat) and the Austrian Mint (coins and their blanks). The XPS, AES, and SAM techniques were combined to get qualitative and quantitative information about the surface defects. The stains, analysed by a stereomicroscope, were generally composed of a dark central area surrounded by a larger outer area, whose colour varied from red to dark blue [16]. The chemical composition of every single stain was determined by XPS. All elements in the spot were quickly identified by the assignment of the peaks found in the survey scan spectrum (Figure 2), whereas their chemical state and atomic concentration were determined by processing the resolved spectra of the main peaks presented in Figure 3 [17]. The obtained results promptly evidenced a strange composition of the stains on a pure (999.9) gold coin: a contamination with Ag and S was revealed. This was an astonishing finding for a pure gold coin, giving rise to the following questions: how and when had these impurities been added?
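The survey-scan identification step described above amounts to matching measured peak energies against tabulated core-level binding energies. The toy lookup below mimics that assignment; the reference values are approximate handbook-style numbers listed here only for illustration:

```python
# Approximate core-level binding energies (eV), illustrative values only.
REFERENCE_LINES = {
    "Au 4f7/2": 84.0,
    "Ag 3d5/2": 368.2,
    "Cu 2p3/2": 932.6,
    "S 2p3/2 (sulfide)": 161.7,
    "O 1s": 531.0,
    "C 1s": 285.0,
}

def assign_peaks(measured_be, tolerance=1.5):
    """Assign each measured binding energy (eV) to the closest tabulated
    line within the given tolerance; otherwise mark it unassigned."""
    assignments = []
    for be in measured_be:
        label, ref = min(REFERENCE_LINES.items(), key=lambda kv: abs(kv[1] - be))
        assignments.append(label if abs(ref - be) <= tolerance else "unassigned")
    return assignments

print(assign_peaks([84.1, 368.0, 161.8]))
# ['Au 4f7/2', 'Ag 3d5/2', 'S 2p3/2 (sulfide)']
```

In practice, overlapping lines are resolved by checking the other peaks of the same element, exactly as described in the text.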
The obtained results were confirmed by the multipoint AES analysis and SAM chemical maps acquired with a higher lateral resolution of approximately 200 nm, which are presented in Figure 4. The analysis of the Ag 3d, Ag LMM, and S 2p spectra gave some indications of their chemical state. As can be seen in Figure 4, the Ag 3d spectrum was characterized by the typical doublet of the spin-orbit splitting of the core level 3d (Ag 3d5/2-Ag 3d3/2), separated by 6.0 eV. The main Ag 3d5/2 peak was positioned at BE = 368.0 eV.
However, it is well known that the Ag 3d signal is one of the few cases where the chemical shift is almost absent, i.e., it is impossible to identify the chemical state of Ag only from the photoemission spectra. In these cases, it is necessary to calculate the modified Auger parameter α' by using a very simple formula: α' = BE (Ag 3d5/2) + KE (Ag LMM) [18]. The value of the Auger parameter can indicate the chemical state (metal, oxide, etc.) of the investigated element. In this case, it was α' = 725.2-725.3 eV, which is the typical value for Ag+ in silver sulfides, specifically in Ag2S [5]. The analysis of the S 2p signal confirmed the presence of sulfides, since the S 2p3/2 peak was positioned at BE = 161.6-161.9 eV [5]. It is interesting to note that the XPS quantitative analysis identified four different scenarios, depending on the color of the spot, which are summarized in Figure 5: (1) grey stains, with Ag, O, and S; (2) dark spots, with Ag, O, and S, but also with Au and Cu; (3) red spots, like the grey stains but with different atomic concentrations of the elements; and (4) clean surface, with Au, Cu, and a small amount of O.
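The modified Auger parameter is simply the sum of a photoemission BE and an Auger KE, which also makes it insensitive to static charging. The sketch below reproduces the arithmetic; the Ag LMM kinetic energy is chosen so that α' lands in the 725.2-725.3 eV range reported for Ag2S, and the reference values are illustrative rather than authoritative:

```python
def modified_auger_parameter(be_photo, ke_auger):
    """alpha' = BE(photoemission line) + KE(Auger line), in eV."""
    return be_photo + ke_auger

# Illustrative reference alpha' values for the Ag 3d5/2 / Ag LMM pair.
AG_REFERENCE = {"Ag metal": 726.1, "Ag2S": 725.25}

def closest_state(alpha, refs=AG_REFERENCE):
    """Pick the reference chemical state with the nearest alpha'."""
    return min(refs, key=lambda k: abs(refs[k] - alpha))

# Assumed measured values: BE(Ag 3d5/2) = 368.0 eV, KE(Ag LMM) = 357.25 eV.
alpha = modified_auger_parameter(368.0, 357.25)
print(round(alpha, 2), closest_state(alpha))  # 725.25 Ag2S
```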
Then, the different chemical composition of the stains was investigated by XPS depth profiling, which revealed the different thicknesses of the stains: from 5 to 6 nm for the red ones to about 300 nm for the dark, blackish-colored ones. Therefore, the variation of the color was principally related to a different thickness of the contamination layer, where the thickness of Ag2S was always limited to the first 3-5 nm and a second sublayer of metallic Ag continued in depth. These results suggest that a thin, almost transparent overlayer of sulphide was formed by the interaction of metallic Ag with the sulfur-containing contaminants in air (like H2S), whereas some bigger silver particles were mechanically embedded into the coin surface during the milling, rolling, or punching of the gold strips.
Hard Coatings of Nitrides

Hard coatings, based on transition metal nitrides or carbides, are characterized by excellent mechanical properties, suited for steel protection and the fabrication of cutting tools. Their performance is continuously improved by the optimization of the fabrication processes, the development of new deposition technologies, and the production of composite materials with enhanced physical and chemical properties. An important contribution to the development of these coatings can be given by the use of surface analysis, which enables us to find the best production conditions and to improve their quality. In this section, the results obtained by the XPS and AES investigations of a TiN-Ti composite and a multilayer CrN-Cr coating are presented. As can be seen in Figure 6, the deconvolution of the Ti 2p spectrum shows the presence of multiple contributions due to the different chemical states of Ti: components 3 and 4, located at BE = 458.5 and 456.5 eV, were assigned to the chemical states of Ti4+ and Ti3+ bound to oxygen; component 1, positioned at BE = 454.1 eV, was related to metallic Ti(0); finally, component 2, positioned at BE = 455.0 eV, was assigned to the bonds of Ti-N and Ti-C [19,20].
Naturally, the presence of the oxides was caused by the oxidation of metallic Ti in air. After ion sputtering, they were almost removed, as shown in Figure 7. Of course, the possible influence of the preferential sputtering of oxygen [21,22] on the reduction of the oxides cannot be excluded, but in our case this effect was not considered, as this study aimed to determine the composition in the volume of the Ti nitride after removal of the native oxide overlayer.

Figure 6. Ti 2p peak fitting of the Ti/TiN composite coating [19].
By using XPS depth profiling, i.e., alternating cycles of ion sputtering and spectra acquisition, it is possible to investigate the changes of chemical composition down to a depth of about 1 µm. From the depth profile shown in Figure 8, it is possible to observe how the content of the oxides decreases in depth, whereas the contents of metallic Ti and nitrides increase. This trend was also confirmed by the depth profile of the N 1s signal, which was composed of two peaks positioned at BE = 397.0 and 399.5 eV, assigned to the bonds of N-Ti and N-O in oxynitride compounds, most probably formed due to environmental contamination. The atomic ratio Ti/N = 3.5 was constant along the depth profile. This excess of Ti content indicated the formation of the composite TiN-Ti.
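Atomic ratios such as Ti/N = 3.5 are obtained from the measured peak areas divided by relative sensitivity factors (RSFs) and then normalized. The sketch below shows that quantification step generically; the areas and RSF values are placeholders chosen to reproduce a 3.5 ratio, not the factors used in Avantage:

```python
def atomic_fractions(areas, rsf):
    """Convert raw XPS peak areas into atomic fractions using relative
    sensitivity factors: n_i is proportional to A_i / RSF_i."""
    corrected = {el: areas[el] / rsf[el] for el in areas}
    total = sum(corrected.values())
    return {el: corrected[el] / total for el in corrected}

# Placeholder areas and RSFs chosen so that Ti/N comes out at 3.5,
# as found for the TiN-Ti composite.
areas = {"Ti": 70.0, "N": 10.0}
rsf = {"Ti": 2.0, "N": 1.0}
frac = atomic_fractions(areas, rsf)
print(round(frac["Ti"] / frac["N"], 1))  # 3.5
```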
Figure 7. Comparison of the Ti 2p signals acquired before and after Ar+ ion sputtering [19].

Due to the limited depth of XPS depth profiling, the study of the multilayer coating CrN/Cr/CrN, with thicknesses of 1.5/1.0/1.5 µm, was carried out only for the top layer of this coating [23]. In this layer, the signal of Cr 2p (Figure 9) was composed of a typical Cr 2p3/2-2p1/2 doublet, which was positioned at BE = 574.2 eV, and a large peak due to the contribution of multiplet splitting, centered at BE = 576.0 eV. The deconvolution of the N 1s spectrum evidenced the presence of two chemical species: chromium nitride at BE = 397.1 eV and a component of oxynitrides at BE = 398.6 eV, probably due to the presence of a low amount of oxygen in the deposition chamber. The obtained BE values of N 1s and Cr 2p3/2 (single component) indicated the formation of CrN, excluding the phase of Cr2N, which is characterized by a noticeably higher value of BE [5].
This supposition was confirmed also by the determined atomic ratio of Cr/N, close to 1.0. The XPS depth profile, depicted in Figure 10, showed that, after removal of the surface contamination, the composition of the CrN coating remained almost constant. Since the total thickness of the coating (~4 µm) was too high for XPS depth profiling down to the substrate, the profiling was stopped after the removal of ~100 nm of CrN, and the cross section of the coating was further investigated. Due to the limited lateral resolution of XPS, the interfaces of CrN/Cr, Cr/CrN, and CrN/substrate were investigated by the AES/SAM technique.
Figure 11 shows the SEM image and the multipoint AES analyses carried out on the cross section of the sample. The AES spectra were acquired at different points, moving from the substrate (region 1) to the top of the coating (region 4). The substrate was characterized by the presence of the Fe LMM peaks (KE = 594.0, 652.0, and 705.7 eV) and the low-intensity KLL peaks of C and O (see Figure 11b). In regions 2 and 4, the peaks of Cr L3M23M45 (KE = 530.6 eV) and N KLL (KE = 385.4 eV) were registered, whereas in region 3, only the peak of Cr L3M23M45 was present. In addition, chemical maps were acquired by SAM, where the investigated area of the sample was represented by pixels of the peak-minus-background intensity of the selected Auger peak. The SAM images collected by using the peak-minus-background of the Cr L3M23M45 and N KLL peaks are shown in Figure 12. The black points indicate the areas without signal, whereas the lighter grayscale points indicate the areas where the signals were detected. As can be noticed, the layers are well-defined, and the interfaces are rather neat, suggesting the absence of diffusion phenomena during the deposition process. The coating thickness, estimated from the SEM/SAM images, was about 4.0 µm.

Figure 10. XPS depth profile of the first CrN layer in the composite coating. The average sputtering rate was equal to 0.3 nm·min−1 [23].
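The peak-minus-background mapping described above can be pictured as a pixel-wise subtraction of two intensity images, one acquired at the Auger peak energy and one just beside it. A minimal numpy sketch with synthetic 4x4 maps (not real SAM data):

```python
import numpy as np

def peak_minus_background_map(peak_img, bg_img):
    """Pixel-wise peak-minus-background Auger map; negative values
    (noise in pixels where the element is absent) are clipped to zero."""
    return np.clip(peak_img.astype(float) - bg_img.astype(float), 0.0, None)

# Synthetic example: the element is present only in the right half of the field.
peak = np.array([[5, 5, 20, 22],
                 [5, 6, 21, 20],
                 [6, 5, 19, 21],
                 [5, 5, 20, 20]], dtype=float)
background = np.full_like(peak, 5.0)
chem_map = peak_minus_background_map(peak, background)
print(chem_map[:, 2:].min() > 0, chem_map[:, :2].max())  # True 1.0
```

In the resulting map, zero-valued pixels correspond to the black (no-signal) areas and positive pixels to the lighter grayscale areas mentioned in the text.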
Figure 11. SEM image (a) and AES spectra (b) acquired along the cross section of the multilayer coating [23].

Microchemical Composition of Ni-Based Superalloys

Superalloys are a class of materials that find numerous applications in the metallurgical field, in particular when high strength and superior oxidation and corrosion resistance at temperatures above 700 °C are required. Many superalloy properties are determined by their microstructure, and therefore it is quite important to predict the microstructural evolution during long-time operation, especially the coarsening and morphological changes of the γ' phase that take place at the operating temperature of 800-900 °C.
These superalloys are composed of cuboidal γ' particles with submicrometric dimensions, embedded in the γ matrix. The chemical composition of the two phases could be different, but most of the previous experimental studies have been dedicated to the morphology and microstructure of superalloys, e.g., [24,25] and the references therein. Practically, data on the chemical composition of the two phases in various superalloys are absent in the literature. However, the coarsening of the γ' particles strongly depends on the difference in chemical composition between the disordered matrix and the cuboidal particles. Since this change must occur at the microscale, surface investigations of the microchemical structure of a biphasic (γ + γ') Ni-based CM186 superalloy were performed at high lateral resolution by using scanning photoemission microscopy (SPEM) at the Elettra synchrotron (Trieste, Italy). This technique allows us to directly acquire the surface chemical maps of the constituent elements and to determine the variation of their atomic concentrations, eventually induced by the creep tests. In preparation for the SPEM investigations, the XPS spectra were collected and processed by using a standard XPS apparatus [26,27].
The spectral region, containing all the 4f photoemission peaks of the constituent elements together with the overlapping peaks of W 5p and Re 5p, is shown in Figure 13. The peak fitting analysis revealed that the Re 4f7/2 and W 4f7/2 peaks were located at BE = 40.8 and 31.4 eV, corresponding to their metallic states, whereas the Ta 4f7/2 peak was characterized by two components at 22.6 eV and 25.1 eV, assigned to metallic and oxidized species [26], respectively. Finally, the peak of Hf 4f7/2 at BE = 16.5 eV was assigned to oxidized species [26]. The chemical maps were recorded in different zones of the samples before and after the creep test, shedding light on the compositional differences between the γ and γ' phases. After the acquisition, each map was numerically processed in order to remove the contribution of the surface morphology from the photoemission signals. It is worth noting that in SPEM the chemical images can be acquired without any chemical etching of the samples, contrary to the standard microscopies (SEM, AFM, etc.) used for superalloys. Figure 14 shows some examples of the obtained chemical maps. The Re 4f, W 4f, and Ta 4f images were acquired from the interdendritic zone of the as-received sample.
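One common way to remove topographic contrast from such maps, in the spirit of the numerical processing mentioned above (the exact Elettra procedure is not specified here, so this is an assumed scheme), is to ratio the peak map against a nearby background map, so that multiplicative morphology-driven intensity variations cancel:

```python
import numpy as np

def normalize_map(peak_img, bg_img, eps=1e-9):
    """Ratio image peak/background: multiplicative topographic factors
    common to both maps cancel out, leaving chemical contrast."""
    return peak_img / (bg_img + eps)

# Synthetic test: chemical contrast (left = 1, right = 2) modulated by a
# multiplicative topography factor that varies across the field of view.
chem = np.array([[1.0, 1.0, 2.0, 2.0]] * 4)
topo = np.linspace(0.5, 1.5, 4).reshape(1, 4).repeat(4, axis=0)
peak = chem * topo
background = 1.0 * topo  # the background map carries the same topography
restored = normalize_map(peak, background)
print(np.round(restored[0], 3))  # [1. 1. 2. 2.]
```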
Figure 13. XPS spectrum of the 4f region acquired for the sample of CM186 superalloy [26].

The chemical maps of Re and Ta were complementary, namely, the bright zones in the Re map correspond to the black zones in that of Ta and vice versa, whereas the tungsten was distributed homogeneously over the analyzed area, even if its content was slightly higher in the γ phase.
The lateral distribution of Re and Ta did not change in the crept sample (Figure 15), since they were concentrated in the γ and γ′ phases, respectively. In comparison with the as-received sample, the distribution of W after creep appeared more uniform. The relative distribution of the main constituent elements between the γ and γ′ phases in the as-received and crept samples is displayed in Figure 16. Each data point is the average value of 5 measurements carried out on different points of the same phase. As can be noticed, both phases were characterized by the same amount of Ni, while the concentration of Co and Re was predominant in the γ phase. After the creep test, their distribution remained almost the same. On the contrary, the amount of Al and Ta was predominant in the γ′ phase, remaining unchanged after the creep test. Significant differences were found for W and Hf, for which the creep test induced a migration of these elements from the γ phase to the γ′ phase. The obtained results evidenced that this diffusion process is responsible for the weakening of the disordered matrix during the creep.

Coatings 2020, 10, 1182 13 of 28

Diffusion Phenomena in the Ti6Al4V/SiCf Composite

There are only a few analytical techniques capable of investigating the diffusion mechanism of the elements in a solid-state sample. Among them, the surface analysis techniques represent the most powerful tool of investigation, especially in the proximity of the interface between different materials. In this section, we illustrate the multitechnique approach applied to the investigation of a composite material consisting of a Ti6Al4V matrix and SiC fibers [28][29][30][31][32]. To avoid the formation of brittle compounds like Ti5Si3 at the matrix/fiber interface, each fiber was coated with a 3 µm thick graphite layer. However, at the high temperatures reached during the fabrication process and in-service life, some elemental diffusion could be induced, reducing the mechanical performance of the composite.
Figure 17 shows the elemental distribution on the cross section of the sample. The XPS chemical maps were acquired by collecting the intensity of the signals positioned at BE = 458.8 eV (Ti 2p3/2), BE = 529.0 eV (O 1s), and BE = 99.9 eV (Si 2p), together with the intensity of C 1s, where the contributions of graphite (BE = 284.6 eV) and carbide (BE = 283.0 eV) were separated. As can be seen, the fibers were embedded in the Ti6Al4V matrix, whose surface contained oxidized Ti species due to the reaction with atmospheric oxygen. The layer of titanium oxides was promptly removed after a brief ion sputtering, reducing the Ti chemical state to the metallic one. Unfortunately, the lateral resolution of standard XPS imaging (>3 µm) was too low to investigate the diffusion processes that can occur at the matrix/fiber interface. Therefore, an investigation at a higher lateral resolution was performed by an AES multipoint analysis. SEM images and AES line scan spectra acquired for Samples 1 (as prepared) and 2 (heated for 1000 h at 600 °C) are displayed in Figures 18 and 19, respectively.

Figure 19. SEM image 80 × 80 µm2 (a) and AES spectra (b) acquired on the cross section of Sample 2 across the carbon layer; analysis points are labelled 1, 2 and 3 [29].
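Separating the graphite (BE = 284.6 eV) and carbide (BE = 283.0 eV) contributions to the C 1s signal, as done for the maps above, amounts in essence to a two-component peak fit. Below is a minimal sketch on synthetic data; the peak widths and amplitudes and the absence of a Shirley background are simplifying assumptions, and only the two binding energies come from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(be, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian components on a binding-energy axis (eV)."""
    g1 = a1 * np.exp(-0.5 * ((be - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((be - mu2) / s2) ** 2)
    return g1 + g2

# Synthetic C 1s spectrum: graphite at 284.6 eV, carbide at 283.0 eV.
be = np.linspace(280.0, 290.0, 501)
true_params = (1.00, 284.6, 0.5, 0.40, 283.0, 0.45)
spectrum = two_gaussians(be, *true_params)

# Fit starting from rough guesses; the fitted amplitudes/areas then give
# the relative weight of each chemical state.
p0 = (0.8, 284.5, 0.6, 0.3, 283.2, 0.6)
popt, _ = curve_fit(two_gaussians, be, spectrum, p0=p0)
graphite_pos, carbide_pos = popt[1], popt[4]
```

Real XPS fitting would use Voigt-like lineshapes and a background model; the Gaussian choice here only illustrates the decomposition idea.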
The obtained results revealed that the graphite layer acts as a good protection barrier, avoiding the diffusion of Si into the Ti matrix. However, as evidenced by the SEM analysis, the morphology of the graphite layer became irregular after a thermal treatment at 600 °C for 1000 h, although its thickness remained unchanged. This result can be explained by taking into consideration the reaction between carbon and atmospheric oxygen producing CO. However, carbon diffusion into the Ti matrix should also be considered. Since the samples have a curvy geometry, the resolution of standard XPS and AES was not sufficient to characterize the chemical species at the interface. To solve this problem, the interface between the graphite and the metallic alloy was investigated by covering Ti6Al4V and Ti 99.99+ foils with a thin layer of graphite. In order to simulate the diffusion of carbon, the samples were heated in vacuum for 8 h at 500 °C (Figure 20). The XPS depth profiles demonstrated the diffusion of elemental carbon into the metallic matrix, forming a thin layer (about 10 nm) of carbides (Figure 21). From the SPEM analyses [32], carried out on the composite samples, it was concluded that the formation of carbides included not only TiC, but also the interstitial-substitutional (i-s) pairs of C-Al and C-V, present in the α phase of the matrix near the fibers.

Microchemical Structure of the PbBi Liquid Alloy

The development of a new generation of nuclear reactors has involved many aspects of materials science. One of them was the investigation of the microchemical inhomogeneities occurring at high temperature in the liquid Pb-Bi eutectic (LBE) alloy. LBE finds its application in the nuclear reactor as a coolant and spallation source of MYRRHA, an accelerator-driven system. Therefore, it is quite important to investigate any changes of the microchemical structure of LBE that may induce corrosion and embrittlement phenomena in the structural materials. The microstructure of the LBE alloy was evaluated using high-temperature X-ray diffraction (HT-XRD) [33], whereas the microchemical composition was investigated by SPEM [34][35][36]. In this section, we focus our attention on the surface analysis. Generally, by SPEM only solid samples can be analyzed; thus, in order to simulate the cluster formation, we used a rapid cooling (quenching) of the liquid alloy starting from different temperatures and assumed that the microchemical composition of the liquid was preserved on the surface of the obtained solid LBE alloy.
The selected temperatures for quenching were 126 °C (the eutectic temperature) and 200, 300, 400, 518, and 700 °C. The surface chemical maps were acquired by measuring the intensity of the Pb 4f7/2 and Bi 4f7/2 peaks, positioned at BE = 137.0 and 156.0 eV, respectively. Before collecting the maps, the carbon and oxygen contaminations were removed by a short cycle of Ar+ ion sputtering. Although the Pb and Bi native oxides were not completely removed, they were neglected because they are not relevant for this discussion. For convenience, the chemical maps are displayed in terms of the Pb/Bi atomic ratio (AR), which is more representative of the elemental distribution. Taking as a reference the nominal atomic ratio of the eutectic alloy, Pb/Bi = 0.8, three pixel colors were used to evidence three different cases: (i) blue, lack of Pb, with AR < 0.6; (ii) red, excess of Pb, with AR > 1.0; and (iii) yellow, near the nominal ratio, with 0.6 < AR < 1.0. After the acquisition, each image was processed by applying the following procedure in the Igor software: (1) elimination of morphology effects from the Pb and Bi maps by using the correction (peak minus background)/background and (2) superposition of the obtained maps and conversion to maps of the atomic ratio AR. The obtained AR maps (100 × 100 µm2 or 50 × 50 µm2), processed by MATLAB software, are presented in Figure 22. As can be noted, a strong inhomogeneity was observed. Depending on the quenching temperature, the Pb and Bi atoms formed clusters of different dimensions enriched in Bi and/or Pb. At the eutectic temperature, the surface of the sample was characterized by the presence of micrometer clusters enriched in Bi (~90% of Bi), immersed in the alloy with the eutectic composition. Increasing the quenching temperature, the elemental distribution and the atomic concentration in the clusters changed.
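The three-color AR classification described above can be sketched as a simple thresholding of the corrected intensity-ratio map. The pixel values below are invented, and relative sensitivity factors are omitted for brevity.

```python
import numpy as np

# Hypothetical morphology-corrected Pb 4f7/2 and Bi 4f7/2 intensity maps
# (arbitrary units; in reality these come from the SPEM acquisition).
pb = np.array([[0.50, 1.10], [0.80, 0.30]])
bi = np.array([[1.00, 1.00], [1.00, 1.00]])

ar = pb / bi  # Pb/Bi atomic-ratio map

# Three display classes used in the text:
#   blue   (AR < 0.6):        lack of Pb
#   yellow (0.6 <= AR <= 1.0): near-eutectic composition
#   red    (AR > 1.0):        excess of Pb
classes = np.full(ar.shape, "yellow", dtype=object)
classes[ar < 0.6] = "blue"
classes[ar > 1.0] = "red"
```

A plotting library would then map these class labels to the three pixel colors; only the thresholding logic is shown here.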
The cluster size was reduced to a few microns (1-5 µm) as a consequence of higher thermal agitation; these clusters were alternately enriched in Pb and Bi, and the surface distribution of the alloy with the eutectic composition 0.6 < AR < 1.0 was also changed. The cross-section mapping of the sample quenched at 518 °C (see Figure 22f) demonstrates how the cooling process froze the sample surface in a structure quite similar to the liquid alloy, while the interior of the sample experienced a different temperature gradient, giving rise to big clusters enriched in Bi. In order to quantify and compare the elemental distribution in different samples, a statistical calculation of the cumulative area CA was applied, where n is the total number of selected pixels pi that have ARi in the chemical map. Figure 23 shows the plot of the cumulative area (CA) versus the quenching temperature (QT), where the curves were calculated for AR1, AR2, and AR3. At the melting temperature (126 °C), the CA value of AR2 was approximately 2.5%, indicating a very low concentration of Pb, whereas the CA values of AR1 and AR3 were almost similar, at 52% and 45%, respectively. Increasing QT, the CA of AR2 augmented almost linearly to over 80% at QT = 600 °C, then suddenly fell below 10% at QT = 700 °C. The comparison of these curves with the phase transitions determined by the HT-XRD investigations confirmed that the structural modification is also accompanied by a change in the number of clusters enriched in Pb (AR2). As regards the curves of AR1 and AR3, they substantially showed a complementary trend with respect to the AR2 one.
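Since the CA equation itself is not reproduced in this excerpt, the following sketch reflects one natural reading of the definition: CA is the percentage of map pixels whose AR falls within a given class. The example map is invented.

```python
import numpy as np

def cumulative_area(ar_map, lo, hi):
    """Percentage of map pixels whose atomic ratio falls in [lo, hi)."""
    ar = np.asarray(ar_map)
    n = np.count_nonzero((ar >= lo) & (ar < hi))  # pixels in the class
    return 100.0 * n / ar.size

# Hypothetical 2 x 3 AR map with two pixels in each class:
ar_map = np.array([[0.3, 0.7, 1.2],
                   [0.5, 0.9, 1.5]])
ca_blue = cumulative_area(ar_map, 0.0, 0.6)     # lack of Pb (AR1)
ca_yellow = cumulative_area(ar_map, 0.6, 1.0)   # near-eutectic (AR3)
ca_red = cumulative_area(ar_map, 1.0, np.inf)   # excess of Pb (AR2)
```

Computing these percentages for each quenching temperature yields the CA-versus-QT curves of the kind plotted in Figure 23.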
Austenitic Steels

Austenitic stainless steels are known as materials with a high corrosion resistance in different environments. Because of their low hardness, however, they cannot be used in several industrial applications unless modified through thermochemical surface treatments. Carburizing, nitriding, and carbo-nitriding are common examples of the heat treatments used to increase the hardness of stainless steels. These processes need to reach a temperature higher than 550 °C, which could cause local microstructural changes in the austenitic steel phase, such as the precipitation of Cr carbides. Since these precipitates can reduce the corrosion resistance of the steel, it is necessary to adopt a heat treatment at a lower temperature. A good alternative is the kolstering process, which can harden austenitic steels without compromising their resistance to corrosion. Although kolstering is a good low-temperature treatment, it is unfortunately very long lasting and expensive. It involves a pretreatment of the steel in an HCl atmosphere at about 250 °C to remove the Cr2O3 layer from the surface. Then, the stainless steel is treated at 450 °C in a gaseous atmosphere of CO, H2, and N2 for a duration of about 30 h. Very promising results, close to those of kolstering, were obtained through a plasma carburizing process at low temperature.
In the study presented in [37], the plasma was generated by microwaves operating up to 200 mbar, as described in detail in [38], while the temperature and pressure were set to about 420 °C and 80 mbar, respectively, for the whole treatment duration of about 6 h. The chamber gas mixture was formed by CH4 (variable percentage) in H2. The main advantage of this treatment is the reduction of the process time, which also makes it more convenient in terms of process costs. The XPS and AES techniques allow for the study of the steel surface before and after these treatments. In particular, these techniques permit us to examine the chemical composition of the superficial hardened layer. In this way, it is possible to identify the best process condition, for example, by changing some parameters of the treatment. In this study, the percentage of CH4 added to H2 in the gas mixture was varied from 2% to 10%. The results of microhardness tests and XRD measurements [1] established that the sample treated with 2% of CH4 was the one with the best results in terms of hardness (700 HV) and corrosion resistance, without the presence of any precipitates of Cr carbides. For a better understanding of these results, all the samples were investigated by surface analysis. An AES line scan over the cross section, shown in Figure 24a, revealed the presence of an additional carbon layer with a thickness of about 2-3 µm (lighter zone) above the hardened layer of 20 µm. As can be seen from Figure 24b, the Auger signals of C KLL, O KLL, Cr LMM, and Fe LMM were detected along the whole cross section.
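The quoted Fe/C intensity ratios (0.5 near the surface, 0.9 near the bulk, discussed below) are simply point-by-point ratios of the background-subtracted Auger signals along the line scan. The intensity values below are invented so that the two endpoints reproduce those ratios.

```python
import numpy as np

# Hypothetical background-subtracted Auger peak intensities at six points
# along the cross section (point 1 = near surface, point 6 = near bulk),
# in arbitrary units.
fe_lmm = np.array([1.0, 1.3, 1.6, 1.8, 1.9, 1.98])
c_kll = np.array([2.0, 2.1, 2.2, 2.2, 2.2, 2.2])

ratio = fe_lmm / c_kll  # (Fe LMM)/(C KLL) along the line scan
# Near the surface the carbon-rich layer depresses the ratio (~0.5);
# toward the bulk it approaches the nominal alloy value (~0.9).
```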
In the first point, corresponding to the zone near the surface, the amount of carbon is the highest, and the concentrations of Fe and Cr are low. The intensity ratio of the signals (Fe LMM)/(C KLL) is equal to 0.5. Instead, at the point closest to the bulk, the amount of carbon returns to the nominal value of the alloy, and the Fe/C ratio is equal to 0.9. The intensity ratio (Fe LMM)/(C KLL) for the entire line scan is shown in Figure 25.

Figure 24. SEM image (24 × 24 µm2) of the sample treated by plasma at CH4 2% in H2 (a), and Auger spectra (b) recorded in points 1-6 marked in the SEM image [37].

From the value of the D parameter, which is the distance between the most positive maximum and the most negative minimum of the first derivative of the C KLL spectrum [39], it is possible to establish that the samples subjected to carburization with a gas mixture composed of 2% CH4 in H2 have an ultrathin outer layer of graphitic nature (C-C bonds with a majority of planar sp2 hybridization). Because of the presence of this additional hard graphitic layer, which was not present in the other samples treated with higher percentages of CH4, it is possible to conclude that 2% of CH4 is the best gas mixture process condition. In this way, the hardened surface of the austenitic steel is comparable to the one obtained with the kolstering treatment.

Another interesting discovery concerning austenitic steels, reported in the papers [40,41], is the microstructural modification induced by heating in a steel with a high content of N (about 0.8 wt.%). Although nitrogen stabilizes the austenitic phase and increases the corrosion resistance, it is important to note that N is soluble only in quantities of less than 0.4 wt.% (both in the liquid and solid phase). After exceeding this value, discontinuous precipitates of chromium nitride form in the steel in the temperature range between 700 and 900 °C. The transformation that occurs during heat treatments in that temperature range is the following:

γs → γ + Cr2N

where γs is the N-supersaturated austenitic phase (the initial phase of the steel), γ is the transformed austenitic phase, which appears as a lamellar structure, and Cr2N is the chromium nitride precipitate. A SEM image with the corresponding schematic structure of the austenitic steel is shown in Figure 26.
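The D parameter extraction lends itself to a short numerical sketch: differentiate the C KLL spectrum and take the energy distance between the most positive maximum and the most negative minimum of the derivative. The Gaussian lineshape below is synthetic (a real C KLL spectrum is more structured), so the resulting D value is illustrative only.

```python
import numpy as np

def d_parameter(energy, c_kll):
    """Distance (in eV) between the most positive maximum and the most
    negative minimum of the first derivative of a C KLL spectrum."""
    d = np.gradient(np.asarray(c_kll, dtype=float), energy)
    return abs(energy[np.argmax(d)] - energy[np.argmin(d)])

# Synthetic C KLL lineshape, for illustration only: for a Gaussian the
# derivative extrema sit at mu +/- sigma, so D = 2 * sigma.
energy = np.linspace(240.0, 300.0, 1201)  # kinetic energy axis (eV)
mu, sigma = 271.0, 7.0
spectrum = np.exp(-0.5 * ((energy - mu) / sigma) ** 2)
d = d_parameter(energy, spectrum)  # ~14 eV for this synthetic peak
```

On measured spectra, the D value is compared against reference values for sp2- and sp3-bonded carbon to classify the layer, as done in [39].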
As explained in much detail in the cited papers [40,41], there was some experimental evidence, such as the XRD reflection peaks as well as the values of the microhardness and lattice parameter in the transformed and untransformed zones, which suggested the presence of a net flow of nitrogen from the untransformed N-supersaturated γs zones to γ along the precipitation process. Therefore, the XPS and AES techniques could be used to establish final evidence of this phenomenon. As can be seen from Figure 26a, the grain size of γs is about 100 µm, while the dimension of the transformed areas is much smaller, at about 10 µm. A traditional XPS apparatus is not adequate to study the chemical composition of the transformed zones with sufficient resolution, because it can analyze only surface areas between 0.1 and 1 mm. For this reason, it was necessary to use scanning photoelectron microscopy (SPEM) operating in both imaging and spectroscopy modes. Indeed, this type of analysis can use an X-ray microprobe with a diameter of less than 100 nm. By using the SPEM technique, it was possible to determine the chemical composition and spatial distribution of the elements in the lamellae and interlamellar spaces.
In Figure 27, a spatially resolved XPS image of the transformed zone before and after the topographical correction is shown. The information obtained from these images and from the microscopy was in good agreement with traditional XPS measurements. From the SPEM images of the Cr 3p signal, it was found that in the transformed zone Cr is concentrated in the lamellae, whereas it is uniformly distributed at low concentration in the untransformed region. Conversely, the Fe 3p images revealed an Fe enrichment in the untransformed zone and an impoverishment in the lamellae. These analyses indicate a migration of Cr, which is mainly accumulated in the Cr2N precipitates, across the interface between γ and γs. Furthermore, from the Auger spectra shown in Figure 28, the Cr/N atomic ratio was calculated: it was found to be 2.9 and 5.9 for the transformed and untransformed zones, respectively. This result confirms the nitrogen enrichment of the transformed zones during the heat treatment. Moreover, another phenomenon was also explained: from these analyses it was possible to hypothesize that the precipitation of Cr2N takes place as long as the flow of nitrogen from the untransformed to the transformed area is present.
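The Cr/N atomic ratio quoted above follows the standard quantification scheme for Auger (or XPS) data: each background-subtracted peak area is normalized by a relative sensitivity factor before the ratio is taken. A minimal sketch, in which all peak areas and sensitivity factors are illustrative placeholders rather than values from the study:

```python
def atomic_ratio(area_a, rsf_a, area_b, rsf_b):
    """Atomic ratio n_A/n_B from background-subtracted peak areas,
    each normalized by its relative sensitivity factor (RSF)."""
    return (area_a / rsf_a) / (area_b / rsf_b)

# Illustrative numbers only: chosen so Cr/N comes out at the reported
# values of 2.9 (transformed) and 5.9 (untransformed zones).
cr_n_transformed = atomic_ratio(5800.0, 2.0, 1000.0, 1.0)     # -> 2.9
cr_n_untransformed = atomic_ratio(11800.0, 2.0, 1000.0, 1.0)  # -> 5.9
print(cr_n_transformed, cr_n_untransformed)
```

In practice the areas come from the measured Cr and N peaks and the RSFs from the instrument's sensitivity-factor library.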
Finally, when the γ and γs zones have the same concentration of N, the precipitation process stops, even if not all the cells of the steel have transformed.

Graphene on Polycrystalline Metals

The last few decades of materials science will be remembered as the years of the graphene revolution. Although the theoretical predictions can be traced back to the 19th century, experimental evidence arrived only in 2004. After that, the scientific community engaged in a continuous race to discover new fields of application in order to exploit the full potential of this 2D material. At the same time, it was essential to develop an industrial method of synthesis that could guarantee large-scale production of graphene. Recently, research into the development of microelectronic devices, transparent conductive films and, in general, different types of graphene-based sensors has focused on the growth of graphene via chemical vapor deposition (CVD) on polycrystalline metal substrates. These substrates act as excellent catalysts for the epitaxial growth of graphene. Some of them are also cheap and easily removable when it is necessary to transfer the graphene layer onto the device where it has to operate [42,43]. Numerous studies, extensively reviewed in [42][43][44][45], have been dedicated to the growth and characterization of graphene on various metals. Among the many analytical techniques for graphene characterization, the most attractive are Raman spectroscopy, atomic force and transmission electron microscopies, XPS and AES. However, in many papers, including those on XPS, only the C 1s photoemission spectrum has been used for graphene characterization, even though it does not allow the main peaks of graphene and graphite to be differentiated [44,46] without angle-resolved analysis of the low-intensity σ and π bands accompanying the main peak [45].
A more useful and easier approach is the unequivocal identification of graphene from the analysis of the C KVV spectrum combined with the main photoemission peaks of the substrate and C 1s [46]. This approach, combined with Raman spectroscopy, allows information to be obtained on the uniformity of the graphene layer over a large area. In the same manner, these analyses permit the determination of the graphene thickness, which can often differ from a monolayer. These parameters, as well as the morphology and the thickness, depend on the growth mechanism of the graphene. In the case of the CVD technique, two different growth mechanisms can take place: the decomposition of a hydrocarbon gas at high temperature, or the segregation of C atoms on the metal surface during the cooling phase. For example, in the study in [47], graphene was synthesized on substrates of various polycrystalline metals. The growth was carried out by the CVD technique in a CH4-H2 gas mixture at 1000 °C, with different times of exposure to the gas: 2, 4 and 6 min for the Cu substrate, and only 2 min for the Ni-Cu alloy (20 wt.% Cu) and the pure Ni film on a Si substrate. A preliminary test of graphene quality was made by Raman spectroscopy. First, the degree of disorder of the deposited films can be estimated from the intensity of the D-band (1350 cm−1). Then, the intensity of the G'-band (2700 cm−1), the typical signal of graphene, was compared with that of the G-band (1582 cm−1), whose relative strength reflects the presence of graphite or a multilayer graphene system. The Raman spectra of graphene deposited on Cu foils are shown in Figure 29. As observed from the value of the IG'/IG ratio, the sample exposed for 6 min to the gas mixture at 1000 °C appeared to be the most promising. This result was also confirmed by photoemission measurements.
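The two Raman figures of merit used above can be sketched as a small screening function. This is a sketch under stated assumptions: the peak intensities are hypothetical, and the interpretation of the ratios (low ID/IG means low disorder; high IG'/IG suggests mono/few-layer graphene) is a common rule of thumb, not a threshold taken from the study.

```python
def raman_screen(i_d, i_g, i_g_prime):
    """Two crude figures of merit from Raman peak intensities:
    - i_d / i_g: disorder level (lower means better crystalline quality)
    - i_g_prime / i_g: high values suggest mono/few-layer graphene,
      low values suggest multilayer or graphitic carbon."""
    return i_d / i_g, i_g_prime / i_g

# Hypothetical intensities for the D (1350 cm^-1), G (1582 cm^-1)
# and G' (2700 cm^-1) bands of one sample.
disorder, layer_ratio = raman_screen(i_d=0.05, i_g=1.0, i_g_prime=2.4)
print(disorder, layer_ratio)
```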
Because it is not possible to distinguish between graphite and graphene from the C 1s photoemission spectra (both peaks are positioned at a BE of about 284.5 eV), the Auger spectra of C KLL were also acquired. In fact, from the calculation of the D parameter, i.e., the distance between the absolute maximum and the absolute minimum of the first derivative of the C KLL spectrum [39], it is possible to identify the presence of graphene [46]. Therefore, the value of the D parameter was determined from the C KLL spectra induced by an X-ray source (XAES) and then compared with the same parameter obtained using excitation with an electron gun (AES). Typical spectra of the C 1s and C KLL regions are shown in Figure 30, whereas all the results of the XPS characterization are summarized in Tables 2-4.
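The D parameter as defined above (separation between the maximum and the minimum of the first derivative of the C KLL spectrum) is straightforward to compute numerically. A minimal sketch, using a synthetic Gaussian line shape as a stand-in for a measured spectrum; real data would of course come from the electron analyzer:

```python
import numpy as np

def d_parameter(energy_ev, counts):
    """Energy separation (eV) between the maximum and the minimum of
    the first derivative of a C KLL spectrum."""
    deriv = np.gradient(counts, energy_ev)
    return abs(energy_ev[np.argmin(deriv)] - energy_ev[np.argmax(deriv)])

# Synthetic line shape just to exercise the function.
e = np.linspace(240.0, 290.0, 501)           # kinetic energy grid, eV
spectrum = np.exp(-((e - 268.0) / 6.0) ** 2)
print(d_parameter(e, spectrum))
```

On real spectra one would typically smooth the data before differentiating, since noise is amplified by the derivative.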
From Table 3, it is possible to conclude that the best graphene sample was obtained with the 6 min deposition: the obtained values of the D parameter were DXAES = 14.1 eV (diamond-like) and DAES = 22.1 eV (graphitic) (see Table 4). As explained in detail in the previous work [46], these values definitely indicate the presence of graphene. From the XPS measurements at grazing angle, it was also estimated that the thickness of the graphene film was equal to a few monolayers. In this way, a further example of the application of surface spectroscopic techniques demonstrated their versatility and potential in recent fields of scientific research and industrial development, such as the large-scale production of graphene.

Summary

The importance and potential of ESCA techniques for the exploration of metallic surfaces was illustrated by reviewing the main principles of these techniques and seven experimental cases from our research. The main techniques comprising ESCA, i.e., X-ray photoemission and Auger electron spectroscopies, were successfully employed for the investigation of different metallic surfaces and of the modifications induced by different treatments or operating conditions. In addition, the high-resolution SPEM technique was applied for the exploration of submicrometric features of the surface chemical composition of some of the investigated materials.
Various phenomena on metallic surfaces were revealed: the formation of impurity defects on collection coins; the microchemical composition and corrosion of stainless steel coated with Cr and Ti nitrides; modifications of the microchemical composition in biphasic Ni-based superalloys; carbon diffusion at high temperature at the interface of a Ti6Al4V/SiCf composite; the microchemical inhomogeneity of liquid PbBi alloy; the surface modification of austenitic steels by plasma carburizing and nitrogen migration at high temperature; and the influence of polycrystalline metal substrates (Cu, Ni and a NiCu alloy) on the growth of graphene. One more recent example of an advantageous ESCA application, for the study of Cr segregation in martensitic stainless steel, is reported in the present issue of this journal [48].
Preparation and Hepatoprotective Activities of Peptides Derived from Mussels (Mytilus edulis) and Clams (Ruditapes philippinarum)

Low molecular weight (<5 kDa) peptides from mussels (Mytilus edulis) (MPs) and peptides from clams (Ruditapes philippinarum) (CPs) were prepared through enzymatic hydrolysis by proteases (dispase, pepsin, trypsin, alcalase and papain). Both the MPs and the CPs showed excellent in vitro scavenging ability against free radicals, including OH, DPPH and ABTS, in the concentration range of 0.625-10.000 mg/mL. By comparison, the MPs hydrolyzed by alcalase (MPs-A) and the CPs hydrolyzed by dispase (CPs-D) had the highest antioxidant activities. Furthermore, MPs-A and CPs-D exhibited protective capabilities against oxidative damage induced by H2O2 in HepG2 cells in the concentration range of 25-800 μg/mL. Meanwhile, compared with the corresponding indicators of the negative control (alcohol-fed) mice, lower contents of hepatic MDA and serum ALT and AST, as well as higher activities of hepatic SOD and GSH-PX, were observed in experimental mice treated with MPs-A and CPs-D. The present results clearly indicate that Mytilus edulis and Ruditapes philippinarum are good sources of hepatoprotective peptides.

Introduction

Mussels (Mytilus edulis) and clams (Ruditapes philippinarum) are two low-cost economical marine bivalve shellfish. According to statistics from the Food and Agriculture Organization of the United Nations (FAO) [1], the annual global production of Mytilus edulis and Ruditapes philippinarum reached 1108.3 and 4266.2 million tons in 2020, respectively. Currently, most Mytilus edulis and Ruditapes philippinarum are sold as either fresh or dried products. Hence, there is an urgent need to develop high-value-added products and further increase their economic value. Seafood such as bivalve shellfish is a delicacy consumed all over the world.
Meanwhile, they may also serve as a rich source of health-beneficial ingredients, including proteins, peptides, essential amino acids, omega-3 long-chain polyunsaturated fatty acids, minerals and vitamins [2][3][4]. Many studies have demonstrated that the striped mussel (Mytilus edulis) contains a high percentage of protein (60-65% of dry weight) and essential amino acids (35-45% of total amino acids) [5,6]. The striped clam (Ruditapes philippinarum) also contains a high percentage of protein (65-75% of dry weight) and essential amino acids (35-45% of total amino acids) [7,8]. Among the above-mentioned bioactive ingredients, proteins have so far shown the strongest potential for the commercial exploitation of functional foods or dietary supplements. To date, studies of bioactive peptides, such as antioxidant peptides, ACE-inhibitory peptides, anticancer peptides and anticoagulant peptides, from Mytilus edulis and Ruditapes philippinarum hydrolyzed by various proteases have been widely reported [9][10][11]. For example, Wang et al. reported that an antioxidant peptide was successfully isolated from the hydrolysate of blue mussels (Mytilus edulis) by neutrase, and it displayed good radical scavenging activity; in a linoleic acid model system, the peptide had a significant anti-lipid-peroxidation effect [9]. Qiao et al. reported that an anticoagulant peptide was produced from the hydrolysate of Mytilus edulis by trypsin; its excellent anticoagulant activity was probably attributable to its high-affinity interaction with thrombin [10]. As for clams (Ruditapes philippinarum), Song et al. reported that peptides (RBPs) with higher ACE-inhibition function were produced by fermentation of Ruditapes philippinarum inoculated with Bacillus natto [11]. Furthermore, by decreasing the proportion of Firmicutes and Bacteroidetes and increasing the relative abundance of certain genera, such as Ruminococcaceae_UCG-014, RBPs could improve the intestinal microbiota.
In addition, Kim et al. reported that a novel anticancer peptide extracted from clams (Ruditapes philippinarum) by chymotrypsin could effectively induce apoptosis in prostate, breast and lung cancer cells, but not in normal hepatocytes [12]. However, to the best of our knowledge, the preparation of hepatoprotective peptides from Mytilus edulis and Ruditapes philippinarum has not been reported. It has been widely reported that acute ethanol administration can increase oxidative stress, decrease motor coordination, decrease the respiratory rate and impair protein metabolism [13]. In particular, excessive alcohol intake can cause liver injury, which may gradually worsen and possibly lead to alcoholic liver disease (ALD). ALD is generally characterized by liver injury and infiltration reactions involving numerous inflammatory cytokines, which can lead to more serious liver disease or pathological evolution [14]. With aggravation of the disease, ALD may further lead to serious liver disease-related morbidity and mortality [15]. Therefore, nutritional intervention in the early stage of acute liver injury is of crucial significance for health. So far, researchers have successfully isolated and prepared hepatoprotective peptides from red shrimp, crucian carp, freshwater clam and other raw materials, which can effectively inhibit acutely induced liver injury [16][17][18][19]. For example, Jiang et al. reported that, by inhibiting NF-κB signaling responses and reducing the expression of the inflammatory factors IL-1β, IL-6, IFN-γ and TNF-α, low molecular weight peptides (SCHPs-F1) from red shrimp (Solenocera crassicornis) head significantly ameliorated cyclophosphamide-induced hepatotoxicity [19]. Shi et al.
reported that a peptide with the sequence Gly-Leu-Hyp-Gly-Glu-Arg (GLpGER), extracted from the swim bladder hydrolysate of crucian carp (Carassius auratus), could alleviate acute alcoholic liver injury: it restored liver alcohol dehydrogenase (ADH) activity, maintained the normal morphology of hepatocytes and decreased serum alanine aminotransferase and aspartate aminotransferase levels [16]. Je et al. reported that serum markers of liver injury in rats, including alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase and lactate dehydrogenase, were significantly (p < 0.05) increased after alcohol administration for 4 weeks. Nevertheless, pepsin-hydrolyzed bioactive peptides derived from the pectoral fin of salmon (Oncorhynchus) resulted in a significant (p < 0.05) reduction in the above indicators. The results indicated that such peptides can provide a hepatoprotective effect on the liver damaged by alcohol, which was also confirmed by the evaluation of liver histopathology [20]. Currently, in vitro models of cell injury and animal models of acute liver injury are widely used in studies of active peptides with hepatoprotective effects. Since hydrogen peroxide (H2O2) is readily converted to hydroxyl radicals, one of the most destructive free radicals, it is an important cause of intracellular oxidative damage [21]. Moreover, the hydroxyl radical is generated from nearly all sources of oxidative stress and can diffuse freely in and out of cells and tissues [22]. Therefore, H2O2 can trigger apoptosis in hepatocytes, and many researchers choose it to establish a human hepatocellular carcinoma (HepG2) cell injury model.
In addition, experimental animal models of alcoholic liver injury, particularly rodent models, have been widely used to mimic human alcoholic liver injury because they are suitable for most experiments and have the advantages of being economical and of shortening experimental periods compared with other animal models, although rodents (mainly mice and rats) cannot exhibit the full spectrum of human alcoholic liver disease as primates do [14,23,24]. Consequently, many researchers choose mice as experimental animals to establish a model of alcohol-induced acute liver injury. In general, the simultaneous use of the above two models can more effectively evaluate the hepatoprotective activity or potential mechanism of functional ingredients such as bioactive peptides, which further facilitates the preparation and development of hepatoprotective peptides. Given this, the in vitro and in vivo hepatoprotective effects of peptides from Mytilus edulis and Ruditapes philippinarum were evaluated in this study. The obtained experimental data will provide a theoretical basis for the utilization of hepatoprotective peptides from Mytilus edulis and Ruditapes philippinarum as novel ingredients for value-added nutritious foods, and will effectively improve the economic value of these low-cost marine bivalve shellfish.

As shown in Figure 1, CPs from dispase and pepsin had the strongest free radical scavenging activity against OH. By contrast, CPs from dispase and papain had the strongest free radical scavenging activity against DPPH, and CPs from dispase and alcalase had the strongest free radical scavenging activity against ABTS. The above results of the OH, DPPH and ABTS radical scavenging experiments clearly indicated that the MPs hydrolyzed by alcalase and the CPs hydrolyzed by dispase had the highest antioxidant activities. Many studies have reported that antioxidant activity correlates with hepatoprotective activity [31,32].
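The scavenging percentages behind dose-response curves such as those in Figure 1 are usually computed from absorbance readings. A minimal sketch of one commonly used form of the formula; the absorbance values are hypothetical, and the exact handling of blanks and controls differs between the OH, DPPH and ABTS assays:

```python
def scavenging_percent(a_control, a_sample, a_blank=0.0):
    """Radical scavenging activity (%) in a commonly used form:
    (1 - (A_sample - A_blank) / A_control) * 100."""
    return (1.0 - (a_sample - a_blank) / a_control) * 100.0

# Hypothetical absorbance readings for one peptide concentration
print(scavenging_percent(a_control=0.80, a_sample=0.20))  # -> 75.0
```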
Therefore, the MPs hydrolyzed by alcalase (MPs-A) and the CPs hydrolyzed by dispase (CPs-D) were chosen to evaluate their cytoprotective effects on HepG2 cells damaged by H2O2 oxidation. H2O2, as a commonly used oxidant, has been used to induce oxidative stress leading to cell death in a variety of experimental models, such as cell models [33]. In this study, cell viability was determined by the methyl thiazolyl tetrazolium (MTT) assay, which was used to evaluate the degree of cell death. As shown in Figure 2A, the addition of H2O2 to the cell culture medium caused cell death. H2O2 at concentrations ranging from 800 μmol/L to 1600 μmol/L induced a decrease in cell viability in a concentration-dependent manner. In particular, at an H2O2 concentration of 1000 μmol/L, an appropriate cell viability of 50% was obtained. Therefore, 1000 μmol/L H2O2 was chosen to induce oxidative damage in HepG2 cells in the following experiments. In addition, HepG2 cells were exposed to different concentrations of MPs-A and CPs-D in order to evaluate whether the peptides could damage the cells. As shown in Figure 2B, it was obvious that MPs-A and CPs-D did not cause any apparent cytotoxic effects on HepG2 cells at concentrations ranging from 25 μg/mL to 800 μg/mL.
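The selection of 1000 µmol/L as the damage concentration amounts to picking the tested dose whose measured viability lies closest to 50%. A sketch of that selection step, with hypothetical MTT dose-response data (the viability numbers below are illustrative, not those from Figure 2A):

```python
def dose_nearest_half_viability(doses, viabilities):
    """Return the tested dose whose measured viability (%) is closest
    to 50%."""
    return min(zip(doses, viabilities), key=lambda dv: abs(dv[1] - 50.0))[0]

# Hypothetical MTT results over the 800-1600 umol/L range
doses = [800, 1000, 1200, 1400, 1600]
viab = [68.0, 52.0, 38.0, 27.0, 15.0]
print(dose_nearest_half_viability(doses, viab))  # -> 1000
```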
Therefore, this concentration range of MPs-A and CPs-D was used in the subsequent experiments.

Protective Effects of MPs-A and CPs-D on H2O2-Induced Oxidative Damage in HepG2 Cells

The cytoprotection against H2O2-induced oxidative damage exerted by MPs-A and CPs-D is evaluated in this section. As shown in Figure 3, cell death induced by H2O2 was effectively inhibited by the addition of MPs-A and CPs-D. Both types of peptides showed excellent in vitro hepatoprotective activity in the concentration range of 25-800 μg/mL. The different tendencies between MPs-A and CPs-D were closely related to the molecular weight, the amino acid composition and the arrangement of the peptide chains, among other factors. By comparison, 25 μg/mL MPs-A and 800 μg/mL CPs-D exhibited the strongest hepatoprotective activity. Compared with in vitro cell experiments, in vivo biological evaluation can give more accurate information. Therefore, further animal experimentation is required to comprehensively evaluate the in vivo hepatoprotective activity of MPs-A and CPs-D.

Hepatoprotective Effects of MPs-A and CPs-D on Acute Alcohol-Induced Liver Injury in Mice

Effects of MPs-A and CPs-D on Body Weight Gain, Liver Index and Serum Indexes

As shown in Table 1, body weight gain, liver index and serum indexes were measured to appraise the protective effect of MPs-A and CPs-D in alcohol-damaged mice. It was obvious that the liver index and the levels of serum indexes, including alanine transaminase (ALT), aspartate transaminase (AST), total cholesterol (TC) and triglyceride (TG), induced by alcohol (13 mL/kg BW) in the alcohol control (AC) group were significantly higher than those in the water control (WC) group, whereas the body weight gain in the AC group was lower than that in the WC group. The treatment with MPs-A and CPs-D significantly inhibited the alcohol-induced increases in serum ALT, AST, TC and TG, and also increased the body weight gain. Based on the values of the above-mentioned indicators, the high-dose MPs-A group (MH, 600 mg/kg BW MPs) was found to exhibit excellent hepatoprotective effects among the groups treated with different doses of MPs-A. Similarly, the high-dose CPs-D group (CH, 600 mg/kg BW CPs) was found to exhibit excellent hepatoprotective effects among the groups treated with different doses of CPs-D.

Table 1. Effects of peptides from mussels (Mytilus edulis) hydrolyzed by alcalase (MPs-A) and clams (Ruditapes philippinarum) hydrolyzed by dispase (CPs-D) on body weight gain at the 10th day, liver index (mg/g) and serum indexes.

Effects of MPs-A and CPs-D on Hepatic MDA, GSH-PX and SOD

As shown in Figure 4, malondialdehyde (MDA) content and glutathione peroxidase (GSH-PX) and superoxide dismutase (SOD) activities were measured to evaluate the protective effect of MPs-A and CPs-D against alcohol-induced injury in the liver.
Obviously, the content of MDA in the alcohol control (AC) group was higher than that in the water control (WC) group. The treatment with MPs-A and CPs-D significantly inhibited the alcohol-induced increase in hepatic MDA, and also increased the activities of hepatic GSH-PX and SOD. Based on the values of the above-mentioned indicators, the high-dose MPs-A group (MH, 600 mg/kg BW MPs) was found to exhibit excellent hepatoprotective effects among the groups treated with different doses of MPs-A. Similarly, the high-dose CPs-D group (CH, 600 mg/kg BW CPs) was found to exhibit excellent hepatoprotective effects among the groups treated with different doses of CPs-D.

Liver Histological Analysis

The hepatoprotective effect of MPs-A and CPs-D was further confirmed by histopathological examination. The liver histopathological sections (scale bar: 100 μm) stained with hematoxylin and eosin (H&E) (Figure 5A) indicated that the liver cells of the water control (WC) group were structurally intact and the hepatic lobules were discernible. Nevertheless, hepatocellular swelling, loss of cell boundaries, fatty accumulation and an explosive accumulation of inflammatory factors in the hepatic lobules were observed in the alcohol control (AC) group (fatty accumulation and lobular inflammation are marked with black arrows and circles, respectively). As expected, the administration of MPs-A and CPs-D showed effective protection against alcohol-induced liver injuries in a dose-dependent manner, tending to ameliorate hepatic steatosis by reducing hepatocyte edema, inflammatory cell infiltrates and fat droplets in liver tissue. The results of the liver histopathological sections (scale bar: 100 μm) stained with Oil Red O (ORO) (Figure 5B) were consistent with the results of the H&E staining (lipid droplets are marked with black arrows). Numerous lipid droplets were observed in the alcohol control (AC) group, and the administration of MPs-A and CPs-D reduced the lipid droplets in a dose-dependent manner. Based on the above observations, the high-dose MPs-A group (MH, 600 mg/kg BW MPs-A) was found to exhibit excellent hepatoprotective effects among the groups treated with different doses of MPs-A.
Similarly, the high-dose CPs-D group (CH, 600 mg/kg BW CPs-D) was found to exhibit excellent hepatoprotective effects among the groups treated with different doses of CPs-D.

Discussion

Numerous pieces of evidence have shown that oxidative stress plays a major role in acute alcohol-induced liver injury through its regulation of lipid, protein, DNA and RNA levels and its contribution to cellular dysfunction [34]. In other words, biological compounds with antioxidant activities provide a protective effect on the liver against free radical- and alcohol-induced injuries. Therefore, the peptides from mussels (Mytilus edulis) (MPs) and clams (Ruditapes philippinarum) (CPs) hydrolyzed by five proteases (dispase, pepsin, trypsin, alcalase and papain) were prepared to evaluate their antioxidant and free radical scavenging activities in this study, which served as a crucial indicator of their underlying hepatoprotective activity. The results of the hydroxyl radical (OH), 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) radical scavenging experiments clearly indicated that the MPs and CPs exhibited excellent free radical scavenging activities in a dose-dependent manner. Similarly, many studies have shown that bioactive peptides of marine origin exhibit significant free radical scavenging activities [35][36][37][38][39][40]. For example, He et al. reported that the false abalone (Volutharpa ampullacea perryi) was hydrolyzed with different enzymes to extract antioxidant peptides; the trypsin hydrolysates had the best biological activity and the strongest scavenging ability for ABTS radicals compared with those from pepsin, alcalase, neutrase and flavourzyme [38]. In addition, Upata et al. reported that an enzymatic protein hydrolysate from jellyfish (Lobonema smithii) had high free radical scavenging activities, and the jellyfish hydrolysate produced using flavourzyme had the highest antioxidant activity [39].
According to their catalytic action on the peptide chain, proteases are classified as endopeptidases and exopeptidases. Exopeptidases are so called because their site of action is restricted to the ends of peptide chains; in other words, they remove terminal amino acids only [40,41]. By contrast, endopeptidases cleave proteins at certain points along the chain and do not usually attack the ends. In particular, alcalase and dispase are two typical endopeptidases that have been extensively used in the preparation of protein hydrolysates with strong antioxidant activities, owing to their broad specificity [25,42]. Indeed, the peptides hydrolyzed by alcalase and dispase exerted the strongest antioxidant activities in this study. Similarly, many studies have shown that bioactive peptides of marine origin hydrolyzed by alcalase and dispase exhibit superior antioxidant activities [9,43,44]. For example, Wang et al. reported that, compared with pepsin-hydrolyzed and dispase-hydrolyzed scallop (Patinopecten yessoensis) protein hydrolysates (SPH), an electron spin resonance (ESR) assay indicated that the SPH hydrolyzed by alcalase had the best free radical scavenging effect, based on a higher ratio of antioxidant amino acids (35.25%) and better solubility [43]. In addition, Wang et al. reported that peptides prepared from blue mussels (Mytilus edulis) were hydrolyzed by four proteases (alcalase, papain, pepsin and dispase), and the dispase-hydrolyzed peptides displayed the highest DPPH radical scavenging activity in comparison [9]. H2O2 can be transformed into hydroxyl radicals and oxygen free radicals, which have toxic effects on hepatocytes [45,46]. Therefore, the model of H2O2-induced oxidative damage in hepatocytes is often used for the preliminary evaluation of hepatoprotective active substances.
In this study, based on the above antioxidant results, MPs hydrolyzed by alcalase (MPs-A) and CPs hydrolyzed by dispase (CPs-D) were chosen to evaluate their cytoprotective effects against H2O2-induced oxidative damage in human hepatocellular carcinoma (HepG2) cells. The viability results clearly indicated that H2O2-induced cell damage was significantly alleviated by the addition of MPs-A or CPs-D; both peptides showed excellent in vitro hepatoprotective activity over the concentration range of 25-800 µg/mL. Many studies have likewise shown that bioactive peptides of marine origin protect HepG2 cells against damage [47][48][49]. For example, Xu et al. prepared a hydrolysate from Asian clams (Corbicula fluminea) using trypsin; the peptide component separated from the low-molecular-weight fraction (<5 kDa) showed a significant protective effect on HepG2 cells with H2O2-induced oxidative damage, a positive effect mainly attributed to its radical scavenging capability [50]. In addition, Hu et al. reported that antioxidant peptides from grass carp (Ctenopharyngodon idellus) scale gelatin protected HepG2 cells against H2O2-induced oxidative damage, significantly promoting HepG2 cell growth and inhibiting apoptosis [51]. In recent years, the model of alcohol-induced liver injury has been widely used to evaluate the hepatoprotective effect of active substances. It is widely considered that alcoholic liver injury is mediated by a variety of factors, including accumulation of fat, oxidative damage, proinflammatory cytokines, increased collagen deposition and activation of various nonparenchymal cells [52].
Therefore, the hepatoprotective effects of MPs-A and CPs-D on acute alcohol-induced liver injury in mice were appraised in this study through the analysis of biochemical indicators in serum and liver, as well as the observation of hepatic histopathological sections. This research confirmed that alcohol-fed mice (AC group) did not gain as much weight as the control mice (WC group), while alcohol feeding increased the liver index. Strikingly, MPs-A and CPs-D feeding effectively increased body weight gain and decreased the liver index. Many studies have likewise shown that bioactive peptides of marine origin increase body weight gain and reduce the liver index [53][54][55]. For example, Park et al. reported that, compared with the body weight gain of the negative control mice treated with oral ethanol, the corresponding indicator was increased in mice treated with oral krill (Euphausia superba) protein hydrolysates, indicating that such hydrolysates may protect against alcohol-induced toxicity [54]. In addition, Gao et al. reported that the liver index of the negative control mice treated with oral alcohol was significantly higher than that of the water control group; in contrast, a high dose of oral peptides from oysters (Crassostrea gigas) hydrolyzed by alcalase reduced this alcohol-induced increase in liver index [55]. It has been widely accepted that the body protects itself from the oxidative stress induced by heavy drinking through enzymatic antioxidant systems, which are closely related to glutathione peroxidase (GSH-PX), superoxide dismutase (SOD), aspartate aminotransferase (AST) and alanine aminotransferase (ALT).
GSH-PX exploits the thiol-reducing capacity of GSH to reduce oxidized lipids and proteins, thus contributing to H2O2 catabolism and detoxification of endogenous metabolic peroxides and hydroperoxides [56,57]. SOD, in turn, inhibits the destruction of cell structure by free radicals because it terminates free radical chain reactions by scavenging superoxide radicals; hence, it is one of the vital antioxidant enzymes in vivo [58]. As for ALT and AST, they are released into the blood in large quantities and are easily detected in serum when liver damage occurs, making them crucial indicators of the degree of liver injury [59]. Therefore, AST, ALT, GSH-PX and SOD activities are important indicators of oxidative stress in the progress of acute alcohol-induced liver injury [13,60]. In the present study, serum ALT and AST activities were increased in the alcohol-intoxicated mice, and supplementation with MPs-A and CPs-D markedly reduced these alcohol-induced elevations. Meanwhile, hepatic SOD and GSH-PX activities were significantly reduced in the alcohol-intoxicated mice, and treatment with MPs-A and CPs-D effectively upregulated these two indicators. Many studies have likewise shown that bioactive peptides of marine origin exert hepatoprotective effects by regulating the activities of these enzymes [61][62][63]. For example, Wang et al. reported that intragastric administration of alcohol increased the activities of AST and ALT and decreased the activities of SOD and GSH-PX, and that these changes were reversed by co-administration of oyster (Crassostrea talienwhanensis) peptide (<3500 Da) [61]. In addition, Li et al. reported that tilapia (Oreochromis spp.) skin collagen polypeptide effectively decreased the serum ALT and AST levels elevated by oral D-galactose and increased the activities of hepatic SOD and GSH-PX [63].
In addition to AST, ALT, GSH-PX and SOD, malondialdehyde (MDA) is also an important indicator of liver injury. MDA is a major reactive aldehyde resulting from biofilm peroxidation [59] and has been generally used as an indicator of tissue damage, such as acute liver injury, arising through a series of chain reactions [64]. In addition, liver injury can contribute to intrahepatic diffusion of fatty acids, resulting in increased triacylglycerol (TG) and total cholesterol (TC) content in blood [31]. The present study indicated that treatment with MPs-A and CPs-D effectively downregulated the serum levels of TG and TC and the hepatic MDA, which further confirmed that MPs-A and CPs-D could effectively inhibit fatty acid oxidation in the liver. Many studies have likewise shown that bioactive peptides of marine origin can inhibit intrahepatic diffusion of fatty acids and reduce hepatic MDA content [61,65]. For example, Lin et al. prepared peptides from salmon (Oncorhynchus keta) skin under the catalysis of a complex protease (3000 U/g protein: 7% trypsin, 65% papain and 28% alkaline proteinase); such peptides exhibited hepatoprotective effects on acute alcohol-induced liver injury in mice, including reducing TC, TG and MDA levels in liver and serum [65]. In addition, Wang et al. reported that oyster (Crassostrea talienwhanensis) peptide (<3500 Da) significantly reduced MDA and TG levels compared with alcohol-fed mice, further suggesting that the peptides protect the liver by inhibiting oxidative stress and the inflammatory response [61]. The serum and liver indicator measurements were also supported by histopathological observations.
The liver histopathological sections stained with hematoxylin and eosin (H&E) and Oil Red O (ORO) showed that the liver tissues of alcohol-fed mice (AC group) exhibited severe pathological changes, such as extreme cellular swelling, loss of cell boundaries, inflammatory cell infiltration and fat droplet accumulation in the hepatic lobule, demonstrating severe liver injury caused by heavy alcohol intake. The administration of MPs-A and CPs-D significantly alleviated these changes in a dose-dependent manner. Many studies have likewise shown that bioactive peptides of marine origin can effectively alleviate pathological and histological changes in the liver [55,66]. For example, Gao et al. reported that oral administration to mice of peptides obtained from oyster (Crassostrea gigas) muscle hydrolyzed by alcalase effectively alleviated, in a dose-dependent manner, the pathological changes caused by alcohol (disordered liver cords, swollen hepatocytes, severe fat droplet accumulation and inflammatory cell infiltration) [55]. In addition, Bkhairia et al. reported degenerative changes (sinusoidal congestion, hemorrhages, confluent necrosis and massive inflammatory cell infiltration around the perivenular area) in paracetamol-induced hepatic damage in rats; peptides obtained from golden grey mullet (Liza aurata) hydrolyzed by an endogenous alkaline enzyme significantly improved these changes [66]. Many conjectures have been proposed to explain the mechanisms of alcohol-induced hepatocyte injury, which also helps to clarify the hepatoprotective activities of various functional components. The most widely accepted is the oxidative stress theory, which is implicated in numerous diseases including alcoholic liver disease.
During alcohol metabolism, reactive oxygen species, hydroxyethyl radicals and nitric oxide (NO) can contribute to the oxidative stress associated with alcohol-induced liver injury [65,67]. Moreover, metabolizing alcohol in the liver results in a series of abnormal physiological states, including an imbalanced redox state, oxidative stress in the endoplasmic reticulum, and abnormal lipid metabolism of hepatocytes [58]. Consequently, swelling of hepatocytes, hepatic inflammation and fat droplet accumulation indicate hepatocyte injury by alcohol. The present results demonstrated that, compared with the corresponding indicators in negative control mice, higher contents of hepatic MDA, ALT and AST, as well as lower activities of hepatic SOD and GSH-PX, were observed in experimental mice treated with MPs-A and CPs-D. This clearly indicated that anti-oxidative stress may be involved in the hepatoprotective effect of MPs-A and CPs-D. Furthermore, serum TG and TC levels gradually increased after oral administration of alcohol, confirming that excessive alcohol consumption can disturb lipid metabolism. Taken together, the results of this study indicated that, by alleviating oxidative stress and lipid metabolism disturbance, MPs-A and CPs-D could effectively protect against acute alcoholic liver injury.

Animals

Seventy-two Kunming mice (18-22 g, male) were purchased from Liaoning Changsheng Biotechnology Co., Ltd. (Benxi, China). The animals were housed under standard environmental conditions (12 h:12 h light:dark cycle at 25 ± 2 °C) with free access to standard food pellets and tap water ad libitum throughout the experimental period. All mice were acclimatized for a week prior to the experiments and were cared for and treated humanely. At the end of the experiment, the mice were sacrificed by CO2 asphyxiation in a covered container attached to a CO2 tank.
In order to minimize the suffering and pain of the mice, the related experimental procedures were approved by the Animal Ethics Committee of Dalian Polytechnic University (DPU) and conducted in accordance with the Guidelines for Use and Care of Laboratory Animals of DPU.

Preparation of Peptides from Mussels (Mytilus edulis) and Clams (Ruditapes philippinarum) Hydrolyzed by Five Proteases

After separating the shells and meat by hand, the meats of mussels (Mytilus edulis) and clams (Ruditapes philippinarum) were crushed directly into minced meat. According to the results of our previous study [43], five proteases (pepsin, dispase, alcalase, trypsin and papain) were selected as hydrolytic enzymes. Briefly, distilled water adjusted (with HCl or NaOH) to the optimum pH for each protease (pepsin, 250 U/mg, pH 2.0; dispase, 50 kU/g, pH 7.0; alcalase, 200 kU/g, pH 8.0; trypsin, 4 kU/g, pH 8.0; papain, 2 kU/mg, pH 6.0) was added to the minced meat at a ratio of 1:3 (meat/water, w/v, g/mL). Then, pepsin, dispase, alcalase, trypsin or papain was added to the system at a ratio of 1:0.06 (meat/protease, w/w, g/g). The hydrolysis reaction was carried out at the optimum temperature (pepsin and trypsin, 37 °C; dispase, alcalase and papain, 50 °C) for 5 h. The reaction mixtures were then placed in boiling water for 10 min to inactivate the proteases and terminate the enzymatic hydrolysis. The obtained hydrolysates were cooled to room temperature and centrifuged at 4 °C and 5000× g for 15 min. The supernatants were collected and placed in dialysis bags (molecular weight cut-off of 5 kDa). Finally, the liquids in which the bags had been immersed were lyophilized. Thus, the peptides from mussels (Mytilus edulis) (MPs) and clams (Ruditapes philippinarum) (CPs) hydrolyzed by five proteases were obtained, which were stored at −80 °C until further use.
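The enzyme-specific conditions described above can be collected into a small lookup table. The sketch below (parameter values transcribed from the protocol; function and key names are illustrative, not from the original paper) computes the water and protease quantities for a batch of minced meat:

```python
# Hydrolysis conditions per protease as stated in the protocol:
# meat:water = 1:3 (w/v, g/mL), meat:protease = 1:0.06 (w/w), 5 h reaction.
PROTEASES = {
    "pepsin":   {"pH": 2.0, "temp_C": 37},
    "dispase":  {"pH": 7.0, "temp_C": 50},
    "alcalase": {"pH": 8.0, "temp_C": 50},
    "trypsin":  {"pH": 8.0, "temp_C": 37},
    "papain":   {"pH": 6.0, "temp_C": 50},
}

def hydrolysis_setup(meat_g: float, protease: str) -> dict:
    """Return the amounts and conditions for one hydrolysis batch."""
    cond = PROTEASES[protease]
    return {
        "water_mL": meat_g * 3,       # 1:3 meat/water (w/v)
        "protease_g": meat_g * 0.06,  # 1:0.06 meat/protease (w/w)
        "pH": cond["pH"],
        "temp_C": cond["temp_C"],
        "time_h": 5,
    }
```

For example, `hydrolysis_setup(100.0, "alcalase")` gives 300 mL of pH 8.0 water and 6 g of enzyme at 50 °C, matching the stated ratios.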
Antioxidant Activities of MPs and CPs

MPs and CPs were dissolved in deionized water to prepare solutions with concentrations of 0.625, 1.250, 2.500, 5.000 and 10.000 mg/mL. Ascorbic acid was used as the positive control [43].

OH radical scavenging activity (%) = (As − A)/(Ac − A) × 100

where As represents the absorbance of samples, Ac represents the absorbance of the reaction system without H2O2 and A represents the absorbance of the reaction system without samples.

DPPH Radical Scavenging Activity

The DPPH radical scavenging activity assay followed previously reported methods with slight modifications [68,69]. Briefly, 0.5 mL of MPs or CPs was mixed with DPPH (200 µM, 0.5 mL) and incubated in the dark at room temperature for 30 min. The absorbance was then read at 517 nm using a microplate reader (infinite M200, TECAN, Switzerland).

DPPH radical scavenging activity (%) = (A + Ae − As)/A × 100

where As represents the absorbance of samples, Ae represents the absorbance of samples with DPPH substituted by 95% ethanol and A represents the absorbance of the reaction system without any sample.

ABTS Radical Scavenging Activity

The ABTS radical scavenging activity assay followed previously reported methods with slight modifications [68,69]. Briefly, the ABTS radical reagent solution was prepared at 7 mM with potassium persulphate (2.45 mM). The mixture was incubated in the dark at room temperature for 16 h. The ABTS radical solution was then diluted in 5 mM phosphate buffered saline (PBS), pH 7.4, to an absorbance of 0.70 ± 0.02 at 734 nm. Then, 0.5 mL of MPs or CPs was mixed with the ABTS solution and incubated in the dark for 10 min. The absorbance was read at 734 nm using a microplate reader (infinite M200, TECAN, Switzerland).
ABTS radical scavenging activity (%) = (A − As)/A × 100

where As represents the absorbance of the reaction system with samples and A represents the absorbance of the reaction system without samples.

Cell Viability Assay

After the HepG2 cells reached 70-90% confluence, they were harvested using trypsin and seeded into 96-well plates for 24 h at a density of 1 × 10^4 cells/mL (100 µL per well). Subsequently, 50 µL of methyl thiazolyl tetrazolium (MTT) solution (5 mg/mL) was added to each well, and the plates were incubated in the dark at 37 °C. After 4 h, the supernatant was carefully discarded and 200 µL of dimethyl sulfoxide (DMSO) was added to each well. The plate was then shaken for 15 min at room temperature, and cell viability was estimated by reading the absorbance at 490 nm on a microplate reader (infinite M200, TECAN, Switzerland).

Cell viability (%) = (As − A)/(Ac − A) × 100

where As represents the absorbance of sample-treated cells, Ac represents the absorbance of non-treated cells and A represents the absorbance with no cells.

The mice were randomly divided into 9 groups of 8 mice each. The experiment lasted 10 days and was divided into two phases. In the first phase (up to day 7), the mice underwent daily intragastric administration of MPs-A or CPs-D. The doses for the 9 groups were as follows: WC (water control), isometric distilled water; AC (alcohol control), isometric distilled water; GC (GSH control), 150 mg/kg body weight (BW) GSH in distilled water; ML (low-dose MPs-A), 150 mg/kg BW MPs-A in distilled water; MM (medium-dose MPs-A), 300 mg/kg BW MPs-A in distilled water; MH (high-dose MPs-A), 600 mg/kg BW MPs-A in distilled water; CL (low-dose CPs-D), 150 mg/kg BW CPs-D in distilled water; CM (medium-dose CPs-D), 300 mg/kg BW CPs-D in distilled water; CH (high-dose CPs-D), 600 mg/kg BW CPs-D in distilled water.
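The absorbance formulas used in the OH, DPPH, ABTS and cell viability assays map directly to small helper functions. The following is a minimal sketch (variable names follow the text; the functions themselves are illustrative, not from the original paper):

```python
def oh_scavenging(a_s: float, a_c: float, a: float) -> float:
    """OH scavenging (%): a_s = sample, a_c = no H2O2, a = no sample."""
    return (a_s - a) / (a_c - a) * 100

def dpph_scavenging(a_s: float, a_e: float, a: float) -> float:
    """DPPH scavenging (%): a_s = sample, a_e = ethanol blank, a = no sample."""
    return (a + a_e - a_s) / a * 100

def abts_scavenging(a_s: float, a: float) -> float:
    """ABTS scavenging (%): a_s = with sample, a = without sample."""
    return (a - a_s) / a * 100

def cell_viability(a_s: float, a_c: float, a: float) -> float:
    """MTT viability (%): a_s = treated cells, a_c = untreated, a = no cells."""
    return (a_s - a) / (a_c - a) * 100
```

For instance, an ABTS reading of 0.35 against a blank of 0.70 corresponds to 50% scavenging.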
In the second phase (days 8 to 10), 30 min after the daily intragastric administration of water, MPs-A or CPs-D, all groups except the WC group (which received isometric distilled water) were given 56% (v/v) liquor daily (13 mL/kg BW). Body weight was measured at a fixed time each day during the experiment. Finally, retro-orbital blood samples were collected into tubes. After 2 h, all blood samples were centrifuged at 200× g for 15 min at 4 °C, and the resulting supernatants, designated serum, were carefully removed using a pipette. Subsequently, the mice were sacrificed by CO2 asphyxiation in a covered container attached to a CO2 tank. The livers were surgically removed and individually weighed, and the liver index was calculated as the mass ratio of liver weight to body weight. The livers were then used for histopathological analysis and for the preparation of 10% (solid/solution, m/v, g/mL) liver homogenate, prepared by homogenizing in ice-cold 0.9% (solid/solution, m/v, g/mL) saline.

Determination of Serum and Hepatic Biomarkers

Enzyme-linked immunosorbent assays (ELISAs) were used to detect serum biomarkers, including total cholesterol (TC), triacylglycerol (TG), aspartate aminotransferase (AST) and alanine aminotransferase (ALT), as well as hepatic biomarkers, including malondialdehyde (MDA), glutathione peroxidase (GSH-PX) and superoxide dismutase (SOD). In addition, the protein content of liver tissues was measured using a total protein (TP) quantitative assay kit. Assays were performed following the manufacturers' instructions, and data were read on a microplate reader (infinite M200, TECAN, Switzerland). TC, TG, AST, ALT, MDA, GSH-PX and SOD were expressed as mmol/L, mmol/L, U/L, U/L, nmol/mg prot, U/mg prot and U/mg prot, respectively.

Histopathologic Analysis

Liver tissues were fixed in 4% (solid/solution, w/v, g/mL) paraformaldehyde and embedded in paraffin wax.
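The liver index and the 10% homogenate preparation above reduce to simple arithmetic. A minimal sketch follows; note that the saline-volume helper assumes 1 g of tissue occupies roughly 1 mL, which is an assumption for illustration and not stated in the text:

```python
def liver_index(liver_weight_g: float, body_weight_g: float) -> float:
    """Liver index as the mass ratio of liver weight to body weight."""
    return liver_weight_g / body_weight_g

def homogenate_saline_mL(liver_weight_g: float, concentration: float = 0.10) -> float:
    """Approximate 0.9% saline volume for a `concentration` (w/v, g/mL)
    liver homogenate, assuming ~1 mL volume per gram of tissue (hypothetical)."""
    total_volume_mL = liver_weight_g / concentration
    return total_volume_mL - liver_weight_g
```

Under these assumptions, a 1 g liver sample needs about 9 mL of saline for a 10% w/v homogenate, and a 1.1 g liver in a 22 g mouse gives a liver index of 0.05.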
The paraffin sections (4 µm thick) were cut, and each section was stained with the hematoxylin and eosin (H&E) technique. The other liver tissues were washed immediately with ice-cold PBS and embedded in optimal cutting temperature (OCT) embedding compound. These tissues were cryosectioned (4 µm thick), and each section was stained with the Oil Red O (ORO) technique. The stained areas were observed with a microscope (Nikon Eclipse E100, Tokyo, Japan).

Statistical Analysis

Each assay was carried out in triplicate. All data are expressed as the mean ± standard deviation. The results were statistically analyzed using one-way analysis of variance (ANOVA) followed by Dunnett's test, with p < 0.05 considered significant. All statistics were performed using IBM SPSS Statistics version 26 (IBM Corp., Armonk, NY, USA).

Conclusions

In conclusion, the peptides from mussels (Mytilus edulis) hydrolyzed by alcalase (MPs-A) and the peptides from clams (Ruditapes philippinarum) hydrolyzed by dispase (CPs-D) had the highest antioxidant activities. Furthermore, MPs-A and CPs-D exhibited hepatoprotective effects in both the H2O2-induced cell injury model and the mouse model of acute liver injury. The present results clearly indicated that Mytilus edulis and Ruditapes philippinarum are good sources of hepatoprotective peptides, which can effectively improve the economic value of these low-cost marine bivalve shellfish.
Comment on hess-2022-96

1) It is mentioned in the text that the key processes of the FarmCan model are P, ET, PET, SM and RZSM; that the key climatic variables linking the water cycle are PET and SM; and that the total energy of ET depends mainly on SM. Please mention that the most important process linking atmospheric water to surface water is humidity (and all the related ones, such as specific humidity, dew point, etc.). This process is often either misused or forgotten; however, it is the main link that drives the water cycle (please see details and importance in a recent global analysis in https://hess.copernicus.org/articles/24/3899/2020/). Also, please consider mentioning how the FarmCan model takes into account changes in humidity, or whether it solely predicts precipitation.

2) It is mentioned in the analysis that daily precipitation is predicted based on the Multi-Source Weighted-Ensemble Precipitation (MSWEP).
Please mention that while these meteorological models are powerful in predicting changes in temperature, they often perform very poorly for precipitation (for example, see such discussion, references and examples in https://www.tandfonline.com/doi/full/10.1080/02626667.2010.513518).

3) The so-called Hurst phenomenon (https://ascelibrary.org/doi/10.1061/TACEAT.0006518; a power-law type of autocorrelation function across lags and scales, as compared to the zero autocorrelation of white noise) seems not to be taken into account in the analysis. This phenomenon (also known as long-term persistence or long-range dependence) is found in all key hydrological-cycle processes, including the ones applied by the authors (see review, references, and results in https://www.mdpi.com/2306-5338/8/2/59). The Hurst phenomenon has been shown to explain a vast portion of the variability observed in these hydrological-cycle processes. Its existence is one of the reasons it is difficult (or even impossible) to predict a hydrometeorological process' value beyond a specific time window (also called the time window of predictability; https://www.tandfonline.com/doi/full/10.1080/02626667.2015.1034128); for example, in this work, the authors propose a 14-day window. Finally, please note that the authors have probably not identified this phenomenon, since they use only 5 years of data, whereas the impact of the Hurst phenomenon appears at long-term scales (e.g., more than 10-30 years). Therefore, it is expected that if a predictive model does not take it into account, in the long run it will end up underestimating the correlation of precipitation, evapotranspiration, etc.

4) Besides the Hurst phenomenon, which is responsible for the long-term autocorrelation function of each hydro-meteorological process, there is also the short-term autocorrelation structure, which is far from zero (i.e., far from the case of independent variables).
In the analysis, the authors mention that their applied Random forest method can de-correlate the trees and tackle the 'noise' sensitivity of the prediction. However, please note that even without the existence and impact of the Hurst phenomenon, a strong short-term autocorrelation function (i.e., at small lags and scales) cannot easily be removed by non-linear transformations. Therefore, the appearance of 'noise' is probably due to this impact, since all the processes applied by the authors in FarmCan (e.g., precipitation, evapotranspiration, PET, etc.) are shown to have a strong short-term autocorrelation function (for example, in https://www.mdpi.com/2306-5338/8/4/177/htm, in Figure 12, even after a 10-month period the correlation function of PET, as expressed through the climacogram, exhibits a value of more than 0.5). Please consider estimating the autocorrelation functions (for several lags) of all processes included in FarmCan, so as to shed more light on their impact on the prediction values and to further discuss this issue.

Minor comments:

1) In the Introduction, the water-food nexus is mentioned as an important impact of climatic variability; however, the water-food-energy nexus is more appropriate in my opinion (there are many works in the literature about this triangle; see, for example, discussion in a recent one: https://www.mdpi.com/2673-4060/2/2/11/htm).

2) For the FarmCan model, it is mentioned that it (ii) establishes a methodology to forecast PET, SM, and RZSM using P prediction. How about ET? Also, how is it possible to derive the SM and RZSM values from the precipitation prediction? These two questions are not very clear to me in the text; please consider giving more information.

3) In the text it is mentioned that the assumption of evenly distributed soil moisture across depth is used. Please consider giving some examples of how this assumption may affect the result and validity of the FarmCan prediction.
4) Please consider replacing (for P, PET, ET, RZSM, and SM) 'variables' with 'processes', since all these processes are found to have strong autocorrelation structures and therefore cannot be described as stochastic variables but rather as stochastic processes (the word 'variables' is used when there is an absence of correlation, i.e., white-noise behaviour; please see definitions and discussion in http://www.itia.ntua.gr/en/docinfo/2000/).

Sincerely,
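The autocorrelation estimation requested in point 4, and the long-range dependence check discussed in point 3, can be illustrated with a minimal stdlib sketch. Both estimators below are standard textbook forms (the biased sample ACF and the aggregated-variance Hurst estimator), not code from the reviewed manuscript:

```python
import math

def acf(x, max_lag):
    """Biased sample autocorrelation function for lags 1..max_lag."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n  # lag-0 autocovariance
    return [
        sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / n / c0
        for k in range(1, max_lag + 1)
    ]

def hurst_aggvar(x, scales):
    """Aggregated-variance Hurst estimate: slope of log Var(k) vs log k is 2H - 2."""
    def agg_var(k):
        m = len(x) // k
        means = [sum(x[i * k:(i + 1) * k]) / k for i in range(m)]
        mu = sum(means) / m
        return sum((v - mu) ** 2 for v in means) / m
    pts = [(math.log(k), math.log(agg_var(k))) for k in scales]
    mx = sum(p for p, _ in pts) / len(pts)
    my = sum(q for _, q in pts) / len(pts)
    slope = (sum((p - mx) * (q - my) for p, q in pts)
             / sum((p - mx) ** 2 for p, _ in pts))
    return 1 + slope / 2
```

For white noise the variance of block means decays as 1/k (slope −1), giving H ≈ 0.5, whereas persistent hydrometeorological series yield H noticeably above 0.5.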
Molecular mechanisms underlying phenotypic degeneration in Cordyceps militaris: insights from transcriptome reanalysis and osmotic stress studies

Phenotypic degeneration in Cordyceps militaris poses a significant concern for producers, yet the mechanisms underlying this phenomenon remain elusive. To address this concern, we isolated two strains that differ in their abilities to form fruiting bodies. Our observations revealed that the degenerated strain lost the capacity to develop fruiting bodies, exhibited limited radial expansion, increased spore density, and elevated intracellular glycerol levels. Transcriptome reanalysis uncovered dysregulation of genes involved in the MAPK signaling pathway in the degenerate strain. Our RT-qPCR results demonstrated reduced expression of sexual development genes, along with upregulation of genes involved in asexual sporulation, glycerol synthesis, and MAPK regulation, when compared to the wild-type strain. Additionally, we discovered that osmotic stress reduced radial growth but increased conidia sporulation and glycerol accumulation in all strains. Furthermore, hyperosmotic stress inhibited fruiting body formation in all neutralized strains. These findings indicate dysregulation of the MAPK signaling pathway, the possibility of the activation of the high-osmolarity glycerol and spore formation modules, as well as the downregulation of the pheromone response and filamentous growth cascades in the degenerate strain. Overall, our study sheds light on the mechanisms underlying Cordyceps militaris degeneration and identifies potential targets for improving cultivation practices. The objective of our study is to enhance our understanding of the molecular mechanisms underlying phenotypic degeneration in fungi and to identify potential targets for preventing or mitigating its effects.

Identification of phenotypic degeneration indicators in C. militaris

The degenerate strain of C.
militaris is characterized by the loss or reduction of fruiting body formation, which becomes apparent after approximately two months. Consequently, we aimed to identify rapid and reliable indicators of degeneration that are easily measurable and quantifiable. In our experiments, we found that radial expansion and spore density can be conveniently assessed after several days of culture on potato dextrose agar (PDA) plates. Therefore, we propose to use these two characteristics as indicators of phenotypic degeneration in future studies of C. militaris.

We isolated two variants of the C. militaris fungus from the same batch of cultivation and conidial origin. One of these isolates, named Ywt (Fig. 1a), is capable of forming fruiting bodies, while the other, named Ydga (Fig. 1b), cannot. To confirm the genetic identity and relationship between the two variants, we conducted a BLAST search and phylogenetic analysis using nucleotide sequences of the internal transcribed spacer (ITS) region of nuclear ribosomal DNA. Our analysis revealed that the ITS sequences of Ywt and Ydga (Supplementary Table S1) were identical to those of most, if not all, C. militaris specimens deposited in the National Library of Medicine, National Center for Biotechnology Information (NCBI). Moreover, the neighbor-joining tree analysis demonstrated that Ywt and Ydga were closely related to other reported C. militaris strains (Supplementary Fig. S1).

To gain a better understanding of the relationship between our C. militaris strains and other known strains, we compared them to the DM1066 and JLCY-LI819 strains through phylogenetic analysis. The analysis showed that Ydga and Ywt were closely related to a common ancestor of the four strains, with a small interior branch length, while DM1066 and JLCY-LI819 were more distantly related (Fig.
1c). This suggests that Ydga and Ywt likely share an identical genetic background.

[Figure legend fragment (Supplementary Tables S2-S3): SD refers to standard deviation; Sample ID indicates the strain name and the number of culture days; significance was determined using a one-tailed t-test for two independent means, with an alpha level of 0.05.]

To evaluate whether phenotypic degeneration affects hyphal development and conidiogenesis, we compared the radial expansion and spore density of the Ywt strain with those of the Ydga strain at three different time points: 6, 12, and 16 days of culturing. At the 6-day culture mark, the appearance of Ydga colonies was similar to that of Ywt, except for slower hyphal growth (Fig. 1d vs. e). However, at the 12- and 16-day cultures, the hyphae of Ydga appeared irregular and fluffy, whereas those of Ywt were smooth and ring-shaped at the colony edge (Fig. 1d-g). Statistical analysis revealed that Ydga exhibited approximately two times slower radial expansion compared to Ywt at all examined time points (Fig. 1J, Supplementary Table S2), indicating a lower growth rate and a defect in hyphal development. Additionally, statistical analysis showed that Ydga had almost ten times higher spore density than Ywt at all examined time points, suggesting increased conidial formation (Fig. 1K, Supplementary Table S3).

Overall, these findings suggest that retardation of hyphal development and an increase in sporulation may persist as characteristic features of phenotypic degeneration, which can be easily observed after several days of culturing on PDA plates.

Transcriptome analysis reveals dysregulated MAPK signaling pathway in the culture degeneration of C. militaris

In order to investigate the underlying molecular mechanisms of culture degeneration in C.
militaris, we conducted a transcriptome analysis by comparing the gene expression profiles of a degenerate strain with those of a wild-type strain. Our analysis identified 880 downregulated genes and 1034 upregulated genes (False Discovery Rate < 0.05) in the degenerate strain. A detailed list of differentially expressed genes is presented in the file 'DEgenes.SR-Hoang et al. ', which is available for reference. Among the downregulated genes, we observed considerable enrichment in gene ontology terms associated with ABC transporters, the MAPK signaling pathway, and amino sugar and nucleotide sugar metabolism (Fig. 2A, Supplementary Table S4), while no biological pathway was found to be significantly enriched among the upregulated genes.

Given the importance of the MAPK signaling pathway in regulating cellular processes such as sexual development and stress responses, which are known to be affected by degeneration, we focused our attention on this pathway. To validate our findings, we performed RT-qPCR analysis to examine the expression levels of key genes involved in the MAPK pathway and associated with the phenotypes of our C. militaris degenerate strain. Specifically, we investigated genes related to sexual development (Ste12, Mcm1), asexual sporulation (BrlA, AbaA), glycerol synthesis (Gcy1, Gpd, and Gpp), as well as MAPK regulators (Ste20, Cla4) in the Ywt and Ydga strains. Our analysis revealed that, compared to the wild-type strain Ywt, the degenerate strain Ydga exhibited lower expression of sexual development genes (Ste12 and Mcm1) but higher expression of asexual sporulation genes (BrlA and AbaA) (Fig.
2B, Supplementary Table S5). Furthermore, the degenerate strain exhibited increased expression of MAPK mediators (Ste20 and Cla4) and glycerol-synthesizing enzymes (Gcy1 and Gpp), which are typically activated in response to hyperosmotic conditions, while the expression of the basal glycerol-synthesizing enzyme (Gpd) did not show a substantial difference (Fig. 2C, Supplementary Table S5). These findings were consistent with the higher intracellular glycerol content observed in the degenerate strain compared to the neutralized strain (Fig. 2D, Supplementary Table S6). Collectively, our results suggest dysregulation of the MAPK signaling pathway in the degenerate strain, which likely contributes to the observed phenotypic changes, including reduced radial growth, loss of fruiting body formation, increased conidia sporulation, and elevated intracellular glycerol levels. In summary, our study provides insights into the potential involvement of the MAPK signaling pathway in the phenotypic degeneration of C. militaris.

The HOG module may play a role in C. militaris phenotypic degeneration

The dysregulated transcription of the MAPK signaling pathway components and the increased intracellular glycerol content observed in the degenerate strain of C. militaris led us to investigate the potential role of stress in the phenotypic degeneration, particularly focusing on the HOG module. We exposed both the wild-type strain (Ywt) and the degenerate strain (Ydga) of C. militaris to various MAPK activators and assessed their effects on radial growth, conidia sporulation, and glycerol accumulation.

Our results showed that Congo red, a stressor that affects cell wall integrity (CWI), had similar effects on both strains, leading to reduced radial growth and conidia sporulation (Fig. 3a-c, Supplementary Table S7). This suggests that the CWI branch is not strongly involved in the phenotypic degeneration of C.
militaris. However, we observed differential effects of oxidative and osmotic stressors on the two strains. Hydrogen peroxide (H2O2), an oxidative stress inducer, and N-acetylcysteine (NAC), an antioxidant agent, substantially suppressed radial expansion in both strains (Fig. 3a-d, Supplementary Table S8). Interestingly, H2O2 significantly increased spore density in the wild-type strain (Ywt) but not in the degenerate strain (Ydga), while NAC did not have a substantial effect on sporulation in either strain (Fig. 3e, Supplementary Table S8). The most notable effects on both strains were observed when they were exposed to osmotic stressors such as KCl, NaCl, or sorbitol. These stressors considerably reduced radial growth but increased conidia sporulation and intracellular glycerol accumulation in both strains (Fig. 3a,f-i, Supplementary Tables S9-S11). These findings suggest that osmotic stress may play a significant role in the phenotypic degeneration of C. militaris. Additionally, we observed positive correlations between intracellular glycerol concentrations and spore density, as well as negative correlations with radial expansion in Ywt (Fig. 3j) and Ydga (Fig. 3k). These observations indicate the presence of cross-regulatory mechanisms among the high-osmolarity glycerol (HOG), filamentous growth (FG), and spore formation (SF) modules. Collectively, our findings suggest that osmotic stress, potentially involving the HOG cascade, contributes to the phenotypic degeneration of C. militaris.

Hyperosmotic stress induces phenotypic degeneration in neutralized C. militaris strains

To investigate whether the involvement of the HOG cascade in C. militaris phenotypic degeneration is a widespread phenomenon, we studied the effects of hyperosmotic stress on four C.
militaris strains, including three neutralized strains (Nf, Wt, and Ywt) and one degenerate strain (Ydga). We confirmed the origin and phylogenetic relationship of the Nf and Wt strains with the Ywt, Ydga and JLCY-LI819 strains by ITS DNA sequencing. The resulting phylogenetic tree indicated that these strains share a common ancestor and that the neutralized strains (Ywt, Wt and Nf) belong to the same clade (Fig. 4a; for ITS sequences see Supplementary Table S1). Our findings demonstrated that the addition of potassium chloride (KCl) to PDA media decreased radial growth by approximately 50% in all examined strains (Fig. 4b, upper panel; Fig. 4c, Supplementary Table S12). Additionally, hyperosmotic stress increased spore production (Fig. 4d, Supplementary Table S12) and glycerol accumulation (Fig. 4e, Supplementary Table S13) in all strains. Interestingly, hyperosmotic stress inhibited fruiting body formation by approximately 40% in all neutralized strains, while the degenerate strain failed to produce fruiting bodies under any condition (Fig. 4b, lower panel; Fig. 4f, Supplementary Table S14).

Furthermore, we observed a positive correlation between glycerol concentration and spore density, and a negative correlation between glycerol concentration and radial growth area and fruiting body weight (Fig. 4g). Taken together, our results suggest that in C. militaris degenerate strains, the high-osmolarity glycerol (HOG) and spore formation (SF) modules may be activated, while the pheromone response (PR) and filamentous growth (FG) cascades may be deactivated, thereby contributing to the observed phenotypic degeneration.
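The correlation claims above (glycerol rising with spore density and falling with radial growth and fruiting body weight) rest on Pearson correlation coefficients. As an illustrative sketch only, with invented numbers rather than the study's measurements (the paper's own plots and statistics were produced in R), the computation looks like this:

```python
# Illustrative sketch of the correlation analyses behind Figs. 3j-k and 4g.
# All values below are invented for demonstration; they are not study data.

def pearson_r(x, y):
    """Plain-Python Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical measurements: intracellular glycerol, spore density, colony area
glycerol = [1.0, 1.4, 1.9, 2.6, 3.1]
spores   = [2.1, 2.9, 3.8, 5.2, 6.0]   # rises with glycerol -> r near +1
area     = [9.8, 8.5, 7.1, 5.6, 4.9]   # falls with glycerol -> r near -1

print(pearson_r(glycerol, spores))  # positive correlation
print(pearson_r(glycerol, area))    # negative correlation
```

In practice the same coefficients could be obtained with `cor()` in R or `statistics.correlation()` in Python 3.10+; the hand-rolled version is shown only to make the formula explicit.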
Discussion

Cordyceps militaris, a nutritionally and medicinally important ascomycete mushroom, faces significant challenges in long-term cultivation due to strain degeneration, which negatively impacts its commercial production. This study aimed to investigate the molecular mechanisms underlying the observed phenotypic degeneration in C. militaris.

The degenerate strain of C. militaris in this study exhibited several characteristics associated with degeneration. It lost the ability to produce fruiting bodies, displayed reduced radial expansion, and exhibited increased conidia density and intracellular glycerol content. To understand the underlying molecular changes, we analyzed transcriptome data and identified dysregulation of several molecular pathways, including the MAPK signaling pathway. The MAPK pathway is involved in the regulation of various cellular activities and has been associated with degeneration processes in other organisms. Through RT-qPCR analysis, we confirmed altered expression levels of key genes involved in the MAPK pathway, which correlated with the observed phenotypic characteristics of our degenerate C. militaris strain. To further investigate the specific branches of the MAPK pathway involved in degeneration, we subjected C. militaris strains to different stressors. Our results indicated that hyperosmotic stress induced phenotypic degeneration in neutralized C. militaris strains, and that activation of the HOG module may contribute to this degeneration.

Given that the loss or reduction of fruiting body formation is currently the sole indicator of C. militaris degeneration, we aimed to identify easily quantifiable morphological phenotypes that manifest within a shorter timeframe compared to fruiting body formation, which typically takes two months. For this purpose, we isolated two variants of C.
militaris from the same cultivated batch: one capable of producing fruiting bodies and the other unable to do so on solid media. After culturing the strains on PDA plates, we compared their radial expansion and found that the degenerate strain exhibited approximately 50% reduced radial expansion compared to the wild-type strain. While a previous study by Wellhan et al. 23 reported slightly slower radial growth rates in a degenerate strain, they did not investigate its ability to form fruiting bodies, making it unclear whether their "degenerate" strain truly exhibited degeneration. In our study, we also observed a significantly higher conidia density in the degenerate strain, which contradicted the findings of Meiyu et al. 24, who reported a lower total number of conidia per dish in their degenerate strain by day 20 of culture. These discrepancies may stem from differences in the strains studied, the timing of sample collection, and variations in the methodologies employed. Moreover, our findings indicate that the retardation of hyphal development and an increase in sporulation could be persistent features of phenotypic degeneration. This notion is supported by our observations that the degenerate strain (Ydga) displayed smaller circle areas and higher spore densities compared to the neutralized strain (Ywt) at three distinct time points during the developmental process.

To gain insights into the signaling pathways involved in the phenotypic degeneration of C. militaris, we conducted a reanalysis of transcriptome data generated by Yin et al. 22. Our analysis identified approximately 2000 differentially expressed genes. Among the downregulated genes, we observed significant enrichment of pathways related to ABC transporters, MAPK signaling, and amino sugar and nucleotide sugar metabolism. In contrast, Yin et al.
22 identified over 2000 differentially expressed genes, including those involved in toxin biosynthesis, energy metabolism, DNA methylation, and chromosome remodeling. This discrepancy may have arisen from differences in the bioinformatics pipelines and/or the datasets used for analysis.

Considering the significance of the MAPK signaling pathway in regulating diverse cellular processes, including sexual development and stress responses, and its association with phenotypic switching in other fungal species [7][8][9]25, we focused our attention on this pathway. To validate the expression levels of crucial genes involved in the MAPK pathway and their potential correlation with the observed phenotypes in our degenerate C. militaris strain, we employed RT-qPCR. Our RT-qPCR results showed consistent downregulation of sexual development genes (Ste12 and Mcm1) and upregulation of conidiation genes (BrlA and AbaA) in the degenerate strain (Ydga) compared to the wild-type strain (Ywt). Additionally, the expression of MAPK-activator (Ste20 and Cla4) and glycerol synthesis (Gpp and Gcy1) genes was significantly higher in Ydga than in the wild-type strain (Ywt).

In our study, we discovered that the C.
militaris orthologs of the yeast transcription factors STE12 and MCM1 exhibited downregulation. STE12, which is regulated by the PR module and involved in sexual development in response to pheromone signals, interacts with TEC1 to control filamentous growth under starvation conditions and collaborates with MCM1 to regulate the expression of pheromone-inducible genes necessary for proper sexual development [26][27][28][29]. However, the specific role of STE12 and STE12-like proteins in fruiting body formation varies between species 26. MCM1 function is required for fruiting body development in the homothallic ascomycete Sordaria macrospora, as well as for growth, conidiogenesis, cell wall integrity, and the cell cycle in the filamentous insect-pathogenic fungus Beauveria bassiana 30,31. These findings suggest that the downregulation of the Ste12 and Mcm1 orthologous genes in the C. militaris degenerate strain Ydga may contribute to its growth and fruiting body retardation.

The central regulatory pathway for conidiogenesis, involving three sequentially controlled transcription factors (BRLA, ABAA, and WETA), is conserved in ascomycete fungi 32. In Aspergillus, BRLA is the essential activator of asexual sporulation, and its deletion results in failure to develop vesicle structures, instead producing elongated, bristle-like aerial stalks 33. The deletion of brlA also abolishes the production of other conidiation-specific genes such as AbaA, WetA, VosA and RodA 34. AbaA is required for the differentiation of phialides, and its loss of function results in the production of cylinder-like terminal cells with no conidia being formed. WetA functions in completing conidiogenesis, and its loss results in colorless conidia with defective spore walls 33. This regulatory network, which is regulated by the Slt2-MAPK/RNS1, Fus3-MAPK and Hog1-MAPK cascades, also regulates asexual development in Metarhizium, and their loss of function impairs
conidiogenesis 35. These data suggest that upregulation of the BrlA and AbaA transcripts in the C. militaris degenerate strain Ydga may lead to an increase in its conidia density.

Glycerol synthesis and accumulation are known to play a significant role in responding to hyperosmotic and oxidative stresses 36. In various organisms, such as yeast and Aspergillus nidulans, specific enzymes, such as GPD and GLD (orthologous to GCY1), respectively, are critical for adapting to hyperosmotic stress 37,38. Similarly, in Cryptococcus neoformans, the enzyme GPP2 is essential for responding to different stresses 39. In our study, we observed upregulated transcription levels of Gcy1 and Gpp in the degenerate strain Ydga, while the difference in the expression of Gpd was not significant. These findings indicate that the increased glycerol content in the Ydga strain may be partially dependent on the GLD and GPP enzymes rather than GPD. Furthermore, the Ydga strain showed increased expression of Ste20 and Cla4 transcripts, which are associated with glycerol biosynthesis enzymes in yeast. Collectively, these results suggest that the elevated glycerol levels observed in the Ydga strain might be attributed to enhanced expression of genes involved in glycerol synthesis and accumulation.

The degenerate strain Ydga exhibited higher glycerol concentrations and increased levels of glycerol biosynthesis enzymes, suggesting the activation of the HOG module in this strain. Supporting this notion, our experiments conducted under hyperosmotic conditions demonstrated that all three neutralized C. militaris strains experienced phenotypic degeneration, characterized by inhibited fruiting body and hyphal development, along with elevated conidia density and glycerol accumulation. Moreover, the growth of C.
militaris colonies and the expression of the Fus3 and Hog1 genes were suppressed in a concentration-dependent manner under hyperosmotic conditions 40. The findings from our study suggest that dysregulation of the MAPK signaling pathway and increased glycerol content may play a role in the phenotypic degeneration of C. militaris. Furthermore, the upregulation of conidiation-related genes and glycerol biosynthesis genes, typically activated in response to hyperosmotic conditions, along with the downregulation of sexual development genes, may contribute to the observed abnormalities in radial growth, sporulation, and fruiting body development. The positive correlation between intracellular glycerol and spore density, as well as the negative correlation with radial expansion and fruiting body development, provides additional support for these findings.

It is important to note some limitations of our study. First, we only focused on two morphological characteristics as indicators of phenotypic degeneration, and further investigations should consider additional indicators and criteria. Second, our findings may not be representative of all C. militaris strains, as we examined a single degenerate strain. Third, transcriptome analysis provides insights into gene expression changes but may not reflect protein expression or activity, necessitating further confirmation using alternative approaches. Fourth, while we focused on the MAPK signaling pathway, other molecular pathways may also be involved, warranting additional exploration. Finally, the stressors used in our study may not fully replicate natural environmental conditions, emphasizing the need for investigations under ecologically relevant conditions.

In summary, our study sheds light on the molecular mechanisms that contribute to phenotypic degeneration in C.
militaris. The dysregulation of the MAPK signaling pathway and increased glycerol content observed in the degenerate strain indicate their potential involvement in this process. These findings carry implications for comprehending fungal physiology and morphology, improving the production of bioactive compounds, and developing early detection and monitoring methods for degenerative strains. However, further research is necessary to address the limitations and broaden our understanding of phenotypic degeneration in filamentous fungi.

Fungal strain and media

Strains of neutralized C. militaris (Nf and Wt) were gifted by Vu Duy Nhan and Le Hai Yen from the Laboratory of Macro Fungi Technology, Institute of Microbiology and Biotechnology, while strains of neutralized (Ywt) and degenerate (Ydga) C. militaris were isolated in our laboratory. The strains were cultured on potato dextrose agar (PDA) media containing 200 g/L potato, 20 g/L glucose, and 2 g/L agar, or potato dextrose broth (PDB) media containing 200 g/L potato and 20 g/L glucose. Fruiting body media (FBM) was prepared by mixing 35 g of brown rice with 70 mL of liquid media containing 200 g/L potato, 20 g/L glucose, and 100 g/L silkworm pupae.

Phenotypic analysis

For the assessment of colony size and conidia density, the C.
militaris strains were cultivated on PDA medium. A total of 10 µL containing 10^6 conidia were placed at the center of a PDA Petri dish and incubated for the specified duration under natural daily dark-light cycles. The colony diameters were measured, and their corresponding areas were calculated using the formula {(diameter/2)² × 3.14}, treating the colonies as circular in shape. The conidia were harvested in sterile 0.01% Triton X-100 (Merck, cat# 9036-19-5) and filtered through milk filters (Lamtor Ltd, Bolton, UK) to remove hyphal fragments. The conidia were counted with a hemocytometer, and their density was calculated using the formula [(conidia concentration × volume in mL of the filtered suspension)/area of the corresponding colony].

To measure fruiting body weight, the C. militaris conidia were cultured in PDB media at 130 rpm and 25 °C for 3 days. The inoculated media were then added to the FBM and incubated in the dark for 2 weeks, followed by 6 weeks in a 12/12 dark-light cycle with 90% humidity. The fruiting bodies were collected, dried at 80 °C, and weighed.

To assess hyperosmotic stress, the C. militaris strains were cultured on PDA, PDB, or FBM supplemented with 0.4 M KCl, 0.4 M NaCl, or 1 M sorbitol.

Glycerol measurement

The C.
militaris hyphae were collected from 6-day-old cultures on PDA plates, and their weight was recorded. Then, 1.2 mL of 50% EtOH was added to each tube, and the suspension was incubated in an ultrasonic bath with sonication at 70 °C for 15 min. One milliliter of the supernatant was then transferred to new tubes after centrifugation at 14,000 rpm (Eppendorf K-5418R). A total of 1.2 mL of a 10 mM sodium periodate solution (Merck, cat# 7790-28-5) was added to the suspension, and the mixture was shaken for 30 s. Then, 1.2 mL of a 0.2 M acetylacetone (Merck, cat# 123-54-6) solution was added to the former solution and kept in a water bath at 70 °C for 10 min. The sample absorbance was measured with a UV-Vis-NIR spectrophotometer (Cary 5000, version 3.00, Agilent, Scan Version 6.2.0.1588), and the glycerol concentrations were estimated using the formula Y = 0.0055 × Ab - 0.0012 (Y = glycerol concentration; Ab = absorbance at 413 nm). The standard curve was built from a twofold serial dilution of glycerol (≥ 99% purity, Merck, cat# 56-81-5). The glycerol content (µg glycerol/mg hyphal weight) was calculated by dividing the total glycerol content of each sample by its corresponding hyphal weight.

Genomic DNA and RNA extraction

The C. militaris conidia were inoculated in PDB media for 3 days at 130 rpm and 25 °C. The hyphal pellets were washed with DEPC-treated water, and genomic DNA was extracted using the Monarch Genomic DNA Purification kit (New England Biolabs, cat# T3010S). Total RNA was purified using the E.Z.N.A. fungal RNA mini kit (Omega Bio-Tek, SKU R6840-01) after TRIzol lysis (Thermo Fisher, cat# 15596026), and cDNA synthesis was performed using the ProtoScript® II First Strand cDNA Synthesis Kit (New England Biolabs, cat# 6560S).
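The quantification formulas given in the Methods above (colony area from diameter, conidia density, the glycerol standard curve, and glycerol content per hyphal weight) can be collected into a short sketch. This is a minimal illustration with hypothetical sample values; the function names are ours, and only the formulas themselves come from the text:

```python
# Sketch of the quantification formulas from the Methods.
# The numeric inputs below are hypothetical, not measurements from the study.

def colony_area(diameter_mm):
    # Colonies are treated as circles: (diameter/2)^2 * 3.14
    return (diameter_mm / 2) ** 2 * 3.14

def spore_density(conidia_per_ml, volume_ml, area_mm2):
    # (conidia concentration x filtered-suspension volume) / colony area
    return conidia_per_ml * volume_ml / area_mm2

def glycerol_conc(absorbance_413nm):
    # Standard curve from the paper: Y = 0.0055 * Ab - 0.0012
    return 0.0055 * absorbance_413nm - 0.0012

def glycerol_content(total_glycerol_ug, hyphal_weight_mg):
    # Micrograms of glycerol per milligram of hyphae
    return total_glycerol_ug / hyphal_weight_mg

area = colony_area(40.0)                 # hypothetical 40 mm colony
print(area)                              # 1256.0
print(spore_density(1e6, 5.0, area))     # conidia per unit colony area
print(glycerol_conc(0.5))                # concentration from absorbance
```

The sketch mirrors the paper's constants (3.14 for pi, and the 0.0055/0.0012 standard-curve coefficients) exactly as reported, rather than substituting more precise values.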
Phylogenetic analysis

PCR amplification was performed using the primers ITS1 and ITS4 (Supplementary Table S15), and the target DNA was amplified with OneTaq® 2X Master Mix with Standard Buffer (New England Biolabs, cat# M0482S) under a thermal cycle of 94 °C for 5 min, followed by 35 cycles of 94 °C for 45 s, 52 °C for 1 min, and 72 °C for 1.5 min, with a final extension at 72 °C for 5 min. The amplification products were purified with QIAquick PCR Purification kits (Qiagen, cat# 28104) and then used directly for sequencing. Sequencing was performed using an ABI 3500 Series Genetic Analyzer (Thermo Fisher) and the BigDye™ Terminator v3.1 Cycle Sequencing Kit (Thermo Fisher, cat# 4337455). BioEdit software 41 was utilized to examine DNA sequence quality and accuracy and to select unambiguous bases. Multiple sequences were aligned, and BLAST search and phylogenetic analysis were performed using MEGA version 11.0.13 42. The FASTA sequences of the C. militaris DM1066 43 and JLCY-LI819 44 strains were also included in the phylogenetic analysis with the minimum-evolution method 45 and interior-branch test 46. The neighbor-joining tree 47 embedded on the NCBI website was used to explore the genetic relationships among Ywt, Ydga and other C. militaris strains.

Transcriptome analysis

The transcriptome datasets from BioProject # PRJNA393201 48 with BioSamples # YCCZ1-YCCZ6 22,49 were downloaded from the NCBI website using the SRA-toolkit 50. The fastq data were checked using FastQC 51, aligned to the C.
militaris CM01 reference genome 52 using HISAT2 53, and counted using HTSeq-count 54. Differential expression analysis was performed using edgeR 55, and gene ontology enrichment analysis was performed using the clusterProfiler package in R 56. All BioSamples, except for YCCZ3, had over 20 million reads with an overall alignment rate of at least 86%. YCCZ3 was excluded from further analysis. The expression patterns of the remaining samples were evaluated using the hclust and plot functions in R. The plot showed that YCCZ5 (DG3) and YCCZ2 (DG1) clustered together and were farther from YCCZ1 (WT), while YCCZ4 (DG2) and YCCZ6 (DG4) were in the middle, between WT and the cluster of DG1 and DG3 (Supplementary Fig. S2). DG1 is the second generation of WT and can still develop fruiting bodies. Therefore, we considered DG3 as the "true" C. militaris degenerate strain because it was the fifth passage of the WT and had lost its ability to form fruiting bodies 22. We compared it with the WT sample to identify differentially expressed genes.
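The FDR < 0.05 cutoff used above to call differentially expressed genes corresponds to multiple-testing-adjusted p-values; edgeR applies a Benjamini-Hochberg-style correction by default. A minimal pure-Python sketch of that adjustment, using invented p-values purely for illustration:

```python
# Benjamini-Hochberg FDR adjustment, the style of correction edgeR
# reports by default when genes are filtered at FDR < 0.05.
# The p-values below are invented for illustration only.

def bh_adjust(pvalues):
    """Return BH-adjusted p-values in the original input order."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    # with a cumulative minimum of p * n / rank.
    for rank in range(n - 1, -1, -1):
        i = order[rank]
        value = pvalues[i] * n / (rank + 1)
        running_min = min(running_min, value)
        adjusted[i] = running_min
    return adjusted

pvals = [0.01, 0.5, 0.02, 0.03]
fdr = bh_adjust(pvals)
print([round(p, 6) for p in fdr])       # [0.04, 0.5, 0.04, 0.04]
print(sum(p < 0.05 for p in fdr))       # 3 "genes" pass FDR < 0.05
```

In R the equivalent is `p.adjust(pvals, method = "BH")`; the sketch simply makes the rank-based formula behind the FDR threshold explicit.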
RT-qPCR

To validate the dysregulated genes identified by next-generation sequencing and associated with phenotypic degeneration, we performed RT-qPCR using primers designed with the PrimerQuest tool on the idtdna.com website (Supplementary Table S15). Each RT-qPCR reaction had a total volume of 20 µL, containing cDNA, the relevant primers, and SYBR Green Real-time PCR Master Mix (Takara). Applied Biosystems™ 7500 Real-Time PCR Systems (Thermo Fisher) were used for RT-qPCR. Osh5 (CCM_00742, putative oxysterol-binding protein) was used as an internal control (reference gene) for normalization of the target gene expression and to correct for variation between samples. The thermal cycle for RT-qPCR was as follows: 94 °C for 1 min, followed by 40 cycles of 94 °C for 45 s, 53 °C for 60 s, and 72 °C for 20 s. Melting curve analyses were performed at the end of each PCR reaction to ensure that only specific products were amplified. The comparative 2^-ΔΔCT method 57 was employed to calculate relative expression levels among the target genes.

Statistical analysis

We reported the ranges, means, and standard deviations (SD) of the replicates for all the data. The graphs were generated using the ggplot2 package in R, and means and SD were manually added to improve visibility. To assess statistical significance, a one-tailed Student's t-test with an alpha level of 0.05 was conducted between the control and experimental groups using the ggsignif package in R. For the comparison of radial expansion, spore

Figure 1. Characteristics of C. militaris phenotypic degeneration. (a vs. b) Comparison of fruiting body development between the neutralized strain (Ywt) and the degenerate strain (Ydga), illustrating that the former successfully develops fruiting bodies (a), while the latter fails to do so (b). (c) Phylogenetic tree depicting the relationship among different C. militaris strains. Representative images of C. militaris isolates (Ywt vs. Ydga) cultured on PDA plates for 6 (d vs. e), 12 (f vs.
g), and 16 (h vs. i) days, providing visual evidence of the phenotypic differences. Statistical analysis revealed considerable variations in circle areas (j) and spore densities (k) between the two C. militaris variants (Ywt vs. Ydga). The data are presented as ranges, means, and standard deviations, with a sample size of n = 5. *** represents a significant difference (actual p values are presented in Supplementary Tables S2-S3). SD refers to standard deviation. Sample ID indicates the strain name and the number of culture days. Significance was determined using a one-tailed t-test for two independent means, with an alpha level of 0.05.

Figure 2. Dysregulation of the MAPK signaling pathway in C. militaris phenotypic degeneration. (a) Biological pathways considerably enriched in downregulated genes. (b,c) RT-qPCR expression analysis of genes involved in the MAPK pathway and associated with phenotypic degeneration, including genes related to sexual development, sporulation, and glycerol synthesis in the Ywt and Ydga strains. (d) Comparison of intracellular glycerol contents between Ywt and Ydga. The data are presented as ranges, means, and standard deviations, with a sample size of n = 5 for intracellular glycerol contents and n = 3 biological replicates for RT-qPCR. *, **, *** represent a significant difference (actual p values are presented in Supplementary Tables S5-S6). SD refers to standard deviation. Gene symbols indicate the tested genes in the corresponding strains. Significance was determined using a one-tailed t-test for two independent means, with an alpha level of 0.05.

Figure 3. Effects of MAPK activators on radial expansion, sporulation, and intracellular glycerol in C. militaris. (a) Representative images of C. militaris Ywt and Ydga cultured in PDA with or without various MAPK activators. Congo red, a cell wall integrity (CWI) activator, considerably inhibited both radial expansion (b) and sporulation (c) in both strains of C.
militaris. (d) H2O2, an oxidative stress inducer, and NAC, an antioxidant agent, substantially suppressed radial expansion in both strains. (e) H2O2 considerably increased spore density in the Ywt strain but not in the Ydga strain, while NAC had no substantial effect on sporulation in either strain. KCl and NaCl, two osmotic stressors, considerably inhibited radial expansion (f) but promoted sporulation (g) in both strains. Osmotic stressors substantially increased intracellular glycerol concentrations in Ywt (h) and Ydga (i). Glycerol concentrations exhibited a positive correlation with spore density but a negative correlation with circle areas in Ywt (j) and Ydga (k). CR = PDA + 200 µg/mL Congo red. H2O2 = PDA + 0.04% H2O2; NAC = PDA + 200 mM N-acetylcysteine. NaCl = PDA + 0.4 M NaCl; KCl = PDA + 0.4 M KCl; Sor = PDA + 1 M sorbitol. The data are presented as ranges, means, and standard deviations, with a sample size of n = 5. *, **, *** represent a significant difference (actual p values are presented in Supplementary Tables S9-S11). SD refers to standard deviation. Culture Conditions indicates the strain name and the corresponding media. Significance was determined using a one-tailed t-test for two independent means, with an alpha level of 0.05.

Figure 4. Induction of phenotypic degeneration in neutralized C. militaris strains by hyperosmotic conditions. (a) Phylogenetic tree depicting the relationship among C. militaris strains. (b) Representative images illustrating the repression of radial growth (upper panel) and fruiting body development (lower panel) under hyperosmotic culture conditions. Hyperosmotic conditions considerably suppressed radial expansion (c) and fruiting body development (f) while promoting increased spore densities (d) and intracellular glycerol concentrations (e) in all neutralized C.
militaris strains. (g) Glycerol concentrations exhibited a positive correlation with spore density but a negative correlation with circle areas and fruiting body weights. PDA = potato dextrose agar; DC = fruiting body media (FBM); KCl = PDA or FBM + 0.4 M KCl. The data are presented as ranges, means, and standard deviations, with a sample size of n ≥ 7. *, **, *** represent a significant difference (actual p values are presented in Supplementary Tables S12-S14). SD refers to standard deviation. NS refers to nonsignificant. Culture Conditions indicates the strain name and the corresponding media. Significance was determined using a one-tailed t-test for two independent means, with an alpha level of 0.05.

https://doi.org/10.1038/s41598-024-51946-3
A Review of Clinical Approach to a Patient with Tremor Disorder

Introduction

Tremor is a rhythmical, involuntary oscillatory movement of a body part produced mostly by alternating contractions of reciprocally innervated muscles [1]. It is the most common type of movement disorder encountered in the movement disorder clinic [2]. The tremor syndrome may vary from an enhanced normal physiological response to the presenting manifestation of an underlying severe neurological disorder, such as Parkinson's disease, stroke and others, the list of which is exhaustive.

Tremor disorder may result in diverse and disparate effects on patients. It impacts several domains of quality of life, from physical to psychosocial, in a large proportion of patients [3]. Thus, we should have sound knowledge of tremor syndrome and a lower threshold to offer treatment to decrease the under-recognized impact of tremor on quality of life [4]. The misdiagnosis of tremor syndromes is a common and often underestimated problem which can cause misleading results in clinical trials [5]. At the level of clinical practice, misdiagnosis may lead to suboptimal treatment, poor response to treatment and patient dissatisfaction. The contributing factors to this problem are the limited literature, confusing approaches and inadequate diagnostic tools to distinguish different tremor syndromes and find out their etiologies [6]. This review is our endeavor to outline a simplified approach to tremor disorders.
Methods We extensively searched the electronic databases MEDLINE/PubMed, Google Scholar, IMSEAR (Index Medicus for South-East Asia Region) and ScopeMed with MeSH (Medical Subject Headings) terms "tremor", "clinical features", "pathophysiology" and "treatment" from the earliest possible date. Articles in any language, especially those published in recent years, were given preference. Epidemiology Essential tremor (ET) is the most prevalent movement disorder presenting as an abnormal tremor in humans [7]. The incidence of ET rises with age [8,9]. The prevalence of essential tremor was 14% whereas that of Parkinson's disease was 3% among the elderly population of the community [10]. Wenning et al., in the Bruneck study cohort, reported that the prevalence of tremor in the 50-89 years age group was 14.5% [11]. Literature regarding the prevalence of other types of tremor syndromes is limited. Classification Tremor is a manifestation of various diseases. We should have sound knowledge of clinical features and examination findings to diagnose the exact etiology of the tremor. A simplified approach to tremor disorders is shown in Figure 1, where tremor is classified on the basis of the position of the body part in which it is present [1,12]. Abstract Tremor disorder is the most common movement disorder encountered in neurology and general practitioners' clinics. The misdiagnosis of a tremor syndrome is prevalent, leading to erroneous reporting and treatment of this condition. This is our endeavor to outline a simplified approach to tremor disorders by reviewing the published literature. We searched the electronic databases MEDLINE/PubMed, Google Scholar, Cochrane Library and ScopeMed with MeSH (Medical Subject Headings) terms "tremor", "clinical features", "pathophysiology" and "treatment" from the earliest possible date. Articles in any language, especially those published in recent years, were given preference.
Tremor is classified into rest tremor and action tremor based on the position of the body part in which it is present. Action tremor is further categorized into postural, kinetic and intention tremor. Resting tremor is classically seen in Parkinson's disease. Essential tremor is the most common cause of postural tremor. Electromyography and accelerometry, though helpful, are complex tools for diagnosing tremor disorder. Tremor disorders should be approached systematically. We should proceed from the classification of tremor based on the position of the body part in which it is present. We need to corroborate clinical history, examination findings, and appropriate investigation reports to reach the final etiological diagnosis of the tremor disorder. Phenomenological classification of tremor syndrome Resting tremor: Resting tremor is evident when the body part is relaxed and completely supported against gravity (e.g. with the hands in the lap). It is present while sitting, lying down, and in a relaxed position. It attenuates when the body part is in motion during activities. Classically it is seen in Parkinson's disease; however, it may be a manifestation of a severe essential tremor, Wilson's disease and rubral tremor. Postural tremor: Postural tremor is conspicuous when limbs are voluntarily maintained in an anti-gravity position (e.g. arms outstretched). It decreases when the body parts are supported. The various conditions associated with postural tremor are listed in Table 1. Kinetic tremor: Kinetic tremor appears while making a voluntary movement. It is appreciated during activities like eating, writing etc. This tremor is of special concern as it can hamper the daily activities of the patient. Essential tremor, cerebellar tremor, dystonic tremor and primary writing tremor are the common conditions where kinetic tremor is seen. Kinetic tremor is further classified into simple, intention and task-specific tremor.
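Read as a decision rule, this phenomenological scheme can be sketched in a few lines of Python; the function name, argument names, and labels below are illustrative shorthand for the scheme in Figure 1, not code from the paper.

```python
def classify_tremor(at_rest, posture_held, during_movement, worse_near_target):
    """Phenomenological classification from the position in which tremor appears.

    Booleans describe when the tremor is observed; labels follow the review's scheme.
    """
    if at_rest:
        return "rest tremor"            # classically Parkinson's disease
    if worse_near_target:
        return "intention tremor"       # cerebellar pattern, maximal near the target
    if during_movement:
        return "kinetic tremor"         # e.g. while eating or writing
    if posture_held:
        return "postural tremor"        # e.g. arms outstretched against gravity
    return "no tremor observed"

print(classify_tremor(False, True, False, False))   # → postural tremor
```

The ordering encodes the review's priority: rest tremor is checked first, and an intention tremor (a kinetic tremor that worsens near the target) takes precedence over plain kinetic tremor.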
Intention tremor: Intention tremor, commonly known as cerebellar tremor, is a coarse tremor with a frequency below 5 Hz that appears when precision is required to touch a target [1]. It progressively worsens during the movement and reaches its maximal intensity near the target. The limb shakes side-to-side perpendicular to the line of travel. Classically it is seen in cerebellar disease of any etiology, where it is associated with other cerebellar symptoms. However, one-third of patients with essential tremor also have intention tremor [13]. Etiological classification of tremor syndrome Essential tremor: Essential tremor is a bilateral, largely symmetric postural or kinetic tremor involving the hands and forearms that is visible and persistent, and for which there is no other explanation [1]. About half of essential tremor patients have a positive family history [14]. It may involve the voice, head and rarely the legs. The usual frequency of essential tremor is 5-10 Hz, and it has no latency to onset. Symptom severity often increases over time, but the progression is very slow [15]. The majority of patients do not show accompanying neurologic signs or symptoms, but occasionally instability or more distinct cerebellar signs may be found during the examination, especially in long-standing tremor [16]. This tremor is aggravated by stress and attenuated by alcohol intake. Parkinson's disease tremor: In Parkinson's disease, Hughes et al. reported that seventy-five percent of patients had rest tremor during the course of the disease [17]. Parkinsonian rest tremor is characterized by a distal-predominant, asymmetrical in onset, gradually progressive, supination-pronation, pill-rolling type of tremor [18]. It attenuates during activities and sleep. It is associated with tremor of the lips, jaw and lower limbs. Resting tremor of Parkinson's disease is usually combined with postural and kinetic tremor [1]. Drug-induced tremor: Drug-induced tremor is usually symmetrical.
It follows a temporal pattern, beginning with the initiation of drugs and decreasing with the cessation of the culprit drug. It is a diagnosis of exclusion, and other causes of tremor should be ruled out [19]. Risk factors for drug-induced tremor are advancing age, polypharmacy, underlying structural brain disease, an anxious state and renal failure. The etiology of tremor in a certain subset of cases could be attributed to medications the individuals have been administered for certain underlying medical disorders as well. Hereby, in Table 2, we present a list of a number of drugs notorious for causing tremors [12,20,21]. Neuropathic tremor: Neuropathic tremor is also known as essential tremor-like tremor. It is characterized by a postural or kinetic, distal-predominant, symmetrical tremor [22]. It is manifested in inherited or acquired large-fiber-predominant peripheral neuropathies. Its frequency ranges from 2.8 Hz to 5.5 Hz [23]. Chronic inflammatory demyelinating polyneuropathy may be associated with this tremor, in which case it may be associated with the IgG4 NF155 antibody [24]. The development of these tremors occurs sub-acutely within weeks to months. On neurologic examination, other signs of peripheral neuropathy may be present. Serum electrophoresis, electrophysiological studies, cerebrospinal fluid analysis and sometimes nerve biopsy can help us come to a diagnosis [25]. Dystonic tremor Tremor is part and parcel of primary dystonia [26]. Dystonic tremor is a focal, postural or kinetic tremor in an individual with dystonia. This tremor may occur in the exact same part of the body as the dystonia or in a different area altogether [27]. Both the frequency and amplitude are often irregular and variable. Subtle symptoms, in terms of mild blepharospasm, voice change of spasmodic dysphonia, or slight torticollis, may be seen as important clues by the clinician.
Responsiveness to sensory tricks (geste antagoniste cues) and exhibition of a null point (a position of the body with no tremor) indicate a dystonic tremor [28]. Task-specific tremor, such as tremor that only occurs when writing (primary writing tremor) or when performing other specific tasks, may be a form of dystonic tremor [29]. Botulinum toxin injections can ameliorate dystonia and dystonic tremor and are accepted as the treatment of choice [30]. Psychogenic tremor Psychogenic tremor is characterized by its sudden onset and its association with a stressful life event. It may manifest as a combination of resting, postural, or intention tremors. It begins with the involvement of the arms, followed by involvement of the head and the legs. It shows a continuous or intermittent pattern with fluctuating frequency and amplitude. The majority of the patients have a maximal disability (46%) at its onset [31]. Various diagnostic criteria have been proposed [32,33]. The entrainment sign and the co-activation sign are hallmarks for diagnosing psychogenic tremor [34]. The entrainment sign requires the patient to maintain a tapping rhythm in an uninvolved body part at a different frequency than the suspected tremor, which automatically changes the frequency of the involved part to the enforced tapping frequency. The co-activation sign is the presence of increased tone of the involved limb during its passive movement, with a decrease of the tremor as the muscle tone decreases. This tremor has an unpredictable course and usually attenuates with sedatives. Orthostatic tremor Orthostatic tremor is characterized by a high-frequency (13-18 Hz) tremor occurring in the legs when a person stands erect, causing postural instability. Women are affected slightly more frequently than men [35]. The mean age of onset of orthostatic tremor is the sixth decade. Most cases are sporadic.
The syndrome can be primary or secondary and may be associated with a variety of disorders, most commonly Parkinsonism. Postural and kinetic tremor is most common, characterized by unsteadiness on standing. The symptoms improve markedly on sitting or walking. At times, the urge to sit down or to move can be so strong that patients often avoid situations where they have to stand still for a period of time, such as when queuing. The high tremor frequency leads to a partial fusion of the single muscle contractions, and it can be easier to listen to the contractions through a stethoscope applied to thigh or calf muscles. The sound has been compared with that of a helicopter [36]. Treatment options include clonazepam, primidone [37], gabapentin and benzodiazepines. Clinical examination We should perform a thorough clinical examination of patients with tremor disorder to find subtle associated neurological findings. Resting tremor should be observed when the patient's affected body parts are not voluntarily activated and when they are supported against gravity. The most appropriate time is when the patient is concentrating on other tasks, e.g. walking or during a conversation. Postural tremor is examined by asking the patient to hold the upper extremities in an outstretched position. The amplitude of postural tremor can be appreciated by putting a piece of paper on the outstretched hands [1]. Intention tremor is evident on goal-directed movements, increasing in amplitude as the subject approaches a goal. It can be elicited in goal-directed activities, such as finger-to-nose, heel-to-shin, and toe-to-finger movements. Observing a patient while drawing (e.g. Archimedes spirals) or writing is often helpful: action tremor is increased during writing or drawing, and a task-specific tremor may become obvious. In Parkinson's disease, there is usually no tremor during writing, but other signs can be seen, such as increasing micrographia and slow movements.
Archimedes spiral drawing is helpful to differentiate essential tremor from Parkinson's disease tremor [37,38]. Pouring water from one cup into another shows the degree of disability due to kinetic tremor in a practical situation [21]. Important clues about an underlying neurologic disorder in patients presenting with a tremor can be found during the examination of the cranial nerves, speech, gait, balance, and muscle tone. On ocular examination, the presence of nystagmus may suggest cerebellar disease, and Kayser-Fleischer rings are specific for impaired copper homeostasis. Several movement disorders affect the fine-tuned movements of the tongue, where abnormal findings such as fasciculation or slowness of tongue movements can be observed. Slow and irregular speech with increased separation of syllables or explosive sounds may indicate cerebellar dysarthria. Voice tremor can be appreciated in essential tremor [39]. On gait examination, a typical Parkinsonian slow shuffling or wide-based ataxic cerebellar gait may be noted. Muscular rigidity in combination with a tremor at rest is typical for Parkinson's disease, whereas spasticity may develop in multiple sclerosis. Electromyography Electromyography (EMG) is a simple tool that can be useful for the diagnosis of tremor syndrome [40]. Electromyography provides additional useful information about the activity of muscles involved in the generation of tremor. EMG activity may be recorded using needle or wire electrodes, or more typically surface electrodes overlying active muscles [41]. The EMG can provide information about motor unit recruitment and synchronization [42,43]. It can also elucidate the relationship between involved muscles and tremulous movements, revealing whether antagonist muscles (such as flexors and extensors of the wrist) are working at the same time or alternately to produce tremor, which helps to differentiate dystonic tremor from other types of tremor.
To utilize the EMG most appropriately in tremor analysis, the signal has to be processed by rectification and integration or smoothing to place its frequency profile into the tremor range [42]. The objective and detailed findings of a tremor analysis test are most helpful when the clinical picture is complicated or when clinical signs are subtle [44]. EMG analysis may be required to differentiate Parkinsonian tremor from essential tremor, where the treatment of the two entities is distinct. EMG analysis examines side-to-side frequency relationships, EMG topography, reflex responses, and tremor amplitude ratios during different clinical tasks. Accelerometer It is the most common and gold-standard method used to electronically evaluate a tremor. A linear tri-axial accelerometer measures the frequency and magnitude of the oscillatory cycles of the tremor. The measured frequency represents the dominant frequency of the tremor. Accelerometer findings help us to differentiate Parkinson's disease tremor from essential tremor. A tremor frequency below 5.5 Hz suggests Parkinson's disease, whereas a tremor frequency above 6 Hz suggests essential tremor [45]. Though it is an objective measure of the frequency and amplitude of tremor, it is expensive, complex and time-consuming. Most importantly, it does not measure the functional disability resulting from the tremor. Joundi et al. [45] reported that the iPhone accelerometer is comparable to sophisticated EMG analysis and may be useful in day-to-day clinical practice. Future direction This review highlights the need for future research to determine the utility of the Archimedes spiral and the effectiveness of different mobile-based applications, which are more practical even for resource-limited settings, in diagnosing tremor syndromes. Conclusion Tremor disorders should be approached systematically. The tremor should be initially classified into rest or action tremor.
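The accelerometer cut-offs quoted above (below 5.5 Hz suggesting Parkinson's disease, above 6 Hz suggesting essential tremor) can be illustrated with a dependency-free sketch. The dominant frequency is estimated here with a brute-force discrete Fourier scan; all function names are hypothetical helpers, not part of any published tool.

```python
import math

def dominant_frequency(samples, fs, f_lo=1.0, f_hi=20.0, step=0.1):
    """Scan candidate frequencies and return the one with maximal spectral power.

    A brute-force discrete Fourier scan; adequate for short tremor recordings.
    """
    best_f, best_power = f_lo, -1.0
    n_steps = int((f_hi - f_lo) / step) + 1
    for i in range(n_steps):
        f = f_lo + i * step
        c = sum(s * math.cos(2 * math.pi * f * n / fs) for n, s in enumerate(samples))
        s_ = sum(s * math.sin(2 * math.pi * f * n / fs) for n, s in enumerate(samples))
        power = c * c + s_ * s_
        if power > best_power:
            best_f, best_power = f, power
    return best_f

def classify_by_frequency(freq_hz):
    """Apply the quoted cut-offs: <5.5 Hz parkinsonian-range, >6 Hz essential-range."""
    if freq_hz < 5.5:
        return "parkinsonian-range"
    if freq_hz > 6.0:
        return "essential-range"
    return "indeterminate"

# Synthetic 4 Hz "rest tremor" recording, 10 s at a 100 Hz sampling rate.
fs = 100.0
signal = [math.sin(2 * math.pi * 4.0 * n / fs) for n in range(1000)]
f_dom = dominant_frequency(signal, fs)
print(round(f_dom, 1), classify_by_frequency(f_dom))   # → 4.0 parkinsonian-range
```

In practice the cut-offs overlap (the review notes one-third of essential tremor patients also show intention tremor), so the indeterminate band between 5.5 and 6 Hz is left unclassified rather than forced into either diagnosis.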
Reversible and benign conditions like enhanced physiological tremor should be ruled out first by taking a detailed history regarding stress, caffeine use, drug intake, etc. Though essential tremor is the most common movement disorder in the general population, other organic causes of tremor like Parkinson's disease, Wilson's disease, or vascular disease should be excluded before reaching a diagnosis of essential tremor. We should perform a detailed neurological examination to find subtle associated findings. Drawing an Archimedes spiral is cost-effective and can differentiate between tremor syndromes. Accelerometry and electromyography are complex, time-consuming, and expensive; thus, relying on their reports for tremor diagnosis seems impracticable in day-to-day clinical practice.
v3-fos-license
2016-05-12T22:15:10.714Z
2011-03-08T00:00:00.000
16943020
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.pagepress.org/journals/index.php/idr/article/download/idr.2011.e3/2722", "pdf_hash": "f3523fc67be3c5b3f6e4769d860fca31ad560cd2", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1471", "s2fieldsofstudy": [ "Medicine" ], "sha1": "f5cf8fc01f94a8c9f140d0183af4fcf89137c9b2", "year": 2011 }
pes2o/s2orc
Usefulness of the polymerase chain reaction dot-blot assay, used with Ziehl-Neelsen staining, for the rapid and convenient diagnosis of pulmonary tuberculosis in human immunodeficiency virus-seropositive and -seronegative individuals There are scarce data regarding the value of molecular tests, when used in parallel with classical tools, for the diagnosis of tuberculosis (TB) under field conditions, especially in regions with a high burden of TB-human immunodeficiency virus (HIV) co-infection. We evaluated the usefulness of the polymerase chain reaction dot-blot assay (PCR) used in parallel with Ziehl-Neelsen staining (ZN) for pulmonary tuberculosis (PTB) diagnosis, in a TB-HIV reference hospital. All sputum samples from 277 patients were tested by ZN, culture, and PCR. Performances were assessed individually, in parallel, for HIV status, history of anti-TB treatment, and in different simulated TB prevalence rates. Overall, the PTB prevalence was 46% (128/277); in HIV-seropositive (HIV+) individuals, PTB prevalence was 54% (40/74); the ZN technique had a lower sensitivity (SE) in the HIV+ group than in the HIV-seronegative (HIV−) group (43% vs. 68%; Fisher test, P<0.05); and the SE of PCR was not affected by HIV status (Fisher test; P=0.46). ZN, in parallel with PCR, presented the following results: i) among all PTB suspects, SE of 90%, specificity (SP) of 84%, likelihood ratio (LR)+ of 5.65 and LR− of 0.12; ii) in HIV− subjects: SE of 92%, LR− of 0.10; iii) in not previously treated cases: SE of 90%, LR− of 0.11; iv) at TB prevalence rates of 5–20%, negative predictive values (NPV) of 98–99%. ZN used in parallel with PCR showed an improvement in SE, LR−, and NPV, and may offer a novel approach in ruling out PTB cases, especially in not previously treated HIV− individuals, attended in hospitals in developing nations. Introduction Tuberculosis (TB) is one of the most important health problems in the world, with 1.8 million deaths reported each year.
1 Direct smear examination with Ziehl-Neelsen (ZN) staining for the diagnosis of pulmonary tuberculosis (PTB), as employed in most low-income countries, is cheap and easy to use, but its low sensitivity is a major drawback. 2 In Brazil, ZN is the recommended method both for TB diagnosis and treatment control, and sputum culture in solid medium is only indicated in PTB-suspect cases, such as those with: i) ZN-negative results; ii) paucibacillary and extrapulmonary specimens; iii) therapeutic failure with suspicion of drug resistance; and iv) individuals infected by HIV. 3 Rapid TB diagnosis has become crucial, especially for diagnosis involving clinical specimens from subjects with atypical presentation, where direct microscopy presents low sensitivity and culture can delay diagnosis by three to six weeks. 1 Important advances in molecular techniques, which rapidly identify mycobacterial DNA in sputa, may overcome these obstacles. 2 In developing countries, in-house polymerase chain reaction assays (PCR) for the amplification of Mycobacterium tuberculosis (MTB) DNA, using the IS6110 insertion as a PCR target, could be a quick diagnostic test for TB and offers the potential of a sensitive, specific, and rapid diagnostic tool for ruling out pulmonary tuberculosis (PTB). However, PCR methods in respiratory specimens present some caveats: i) reaction inhibitors; ii) lower sensitivity in paucibacillary specimens; and iii) high costs. The majority of previous studies have evaluated in-house and automated PCR and reported PCR sensitivities ranging from 77% to 95% and PCR specificities of 95% in smear-positive specimens, using culture as the gold standard and clinical criteria only to evaluate the inconsistent results. 4 Moreover, the PCR tests were evaluated separately, in contrast to clinical practice where associated tests are required for diagnosis. 
More recently, the evaluation of the usefulness of PCR, in parallel with the classical diagnostic techniques for rapid diagnosis of TB, has been considered a novel approach. [5][6][7] In order to compare the performance of the use of a molecular test (PCR dot-blot assay) or culture in parallel with ZN for the diagnosis of PTB, we conducted a prospective study in a TB-HIV reference hospital, located in Porto Alegre City in the south of Brazil where, in 2004, 1432 TB cases were reported, 420 of them diagnosed in hospitals and 51% being HIV-infected patients. 8 Materials and Methods Consecutive patients, adults suspected of having PTB and referred to the TB and HIV Reference Center, Parthenon Reference Hospital in Porto Alegre City, capital of Rio Grande do Sul, State of Brazil, were studied prospectively. PTB suspects were referred from community health care units to have their respiratory specimens cultured for mycobacteria, according to Brazilian National Guidelines. 3 Eligible patients were those who reported more than three weeks of coughing; ineligible patients were those receiving anti-TB treatment. Patients were excluded from the study if any of the following conditions were met: i) the culture was contaminated; ii) when expectorated sputum was not obtained; iii) laboratory or clinical data did not fulfill the PTB definition; and iv) written informed consent was not obtained from the study participant. All clinical samples were sent to the Laboratory of the State of RS, State Foundation for Research in Health, Porto Alegre, RS, Brazil (FEPPS, Lacen, RS) for laboratory analysis. All clinical specimens were processed using the acetylcysteine method. Ziehl-Neelsen staining (ZN) and culture (Lowenstein Jensen solid medium) were performed following routine procedures. Positive cultures were submitted to standard identification procedures for differentiation of the MTB complex from atypical mycobacteria. 
9 The PCR dot-blot assay was performed as previously described. 10 Briefly, using the IS6110 insertion element as a target for PCR, PCR products were transferred to a nylon membrane, and hybridization was performed with a specific biotinylated probe. The detection of hybridization was performed using conjugated streptavidin-alkaline phosphatase. A positive reaction was obtained by adding 5-bromo-4-chloro-3-indoyl phosphate (BCIP) and nitro-blue tetrazolium (NBT). Positive and negative controls were included for each PCR set. In order to detect specimen inhibitors in negative results, a tube of PCR mix for each specimen was spiked with the purified DNA target. 10 All PCR tests with discrepancies in the results were assayed in duplicate. Suspects of PTB, after signing their written informed consent, completed a validated questionnaire with questions regarding demographic variables and clinical history (e.g. smoking, alcohol abuse, HIV infection/AIDS). 11 Chest radiographs and physical examinations were performed by a respiratory specialist using a standardized form. Respiratory specialists were blinded as to the results of culture and PCR, and laboratory technicians were blinded as regards the chest radiographs and clinical predictors. HIV-testing by ELISA was performed using Western blot as a confirmatory test. The gold standard was the combination of a positive culture with a clinical definition of PTB. 10 Clinical and final diagnosis of confirmed PTB cases were defined as those with a positive culture for MTB in the respiratory specimen; presumptive PTB as those showing clinical improvement after six months of anti-TB treatment, as judged by three different chest physicians not involved in this study in a blinded manner. 12 Non-PTB was considered when patients had a negative acid-fast smear and MTB culture, and did not present clinical and chest radiographic changes after six months of follow-up.
Test performances were calculated using specific formulae as a function of sensitivity (SE) and specificity (SP) of PCR used in parallel with the ZN smear examination: for parallel tests, SE(ZN+PCR) = SEZN + SEPCR − (SEZN × SEPCR); predictive values (PV) for different simulated statistical prevalence rates; and likelihood ratios (LR), according to the literature. 13 Although the information in a diagnostic test can be summarized using SE and SP, other parameters may be important clinically for the definition of the accuracy of a laboratory test. LRs allow the investigator to take advantage of all information in a test. For each test result, the likelihood ratio is the ratio of the likelihood of that result in someone with the disease to the likelihood of that result in someone who does not have the same disease. The LR for a positive test is SE / (1 − SP), and the LR for a negative test is (1 − SE) / SP. The higher the LR, the better the test result for ruling in a diagnosis; an LR of greater than 100 is very high (and very unusual among tests). On the other hand, the lower an LR (the closer it is to 0), the better the test result is for ruling out the disease. The positive PV (PPV) is the proportion of true positives in all positive results, and shows the probability that one patient with a positive test has the disease. The negative PV (NPV) is the proportion of true negatives in all negative results and shows the probability that one patient with a negative test does not have the disease. Ethics Written informed consent was obtained from all patients, and HIV was tested by ELISA, using the Western blot as a confirmatory test. This study was approved by the Institutional Review Boards of FEPPS (n. 01/2002). Comparing the SE and LR− of ZN in parallel with PCR, among those individuals not previously treated for TB and those that used anti-TB treatment in the past, the figures were 93% and 0.08, and 85% and 0.18, respectively (Table 3).
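As a numeric illustration of these formulae, the sketch below evaluates the parallel-test sensitivity, both likelihood ratios, and the predictive values at the combined ZN-with-PCR figures reported in this study (SE 90%, SP 84%, prevalence scenarios of 5-20%). The function names are my own, and small differences from the published LR+ of 5.65 reflect rounding of the underlying counts.

```python
def parallel_sensitivity(se_a, se_b):
    """SE of two tests used in parallel: positive if either test is positive."""
    return se_a + se_b - se_a * se_b

def lr_positive(se, sp):
    """Likelihood ratio of a positive result: SE / (1 - SP)."""
    return se / (1.0 - sp)

def lr_negative(se, sp):
    """Likelihood ratio of a negative result: (1 - SE) / SP."""
    return (1.0 - se) / sp

def predictive_values(se, sp, prevalence):
    """Return (PPV, NPV) for a given pre-test prevalence."""
    tp = se * prevalence
    fp = (1.0 - sp) * (1.0 - prevalence)
    fn = (1.0 - se) * prevalence
    tn = sp * (1.0 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

se, sp = 0.90, 0.84                    # ZN in parallel with PCR, all PTB suspects
print(round(lr_positive(se, sp), 2))   # close to the reported 5.65
print(round(lr_negative(se, sp), 2))   # close to the reported 0.12
for prev in (0.05, 0.10, 0.20):        # out-patient / general-hospital scenarios
    ppv, npv = predictive_values(se, sp, prev)
    print(prev, round(ppv, 2), round(npv, 2))
```

The prevalence loop reproduces the qualitative pattern discussed later: NPV stays near 99% at low prevalence but falls as prevalence rises toward reference-hospital levels.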
In our study, with a TB prevalence of 46%, the observed NPV and PPV of PCR were 81% and 79%, respectively. The use of ZN in parallel with PCR among HIV− individuals showed an NPV and PPV of 93% and 83%, respectively. This strategy, among HIV+ individuals, had different results, with an NPV and PPV of 82% and 87%, respectively. Among HIV+ individuals that had not previously been treated, the NPV and PPV of ZN with PCR were 88% and 82%, respectively. Assuming different TB prevalence scenarios, the use of ZN in parallel with PCR showed similar NPVs and PPVs to those observed with ZN used in parallel with culture, among HIV+ and HIV− patients (Table 4). ZN associated with culture presented the best performance in all scenarios. ZN associated with PCR demonstrated a different performance. In regions with a TB prevalence of 5-10%, usually in out-patient units attending individuals presenting with coughing for more than three weeks (respiratory symptoms, according to WHO), the NPV for ZN when associated with PCR ranged from 99% to 100%. In health units, general hospitals, ambulatory reference centers, or TB clinics, where the TB prevalence usually ranges from 15% to 20%, the NPV of this diagnostic strategy was 98-99%. In reference TB hospitals, where the TB prevalence ranges from 30% to 50%, among HIV− individuals the NPV of ZN in parallel with PCR was 96% to 94%, but among HIV+ individuals this figure was reduced to 93% and 89% (Table 4). Discussion The observed overall PTB prevalence of 46%, and of 54% among HIV+ subjects, confirmed the high prevalence of TB-HIV co-infection in hospital units in Brazil, as reported by the Porto Alegre City TB Control Program. This finding highlights the necessity to evaluate innovative approaches for TB diagnosis in these settings, where atypical chest X-rays and the low SE of ZN as well as the existence of paucibacillary specimens are more frequently observed in HIV+ patients, similar to the results described by others. 5 Table 2.
Performance of Ziehl-Neelsen staining, culture, and polymerase chain reaction dot-blot assays, according to the history of anti-tuberculosis treatment, among human immunodeficiency virus-seropositive individuals (N=74). Laboratory results (TB / non-TB): all groups (TB N=40, non-TB N=34): ZN positive 17/0, ZN negative 23/34; non-treated group (N=47; TB N=32, non-TB N=15): ZN positive 14/0, ZN negative 18/15; past-TB group (N=27; TB N=8, non-TB N=19): ZN positive 3/0, ZN negative 5/19. Considering anti-TB treatment status, there was a tendency toward a higher SE in the non-treated group as compared with previous TB cases in all tested methods, and SP was similar to that previously reported. 7 ZN used in parallel with PCR showed SPs ranging from 83% to 86%, as previously described (84% to 87%) in developing countries using solely automated nucleic acid amplification (NAA) tests, and lower than those described (<95%) in industrialized countries. 2,7,14 When different prevalence rates were simulated, a high NPV was observed with a TB prevalence of 5-20%, characteristic of outpatient units and general hospital settings. However, these figures decreased in scenarios with a TB prevalence of >30%, especially among HIV+ subjects. As mentioned by other authors, in this report the sensitivity of the ZN staining was significantly lower among HIV+ TB patients, and the SE of the in-house PCR was not influenced by the HIV status of the patient. 2,15,16 These data confirm that the strategy of using ZN in parallel with PCR can be used for excluding TB in outpatient units and hospital settings, particularly in HIV− subjects. The lower SE of ZN when used in parallel with PCR (85%) may be a result of several factors. One of these is the presence of inhibitors that remained in the specimen after the extraction procedure; however, in our study the proportion of inhibitors (1.9%) was similar to those reported for NAA tests (0.85% to 22.7%).
14,17 Other factors may include a small number of unequally distributed mycobacteria in the test suspension owing to its division into three aliquots for the laboratory tests used in our study, or levels below the detection limit for in-house PCR (50 CFU). 10 In fact, among the false negative results, 33.3% (11/33) of specimens were below the amplification test detection limit used for PCR. Additionally, the low copy number of IS6110 (insertion element) in MTB is reported to decrease SE, but this has not been reported previously in Brazil. 10 PCR demonstrated 22 false-positive results (including nine that had had TB in the past, one that presented a scar image in the chest X-ray that resembled inactive TB, five that were HIV+, and six that reported proximity with smear-positive PTB cases during the last six months). The value of the Kappa score obtained between the duplicates of PCRs was 100%. The strategy of associating ZN in parallel with culture showed the best performance in subjects with or without HIV infection; however, culture can delay diagnosis by three to six weeks, making the quick diagnosis of TB difficult. Therefore, the use of ZN in parallel with PCR may provide an alternative for the rapid diagnosis of TB, particularly among HIV+ individuals or those with atypical presentation and/or co-morbidities, where diagnosis delay may be lethal, and is critical for the prompt initiation of anti-TB treatment. 17 Table 3. Performance of Ziehl-Neelsen staining, culture, and polymerase chain reaction dot-blot assays, according to the history of anti-tuberculosis treatment, in human immunodeficiency virus-seronegative individuals (N=203). Laboratory results (TB / non-TB): all groups (TB N=88, non-TB N=115): ZN positive 60/1, ZN negative 28/114; non-treated group (N=156; TB N=77, non-TB N=79): ZN positive 54/0, ZN negative 23/79; past-TB group (N=47; TB N=11, non-TB N=36): ZN positive 6/1, ZN negative 5/35. Additionally, this strategy could reduce the risk of dissemination to other hospitalized patients and healthcare personnel.
In our study, the combination of ZN and PCR showed a marked improvement in SE and in the negative likelihood ratio (LR−). Thus, the use of ZN together with PCR may offer a novel approach for ruling out PTB cases, especially among HIV− subjects not previously treated for TB and attended in hospitals in developing nations. In-house PCR is usually less expensive than automated nucleic acid amplification tests, and should be introduced more widely in developing nations after an evaluation of its cost-effectiveness and refined estimates of the likelihood of TB disease in different settings.
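The gain from reading two tests "in parallel" (calling a subject positive if either test is positive) can be made explicit. A hedged sketch, assuming conditional independence of the two tests; the stand-alone SE/SP values below are illustrative placeholders, not results from this study.

```python
# Sketch of the parallel-testing logic behind combining ZN with PCR:
# positive if EITHER test is positive, which raises sensitivity at the
# cost of specificity. Assumes the tests err independently given disease
# status (an idealization). SE/SP inputs are illustrative assumptions.

def parallel_combination(se1, sp1, se2, sp2):
    """SE/SP of two tests read in parallel (positive if either is positive)."""
    se = 1 - (1 - se1) * (1 - se2)  # a case is missed only if both tests miss
    sp = sp1 * sp2                  # called negative only if both are negative
    return se, sp

def negative_lr(se, sp):
    """Negative likelihood ratio LR- = (1 - SE) / SP; lower is better
    for ruling disease out."""
    return (1 - se) / sp

se_zn, sp_zn = 0.60, 0.99    # assumed stand-alone ZN performance
se_pcr, sp_pcr = 0.70, 0.87  # assumed stand-alone PCR performance
se_par, sp_par = parallel_combination(se_zn, sp_zn, se_pcr, sp_pcr)
print(f"parallel SE {se_par:.1%}, SP {sp_par:.1%}, "
      f"LR- {negative_lr(se_par, sp_par):.2f}")
```

With these inputs the combined sensitivity rises to 88% and LR− drops well below that of either test alone, which is the mechanism behind the rule-out strategy the authors propose.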
Tamoxifen from Failed Contraceptive Pill to Best-Selling Breast Cancer Medicine: A Case-Study in Pharmaceutical Innovation

Today, tamoxifen is one of the world's best-selling hormonal breast cancer drugs. However, it was not always so. Compound ICI 46,474 (as it was first known) was synthesized in 1962 within a project to develop a contraceptive pill in the pharmaceutical laboratories of ICI (now part of AstraZeneca). Although designed to act as an anti-estrogen, the compound stimulated, rather than suppressed ovulation in women. This, and the fact that it could not be patented in the USA, its largest potential market, meant that ICI nearly stopped the project. It was saved partly because the team's leader, Arthur Walpole, threatened to resign, and pressed on with another project: to develop tamoxifen as a treatment for breast cancer. Even then, its market appeared small, because at first it was mainly used as a palliative treatment for advanced breast cancer. An important turning point in tamoxifen's journey from orphan drug to best-selling medicine occurred in the 1980s, when clinical trials showed that it was also useful as an adjuvant to surgery and chemotherapy in the early stages of the disease. Later, trials demonstrated that it could prevent its occurrence or re-occurrence in women at high risk of breast cancer. Thus, it became the first preventive for any cancer, helping to establish the broader principles of chemoprevention, and extending the market for tamoxifen and similar drugs further still. Using tamoxifen as a case study, this paper discusses the limits of the rational approach to drug design, the role of human actors, and the series of feedback loops between bench and bedside that underpins pharmaceutical innovation. The paper also highlights the complex evaluation and management of risk that are involved in all therapies, but more especially perhaps in life-threatening and emotion-laden diseases like cancer.
INTRODUCTION

Today, tamoxifen (brand name Nolvadex) is one of the world's best-selling hormonal breast cancer drugs. However, it was not always so. Compound ICI 46,474 (as it was first known) was synthesized in 1962, quite unusually for the time, by a female chemist: Dora Richardson, who was responsible for making triphenylethylene derivatives within a project to develop a contraceptive pill in the pharmaceutical laboratories of the British chemical group ICI (now part of AstraZeneca). Although designed to act as an anti-estrogen, the compound was found to stimulate, rather than suppress ovulation in women. This, and the fact that at first it could not be patented in the USA, its largest potential market, meant that ICI nearly stopped the project. If it was saved, it was partly because the team's leader, Arthur Walpole, threatened to resign, and pressed on with another project: to develop tamoxifen as a treatment for breast cancer. Even then, its market appeared small, because at first it was mainly used as a palliative treatment for advanced breast cancer. An important turning point in tamoxifen's journey from orphan drug to best-selling medicine occurred in the 1980s, when the results of clinical trials showed that it was also useful as an adjuvant to other forms of therapy in the early stages of the disease. Later, trials demonstrated that it could prevent its occurrence or re-occurrence in women at high risk of developing breast cancer. Thus, it became the first chemopreventative for any cancer, helping to establish the broader principles of chemoprevention, and extending the market for tamoxifen and similar drugs further still. Tamoxifen has been hailed as a pioneering medicine that has saved the lives of thousands of women 1 , and much has been written about it, especially in recent years by Craig Jordan, the researcher who was influential in the latter part of its history (Maximov et al., 2016).
However, as the public's and the medical profession's dependence on drugs not only to treat, but also to prevent an ever-growing variety of conditions has come under increasing scrutiny (Greene, 2007), tamoxifen has also been investigated by sociologists as an example of what they describe as the "biomedicalization" of society, i.e., the shift in the use and meaning of drugs from treatment to prevention, involving a cost-benefit calculation that is seldom openly discussed (Fosket, 2010, pp. 341-348; Löwy, 2010, pp. 185-188; Löwy, 2012). There is another strand in the literature, which is somewhat less well-developed, and concerns tamoxifen at once as an emblematic and an idiosyncratic example of pharmaceutical innovation. For the history of tamoxifen suggests a model of pharmaceutical innovation that is far more complex than a linear model from bench to bedside (Schwartzman, 1976; Howells and Neary, 1988; Gambardella, 1995; Landau et al., 1999). Rather, it incorporates numerous dead ends, feedback loops, as well as serendipitous observations made by individual researchers (and associated with other discoveries, in this instance the isolation of the estrogen receptor). Hence, the scientists whose work has shaped pharmaceutical innovation are an important part of the story: in the case of tamoxifen, not only Richardson, but also Walpole, the biologist who led the research team at ICI and provided the link between the different projects within which tamoxifen was developed (Jordan, 1988), and, for the later stage in the drug's tortuous journey, Jordan.
At a time when the drying up of old drug pipelines has led to anxieties about the end of the Therapeutic Revolution and the need to find new models of drug discovery to replace those which produced many of the blockbuster drugs we know today, tamoxifen therefore presents an opportunity to explore the historically contingent nature of pharmaceutical innovation, addressing several of the questions posed by the editors (see their introduction to this special issue). Using the research and development reports of the company that developed the drug (ICI) 2 , an unpublished history of tamoxifen, written by Richardson and accompanied by letters from patients 3 , as well as some of the numerous publications on the topic, the paper will show how the early history of the drug shaped its fate in the medical marketplace, and therefore deserves to be better understood than it is at present. The paper argues that its origins as a contraceptive pill rather than a cancer remedy meant that concerns over side-effects, alongside its ability to counteract the action of estrogen, dominated the company's research and development agenda. Hence patients' voices, which provided indications of the drug's safety and efficacy at once directly and indirectly, helped to define this agenda, and the absence of side-effects relative to its anti-estrogenic activity would become one of the key selling points of tamoxifen as an anti-cancer drug compared to alternative treatments. However, because of its very ability to prolong life in women suffering from breast cancer, tamoxifen was later found to have a number of potentially serious long-term side-effects, which range from pulmonary thrombosis to endometrial cancer. Nevertheless, its usefulness in treating and preventing a major cause of death in women has meant that, to this day, it remains on the WHO's List of Essential Medicines (WHO, 2015).
This paper will therefore also highlight the complex evaluation of risk that is involved in all therapies, but more especially perhaps in diseases as threatening and emotionally charged as cancer, not only at the regulatory and clinical levels, but also at the individual level of the patient.

RESULTS

Before focusing on the development of tamoxifen, it is useful to describe the background for the different projects that led first to its synthesis, and second to its early trajectory as an anti-cancer drug, for it illustrates not only the non-linear nature of pharmaceutical innovation, but also the lengthy accumulation of in-house scientific knowledge and technical know-how which underpins it, and yet is rarely brought to the fore in histories of drug discovery (Weatherall, 1990; Sneader, 2005; Ravina, 2011).

The Use of Sex Hormones and Synthetic Analogues in Cancer

The link between hormones and cancer has been known at least since 1916 (Lathrop and Loeb, 1916). However, their usage in the treatment of cancer depended on their isolation, purification and chemical determination, which was not achieved until the 1930s in the case of sex hormones. One such hormone was the follicular hormone (Follicle-Stimulating Hormone, FSH), which was prepared by the Roussel Laboratories, a French company specializing in biologicals, and supplied to Antoine Lacassagne at the Institut du Radium in Paris. Using this hormone, Lacassagne was able to show a direct link between estrogens and the appearance of breast cancer in mice (Lacassagne, 1932, 1936).

2 AstraZeneca, formerly ICI (hereafter AZ), research and other reports: Oral Contraception (AZ CPR 70: 1960-64); Endocrinology and Fertility (AZ CPR 101: 1965-72); Viruses and Cancer (AZ CPR 54-55: 1958-64). These unpublished reports were made accessible to me between 2002 and 2009 by kind permission from AstraZeneca.
3 D. N. Richardson, "The history of Nolvadex" (AZ PH27039 B, 13 May 1980).
But natural estrogens were difficult to obtain in the quantities required for large-scale experiments. A major turning point occurred when E. C. (later Sir Charles) Dodds, working at the Middlesex Hospital in London in collaboration with researchers at the Dyson Perrins Laboratory in Oxford, discovered that the synthetic compound stilboestrol had estrogenic properties (Dodds et al., 1938). Dorothy Crowfoot (later known by her married name, Hodgkin), who worked nearby at Oxford University's Inorganic Chemistry Department, established using X-ray crystallography that its chemical structure resembled estrogen (Carlisle and Crowfoot, 1941). Inexpensive to make and apparently well tolerated in patients, stilboestrol was therefore widely prescribed for cases of estrogen deficiency, especially in menopausal women (Sneader, 2005, p. 197). Although it would later be linked to cases of vaginal or cervical adenocarcinomas in daughters of women who had been prescribed the drug in their first trimester of pregnancy (to avoid unwanted abortion; see Gaudillière, 2014), it was also the first synthetic drug to be used for treating cancer (Weatherall, 1990, pp. 217-218). Indeed, in 1939, Charles Huggins of the University of Chicago successfully treated cases of prostate cancer with stilboestrol (known as diethylstilbestrol in the US; Huggins and Hodges, 1941), and by 1950 a co-operative trial had shown that the synthetic estrogen was effective in delaying the progress of this type of malignant disease (Nesbitt and Baum, 1950). However, breast cancer proved more difficult to treat, as it could either be inhibited or stimulated by administration of estrogen. Following the publication of Dodds' findings, synthetic substances with a similar structure, such as triphenylethylene, were examined for estrogenic activity.
These substances, which could not only be mass-produced, but also be chemically modified to obtain derivatives with anti-estrogenic activity, therefore became compounds of choice for studies in Britain and elsewhere. One of the organizations that studied them was ICI, which I turn to now.

ICI and Cancer Research

The company's interest in cancer was a long-standing one. When triphenylethylene was found by Charles Scott, a researcher in Edinburgh, not only to be active by mouth, like stilboestrol, but to have a durable estrogenic action, and therefore to have potential as an alternative to stilboestrol, Arthur Walpole, a biologist who had joined ICI's Medicinal Section in 1938, began carrying out some exploratory work with the substance. This work led to the synthesis of various triphenylethylene derivatives, including triphenylmethylethylene (M 612) and triphenylchloroethylene (registered in 1940 under the name Gynosone) 4 . In 1942, these compounds were supplied by the company for trials in breast cancer to Alexander Haddow of the Chester Beatty Institute in London, Edith Paterson at the Christie Hospital in Manchester, and their collaborators. Although improvements were only temporary, there was clear evidence that Gynosone in particular caused regression and therefore could be beneficial in the treatment of breast cancer (Haddow et al., 1944; Walpole and Paterson, 1949). Meanwhile, on the other side of the Atlantic, the compounds known as "nitrogen mustards," which were being studied as part of a chemical warfare research programme, were shown to inhibit the growth of blood and lymph tumors by Goodman and Gilman at Yale University, a discovery often hailed as the beginning of cancer chemotherapy 5 . Despite this wartime work being top-secret, Walpole and Haddow were also able to investigate these compounds, thanks to an Anglo-American agreement to exchange scientific information (Weatherall, 1990, p. 218).

4 Richardson, "The history of Nolvadex."
Another, parallel study relating to cancer at ICI involved anti-metabolites. Following the discovery that ICI's novel anti-malarial drug Paludrine was converted in the body to cycloguanine, an active metabolite which interferes with purine biosynthesis, and spurred by the announcement that Burroughs Wellcome's drug 6-MP was effective against leukemia, the search for anti-metabolites began at ICI under the leadership of Frank Rose, who had run their anti-malarial programme during the war. Rose became Research Manager of the Chemistry Department in 1954, whilst remaining involved in bench work. As well as the search for alkylating agents, synthetic estrogens, and anti-metabolites, Rose also encouraged investigations into carcinogenesis, which was a rare interest for researchers working on cancer chemotherapy at that time (Suckling and Langley, 1990, pp. 507-508). At first, ICI's approach to cancer was therefore largely empirical, involving the synthesis of derivatives of compounds that had known anti-tumor properties, without a formal cancer research programme. However, once plans had been made to build a pharmaceutical research center at Alderley Park near Manchester, and ICI started organizing its research in team projects, Cancer became such a project in 1955. The project was entitled "Cancer and Viruses: antibacterials" 6 , and its team leader was the biologist E. Weston Hurst. Alderley Park opened in 1957, and between 1957 and 1960 Cancer and Viruses separated into two different projects. During that time Cancer was merged with a new project to find an oral contraceptive, led by Arthur Walpole. Then, in 1960, the discovery of the natural antiviral substance interferon, and ICI's involvement in its study in collaboration with the Medical Research Council (see Pieters, 2005, Chapters 5-6), led to Viruses and Cancer coming together again. Oral Contraception therefore split away from Cancer, with Walpole working in parallel on both projects.
His involvement in the Oral Contraception project (which in 1963 was re-named "Endocrinology," and later "Fertility," reflecting a gradual change in the research emphasis) would ensure that breast cancer remained an important focus for both his teams. It was within this Oral Contraception project that tamoxifen (Nolvadex), a triphenylethylene derivative, was synthesized and subsequently developed, initially as a contraceptive pill.

5 These compounds were later understood to work by alkylation, i.e., the transfer of an alkyl group from one molecule to another, in the case of anti-cancer agents attaching it to DNA, thus inhibiting cancer cell division; hence such compounds became known as "alkylating agents" because of their mechanism of action.
6 Since the 1930s it was known that viruses can induce tumors in laboratory mice. The oncogenic potential of a number of virus groups, including adenoviruses, herpesviruses, and poxviruses, was identified in the 1950s and 1960s (see Rigby and Wilkie, 1985).

ICI, Oral Contraception, and the Origins of Tamoxifen

The first contraceptive pill had been synthesized in the early 1950s, and in 1956 Walpole wrote a survey entitled "The technical possibility of oral contraception" 7 , which, as had become customary within ICI by that time (see Quirke, 2005), gave an overview of the field to enable ICI to decide whether or not it was worth entering. Walpole began by introducing the context in which such a pill would be developed. In doing so, he showed the extent to which contemporary concerns, which included anxiety over population growth, a decrease in death rates, food shortages, and an awareness of important differences between the developed and developing world, were internalized and acted upon by companies such as ICI. Then, in the main body of his report, Walpole enumerated the requirements for contraception: 1.
it should not "offend social or religious scruples," and should offend as little as possible the "aesthetic feelings" of those who might wish to avail themselves of it (however, he added, such considerations remained outside the scope of experimental biology); 2. it should be cheap enough to be readily available and simple enough to use by any people "intelligent enough to realize the possible consequence of coitus and to know whether or not they wished to conceive"; 3. it should be effective over a known period of time with no prejudice to subsequent fertility; 4. it should not depend on a local action contemporaneous with coitus or any form of treatment which must be timed in a complex or critical manner in relation to the menstrual cycle; 5. it should involve only occasional dosage by mouth. After describing what at the time was understood about the physiology of reproduction, he went on to list the technical possibilities of contraception at different stages in the reproductive cycle, from (a) spermatogenesis and sperm, to (b) ovulation and ovum, (c) fertilization, (d) fertilized ovum, (e) implantation of embryo, and lastly (f) development of embryo. On the basis of substances already known to act as contraceptives, he concluded that it "would seem possible to produce temporary infertility in men by giving androgens, and of these methylsterone is active by mouth." He added that it was also possible "either to prevent conception or interrupt pregnancy at a very early stage in women by giving estrogens by mouth," but that such treatments must be free from undesirable side-effects. Among the newer partly synthesized steroids now becoming available, he believed that substances might be found that would be more specifically antagonistic toward progesterone (antiprogestins), and he argued that these would seem more suitable for continued use 8 .
Other substances from natural sources, such as Lithospermum ruderale, a North American plant with a small white flower that could also be found in English hedgerows and was being investigated at the time by the Medical Research Council (Marks, 2001, pp. 49-50), appeared to him as "rather more suspect," and he acknowledged that clinical evidence was lacking, not only concerning these natural compounds, but also human contraception more generally. As to the other substances that might be considered for contraception, toxicity was a major problem: the anti-folic drug aminopterin, for example, not only acted as an early abortifacient, but carried serious toxic hazards, like some of the other anti-metabolites. Similar concerns were associated with biological alkylating agents, which were potentially mutagenic and carcinogenic. Hence, taking into account both the requirements for contraception and the need to avoid toxic effects, especially since contraceptive substances were intended for use in normally young and healthy adults (Oudshoorn, 2002, pp. 123-157), the search for triphenylethylene derivatives, alongside investigations of natural and part-synthesized steroids, became the preferred course of action, as evidenced by ICI's research reports 9 . ICI were not alone in pursuing the triphenylethylene route. Indeed, when Leonard Lerner, a researcher working on a cardiovascular research program at the American drug company Merrell, reported in 1958 that a newly synthesized compound, MER 25 (ethamoxytriphetol), not only structurally resembled triphenylethylene, but had anti-estrogenic activity on both spayed and intact female rats, his discovery stimulated laboratory research and clinical investigation of other potential anti-fertility agents among triphenylethylene derivatives.
ICI considered acquiring the drug under license from Merrell in order to study and potentially exploit it as a contraceptive, but interest in it waned, for in the meantime ICI had found that another compound, ICI 22,365 [a hydrazine], which they employed in analytical chemistry and were currently investigating as an anti-parasitic for use in the poultry industry, prevented the development of sex organs and secondary characteristics such as the emergence of combs in chicks 10 . This finding led Walpole's team, which at that stage included G. E. Paget and J. K. Walley working on the biological side (while Dora Richardson and G. A. Snow worked on the chemistry), to test the compound in male and female rats, producing evidence that it caused a selective and reversible inhibition of the gonadotrophic functions of the pituitary in rats, and prevented pregnancy either by inhibiting ovulation, or by preventing implantation (the precise mechanism of action was as yet unclear). In a report written in September 1960, Walpole wrote that the compound provided an interesting lead not only in oral contraception, but also in hormone-dependent cancers of the prostate and breast, and it was decided that "if an alternative patentable compound were found which, in laboratory tests, proved superior (or even equivalent to it), then this compound should replace 22,365 in clinical studies" 11 . The most promising compound to come out of this programme, ICI 33,828 (which had a similar structure to 22,365), was therefore tested in pre-menopausal patients with mammary carcinoma, which was justified on the grounds that it might have a therapeutic as well as an anti-fertility effect. It was also tried in prostatic cancer; however, the clinicians involved in these trials at the MRC Clinical Endocrinology Unit in Edinburgh received complaints from patients about nausea, anorexia, and occasional vomiting.
Walpole also discovered that, before trials with 33,828 could begin, 22,365 had been given in November 1960 to a psychotic patient who was 15 weeks pregnant, in order to induce abortion. However, the drug had failed to terminate the pregnancy, and estrogen excretion had remained unaffected by the treatment. The fetus, which had therefore had to be removed surgically, appeared normal. At the same time as making plans for more extensive clinical studies, preferably closer to home so that his team could be more directly involved in the trials, Walpole therefore also made plans to develop more sensitive assay methods for gonadotrophins in urine, blood, and pituitary, to better assess the clinical effects of their lead compound, and to obtain more reliable measures of activity in animal experiments 12 . Shortly afterwards, in 1962, Mike Harper, a young endocrinologist who would play a significant part in the tamoxifen story, was invited to join the team. Meanwhile, at Merrell, researchers had pressed on with the search for novel triphenylethylenes and in 1961 discovered that MRL 41 (also known as clomiphene, or chloramiphene, brand name Clomid), which was in fact an ether derivative of Gynosone, also inhibited pituitary gonadotrophins, although it showed weak estrogenic activity. Remembering the earlier trials with Gynosone and M 612, Walpole therefore suggested to his team that they develop and examine an ether derivative of M 612 13 . The compounds they prepared in 1961 not only inhibited implantation of the fertilized ovum in the rat at a low dose (below that at which they would show estrogenic activity), but with the addition of a methoxy group they also had a greater duration of action. After the arrival of Harper, whose new series of biological tests helped to produce a clearer picture of the structure-activity relationships of triphenylethylenes, the programme of chemical synthesis was therefore stepped up.
The team had grown, and, as well as Walpole and Walley, its members believed that ICI could improve upon both clomiphene and a new Upjohn product with similar activity, U 11,555, by finding alternatives with less estrogenic and pituitary-inhibitory activity relative to their anti-fertility activity. For, by then, clinical studies of ICI 33,828 had produced disappointing results: not only did it have unpleasant and worrying side effects (nausea, drowsiness, a fall in thyroid function measured by thyroidal I 132 uptake, and a rise in serum cholesterol) 14 , but the inhibition of ovulation could not be achieved without suppressing menstruation, which made it undesirable as an oral contraceptive in women 15 . Among the newly synthesized triphenylethylenes, Harper drew up a short list for further study, primarily as potential anti-fertility agents. These included the dimethylamino ethoxy compound ICI 46,474 (later known as tamoxifen, brand name Nolvadex). It had been synthesized in 1962 by Richardson, and Harper selected it for additional tests and for preliminary toxicity studies. At the same time, the company lodged patent applications to protect ICI 46,474 and related compounds from competitors 16 . As well as providing basic data on these compounds, Patent GB1013907 covered a number of potential therapeutic uses, including cancer. It read:

The alkene derivatives of the invention are useful for the modification of the endocrine status in man and animals and they may be useful for the control of hormone-dependent tumors or for the management of the sexual cycle and aberrations thereof. They will also have useful hypocholesteraemic activity 17 .

12 NB: the reports were re-named "Endocrinology" after the study of steroids and their action on cholesterol metabolism was included in the project.
13 Richardson, "The history of Nolvadex."
ICI 46,474 (1962-67)

Although marred by a number of dead ends, which were partly due to ICI's strategy of closely following their competitors' activities and using their compounds as leads in the search for new, patentable products, the early phase of the Oral Contraception programme shaped tamoxifen and determined its future in many ways. The compounds developed within this programme were designed to act as contraceptive pills, yet from the beginning their usefulness in breast cancer was explored in close parallel. This dual objective was pursued as a result of Walpole's own research interests, and thanks to the fruitful collaborations he established both with endocrinologists and with clinicians working in cancer. The feedback loops between bench and bedside which this relationship created, and which led to a series of twists and turns that would become the hallmark of the tamoxifen story, meant that the compounds functioned both as research tools to study hormone function and metabolism in the laboratory, and as experimental treatments in the clinic. Importantly, the dual objective of developing a contraceptive pill whilst assessing the usefulness of compounds in breast cancer (even if, as we have seen, this was also a means of testing drugs before administering them to healthy women) also meant a constant preoccupation with side effects, and the low toxicity of tamoxifen relative to its potency would turn out to be one of its crucial advantages over its competitors.

14 Cholesterol levels had become a serious concern since Merrell's new drug MER 29 (Triparanol) had been found to cause irreversible cataracts by interfering with cholesterol biosynthesis, and had had to be withdrawn from the market. B. W.
A triphenylethylene derivative, with groups and side chains to enhance its anti-estrogenic and pituitary-inhibitory effect and prolong its duration of action, without interfering with its anti-fertility activity, ICI 46,474 had been demonstrated as the most potent and least toxic of all the compounds tested by June 1964 18 . But what exactly was it? In the process of gathering data for patent applications, scaling up production and preparing a submission to the newly formed Committee on Safety of Drugs (CSD), uncertainty arose as to the precise structure of the compound. Using an NMR spectrometer recently acquired by the company, in 1964 G. R. Bedford, a spectroscopist who had joined ICI's Pharmaceutical Division in 1963, showed that many of the active compounds synthesized so far were a mixture of isomers. However, it was unclear in which isomer the anti-estrogenic activity resided (did it reside in the cis, or the trans isomer?). The isomers were separated by fractional crystallization by Richardson. This represented quite a feat at the time 19 , and revealed ICI 46,474 to be more active as an anti-implantation agent than its cis isomer ICI 47,699, which was more estrogenic (Bedford and Richardson, 1966; Harper and Walpole, 1966). In the meantime, Merrell had carried out a spectroscopic analysis of their own drug clomiphene, and disagreed with ICI's interpretation of the spectroscopic data, attributing the anti-estrogenic activity to the cis, not the trans isomer. The controversy led to some confusion among researchers, and eventually the matter was settled by X-ray analysis, which confirmed ICI's finding that the anti-estrogenic activity did indeed reside in ICI 46,474, that is to say in the trans isomer of the compound (Kilbourn et al., 1968). So how did tamoxifen work?
Before making a submission to the CSD, which in the wake of the thalidomide disaster had been set up to review all laboratory data on potential drugs in advance of their introduction into human patients, a basic understanding of their mechanism of action, as well as knowledge about any toxic effects, had to be achieved (see Quirke, 2012a). Therefore, unsurprisingly perhaps, since it was intended for use in contraception, the first teratogenic test ever to be performed by ICI was carried out with tamoxifen. At the very low doses necessary to allow implantation of the fertilized ovum, rat offspring developed a deformity called "kinky ribs." However, no such effects could be seen in rabbits or in primates, and it was later concluded that since ICI 46,474 restricts uterine growth, the deformity was caused by mechanical contraction and therefore could not be considered a true teratogenic effect 20 . Tamoxifen was most effective in preventing implantation in rats when given on day 4 of the pregnancy, and virtually inactive on day 5. This suggested that it acted by interfering with a crucial event that had already occurred by the 5th day. It was suspected that ICI 46,474 prevented implantation by interfering with the critical estrogen release on the uterus that occurs between 12 and 20-21 h on the 4th day 21 . However, it was unclear whether the estrogen released at this time acted directly on the uterus or whether its action was mediated by vasodilating amines such as histamine. As there was evidence to support the latter hypothesis, ICI 46,474 was thought to act either as a direct estrogen antagonist, or by preventing the release of histamine, or as an antagonist of the amine. To explore this hypothesis, whilst carrying out further toxicity tests, experiments were devised in additional animal species (as well as rats: in mice, rabbits, dogs, monkeys, and sheep, for by then the compound was also being considered for use in veterinary medicine) 22 .
These experiments revealed considerable species specificity, and by 1965 doubts had arisen as to whether an "estrogen surge" was necessary for ovoimplantation in humans, as it was in rats; whether, at the dosage required to oppose estrogen sufficiently to inhibit implantation, ICI 46,474 would cause menstrual irregularities; and therefore whether the compound would prove effective and be acceptable as an oral contraceptive 23 . Although it was still hoped that ICI 46,474 would provide a welcome alternative to the now familiar method of using mixtures of orally active estrogens and gestagens (also known as progestogens) to inhibit ovulation while at the same time producing withdrawal bleeding to replace spontaneous menstruation, a method which was considered too costly and too complicated for use in underdeveloped communities, it was felt that such doubts could only be "settled in the clinic 24 ." However, first, the team needed to ascertain whether or not ICI 46,474 would produce irreversible damage to the ovaries or uterus, and for this, studies in monkeys, particularly pig-tail monkeys, in which changes in the reproductive cycle were found to most closely resemble those in man 25 , were deemed to be the most helpful.

The First Collaborative Trials (1967-71)

While these further studies were being carried out, ICI began planning a trial with Dr. Klopper at Aberdeen, for the induction of ovulation in amenorrheic women rather than contraception 26 . Indeed, by then, clomiphene had been found to stimulate ovulation and prolong luteal function in amenorrheic women, and in 1967 it was approved for the treatment of infertility in the US 27 . Moreover, obtaining approval to evaluate ICI 46,474 in oral contraception was problematic, not only because it involved long-term administration, but because of persisting fears among British gynecologists that it might lead to fetal malformation.
In their eyes, unlike the conventional pill, which contained familiar ingredients such as estrogens and progestins that had traditionally been given to pregnant women without harm to the fetus, evidence of a lack of teratogenic effect in animal experiments with an unknown compound like ICI 46,474 did not constitute an adequate safeguard. Therefore, they believed that the first women to receive ICI 46,474 as a contraceptive must be offered an abortion, but under the terms of the 1967 Abortion Act this could only be offered to a very limited number of women 28 . Two solutions to this conundrum were envisaged: (1) to arrange a consortium of gynecologists to contribute such patients to a central unit in the hope of collecting a reasonable number fairly quickly; (2) to go abroad to a country, such as Hungary, where abortion was accepted as a means of population control. Meanwhile, therapeutic studies would be conducted to establish the sort of doses to be used in contraceptive trials, and approval to carry these out was obtained from the CSM in 1969. These studies included ICI 46,474 (now also referred to by its brand name Nolvadex) for the treatment of anovulation or menorrhagia associated with high levels of endogenous estrogen (to be carried out at Aberdeen, Manchester and the Women's Hospital in Chelsea), and of breast carcinoma in 30 menopausal and post-menopausal women (at the Christie Hospital in Manchester).

21 These changes were measured by radio-immunoassay (plasma estradiol) as well as protein binding (progesterone) once appropriate tests had been developed. 26 AZ CPR 101/7B Endocrinology and Fertility, January 1967. 27 Its introduction is said to have begun the era of assisted reproductive technology. See Dickey and Holtkamp (1996).

The preliminary reports received from Dr. Klopper in Aberdeen and Drs.
Murray and Osmond-Clarke in London helped to cast further light on the drug's mechanism of action, showing that tamoxifen was capable of inducing ovulation at higher dose levels, while at lower doses it tended to have an anti-estrogenic effect 29 . As to the Christie breast cancer trial, although two of the women complained about hot flushes (which was taken as evidence of its anti-estrogen effect), no toxicity was observed and the drug appeared to be well tolerated, even at the highest dose of 10 mg by mouth. In her unpublished history of tamoxifen, Dora Richardson wrote of the team's excitement as the first trial results arrived. She described the news of the birth of a child to a woman who had been infertile for 12 years and had failed to respond to treatment with clomiphene as a "boost to morale 30 ." She also described how the team were encouraged by the results of the breast cancer trial, even though these results were not received with universal enthusiasm at ICI: Walpole and his colleagues were told that they were supposed to be looking for a contraceptive pill, not an anti-cancer agent! At a Development meeting on 28th August 1970, sales estimates and quantities of bulk drug were set at 2 kg for initial stocks. Richardson concluded from these figures that the Development Department obviously envisaged treating only "dead people," an indication of the hopelessness of the condition as it was viewed at that time (as well as of a lack of faith, or ignorance, on the part of the Development team) 31 . However, fortunately, on the basis of the positive clinical results, the CSM granted the company permission to prolong the trials as well as extend them to other centers.

28 It was thought that British gynecologists would be unlikely to come across more than one woman a year to whom abortion could be offered under the new law. Walpole et al. (AZ CPR 101/19B Endocrinology and Fertility, 19 Feb. 1971).
By the end of 1970, 60 patients had been admitted to the Christie breast cancer trial, and of the 40 women who had been on the trial for more than 10 weeks, all had shown measurable and marked tumor regression. Although these results were comparable to those achieved with the established synthetic hormone diethylstilboestrol, the clinicians carrying out the trial, Drs. Todd and Cole, reported how impressed they were with the absence of toxicity and the low incidence, as well as the trivial nature, of any side-effects (Cole et al., 1971), especially compared with other agents used in cancer at the time, which were often toxic or, in the case of breast cancer, tended to have androgenic effects, and in some instances were so intolerable that patients had been withdrawn from treatment 32 . In return, the trials provided clinical material for laboratory studies of tamoxifen. By then, the estrogen receptor had been isolated and identified by Gorski (Gorski et al., 1968), and Walpole and his team developed a receptor protein-binding assay method 33 . However, in a clinical setting, it was felt that a radio-immunoassay was more specific for measuring blood-estradiol levels in patients given tamoxifen 34 . The receptor protein-binding assay was therefore mainly used for experiments in laboratory animals, and showed tamoxifen to be a competitive inhibitor of estradiol binding to the uterine receptor protein in rabbits and in mice. Receptors sensitive to anti-estrogen were also found in various parts of rats' brains, including the hypothalamus and the pituitary. The results of the receptor protein-binding experiments in both these test systems suggested that, like that of other anti-estrogens, the action of tamoxifen was due to a high association constant but low effectiveness of the complex it formed with estrogen receptors (i.e., it was a partial agonist, with high affinity but low intrinsic activity) 35 .
This was a pharmacological action with which ICI researchers had become familiar in their work on the beta-blockers (Quirke, 2006). It helped to cast further light on the physiological processes at a molecular level 36 , and made tamoxifen a particularly useful research tool for investigations of hormone-dependent tumors (Jordan et al., 1972). Rendered confident by the clinical and laboratory studies carried out so far, Walpole's team began planning trials in contraception, and the Nolvadex Development Programme was drawn up 37 . This would play an important part in the drug's transformation from quasi-orphan to blockbuster drug (Quirke, 2012b).

The Nolvadex Development Programme (1971)

The "Development Programme" was an organizational innovation which standardized and codified the R&D process at ICI. It marked the transition from the "Proving Trial" to the "Development Trial Stage 38 ," thus helping to bring together the "R" and the "D" in R&D 39 . ICI's first Development Programme had been written up in 1964 for the beta-blocker propranolol (Inderal) 40 . It followed a series of quarterly development reports 41 , and coincided with the hitherto separate Research and Development Departments coming together under the responsibility of a single Director, the Technical Director, as well as with the creation of the CSD in 1963. It was therefore a response to both internal and external factors and stimuli. The Nolvadex Development Programme, which came 7 years after the Inderal Development Programme, included 16 rubrics, describing the work done up to June 1971 (the date of the start of the Programme), making an assessment of the drug's potential market, and setting out plans for future work. Three important considerations were taken into account when planning future work. First and foremost were tamoxifen's possible clinical uses, based on the results of trials received to date.
These included: treatment of estrogen-dependent mammary carcinoma; induction of ovulation in women suffering from infertility due to failure to ovulate; menstrual disorders associated with abnormal levels of endogenous estrogen; oral contraceptive (a) for women, (b) for men; treatment for oligospermia; test for pituitary function; others. Secondly, the drug's position in North America was in question, following Ayerst's rejection of ICI's offer of Nolvadex for the American market, and the FDA's likely negative attitude toward its use in breast cancer. This attitude may have been due to a 1971 report in JAMA which had suggested that there was a link between diethylstilbestrol and a rare form of vaginal cancer, and which was promptly followed by an FDA bulletin warning against the use of DES (FDA, 1971). Thirdly, the commercial situation, shown in Table 1, indicated that a number of therapeutic treatments of hormone-dependent breast cancers were already in existence, each of which commanded almost equal shares of the market. Despite such competition from rival firms in America and Europe, tamoxifen had two advantages on which its market position in relation to breast cancer would ultimately depend: (1) its unique mode of action as an estrogen antagonist without androgenic properties, and, since at the time it was the only product of its type, the larger use this implied; (2) its very low incidence of side-effects compared with other forms of treatment. Another important consideration was that of past R&D costs (shown in Table 2), which had a bearing on budgeting and planning for future expenditure. The gaps in particular columns and rows in Table 2 exemplify the non-linear nature of pharmaceutical R&D, with bottlenecks and feedback loops when advances in one area are held up by, and then develop in response to, those in another.
They also illustrate the pivotal part played by drug regulation in shaping the research and development activities of pharmaceutical firms. The trials that followed the CSM's approval for Nolvadex in 1969 not only led to an increase in existing expenditure in areas such as biochemistry, but also to new expenditure in areas such as formulation (shown in bold). As well as further trials in anovulatory infertility (in Aberdeen, Oxford, London, and Dublin), and in breast cancer (Manchester, Glasgow, and London), the Nolvadex Development Programme included plans for trials in contraception. "In view of the reluctance of British gynecologists" to become involved in such trials, in 1971 ICI contacted Professor Egon R. Diczfalusy, co-founder and Director of the WHO Research and Training Centre on Human Reproduction at the Karolinska Institute in Stockholm 42 , where he had already carried out collaborative projects involving healthy human volunteers using estrogens and other compounds 43 . The Swedish trials led to the finding that, contrary to what might be expected from the laboratory studies in rats, tamoxifen stimulated rather than suppressed ovulation, and therefore would not work as a contraceptive pill in women. The market for a fertility drug was small, as, it seemed, was the market for an anti-cancer drug, partly due to the poor prognosis associated with the disease. Despite growing clinical evidence of the usefulness of tamoxifen in breast cancer, the very low sales estimates produced by the Marketing Department suggested that it was never going to cover the R&D costs and bring an appropriate return to the company. ICI's Main Board therefore made the decision to close down the Programme, but tamoxifen's champion, Walpole, threatened to resign. On this announcement, despondency spread through the entire research department.
Moreover, when informed of the company's decision, one clinician said that, in view of the encouraging trial results, ICI could not morally withdraw the drug 44 . By then, the breast cancer trials had led to a number of publications, which sparked world-wide interest in tamoxifen 45 . Under such pressure, the company reversed its decision, Walpole remained, and the project was saved. In February 1973 ICI applied for a product license, which was granted a few months later, and in October of that year Nolvadex was launched in the UK for both anovulatory infertility and the palliative treatment of breast cancer. Although there continued to be crossovers between the two projects, the rest of this paper will focus on breast cancer. It will show how tamoxifen was transformed from a research object and palliative therapy for advanced breast cancer, into a diagnostic and predictive tool, an adjuvant chemo-endocrine treatment first in post-menopausal, then also in pre-menopausal women with early breast cancer, and eventually into the first chemopreventative for cancer.

43 Walpole et al. (AZ CPR 101/20B, 28 June 1971). On the WHO Research Centre see: http://ki.se/en/kbh/who-center-for-human-reproduction (accessed 02.06.17). See also Oudshoorn (1998). 44 Richardson, "The History of Nolvadex"; see also Jordan (2003).

Tamoxifen, from Palliative Care to Adjuvant Therapy

Among the large number of clinical trials now being carried out with tamoxifen, Dr. Einhorn's studies at the Karolinska Institute in Stockholm had included a measurement of the rate of DNA synthesis in breast tumors and the effect this had on treatment. As a result, his group had been able to anticipate clinical response to, or relapse after, treatment with tamoxifen. From these observations, Walpole concluded that tamoxifen could be employed in pre-menopausal women with breast cancer for a short period as a tool to predict the usefulness of drastic treatments such as oophorectomy in these women.
At the same time, he began making plans for a trial with Dr. J. C. Heuson of the European Breast Cancer Group, who was anxious to compare tamoxifen with Nafoxidine (an Upjohn compound which, like tamoxifen, could bind to the estrogen receptor, but which, unlike tamoxifen, had several toxic side effects). The trial would include estrogen receptor determinations on biopsies taken from each patient, to determine whether there was a correlation between clinical response to the compound and the presence of estrogen receptors in the tumor tissue 46 . By then, the clinical trials in fertility and contraception had also shown that in some instances tamoxifen led to the suppression of lactation. Walpole felt that this action would be of interest in the context of breast cancers which may be associated with high blood prolactin levels, and indeed at Westminster Hospital two patients who had responded well to tamoxifen had tumors which were thought to be prolactin-dependent 47 . Taken together, these observations on the measurement of DNA synthesis before and after treatment, of the content of estrogen receptors in breast tumors, and of blood prolactin levels led to the hope that it would be possible to predict the type of patient likely to respond to treatment with tamoxifen, i.e., to develop what is now referred to as "stratified therapy" (a re-branding of what was formerly known as "personalized medicine"; Smith, 2012). However, for this to happen, better screens had to be devised, first in animals and then in humans. In her unpublished history of tamoxifen, Dora Richardson commented that no laboratory tests for anti-tumor activity had been carried out with tamoxifen until after its activity in patients had been confirmed. The laboratory model adopted by Walpole's team to test for tumor inhibition was the DMBA (dimethylbenzanthracene) induced tumor in rats (also known as the Huggins tumor).
The next step was to design a simplified method of receptor analysis, which could be applied routinely on a large scale in this model, before being applied in humans 48 . Walpole's team developed such a method in collaboration with Craig Jordan (from the Department of Pharmacology at Leeds University, who at the time was on leave of absence at the Worcester Foundation for Experimental Biology, USA, and whose work would later be sponsored by ICI; Jordan, 2006, pp. 40-41). If it proved effective, i.e., if it demonstrated that tamoxifen could bind to the estrogen receptor in human breast tumors, it was hoped that this method would make it possible to screen patients for the presence of specific estrogen receptor in biopsy specimens of their tumors, and to pre-select for treatment with Nolvadex those in whom such receptors had been found. However, alongside these highly scientific methods, clinicians continued to use observations such as "hot flushes" as indications that the treatment was working and remission was likely to occur 49 . Walpole therefore proposed that physiological indicators might also be used to ensure that individual patients were not being "under-treated" and could be given the maximum effective dose to produce an improved response 50 . In his report of February 1974, Walpole wrote: "By good fortune, Nolvadex was launched at a time of increased interest in the assessment of the endocrine status in breast cancer 51 ." Tamoxifen was shown to be highly effective in binding to the estrogen receptor and, before long, researchers in Europe and the US were therefore using tamoxifen as a tool to "predict the response of breast tumors to hormone therapy 52 ."

46 Ibid. 47 The presence of prolactin in human blood had been confirmed by Henry Friesen et al. (1970). 48 Walpole et al. (AZ CPR 101/27B Fertility 24 Oct. 1973).
However, this new use for tamoxifen brought out the fact that not all patients whose tumors had demonstrable estrogen receptor levels responded well to endocrine therapy. Although this paradox might be due to the fact that the receptor assays used were not of a consistent standard, it suggested that a number of biochemical events were a pre-requisite for complete endocrine regulation, and that other lesions occurred in patients for whom endocrine therapy failed, thereby casting further light upon the complex processes involved in malignant disease. Nolvadex was also launched at a time when the value of chemotherapy in cancer was being established, with novel drugs tested first alone, then combined, in collaborative multi-center trials (see Keating and Cambrosio, 2007; also Quirke, 2014, pp. 670-671). With drug resistance becoming a growing concern, not only in bacteria, but also in cancer cells, combination therapy was being developed and its modalities refined. Hence, in June 1974, Walpole began planning a trial in which two different treatment modalities, supposedly devoid of cross-resistance, would be used, and he proposed to alternate their administration on a 4-week basis 53 . The rationale for this trial was that, unlike conventional sequential treatments, each alternating treatment would be started before rather than after the effect of the previous one was exhausted, thus resulting in a cumulative effect. Two added benefits of such an approach were that (1) drugs with high levels of toxicity, such as adriamycin and vincristine, could be given for much longer, and (2) at precise moments in the treatment cycle, the patient's bone marrow and immune system would have a chance to recover.
This approach was tried by Dr. Heuson under the aegis of the European Organization for Research and Treatment of Cancer (EORTC, which had been created in 1962), alongside another trial in pre-menopausal women 54 . Such plans and discussions, which were based on a growing number of publications and symposia presenting evidence not only of symptom relief, but also of remissions and survival from breast cancer 55 , indicate that, both as a research tool and a therapeutic agent, tamoxifen was shifting from palliative care into the realm of chemotherapy, transforming it in the process. What follows will concentrate on the years 1975-1980, after which ICI's research reports on tamoxifen and related topics ended. During that period Walpole was mainly involved in the Nolvadex Development Programme, until his sudden death in 1977. Although his involvement ensured continuity between the research and development phases, Walpole's gradual disengagement from the research, which can be detected in the reports, meant that the project lacked clear purpose and direction. Months were lost to pressures of competing work inside the company, and aspects of the research were outsourced to external laboratories (Jordan, 2006, Chapter 3). Nevertheless, in that time, the foundations were laid for the next phase in tamoxifen's trajectory, from adjuvant therapy to the first chemopreventative remedy for cancer.

49 (Jensen et al., 1971), who showed that patients with tumors containing high-affinity estrogen receptors were more likely to have a remission following adrenalectomy than those without such receptors (with remissions in 10/13 patients with positive tumors, but in only 1/26 patients without).
1975-1980: The Final Years of ICI's Tamoxifen Project

Clinical trials carried out in Britain by Ward (Birmingham) and Brewin (Glasgow) and beyond (in Germany) showed that the response to tamoxifen in patients who experienced a recurrence of their breast tumor after primary surgery and/or radiotherapy tended to increase with age 56 . These findings prompted the question of what the mechanism for this action might be, since tamoxifen was an "anti-estrogen." Could it be that tamoxifen exerted an estrogenic action (albeit a weak one) by way of its metabolites? 57 The study was taken up at ICI by Barry Furr 58 and B. Valaccia, and a programme of synthesis and testing of analogs of tamoxifen metabolites in a number of different screens (not only estrogen, but also progesterone and androgen receptor screens) was initiated to find out whether tamoxifen could bind to these receptors, and therefore be useful in other cancers. Later, prostaglandin synthetase (PGS) inhibitor screens were also developed by the team. These showed that tamoxifen was an effective inhibitor of human breast tumor PGS in addition to arresting tumor growth, thus offering an explanation for the clinical observation that patients taking Nolvadex for advanced breast cancer often experienced relief from bone pain, and strengthening the rationale for its use in adjuvant chemotherapy further still. Hence, it was hoped as a result of this programme that a follow-up compound for Nolvadex might be found, the target being an anti-estrogen of similar potency to Nolvadex with one or more of the following properties in addition: lower agonist activity, shorter half-life, greater inhibitory activity against PGS, anti-androgenic activity 59 . This new research strand, which would lead ICI to its second major breakthrough in cancer therapy, ICI 118,630 (goserelin, Zoladex), was stimulated by the discovery by Schering-Plough of the first non-steroidal anti-androgen, Flutamide, for the treatment of prostate cancer.
As they had done earlier with Merrell's drug, ICI therefore mobilized their synthetic capabilities and the scientific expertise acquired with tamoxifen to search for a non-steroidal anti-progestin (which, unlike anti-androgens, would have the advantage of having neither anti-anabolic activity nor any effects on "normal sexual behavior") 60 . Another approach consisted in looking for a novel, potent analog of the luteinizing hormone-releasing hormone (LHRH), also referred to as the gonadotrophin-releasing hormone (GnRH), although this was initially expected to be used mainly in animal breeding 61 . As well as testing the compounds in the company's by now well established receptor-binding assays, once again the team needed to develop new in vivo screens, and "in view of the previous experience with Nolvadex, that is anti-estrogenic in the rat and estrogenic in mice," tests would have to be carried out in more than one species. Because the chick comb was known to be androgen sensitive and chicks were cheap, the chick comb was chosen as one of the animal models in which to test active compounds and compare them with Flutamide. Meanwhile, a special organization had been created for the purpose of large-scale clinical studies of tamoxifen as an adjuvant treatment for cancer: the Nolvadex Adjuvant Trial Organisation (NATO). Until then, adjuvant therapy had consisted either in chemotherapy using mainly cytotoxic drugs, or in major endocrine ablation after curative surgery. Clinical trials of tamoxifen in adjuvant therapy therefore began in 1976, some progressing ahead of schedule, and their favorable results, which showed that Nolvadex was effective in both pre- and post-menopausal women regardless of their receptor status, were frequently discussed at symposia and in the medical press from 1977 onwards 62 .
Not only did these results change the modalities of adjuvant therapy for breast cancer whilst helping to establish tamoxifen in the treatment of the early stages of the disease (NATO, 1983, 1988), but in the context of these adjuvant trials evidence also emerged of the drug's potential to prevent the recurrence of breast cancer in women at high risk (i.e., who had already had cancer in one breast). This potential was explored in a trial carried out in Denmark, with the aim of establishing the value of tamoxifen as a "prophylactic" in breast cancer (Andersen et al., 1981; Mouridsen et al., 1988). Patients were selected who had had a mastectomy with or without radiation and in whom there was no evidence of metastases, for it was known that 55-60% of them would develop local recurrence of the disease or metastases within 5 years. They were then randomly allocated either to Nolvadex, stilboestrol, or a placebo 63 . The trial eventually showed that although 10% of the women treated with placebo developed a recurrence of their breast cancer, none of those treated with tamoxifen had experienced such a recurrence 64 . Such results would later help to justify the initiation of breast cancer prevention trials, for instance the Breast Cancer Prevention Trial NSABP-P1 (BCPT), with the aim of establishing whether 5 years of tamoxifen would reduce the incidence of invasive breast cancer in women identified as being at high risk of the disease, and yet healthy (Fosket, 2010; Löwy, 2012; also Fosket, 2004). Almost as soon as it had moved into the realm of cancer chemotherapy, tamoxifen therefore hinted at the theoretical and practical possibilities of chemoprevention in cancer.

59 Crossley (AZ CPR 101/37B Fertility 27 Jan. 1977). 60 Ibid. 61 Ibid. 62 Richardson, "The history of Nolvadex."
Further trials would turn tamoxifen into the first preventative for any cancer, helping to establish the broader principles of chemoprevention, while extending the market for tamoxifen and similar drugs further still (Early Breast Cancer Trialists' Collaborative Group, 1992, 1998) 65 .

Tamoxifen, from the Clinic into the Medical Marketplace

Thanks to tamoxifen, ICI were able to tap into the vast cancer research network connected in Europe through the EORTC, and across the Atlantic through the National Cancer Institute (NCI). The interest tamoxifen generated among scientists and clinicians, rather than the promotional activities of the company, which Dora Richardson argued remained very limited, greatly enhanced its position in the medical marketplace. In a personal communication to Walpole, Dr. Scott Lippman of the NCI had described his method for testing tamoxifen in human breast cancer cell lines which were dependent on estrogens for their long-term growth in tissue culture 66 . In these cells, tamoxifen showed itself to be strongly inhibitory of both DNA and protein synthesis. Lippman had turned this method into a "kit" for measuring receptors, and spurred by their American subsidiary (ICI-USA), ICI did not waste time in starting work on their own quantitative assay "kit to be marketed as an adjunct to Nolvadex." Such a kit would not only make money for itself, but, by helping to justify the use of tamoxifen, would further enhance the market position of the drug, particularly in the USA 67 . By then, the company had submitted an Investigational New Drug (IND) application to the Food and Drug Administration (FDA), making a convincing case for Nolvadex in breast cancer 68 . By 1984, the NCI were describing tamoxifen as the adjuvant chemotherapy of choice for breast cancer (Consensus Conference, 1985).
Although ICI's application for a US patent for tamoxifen had originally been rejected, on the basis that the US Patent Office did not recognize advances on existing inventions and that Merrell's patent for clomiphene pre-dated that for tamoxifen, in 1985 the American court of appeals finally granted ICI the patent rights for tamoxifen in the USA, thereby starting the 17-year patent cover there, paradoxically at a time when it was coming to an end in other countries (Jordan, 2006, p. 40). Tamoxifen's entry into the American market contributed to rising worldwide sales: although ICI's Marketing Department had only expected it to make £100,000 p.a. in 1970, by 1974 figures on the home market alone amounted to £140,000, overtaking one of ICI's well established drugs, Mysoline (for epilepsy). By 1976, sales figures were equivalent to those for the anesthetic Fluothane, the first drug to put ICI's Pharmaceutical Division "in the black," and for over-the-counter drugs such as the antiseptic Savlon. As the expiry date for their tamoxifen patents was drawing near, in 1979 ICI obtained a 4-year extension for their UK patent, on the basis of "the nature and merits of the invention in relation to the public," as well as "the profits made by the patentee 69 ." By 1980, it was making £30M for the firm 70 . Nevertheless, even as late as September 1982, at the annual portfolio review attended by the managers of the Biology Department (Dr. J. D. Fitzgerald) and Chemistry (Dr. R. Clarckson), the manager of the Marketing Department, who also attended the meeting, commented that "there was no market for cancer 71 ." ICI's Marketing Department were not alone in under-estimating the market for anti-cancer drugs: if tamoxifen had not been "stolen" by American companies while it remained unprotected by patents, it was partly because they did not believe in its usefulness either (Jordan, 2006, p. 40).
The fate of tamoxifen therefore rested on the qualities of the drug itself, and on the interest it generated not only among researchers both inside and outside the company, but also among patients and the wider public. As mentioned earlier in this paper, Dora Richardson's history of Nolvadex was, quite unusually for such an internal publication, accompanied by letters from patients who attributed their lives to tamoxifen. Appendix 6, entitled "What do the patients think," included a letter to ICI's Pharmaceutical Division in which a grateful patient wrote: "Thank you for a miracle." Tamoxifen benefited not only from being the first of a kind, which helped to confer upon it the status of a "miracle drug," but once again from its origins as a contraceptive pill. As the name indicates, it could be taken orally, and this mode of administration meant that Nolvadex was suitable for home treatment; a large proportion of sales (75%) occurred through retail pharmacies. This enabled local tinkering with established protocols, as well as a degree of self-experimentation, as testified by another letter, written by a cancer researcher (Dr. June Marchant of the Regional Cancer Registry, West Midlands Oncology Group), who had been diagnosed with breast cancer and, having spent 20 years in cancer research, was well versed in the modalities of cancer therapy 72 . After discussing her ideas with her clinician, whom she described as "understanding," together they worked out "an unconventional management programme." Because her thymus gland was within the radiation field of her breast tumor, she refused radiation therapy.

68 Richardson, "The History of Nolvadex." 69 UK Patents Act 1949, Section 23. http://www.legislation.gov.uk/ukpga/1949. I thank Dr. Michael Jewess for pointing out this section to me. 70 Richardson, "The History of Nolvadex." 71 Dr. J.D. Fitzgerald, personal communication.
Instead, she decided to undergo therapy with a new cytotoxic drug that was being tested locally in a clinical trial. She appeared to make an uneventful recovery, but in 1972 a scan revealed metastases in her brain. At this point, she therefore elected local treatment with radiation of the head and adjuvant therapy with tamoxifen. Knowing from her own research that prolactin had been identified as a hormone with perhaps an even greater significance than estrogen in the maintenance of breast tissue and breast tumor growth, she started reading the relevant literature. A number of inhibitory substances had been tried on a few patients with breast cancer, and among them Levodopa appeared to give beneficial results. Her clinician therefore agreed to give her Levodopa as additional anti-hormonal therapy. Her drug regimen was phased out in 1975, and at the time of writing her letter, in 1976, the author felt "very well indeed, having had no ablative operation, cytotoxic drugs or masculinizing hormones." From her own experience, she therefore concluded that "systemic therapy, in addition to local therapy, had a vital role to play in the management of the disease," and she wished to share this positive experience with others. Her conclusions went beyond ascertaining the value of tamoxifen in adjuvant therapy-extrapolating from her experience with the drug, she defended "systemic therapy" more generally. Yet, after Dr. Stephen Carter, who had been responsible for ICI's cancer project on Cell Division and Growth,73 left the company, taking early retirement in 1979, he was not replaced, and the project on cell growth was terminated. Thus, in 1980, when tamoxifen was bringing in sizeable profits for the company and Zoladex (for prostate cancer) was in the pipeline, ICI no longer had a cancer research programme, a situation that lasted until 2006, when Alderley Park became the Global Lead Centre for the company's cancer research.74
DISCUSSION

If tamoxifen made it into the medical marketplace, it was largely despite rather than because of the company's marketing department. Thanks to having inside the company a drug champion prepared to risk his career to save his project and a medical department willing to run the gauntlet of the FDA to promote tamoxifen in the USA, but also thanks to interest generated outside, among scientists, clinicians, and patients who asked for or agreed to take the drug, it was transformed from a failed contraceptive pill into a successful breast cancer medicine. The patients' letters referred to in this essay provide us with a unique insight into this transformation, but also into the public demand and experimentation which escape the control of both the industry and the professions, and are not normally included in discussions of pharmaceutical innovation. Focusing on the early history of tamoxifen has made it possible to examine in some detail both the brakes and the stimuli for pharmaceutical innovation. These come from inside as well as outside industry, contrary to a rather narrow model of pharmaceutical innovation, according to which companies, motivated by a commercial more than a scientific agenda, push drugs onto an unsuspecting public, often with the connivance of the medical profession, but hopefully kept in check by the actions of regulatory authorities (for example, see Crawford, 1988; Marsa, 1997; Law, 2006). In the case of tamoxifen, pharmaceutical innovation was predominantly science- and clinic-driven, rather than market-driven (so a case of demand-pull rather than supply-push, Walsh, 1994). It benefited from a number of coincidences: its ability to bind to the newly-discovered estrogen receptor helped to make it into a useful tool for investigating hormone-dependent tumors, as well as a drug of choice for treating breast cancer.
It was developed at a time when palliative care was becoming an important part of cancer treatment (Clark, 2007), and when chemotherapy was successfully being applied to cancer in collaborative trials. These placed ICI at the center of a global network of cancer institutions and organizations, which helped to maintain interest in their drug even as ICI was losing its research focus on cancer. Hence the last phase in tamoxifen's transformation, into the first chemopreventative for cancer, owed more to this global network than to ICI's efforts at promoting their drug. Finally, tamoxifen was developed at a time when cancer patients were encouraged to demand better treatments, to become more proactive in their own care, and engage with ideas of risk. In the beginning, when tamoxifen was being developed as a contraceptive pill, cancer patients had to some extent been used as "proxies" for normal, healthy human subjects, and their voices were mostly heard through the clinicians who reported on their symptoms as indications of the drug's activity and side-effects. Nevertheless, the fact that their voices were included, both indirectly in the reports and directly in Dora Richardson's history of Nolvadex, suggests that to the company these voices did matter: they helped to shape the content of the research, whilst justifying it, both morally and scientifically. Rather than a "detour" in relation to contraception (Oudshoorn, 2002), the study of tamoxifen in breast cancer was therefore carried out in close parallel with its study in contraception (and subsequently fertility). This is not surprising, given that ICI's interest in cancer pre-dated their interest in contraception by 20 years. Nevertheless, the contraception project helped to determine tamoxifen's fate as a drug: from what it was (a synthetic antiestrogen, safe with a relatively low incidence of side-effects), to how it could be taken (orally, and therefore suitable for home treatment).
Thus, the drug and its fate were shaped by the industrial setting from which it emerged. In return, tamoxifen transformed the biomedical landscape in which it was deployed. As it moved from contraception into cancer, tamoxifen expanded its market at the same time as its clinical role, transforming cancer therapy in the process. Not only did it cast fresh light on the function of sex hormones and their role in malignant disease, but it hinted at the possibility of personalized medicine, and helped to lay the foundations of chemoprevention. Indeed, although the concept of chemoprevention had already begun to take hold with drugs to treat cardiovascular diseases (to lower cholesterol or blood pressure, for instance), drugs like tamoxifen further strengthened its principles and practice by becoming associated with, and tapping into, the drive to catch cancer early by screening and, even better, to prevent it through life-style and other changes. In the context of cancer chemoprevention, the question of its use in normal, healthy women arose once more, but it did not go unchallenged. In her chapter on "Breast Cancer Risk as Disease", Jennifer Fosket has described the controversies that surrounded the Breast Cancer Prevention Trial (BCPT), which took place in the USA in the 1990s, highlighting the fact that the risks associated with tamoxifen were often downplayed, and this despite letters from ICI (which had spun off its pharmaceutical division to form Zeneca) warning both doctors and women enrolled on the trials that-by then-some women taking tamoxifen had developed endometrial cancer (Fosket, 2010, p. 345).
Although the BCPT identified an increased risk of pulmonary embolism, deep-vein thrombosis, as well as endometrial cancer in women who had taken tamoxifen compared to the control group on placebos, their findings were nonetheless favorable to tamoxifen: only 124 women had developed breast cancer in the tamoxifen group, compared to 244 in the placebo group (Fisher et al., 1998). On the other hand, the results of the Royal Marsden Study, carried out in the UK at roughly the same time as the BCPT, were not so clear-cut: they revealed no significant reduction in breast cancer incidence in women at risk who took tamoxifen (Powles et al., 1998). These different results were attributed to key differences between the American and European trials, ranging from their organization, to the numbers of women enrolled, the criteria for their selection, and different conceptualizations of what constituted "high risk."75 Such differences and controversies surrounding the trials led the FDA to downgrade its approval from "prevention" to "reduction of risk" (Fosket, 2010, p. 348). In a sense then, tamoxifen had been the victim of its own success. Originally intended for women with little chance of survival, its ability to cause disease in women experiencing long-term remissions thanks to tamoxifen led to a complex assessment of risk, which had to be shared with women undergoing treatment for breast cancer. Thus, in 1996, a guide written for clinicians and patients on the subject of tamoxifen stressed the importance of communicating the risks involved in taking the drug, from minor side effects such as hot flushes, to potentially serious ones including other cancers.
75 Fosket has suggested that the Royal Marsden selected women based on their family history, and this may have led to more women with the BRCA1 and 2 gene mutations, for whom tamoxifen is a less effective preventative, being enrolled in the trial (Fosket, 2010, n. 2, p. 352).
Hence, what was nevertheless a message of hope related not only to tamoxifen itself, but also to the "new patient" whom caring professionals, breast cancer advocates, and the media had helped to create: "prepared with background information about the disease"; requiring "treatment options"; wanting "good communication and information" and wanting "the truth" (Langer, 2006, p. 134). Such patients did exist, as we saw in the case of June Marchant, even though she may have been exceptional, and in many ways drugs like tamoxifen had also helped to bring them about.

CONCLUDING REMARKS

The focus of this paper on the industrial context for the development of tamoxifen highlights the importance of the early phases in the history of pharmaceutical innovation, for this early history shapes the form and content of drugs, and has the potential to define their use and ultimately determine their fate in the medical marketplace, despite the many twists and turns that characterize their trajectory from bench to bedside. This particular focus also throws into sharp relief the contribution made by applied research to the advancement of scientific knowledge: in the case of tamoxifen, more specifically to the understanding of basic physiological processes involved in human reproduction and malignant disease. Such a contribution is in part due to the fact that industry, perhaps more easily than academia with its rigid disciplinary boundaries, enables a to-ing and fro-ing between separate, yet contiguous research projects and therapeutic areas (in this instance, between contraception, fertility, and cancer). This to-ing and fro-ing between projects illustrates once again the non-linear nature of pharmaceutical innovation.
Typified by blind alleys, fresh departures, feedback loops between the laboratory and the clinic, as well as serendipitous discoveries, the early history of tamoxifen brings to the fore the role of human agency, and the institutional memory that is often associated with long-term investment in particular areas of expertise and is embodied in individual researchers like Walpole. Just as the industrial context is worthy of historical enquiry, the early history of drugs such as tamoxifen, which are at once emblematic and idiosyncratic examples of pharmaceutical innovation, may yield useful lessons for potential innovators, by helping them to identify key moments when choices are made and decisions taken, so that these may in time be revisited and alternative paths may be explored. For innovators are at once the makers and the products of history, even if history is often remote from their concerns or absent from their writings. Unfortunately, because of the growing difficulty of accessing pharmaceutical archives, this rich vein of historical enquiry may fast be coming to an end. The hope remains that, as an essential component of their intellectual capital, such archives will continue to be available to researchers both inside and outside companies.

TOPIC EDITORS' DECLARATION

This article is classified as "Original Research" as it reports on primary sources of a historical nature, including previously unpublished studies.
ACKNOWLEDGMENTS

I am indebted to many in writing this article: the Wellcome Trust which over the years has funded my research (grant numbers: 096580/Z/11/A, 086843/Z/08/Z), and Oxford Brookes University, the institutional home from which this research has been carried out; David McNeillie, who was instrumental in granting me access to the AstraZeneca archives, first when I was working on the history of penicillin for my DPhil, and later as a post-doctoral researcher and lecturer at Brookes; Audrey Cooper, now retired, whose knowledge of the archives as well as ICI's Pharmaceutical Division was invaluable (she knew about Dora Richardson's unpublished history of Nolvadex, and made it available to me); John Patterson, who commented on my earlier work and pointed out some of the key features of the history of tamoxifen, which I was grateful to be able to include and expand upon in this article; Desmond Fitzgerald, who provided me with precious information and helped to put a human (and often humorous!) face on the history of drug discovery; last but not least the editors, Apostolos Zarros and Tilli Tansey, who invited me to contribute to their project on "Pharmaceutical innovation after World War II," and the two referees whose comments were most helpful and constructive.
Intrapancreatic Accessory Spleen Masquerading as a Pancreatic Mucinous Neoplasm

Incidentally discovered pancreatic cysts have become more common with increasing use of abdominal cross-sectional imaging. Tools that help us to better risk stratify a pancreatic cyst include advanced imaging techniques, such as pancreatic protocol computed tomography (CT) scan or magnetic resonance imaging (MRI) with cholangiopancreatography. Endoscopic ultrasound (EUS) and fine-needle aspiration (FNA) are invasive measures to better define and sample cysts especially if high-risk features are present. EUS may also yield pancreatic cyst fluid for analysis of carcinoembryonic antigen (CEA) which is elevated in mucinous cysts. This case highlights a rare finding of a mucinous, epidermoid cyst in an intrapancreatic accessory spleen (IPAS) with high-risk features on EUS.

Incidentally discovered pancreatic cysts have become more common with increasing use of abdominal cross-sectional imaging. Many of these cysts are benign, while some are malignant or have malignant potential. Further evaluation and management of asymptomatic pancreatic cysts depends on their malignant potential. Tools that help us to better risk stratify a pancreatic cyst include advanced imaging techniques such as pancreatic protocol computed tomography (CT) scan or magnetic resonance imaging (MRI) with cholangiopancreatography. Endoscopic ultrasound (EUS) and fine-needle aspiration (FNA) are invasive measures to better define and sample cysts especially if high-risk features are present. High-risk features within a cyst, such as nodularity, calcifications, or pancreatic ductal dilatation, are concerning. These may suggest advanced dysplasia or malignancy. These lesions are strongly considered for surgical resection due to malignant potential even when needle aspiration does not demonstrate concerning findings, due to the low sensitivity.
EUS may also yield pancreatic cyst fluid for analysis of carcinoembryonic antigen (CEA), which is elevated in mucinous cysts. This case highlights a rare finding of a mucinous, epidermoid cyst in an intrapancreatic accessory spleen (IPAS) with high-risk features on EUS.

Case Presentation

This is a 42-year-old African American female who had an incidental finding of a new 17-mm pancreatic tail cystic lesion found on a CT scan during workup for abdominal pain. The patient was lost to follow-up until a repeat CT scan 1 year later demonstrated a stable low-attenuation, 15 mm × 11 mm pancreatic tail cyst (►Fig. 1). She complained of decreased appetite and intermittent epigastric pain over the last year. She was referred for EUS with FNA for further evaluation of the cyst. Her history is significant for diabetes mellitus, human immunodeficiency virus infection (well-controlled), alcohol and intermittent substance usage (marijuana and cocaine), recent ventral hernia repair, and a family history significant only for breast and lung cancer. She has no significant tobacco usage history.
She had no prior episodes of pancreatitis, known pancreatic disorder, or prior intervention. Exam was otherwise unremarkable, without palpable mass, jaundice, or tenderness to palpation. The patient underwent an EUS, which found a complex cystic lesion in the pancreatic tail and an abnormal lymph node in the peripancreatic region (►Fig. 2). The pancreatic lesion had high-risk features, demonstrating both cystic and atypical solid components, measuring 27 mm × 9 mm, and abutting the splenic vessels near the hilum without invasion; there was also a 27 mm × 11 mm enlarged peripancreatic lymph node. Both were sampled by FNA. Approximately 5 mL of pancreatic cystic fluid was obtained, which appeared cloudy, blood-tinged, and viscous. Fluid analysis demonstrated a CEA level of 5,327.7 ng/mL, amylase of 335 U/L, and glucose of 69 mg/dL. Cytology demonstrated benign-appearing squamous cells and a few atypical, degenerated cells. Lymph node findings were benign. Serum CA 19-9 was low at <3 U/L. Given these high-risk findings, the patient underwent distal pancreatectomy and splenectomy. Intraoperative findings included no evidence of distant metastatic disease, no worrisome lymphadenopathy, and a lesion in the pancreatic tail without invasion into surrounding tissues. Pathological evaluation demonstrated IPAS with an associated benign epithelial-lined mucinous cyst, without in situ or invasive carcinoma identified. The cyst lining showed squamoid and apocrine features. No goblet cells were identified. Immunohistochemistry showed positive staining for CEA in the cyst lining (►Fig. 3). Twelve lymph nodes were benign and negative for carcinoma, and margins were negative. The patient did well postoperatively and was discharged on postoperative day 6. At the outpatient postoperative visit, she continued to do well and had no ongoing issues with abdominal pain.

Discussion

IPAS is found in approximately 10% of autopsies.
Accessory spleens are due to postoperative splenosis or ectopic proliferation of splenic tissue during fetal development.2 IPAS may frequently be detected as a nodule on CT and MRI, with imaging characteristics similar to those of the spleen on precontrast and contrast-enhanced modalities.3 Similarly, on ultrasound imaging, accessory spleens are echogenically similar to the main spleen and are round or oval, with a vascular hilum present on Doppler ultrasound.4 Other studies have suggested the use of contrast-enhanced EUS and EUS-elastography as useful tools for the diagnosis of IPAS.5 The appearance of the lesion in this case was a very unusual presentation for an IPAS, given its association with a mucinous cyst. This is a rare diagnosis with less than 60 reported cases. Malignant potential is unknown; however, there is a report of squamous cell carcinoma arising in an epithelioid cyst within the spleen.6 Epithelial cysts in IPAS are rare, with about half of reported cases detected incidentally. Zavras et al reviewed 36 patients and found the mean age of patients to be 46 years, and slightly over half were female. Patients were asymptomatic or complained of abdominal or epigastric pain. The cysts were all located in the tail of the pancreas and ranged in size from 1.4 to 12.6 cm. Serum CEA and CA 19-9 levels were normal in the majority of patients, and immunohistochemistry demonstrated positivity for CEA in the cyst lining.7 Similar findings were found in this case: her serum CEA was normal, and immunohistochemistry showed positive staining for CEA. Epithelial cysts of the spleen demonstrate an epithelial lining of low cuboidal, low columnar, or squamous type, surrounded by splenic tissue.8 The cyst lining of epithelial cysts in IPAS shows similar histology, which was also found in our patient. Fluid analysis and serum biomarkers are widely used for the evaluation of pancreatic masses.
However, the efficacy of fluid studies continues to be poorly demonstrated in IPAS, with less than 20% of cases reporting fluid biomarkers.9,10 Li et al conducted a systematic review of 56 patients with epidermoid cysts in IPAS and found 9 of 9 patients with an elevated cyst fluid CEA and 1 of 6 patients with an elevated fluid CA 19-9. Serum CEA was normal in 26 patients, compared with only one patient with an elevated serum CEA. Reported serum CA 19-9 levels were elevated in 20 of 37 patients.11 Despite these findings, there are insufficient data to conclude the significance of these biomarkers in epidermoid cystic IPAS. Appropriate management of mucinous epidermoid cysts in IPAS has not been established. A study in Spain found four cases of IPAS that were diagnosed with EUS and FNA and safely followed with imaging studies.12 However, all reported cases of epidermoid cysts in IPAS have undergone surgical resection or excision.13 Conservative management of these lesions has yet to be reported or studied.

Conclusion

In conclusion, a mucinous epithelioid cyst within an IPAS is a rare entity in the literature. Imaging and endoscopic evaluation may be helpful in the diagnosis; however, there continues to be little information on appropriate management. Specific biomarkers have yet to be identified to separate IPAS from other mucinous cysts of the pancreas with malignant potential. Future case series regarding radiographic imaging, cystic features, and specific fluid biomarkers may be required to fully understand epidermoid cysts in IPAS and identify their malignant potential.

Authors' Contributions

S.P. wrote, drafted, and revised the manuscript. S.P. is the guarantor of the case report. S.L. drafted and revised the manuscript. P.K. reviewed pathology, drafted, and revised the manuscript. K.R. drafted and revised the manuscript.

Financial Support

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.
Conflict of Interest

There are no financial or personal relationships with other people or organizations that could inappropriately bias this work to disclose.
Predicting recovery after lumbar spinal stenosis surgery: A protocol for a historical cohort study using data from the Canadian Spine Outcomes Research Network (CSORN)

ABSTRACT

Background: Symptomatic lumbar spinal stenosis (SLSS) is a condition in which narrowing of the spinal canal results in entrapment and compression of neurovascular structures. Decompressive surgery, with or without spinal fusion, is recommended for those with severe symptoms for whom conservative management has failed. However, significant persistent pain, functional limitations, and narcotic use can affect up to one third of patients postsurgery.
Aims: The aim of this study will be to identify predictors of outcomes 1-year post SLSS surgery with a focus on modifiable predictors.
Methods: The Canadian Spine Outcomes Research Network (CSORN) is a large database of prospectively collected data on pre- and postsurgical outcomes among surgical patients. We include participants with a primary diagnosis of SLSS undergoing their first spine surgery. Outcomes are measured at 12 months after surgery and include back and leg pain, disability (Oswestry Disability Index, ODI), walking capacity (ODI item 4), health-related quality of life, and an overall recovery composite outcome (clinically important changes in pain, disability, and quality of life). Predictors include demographics (education level, work status, marital status, age, sex, body mass index), physical activity level, smoking status, previous conservative treatments, medication intake, depression, patient expectations, and other comorbidities. A multivariate partial least squares model is used to identify predictors of outcomes.
Conclusion: Study results will inform targeted SLSS interventions, either for the selection of best candidates for surgery or the identification of targets for presurgical rehabilitation programs.
Introduction

Symptomatic lumbar spinal stenosis (SLSS) is a condition with primarily a degenerative etiology in which narrowing of the spinal canal results in entrapment and compression of neurovascular structures.[1][2][3] Patients with SLSS have leg pain, substantially diminished walking ability, back pain, high disability (high levels of pain-related disability), and poor health-related quality of life (HRQoL).1,4 It is estimated that there is a 2% prevalence of LSS in people between 40 and 49 years and 11% in those 70 to 79 years of age.3 With an aging population, SLSS is a growing problem with similar levels of disability and impact on HRQoL as seen in those undergoing joint replacement surgery.5 The majority of patients with SLSS receive conservative interventions such as physiotherapy, steroid injections, and opioids.2 Decompressive surgery is recommended for those with intolerable SLSS-related pain and/or functional limitations for whom conservative management has failed. Instrumented spinal fusion is usually reserved for patients with SLSS with associated deformity or instability, and these procedures have significant risk of complications.6 Unfortunately, significant persistent pain, functional limitations, diminished HRQoL, and narcotic use can affect up to one third of patients postsurgery.[7][8][9] More specifically, there is evidence to suggest that approximately 30% of patients do not reach a minimal clinically important change in disability, pain, or quality of life 1 year postsurgery.[10][11][12] Further, a recently published large population-based study identified that more than 40% of patients undergoing fusion for SLSS remain long-term opioid users.13,14 A number of studies have evaluated predictors of postsurgical outcomes, including a systematic review published in 2006.
Poor surgical outcomes as related to disability, pain, walking capacity, or HRQoL may be associated with an array of potential predictors, such as frailty, obesity, smoking, recovery expectations, depression, opioid use, better walking capacity and lower pain at baseline, as well as higher education level and socioeconomic status, age, sex, and comorbidities.11,12,[15][16][17][18] A limitation of these studies is that they are unable to account for a large number of predictors because of the likelihood of multicollinearity and therefore generally only include a limited number of factors in their models.11,12,[15][16][17][18] In addition, as with many studies in SLSS, outcomes used in prediction analysis are variable, which means that it is difficult to make generalized conclusions. A major advantage of the current protocol is that it allows for a stable and simultaneous analysis of multiple outcomes with a large number of predictors using core back pain outcomes.19 As previously mentioned, the literature could be improved upon by utilizing sophisticated statistical modeling on very large, high-quality data sets to identify modifiable predictors. Personalized management strategies to identify best candidates for surgery and the development of a presurgical rehabilitation program may improve patient outcomes. Thus, our primary aim is to identify predictors of back and leg pain, disability, walking capacity, HRQoL, and clinically important change across all outcomes (recovery) 1-year post SLSS surgery.

Study Design

This is a historical cohort study using data from the Canadian Spine Outcomes Research Network (CSORN) registry.[20][21][22] The STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) checklist is used for reporting of the study and was used to construct the protocol.23 This study received ethics approval from the Hamilton Integrated Research Ethics Board (HiREB #7285-C).
The CSORN is a large database consisting of spine surgical data collected from patients of more than 50 neurosurgeons and orthopedic spine surgeons at 18 sites across Canada. It includes data pre- and postsurgery that were collected from patients diagnosed with a variety of different spinal pathologies, including SLSS. Data collection is conducted at baseline (pre-op), as well as at 3 and 12 months postoperatively. Standardized questionnaires are used to collect information on demographics and comorbidities and include lifestyle questions such as physical activity level and work status, past and current management strategies as related to the condition, self-reported expectations of outcomes, and pain, disability, and HRQoL as outcome measures. Surgeons also record surgical information such as specific procedure, complications, and length of hospital stay. All 18 sites that contributed to the registry obtained research ethics board approval prior to any data collection. Recently, the CSORN steering committee implemented improvements to their data collection procedures to improve data completeness; thus, only data from January 2015 to September 2019 will be utilized in this study.

Participants

The inclusion criteria for this study include two factors: (1) Patients must have a primary diagnosis of SLSS provided by the treating spinal surgeon. Diagnosis was provided based on the surgeon's assessment and clinical judgment because there are no clearly established diagnostic criteria. (2) Availability of 12-month postoperative CSORN outcome data (pain, disability, and quality of life). Exclusion criteria include previous history of spinal surgery (self-reported), as well as low levels of back and leg pain (less than 3 on a 0-10 scale), low levels of disability (<20% on the Oswestry Disability Index [ODI]), and high quality of life (<20% on the Health Utility Index) measured at baseline. There were no exclusions in relation to the length of symptoms or comorbidities.
These are included in our analysis as potential predictors. We did not exclude participants based on the presence of additional imaging findings such as disc herniation or degenerative disc disease.

Outcomes

Six different outcome measures are included in this study: back pain, leg pain, disability, walking capacity, HRQoL, and clinically significant change. Back and leg pain were measured using a numeric rating scale (NRS). The NRS is one of the most frequently used instruments to measure low back pain and is currently a core outcome measure in the last low back pain outcome measures consensus. 19 The NRS is a scale ranging from 0 (no pain) to 10 (worst possible pain), with patients indicating their current pain intensity. Disability was measured using the ODI, a condition-specific outcome measure for spine- and back-related disorders that presents a subjective percentage score of a patient's level of function (scored from 0 to 100). 24 The ODI is also a core outcome measure in low back pain with significant evidence for validity, reliability, and responsiveness. 17,19 Walking capacity is assessed using item 4 of the ODI. This self-reported question assesses a patient's ability to walk various distances (pain does not prevent me walking any distance, pain prevents me walking more than 100 meters, pain prevents me walking more than 500 meters, pain prevents me walking more than 1 kilometer, I can only walk using a stick or crutches, I am in bed most of the time and have to crawl to the toilet). This question is often used in the SLSS literature, has good evidence for responsiveness, 25 and is a recommended outcome from a recent systematic review of walking tests in SLSS. 26 HRQoL was measured using the EQ-5D-5L, a questionnaire describing the patient's health state using an index value system (scored from 0 to 100, with 100 indicating perfect health). 27 This instrument utilizes a value set that weighs each health state description according to the preferences of the general population of a region. 28 An overall recovery composite outcome is used as a surrogate measure of recovery and is assessed using a combination of clinically important change in pain, disability, and HRQoL. 29 A patient is deemed fully recovered if he or she meets all four criteria outlined in Table 1. This composite was used because it has been recommended for low back pain and no similar index has been indicated for surgical populations. The cutoffs for mild levels of pain, disability, and HRQoL used to define recovery are summarized in Table 1. 30

Potential Predictors

Predictors were chosen for the present study based on data available from the CSORN registry, previous literature, and clinical assumptions. Factors that have been identified to be associated with improved disability outcomes include higher education level, higher quality of life (EQ-5D) at baseline, lower disability (ODI) at baseline, shorter duration of back pain, 17 and lower levels of obesity (body mass index < 30). 11 Shorter duration of symptoms prior to surgery is also positively correlated with reduced pain postsurgery. 15 Factors that have been identified in the literature as being associated with improved walking capacity outcomes include younger age, male sex, higher reported walking capacity at baseline, lower levels of back and leg pain at baseline, 16 and better self-rated health at baseline. 15,16 Factors that have been identified to be associated with poor postsurgical outcomes include depression (in this study measured using the Patient Health Questionnaire-9), high levels of back pain at baseline, higher expectations of pain relief going into surgery, 15 cardiovascular comorbidity, 12 smoking, living alone, and unemployment. 17 The data available from the CSORN registry include many of these predictors.
The predictors we have chosen to include in our models are listed in Table 2.

Statistical Analysis

The goal of this analysis is to identify the predictors that are most relevant to the six identified outcome measures. The outcomes (back pain, leg pain, disability, walking capacity, HRQoL, and clinical recovery) are likely correlated, and the large number of predictors would probably suffer from multicollinearity in a multiple regression model. Therefore, we use a multivariate approach, partial least squares (PLS), 31 that allows for the simultaneous analysis of multiple outcomes with a large number of predictors and is stable even when the input data have moderate levels of collinearity. PLS is a technique that combines principal component analysis with multiple linear regression. The mix of categorical and continuous predictor variables requires the use of a modified version of the PLS algorithm, partial least squares correspondence analysis, 32 a technique that has been used for neuroimaging and genetics data sets. The set of predictors is listed in Table 2, and the set of dichotomous outcomes is provided in Table 1. Prior to the analysis, the CSORN data were preprocessed using the table functions in MATLAB, for example, to compute total scores or identify rows with missing data. As part of our inclusion criteria, we excluded any patient who did not complete 12-month follow-up. When data were missing for predictors and covariates (at a maximum of 20%), we used multiple imputation and conducted a sensitivity analysis after removing rows with missing data. However, when data were missing for outcomes (e.g., a patient completed health-related outcomes but not disability at 12 months postsurgery), the patient record was excluded from the analysis of multiple outcomes. An objective criterion was used to eliminate predictors from the model to achieve a more parsimonious model. An example of such a criterion is iterative variable importance for projection. 33 This method identifies the least important variable, eliminates it, reruns the analysis, and repeats until the desired balance of parsimony and prediction is achieved. The PLS model was cross-validated by a hold-out procedure, which helps prevent overfitting the model to extreme observations. The data are split into a training set and a test set, with set membership assigned at random. The PLS model-building step was repeated with the training set, and the ability of the model to predict test set observations was evaluated. This was repeated 1000 times so that many different combinations of observations were tested. Concomitantly with the PLS, we also conducted traditional regression analyses for each independent outcome. We used multiple linear regression for the continuous outcomes of pain, disability, and HRQoL; logistic regression for the outcome of recovery; and ordinal regression for the outcome of walking capacity (ODI item 4). Assumptions and multicollinearity were assessed as appropriate. Regression was conducted in Stata 14.0 with a significance level of 0.05. Following the Sex and Gender Equity in Research guidelines, we conducted sex-specific analyses (sex as collected in the CSORN) by including sex as a confounder in the total model and performing an analysis disaggregated by sex. 34 We conducted an a priori sample size calculation considering the baseline risk for our primary outcome of disability (assessed using the ODI). Although this outcome is continuous, we dichotomized it as per the overall recovery composite outcome. As per Peduzzi et al., we included at least 20 patients (10 events, 10 nonevents) per predictor category. 35 With a total of 16 predictors, we needed to include at least 160 fully recovered and 160 unrecovered patients in the model.
Thus, based on the literature indicating that 30% of patients do not recover from pain-related disability following surgery, 12,15,18 our logistic regression model required a minimum sample size of 540 participants. However, we included at least 10 patients per predictor category, as is customary for creating robust multivariable models.

Discussion

SLSS treatment outcomes, whether conservative or surgical, are variable, with a large number of patients continuing to have significant levels of pain, disability, and diminished HRQoL. To date, interventions are delivered based on health care professionals' expertise without much guidance on what treatment may be best for different patient subgroups. Recognizing the impact that this disorder has on the lives of patients, it is imperative to develop better treatment approaches, treatment pathways, and personalized care. Modifiable predictors such as smoking, physical activity level, medication intake, and patients' expectations could all be addressed in a prehabilitation program and potentially lead to improved patient outcomes.

Strengths

The strengths of this study are the large sample size and data collected within usual clinical practice that reflect the Canadian context. Additionally, the robust, multivariate statistical analysis using PLS allows for the inclusion of predictors that are collinear within the model and the identification of patients who fit and do not fit within the predicted outcomes. This analysis also allows for the inclusion of multiple outcomes within a single model, potentially identifying patient phenotypes and their predictors (e.g., high pain, low function). This modeling approach provides important insight into the complex relationships between predictors and outcomes that will allow for more person-specific prehabilitation by identifying subgroups of patients with similar pathways.
Additionally, a comprehensive use of the Sex and Gender Equity in Research guidelines for reporting sex differences allows for a better understanding of the role of sex, given previously conflicting evidence reported in a systematic review of predictors of outcomes. 15

Limitations

The CSORN database is a valuable resource with a large number of participants, allowing for the inclusion of a large number of predictors and outcomes. However, there is always inherent bias when using registry data, such as the potential for large amounts of missing data, leading to high attrition bias and potential selection bias. Recent support from the Canadian Spine Society has resulted in improved data collection within the CSORN registry. Thus, we decided to exclude data collected prior to January 2015 in order to increase the quality and completeness of available data and thus reduce the risk of bias. Additionally, we attempted to reduce the risk of potential bias by using multiple imputation and a sensitivity analysis for missing predictor data in order to reduce the amount of data eliminated from the analysis. In addition, data analysis was limited to the data available within the database as well as the format and measures used to collect the outcomes. Nonetheless, the knowledge gained through the proposed approach will better inform prospective and treatment trials to further our understanding of modifiable factors that predict poor postsurgical outcomes for SLSS.

Conclusion

This protocol describes the first study of a Canadian surgical database to identify modifiable factors associated with pain, disability, HRQoL, and recovery in patients with SLSS. It is anticipated that the knowledge gained from the study described within this protocol will facilitate clinical decision making for managing patients with SLSS and inform the development of a prehabilitation program.

Disclosure Statement

Erynne Rowe has not declared any conflicts of interest.
Elizabeth Hassan has not declared any conflicts of interest. Lisa Carlesso has not declared any conflicts of interest. Janie Astephen Wilson has not declared any conflicts of interest. Douglas P. Gross has not declared any conflicts of interest. Charles Fisher has not declared any conflicts of interest. Hamilton Hall has not declared any conflicts of interest. Neil Manson has not declared any conflicts of interest. Ken Thomas has not declared any conflicts of interest. Greg McIntosh has not declared any conflicts of interest. Brian Drew has not declared any conflicts of interest. Raja Rampersaud has not declared any conflicts of interest. Luciana Macedo has not declared any conflicts of interest.
Difficult Airway Caused by a Subglottic Tumor: A Case Report

Background: Securing the airway is a core skill for an anesthesiologist, and the gold standard is tracheal intubation. A patient with a subglottic tumor presents a difficult airway and can be a challenge for anesthesiologists. A "cannot ventilate, cannot intubate" situation during anesthesia induction can be lethal, so an awake approach is always prepared for a diagnosed difficult airway; however, awake fiberoptic intubation may also fail.

Case presentation: We present a 55-year-old female patient scheduled for laryngeal tumor resection, for whom awake intubation guided by fiberoptic bronchoscope was planned. After the awake intubation attempt failed, emergency tracheostomy was successfully completed by an ENT surgeon. After securing the airway, general anesthesia was performed and the operation proceeded with laryngeal tumor resection.

Conclusions: It is important that an ENT surgeon be asked to remain on standby for a possible emergency tracheostomy in case awake fiberoptic intubation fails. Ultrasound or computed tomography examination of the trachea may provide guidance for anesthesiologists to choose an appropriate endotracheal tube ID, or to proceed directly to tracheostomy, by measuring the degree of airway stenosis.

Background

Airway management is an important part of anesthesia practice, especially in patients with a subglottic tumor. Subglottic tumors are rare, 1 but managing their airways is a challenge for anesthesiologists. The American Society of Anesthesiologists (ASA) has developed guidelines for managing difficult airways, with a focus on intubation strategies and alternative airway techniques for patients with airway difficulties; 2 careful planning and preparation can reduce the possibility of complications.
We report the successful airway management, by emergency tracheostomy, of a patient with difficult ventilation and intubation due to a subglottic tumor after awake fiberoptic intubation failed. Patients with difficult airways may benefit from preoperative ultrasound or computed tomography examination of the neck to evaluate the degree of airway stenosis.

Case Presentation

A 55-year-old female patient arrived in the operating room for laryngeal tumor resection, with complaints of coughing, hoarseness, and difficulty breathing for 2 years. She weighed 47 kg and was 158 cm tall (BMI 19.56). Her previous medical history included hypertension under regular treatment for 3 years, and tuberculosis cured 8 months earlier. No other significant cardiac, surgical, or allergic history was noted. Systemic examination, blood investigations, and ECG were all normal. Airway examination relied on laryngoscopy and computed tomography. Preoperative laryngoscopy showed a tumor located just below the right glottis (Figure 1). Computed tomography showed a subglottic tumor (Figures 2 and 3). Based on the airway examination, we planned awake intubation guided by fiberoptic bronchoscope with a tracheal tube of ID 6.0. Topical anesthesia was achieved by nasal packing with Dicaine, and the tracheal mucosa was anesthetized through cricothyroid injection of Dicaine. Sedation was supplemented with 50 µg fentanyl by intravenous injection. The fiberscope smoothly crossed the glottis and the mass until arriving above the tracheal juga, but the tracheal tube was difficult to insert. Therefore, we prepared to reattempt intubation with a smaller tracheal tube. When we removed the fiberscope, the patient became agitated, kept coughing, and sat up on the operating table. At the same time, SpO2 declined progressively. Mask ventilation was ineffective and the patient's SpO2 continued to decrease. The patient became more irritable and cyanotic, and SpO2 fell to 40%.
We then penetrated the patient's cricothyroid membrane with a puncture needle and removed the inner core. She could breathe a little and calmed down, and SpO2 gradually rose to 85%. At this point, emergency tracheostomy was successfully completed by the ENT surgeon (Figure 4). After securing the airway, general anesthesia was performed and the operation proceeded with laryngeal tumor resection.

Discussion and Conclusions

According to its origin, laryngeal cancer is divided into three subtypes (supraglottic, glottic, and subglottic), and airway obstruction is more frequently associated with subglottic tumors. 3 This poses a serious problem for airway management. In this patient, we applied the algorithm for difficult airway management 2,4,5 and rescued the patient from an emergency situation. In this situation, the airway should be secured before induction, as airway muscle tone and reflexes are maintained 6 and respiratory function is not affected by anesthetics. The two most common techniques are fiberoptic bronchoscopy (FOB) and awake tracheostomy, although both still carry risks. 7 FOB is the gold standard for securing difficult airways, but it has limitations in some cases. Many failures of FOB are due to tumor invasion, bleeding, or mucus, or to severe upper airway stenosis resulting in loss of vision, which makes insertion of the bronchoscope impossible. 8-10 Subglottic tumors include a wide variety of lesions, such as papillomas, hemangiomas, myxomas, neurofibromas, fibromas, chondromas, epidermoid cancer, chondrosarcoma, and others. 11 They cause varying degrees of change in airway anatomy and physiology due to tumor invasion. Among patients with subglottic tumors, the airway anatomy may be distorted and the larynx may be deviated significantly. Even when the vocal cords can be clearly visualized, it may take several attempts to pass the tip of the FOB beyond the vocal cords, or it may not be possible at all.
These problems are due to the extreme angle produced by laryngeal deviation. 12 The number of intubation attempts also correlates with increasing risk of airway trauma and agitation. Moreover, it may be difficult to obtain good local anesthesia for pathological reasons, and at worst this may be associated with serious and potentially fatal complications. Even in non-obstructed patients, the initial application of topical lidocaine spray to the vocal cords can cause severe coughing followed by laryngeal spasm. Although this transient, reflex glottic closure can be tolerated by normal patients, it can be dangerous for patients whose airway diameter is already compromised. 13 Complete obstruction, hypoxia, confusion, and apnea 9 may result in brain damage and death if tracheal intubation is not accomplished quickly. Even if adequate local anesthesia can be achieved, the procedure itself is technically challenging. In our experience, some tumors are both vascular and fragile, prone to bleeding or fragmenting at the slightest touch. Not only will bleeding impede vision, but blood and tumor fragments can also physically obstruct the airway. Even with the successful introduction of a fiberscope, the patient may panic and start to struggle because the airway diameter has been further reduced. 13 In severe airway obstruction, a pediatric size 4.0 tube or even smaller may be needed, which is not of sufficient length. A tracheal tube that is too small can result in an excessive leak, inadequate ventilation, poor end-tidal gas monitoring, and wastage of anesthetic gases. 14 Under these circumstances, the passage of the tube over the bronchoscope may be difficult or even impossible. Preoperative evaluation of the patient's airway with auxiliary examinations such as computed tomography and ultrasound 15 is therefore particularly important, especially regarding the degree of airway stenosis. 16,17
In this patient, the narrowest airway diameter was only 3.18 mm according to the preoperative computed tomography scan (Figure 5). In this situation, if radiological imaging shows that FOB is not the most suitable choice, another technique is indicated: an invasive airway technique that can bypass the underlying obstruction, such as cricothyroidotomy or tracheostomy. The 4th National Audit Project (NAP4) in the UK emphasized that awake tracheostomy can provide a safer alternative to endotracheal intubation after anesthesia induction and should be actively considered. 7 Indications for awake tracheostomy for upper airway obstruction include severe stridor, large tumor, fixed hemilarynx, gross anatomical distortion, and a larynx not visible on flexible nasendoscopy. 18 Awake tracheostomy should be performed for impending airway obstruction, in a timely manner, before complete obstruction occurs, 19 because it is a life-saving, efficacious, and safe method to secure the airway in these patients, with a low incidence of complications. 19,20 In conclusion, preoperative auxiliary examination is important for predicted difficult airways and can guide the choice of tracheal tube size. However, placement of a tracheal tube still carries high risk given the unknown texture and nature of the tumor, and emergency awake tracheostomy should always be prepared to protect the airway.

Abbreviations

ENT: ear, nose, and throat (as a department in a hospital). BMI: body mass index.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Written informed consent was obtained from the participants for publication of this article and any accompanying tables/images.

Availability of data and materials

The datasets of the current study are available from the corresponding author on reasonable request.

Figure 1. Preoperative laryngoscopy.
Preoperative laryngoscopy showed a tumor located just below the right glottis.

Figure 3. Computed tomography scan showed a subglottic tumor causing obvious airway stenosis.

Figure 4. Emergency tracheostomy. Emergency tracheostomy was successfully performed after failure to intubate with the fiberoptic bronchoscope.

Figure 5. The computed tomography scan showed that the narrowest part of the airway, caused by the subglottic tumor, was only 3.18 mm.

Supplementary Files: CAREchecklist.docx
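The case's central measurement, a 3.18 mm minimal airway diameter on CT, illustrates why tube selection failed: even small endotracheal tubes have outer diameters well above this. A minimal sketch of that comparison follows; the outer-diameter values are rough, manufacturer-dependent approximations introduced for illustration and are not taken from the report, and the check deliberately ignores tissue compliance and intubation technique.

```python
# Approximate outer diameters (mm) for common endotracheal tube inner
# diameters (ID, mm). Real values vary by manufacturer and cuff design;
# these figures are illustrative assumptions only.
APPROX_OD_MM = {6.0: 8.2, 5.0: 6.9, 4.0: 5.6, 3.0: 4.3}

def tube_passes(tube_id_mm: float, narrowest_airway_mm: float) -> bool:
    """Crude geometric check: can a tube of this ID pass a stenosis of
    the given diameter? Compares outer diameter against the stenosis."""
    return APPROX_OD_MM[tube_id_mm] <= narrowest_airway_mm

narrowest = 3.18  # mm, the preoperative CT measurement in this case
for tube_id in sorted(APPROX_OD_MM, reverse=True):
    verdict = "passes" if tube_passes(tube_id, narrowest) else "blocked"
    print(f"ID {tube_id:.1f} (OD ~{APPROX_OD_MM[tube_id]:.1f} mm): {verdict}")
```

Under these assumed dimensions, even a pediatric 3.0 tube is blocked by a 3.18 mm stenosis, which is consistent with the case's conclusion that measuring the stenosis on CT can indicate tracheostomy directly rather than repeated intubation attempts.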
Evaluation and Comparison of the Efficiency of Transcription Terminators in Different Cyanobacterial Species

Cyanobacteria utilize sunlight to convert carbon dioxide into a wide variety of secondary metabolites and show great potential for green biotechnology applications. Although cyanobacterial synthetic biology is less mature than that of other heterotrophic model organisms, there is now a range of molecular tools available to modulate and control gene expression. One area of gene regulation that still lags behind other model organisms is the modulation of gene transcription, particularly transcription termination. A vast number of intrinsic transcription terminators are now available in heterotrophs, but only a small number have been investigated in cyanobacteria. As artificial gene expression systems become larger and more complex, with short stretches of DNA harboring strong promoters and multiple gene expression cassettes, the need to stop transcription efficiently and insulate downstream regions from unwanted interference is becoming more important. In this study, we adapted a dual reporter tool for use with the CyanoGate MoClo Assembly system that can quantify and compare the efficiency of terminator sequences within and between different species. We characterized 34 intrinsic terminators in Escherichia coli, Synechocystis sp. PCC 6803, and Synechococcus elongatus UTEX 2973 and observed significant differences in termination efficiencies. However, we also identified five terminators with termination efficiencies of >96% in all three species, indicating that some terminators can behave consistently in both heterotrophic species and cyanobacteria.

INTRODUCTION

Cyanobacteria comprise a large and diverse phylum of photoautotrophic bacteria that can capture and convert inorganic carbon (e.g., CO2) into a wide variety of secondary metabolites (Huang and Zimba, 2019).
Many cyanobacterial species are genetically tractable and show great potential for green biotechnology applications, such as the sustainable production of biofuels and high value biomolecules (Lin et al., 2017;Knoot et al., 2018;Eungrasamee et al., 2019;Lin and Pakrasi, 2019;Włodarczyk et al., 2019). Much of the recent progress in engineering cyanobacteria has been driven by the uptake of synthetic biology approaches. One major aim of cyanobacterial synthetic biology is the development of new tools and strategies to facilitate stringent and precise control of gene expression. A wide variety of new molecular tools and genetic parts to tune gene expression are now available for use by the research community (Englund et al., 2016;Kim et al., 2017;Ferreira et al., 2018;Kelly et al., 2018;Vasudevan et al., 2019;Yao et al., 2020). The increase in availability of well-characterized genetic parts has allowed rational design, a core process to the synthetic biology paradigm, to be more routinely employed in the engineering of new cyanobacterial strains. Nevertheless, the majority of synthetic biology work in cyanobacteria has thus far concentrated on characterizing genetic elements that control gene transcription (e.g., promoters, CRISPRi) or translation modulation (e.g., ribosomal binding sites (RBS), riboswitches, small RNAs) (Huang and Lindblad, 2013;Camsund et al., 2014;Ma et al., 2014;Immethun et al., 2017;Kelly et al., 2018;Sun et al., 2018;Behle et al., 2020;Yao et al., 2020). Transcription terminators are also key transcriptional control elements, but far fewer studies have examined their roles in regulating gene expression in cyanobacteria. The rational design of efficient gene expression cassettes (and more advanced gene circuits) requires the use of genetic parts with well-characterized and predictable function (Moser et al., 2018). 
For instance, strong terminators attenuate transcription and isolate downstream genetic sequences, which can prevent interference and disruption of function from unwanted transcriptional readthrough (Kelly et al., 2019). This is particularly important when considering synthetic gene constructs, where several gene expression cassettes driven by strong promoters may occupy a short stretch of DNA. Furthermore, many prokaryotes (including cyanobacteria) are prone to homologous recombination. Homologous regions as small as 23-27 bp have been demonstrated to lead to recombination in Escherichia coli, so multiple distinct terminators are generally preferable for multi-gene expression systems and gene circuits (Shen and Huang, 1986; Sleight et al., 2010; Chen et al., 2013). As with other genetic parts, an understanding of terminator performance and robustness between species is also important. Promoters have been shown to drive gene expression differently in cyanobacteria compared to heterotrophic species (e.g., Escherichia coli) and between cyanobacterial species (Camsund et al., 2014; Vasudevan et al., 2019). In contrast, potential differences in behavior between cyanobacterial species have not yet been investigated for transcription terminators. In prokaryotes, transcription is terminated by two distinct terminator types: (i) Rho-dependent terminators, which rely on a Rho transcription factor, and (ii) Rho-independent, or intrinsic, terminators, which do not require a transcription factor. In E. coli, approximately 20% of terminators are Rho-dependent (Peters et al., 2009). However, Rho transcription factors appear to be absent in cyanobacteria, such that all transcription termination events are thought to rely on intrinsic termination (Vijayan et al., 2011). Intrinsic terminators are defined by a sequence motif that forms a hairpin loop secondary structure in the nascent RNA transcript.
The hairpin loop comprises a GC-rich stem (8-12 nucleotides (nt)) and a loop (3-6 nt). Upstream of the hairpin loop is an adenine-rich region (the A-tract), typically 6-8 nt in length, while downstream is a uracil-rich region of 7-12 nt (the U-tract). Intrinsic termination depends upon the differential binding affinities between nucleotides. The interaction between U and A is weak, such that transcription of the U-tract results in a pause in transcription that allows the hairpin loop to form. The presence of the hairpin loop in the RNA polymerase (RNAP) exit channel causes a ratcheting action and subsequent disruption of RNA-DNA binding. This leads to dissociation of RNAP from the DNA template and the subsequent release of the nascent RNA transcript (Wilson and Von Hippel, 1995; Herbert et al., 2008; Peters et al., 2011). In E. coli, many terminators have been assessed for termination efficiency (TE), which is typically calculated as a percentage estimate of the RNAP transcription elongation complexes prevented from continuing transcription past a given sequence (i.e., a terminator) (Cambray et al., 2013; Chen et al., 2013). Importantly, a "no terminator" control was included to determine a normalized value for TE in those studies. Characterization studies of terminators in cyanobacteria are currently limited to the model species Synechocystis sp. PCC 6803 (PCC 6803). Liu and Pakrasi (2018) evaluated the relative strengths of seven native terminators using a dual fluorescent reporter system similar to that used by Chen et al. (2013). More recently, Kelly et al. (2019) evaluated 19 synthetic and heterologous intrinsic terminators ported from E. coli, with the aim of identifying terminators able to insulate a specific genomic locus in PCC 6803 from native promoter readthrough originating from upstream of the insertion site.
Each terminator sequence was inserted between the transcription start site (TSS) and RBS of an inducible promoter driving YFP, and following induction, twelve terminators were shown to efficiently block transcription, indicating a potential efficiency of nearly 100%. These studies have provided valuable insights into terminator function in PCC 6803, but if comparisons in performance between different strains are to be achieved, a normalized quantitative parameter, such as TE, should be calculated. In this study, we assembled a set of 34 intrinsic terminators from PCC 6803 and from E. coli and synthetic libraries that have previously demonstrated a wide range of TE values in E. coli (Chen et al., 2013). We re-designed an established dual fluorescent reporter system to be compatible with the CyanoGate MoClo Assembly system, which allowed for increased cloning throughput (Liu and Pakrasi, 2018; Vasudevan et al., 2019). Importantly, all assays included a "no terminator" control vector as a reference to calculate a normalized TE value for each terminator, such that TE values could be compared between different experiments and species irrespective of the instrument or gain settings used. We first validated and benchmarked our testing system by comparing TE values from the literature with our results in E. coli. We then tested the performance of the terminators in two different cyanobacterial species: PCC 6803 and the recently described high-light-tolerant Synechococcus elongatus UTEX 2973 (UTEX 2973) (Williams, 1988; Yu et al., 2015).

Vector Construction and Parts Assembly

All cloning was performed in OneShot TOP10 E. coli cells. Transformed cells were cultured in LB medium and on 1.5% (w/v) LB agar plates supplemented with either 100 µg/ml spectinomycin or 50 µg/ml kanamycin as required. E. coli strain MC1061 was cultured in LB medium supplemented with 100 µg/ml ampicillin and 25 µg/ml chloramphenicol. All E. coli strains were grown at 37 °C with shaking at 225 rpm.
pPMQAK1-T (pCAT.000) from the CyanoGate toolkit was modified to generate pDUOTK1-L1 (pCA1.332, Addgene vector ID 162351) (Supplementary Information S1) (Vasudevan et al., 2019). To assemble pDUOTK1-L1, pPMQAK1-T was first digested with BpiI and BsaI (Thermo Fisher Scientific). The linearized backbone was gel purified using a Monarch DNA Gel Extraction Kit (NEB). Sequences encoding P trc10 -eYFP from the CyanoGate vector pCAT.262, the LacZ expression cassette from the Plant MoClo level 1 acceptor vector pICH47732, and mTagBFP-T rrnB (from an available vector containing BBa_K592100) fused at the 5′ end to the RBS-associated sequence used by Chen et al. (2013) (BBa_B0034) were amplified using Q5 High-Fidelity DNA Polymerase (NEB) (Supplementary Table S1). Finally, the three amplicons and the linearized pPMQAK1-T backbone were assembled together using Golden Gate assembly (Vasudevan et al., 2019). pDUOTK1-L1 contains BsaI restriction sites flanking LacZ that generate the overhangs GCTT-CGCT, such that level 0 terminator parts can be assembled directly and screened using blue-white selection.

Fluorescence Assays

To measure fluorescence in E. coli, transformants were first inoculated into 5 ml LB medium supplemented with 50 µg/ml kanamycin and grown overnight at 37°C with constant shaking at 225 rpm. To initiate the assay, overnight cultures were diluted 1:1000 into a black 96-well flat-bottom plate (F-Bottom (Chimney Well) µCLEAR®, Greiner Bio-One) containing fresh LB medium supplemented with 50 µg/ml kanamycin to a final volume of 200 µl. The plates were incubated at 37°C with constant shaking at 600 rpm and culture density (OD 600) was measured hourly using a FLUOstar OMEGA microplate reader (BMG Labtech). At early exponential phase (ca. 4.5 h following inoculation), eYFP and mTagBFP fluorescence levels were measured for individual cells by flow cytometry (minimum 10,000 cells per culture) with a FACSCanto II with HTS Flow Cytometer (Becton Dickinson).
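The per-cell processing used in these assays reduces to taking the median fluorescence per channel and subtracting the median of the "empty vector" control. A minimal sketch of that bookkeeping (the numbers in the usage line are invented):

```python
from statistics import median

def baseline_corrected_median(cell_values, empty_vector_values):
    """Median per-cell fluorescence minus the median of the empty-vector
    control, mirroring the baseline subtraction described for these assays."""
    return median(cell_values) - median(empty_vector_values)

# Invented toy values: per-cell eYFP signals and the "empty" pPMQAK1-T control
yfp = baseline_corrected_median([7100, 7030, 6990], [25, 30, 35])
```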
Cells were gated using forward and side scatter. Median eYFP and mTagBFP fluorescence levels were calculated from excitation/emission wavelengths 488 nm/530/30 nm and 407 nm/450/50 nm, respectively. An "empty" pPMQAK1-T vector (i.e., with no eYFP or mTagBFP expression cassettes) was included as a baseline control. Fluorescence values for the latter control were subtracted from transconjugant strain measurements. To measure fluorescence in cyanobacteria, PCC 6803 or UTEX 2973 transconjugants maintained on BG11 + Kan50 agar plates were first inoculated into 10 ml BG11 + Kan50 medium and grown for 2-3 days to OD 750 ∼1.0. To initiate the assay, the seed cultures were diluted to a starting OD 750 of 0.2 in 24-well plates (Costar Corning Incorporated) containing fresh BG11 + Kan50 medium to a final volume of 2 ml. Cultures were grown for three days under culturing conditions and high humidity (95%) to avoid evaporation. eYFP and mTagBFP fluorescence were measured by flow cytometry for individual cells (minimum 10,000 cells per culture) with an LSRFortessa SORP with HTS Flow Cytometer (Becton Dickinson). Cells were gated using forward and side scatter. Median eYFP and mTagBFP fluorescence levels were calculated from excitation/emission wavelengths 488 nm/515-545 nm and 407 nm/425-475 nm, respectively. As above, a baseline control was included for each species.

Calculations for Termination Efficiency

TE was calculated as a percentage from the ratio of the mTagBFP fluorescence signal downstream of the terminator to the eYFP fluorescence signal upstream, relative to a control containing no terminator between the fluorescent reporters:

TE (%) = [1 − (BFP_Term/YFP_Term)/(BFP_0/YFP_0)] × 100 (1)

where BFP_0 and YFP_0 are the mTagBFP and eYFP fluorescence signals, respectively, of the strain containing either pCA1.376 or pCA1.377, and BFP_Term and YFP_Term are the mTagBFP and eYFP fluorescence signals, respectively, of a strain carrying a given level 1 terminator vector (Supplementary Table S2).
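The TE calculation can be written as a small helper; the formula follows the ratio-of-ratios description above (function and variable names are ours):

```python
def termination_efficiency(bfp_term, yfp_term, bfp_0, yfp_0):
    """TE (%) relative to the no-terminator control (pCA1.376/pCA1.377):
    100% means no detectable readthrough downstream of the terminator,
    0% means readthrough equal to the control."""
    readthrough = (bfp_term / yfp_term) / (bfp_0 / yfp_0)
    return (1.0 - readthrough) * 100.0

# Invented numbers: a strong terminator leaving ~1% of control readthrough
te = termination_efficiency(bfp_term=50, yfp_term=7000, bfp_0=5000, yfp_0=7000)
```

Because both signals are normalized to the upstream eYFP channel, the result is insensitive to instrument gain, which is the point of the normalized TE.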
Statistical Analysis

Significant differences between sample groups were assessed by one-way ANOVA followed by Tukey's honest significant difference (HSD) post-hoc test using GraphPad Prism (version 8.4.2).

Estimation of Gibbs Free Energy

Estimated Gibbs free energy values were generated using mFold v3.0 (http://unafold.rna.albany.edu/?q=mfold) (Zuker, 2003). Free energy values were calculated without adjustment of the standard parameters, which included a fixed temperature of 37°C.

Generating a Screening System for Level 0 Terminator Parts

The RSF1010-based level T acceptor vector pPMQAK1-T from the CyanoGate toolkit was modified to generate the new level 1 acceptor vector pDUOTK1-L1 for terminator screening (Figure 1A and Supplementary Information S1) (Vasudevan et al., 2019). pDUOTK1-L1 comprises a dual fluorescent reporter system with eYFP and mTagBFP, similar to that in Liu and Pakrasi (2018). Terminators can be assembled as level 0 parts into pDUOTK1-L1 using Golden Gate assembly (Figure 1B), while the RSF1010 origin of replication allows for screening in a wide range of species (Mermet-Bouvier et al., 1993). We compiled a library of 34 level 0 vectors containing intrinsic transcription terminators (Table 1 and Figure 1C), and then assembled these into pDUOTK1-L1 (Supplementary Table S2). In order to maximize potential orthogonality with terminators in cyanobacterial genomes, we primarily targeted heterologous terminator sequences. The library included 22 native terminators from E. coli and eight synthetic terminators based on E. coli sequences that have been previously characterized in E. coli (Chen et al., 2013). We also included T rrnB (i.e., T rrnB from E. coli and the T7 viral terminator in tandem (Vasudevan et al., 2019)) and the pSB1AK3 terminator (T pSB1AK3) that was derived from the E. coli ribosomal RNA rrnC operon and is used in several BioBricks vectors, including pPMQAK1, to flank the cloning site (Huang et al., 2010).
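Because the BsaI sites in pDUOTK1-L1 generate GCTT/CGCT fusion overhangs, compatibility of a candidate level 0 part can be checked mechanically before assembly. A hedged sketch (representing a part by its two overhangs is our simplification of MoClo syntax):

```python
# BsaI fusion sites flanking LacZ in pDUOTK1-L1, as described in the text
ACCEPTOR_OVERHANGS = ("GCTT", "CGCT")

def fits_terminator_slot(part_5_overhang, part_3_overhang):
    """True if a level 0 part's overhangs match the terminator acceptor sites."""
    return (part_5_overhang.upper(), part_3_overhang.upper()) == ACCEPTOR_OVERHANGS
```

A part with any other fusion sites (e.g., a CDS-position part) would fail this check and could not be ligated into the terminator slot.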
From PCC 6803, the terminator of the highly expressed D1 subunit of photosystem II was included (T psbA2), as we expected it to have a high efficiency of termination. In contrast, T psaB was included as a potentially low efficiency terminator based on previous work (Liu and Pakrasi, 2018). Two "no terminator" control vectors, pCA1.376 and pCA1.377, were assembled based on sequences used in previous E. coli studies (Cambray et al., 2013; Chen et al., 2013). In pCA1.376, eYFP and mTagBFP were separated only by an RBS-associated sequence, while pCA1.377 included a spacer sequence reported to be inert (i.e., free from promoter or terminator activity in E. coli) (Supplementary Information S1).

Validation of the Dual Reporter Testing System in E. coli

We first assessed the dual fluorescent reporter system in E. coli by generating TE values for each terminator and compared these to the data reported by Chen et al. (2013) (Figure 2A). Terminator strength (TS) values reported by Chen et al. (2013) were converted to the more commonly reported TE (Supplementary Table S3; Hess and Graham, 1990; Yager and von Hippel, 1991; Cambray et al., 2013; Mairhofer et al., 2015). E. coli cultures measured at early exponential growth phase had similar levels of eYFP fluorescence across different strains, with an average value of 7034 ± 134 arbitrary units (a.u.) (Supplementary Figure S1). In contrast, the strains showed a wide range of mTagBFP fluorescence values, from 1.3 ± 3.4 a.u. to 9094 ± 446 a.u. Both eYFP and mTagBFP fluorescence values showed a unimodal and narrow distribution (Supplementary Figure S2). As expected, the two "no terminator" controls pCA1.376 and pCA1.377 produced the highest mTagBFP fluorescence values. Previous reports have indicated that translation efficiency is dependent on the length of the transcript (Lim et al., 2011), so we checked if eYFP levels might be decreased in the "no terminator" controls compared to plasmids with terminators.
However, we observed no significant differences in eYFP levels between different plasmids, indicating that the efficiency of eYFP translation was not reduced in either of the "no terminator" controls (Supplementary Figure S1B). The mTagBFP:eYFP ratio (i.e., Equation 1) for pCA1.376 was 22% higher than for pCA1.377, which indicated that pCA1.376 produced more transcripts containing both mTagBFP and eYFP. Thus, we decided to use pCA1.376 for all TE calculations in this study. Sixteen terminators had TE values of >95% in E. coli (Figure 2A and Supplementary Table S3), with T L3S2P21 and T Bba_B0011 producing the highest (99.9%) and lowest (40.8%) values, respectively.

[FIGURE 1 | The dual fluorescence reporter system for screening terminators. (A) The acceptor vector pDUOTK1-L1 contains two BsaI sites that generate 4 nucleotide (nt) overhangs (i.e., GCTT and CGCT) following restriction, which are compatible with standard level 0 terminator parts (Engler et al., 2014). (B) Following a level 1 Golden Gate assembly reaction (Vasudevan et al., 2019), the level 0 terminator part is inserted between eYFP and mTagBFP and the dual fluorescent reporter system is formed, which can then be used to evaluate termination efficiency (TE). The reporter system is driven by the strong promoter P trc10 and is terminated by the terminator T rrnB. Ribosome binding sites (half circles) are indicated (see Supplementary Information S1 for sequence details). (C) Example of an intrinsic terminator structure and nt sequence, comprised of an adenine-rich region (A-tract) (black), followed by a G-C rich stem (blue), a hairpin loop (red), and a uracil-rich region (U-tract) (green).]

TE values for both PCC 6803 terminators were relatively low in E. coli (ca. 60%). Overall, the terminator library demonstrated a corresponding 10-fold reduction in normalized downstream reporter expression (Figure 2B). We then compared the TE values for 30 native E.
coli and synthetic terminators with those also reported in Chen et al. (2013) and observed a reasonable correlation (coefficient of determination, R² = 0.78), with 19 of the observed TE values differing by less than 5% (Figure 2C). The latter included 14 of the 16 strongest terminators with TE values of >95%. Similarly, the three weakest terminators (T Bba_B0011, T ECK120010842, and T ECK120010820) were the same in both data sets. Six terminators showed a greater difference in TE values (i.e., 12-26%), which comprised four native E. coli terminators (T ECK120030798, T ECK120010820, T Bba_B0011, and T Bba_B0061) and two synthetic terminators (T L3S1P22 and T L3S1P13). These variations may have been due to differences in experimental setup (e.g., the vector, origin of replication (ori) and reporter genes) and the different strain of E. coli used, as significant differences in the behavior of some terminators have been reported between different E. coli strains (Kelly et al., 2019).

Performance of the Terminator Library in Synechocystis sp. PCC 6803

We next evaluated the terminator library in PCC 6803. Due to the slower growth rates of PCC 6803 compared to E. coli (Supplementary Figure S3A), we measured fluorescence levels at 24, 48, and 72 h (Supplementary Figure S3B). The cyanobacterial strains grew at comparable rates and the majority expressed eYFP at similar levels between strains at each time

[Table 1/Figure 2 caption fragment: The sequences have been annotated with features common to intrinsic terminators, including the A-tract (black underlined), stem (blue), loop (red), and U-tract (green underlined) (see Figure 1C), as reported by Chen et al. (2013). The features for the additional terminators were predicted using ARNold (Table S3) and TE values determined in this study (n = 30). T rrnB, T pSB1AK3, T psbA2, and T psaB were excluded, as data was not available for comparison. The coefficient of determination (R²) is shown.]
[Figure 2 caption fragment: Terminator TE values marked in red (T ECK120030798, T ECK120010820, T Bba_B0011, T Bba_B0061, T L3S1P22, and T L3S1P13) differed from Chen et al. (2013) by more than 10%. Removal of these six terminators from the correlation analysis resulted in R² = 0.90.]

point. The single exception was T L3S2P21, which produced eYFP values consistently 2.5-fold higher than other strains. We are unsure why eYFP values were higher for T L3S2P21, but we did re-confirm the terminator sequence in this strain by Sanger sequencing. In E. coli and bacteriophages, some intrinsic terminators can enhance upstream gene expression by enhancing the stability of the mRNA transcript via the hairpin loop (Abe and Aiba, 1996; Cisneros et al., 1996). Enhancement of mRNA stability by several putative intrinsic terminators has also been demonstrated for the marine species Synechococcus sp. PCC 7002, where transcripts with a canonical intrinsic terminator downstream were found to have a longer half-life compared to transcripts without a downstream terminator (Gordon et al., 2020). However, T L3S2P21 shares the same U-tract as both T L3S2P11 and T L3S2P55, but no increased eYFP expression was observed in the latter strains. mRNA transcript stability is a subject of ongoing research, but some examples of causative factors in heterotrophic bacteria include starvation in E. coli and Lactococcus lactis (Redon et al., 2005; Morin et al., 2020), and temperature-induced stress in Staphylococcus aureus and Mycobacterium tuberculosis (Anderson et al., 2006; Rustad et al., 2013). mRNA concentration can influence mRNA stability, with increasing transcript concentration leading to decreased stability and mRNA turnover in E. coli and L. lactis (Nouaille et al., 2017). Similar examples have not yet been reported for PCC 6803. Similarly to E.
coli, PCC 6803 strains produced a wide range of mTagBFP fluorescence values at each time point (Supplementary Figure S3B), while the mTagBFP:eYFP ratio for the "no terminator" control pCA1.376 was also consistently higher, by 21 ± 2%, compared to pCA1.377. A strong correlation was shown between TE values measured at different time points, with R² values ranging from 0.982 to 0.988 (Supplementary Figure S3C). Comparison of TE values over the three time points was consistent for strong terminators (Supplementary Table S3). In contrast, weaker terminators tended to show a small decline in TE over time, although there was no significant change in the rankings observed. Overall, terminator behavior in PCC 6803 was consistent between an OD 750 of 0.4 and 5.9 (Supplementary Table S3). Thus, we focused on reporting TE values at a single time point (48 h) below. Thirteen terminators had TE values of >95% in PCC 6803 (Figure 3A and Supplementary Table S3), with T L3S2P21 and T ECK120029600 producing the highest value (99.5%) and T ECK120010842 producing the lowest value (25.3%). Ten of the 13 strongest terminators in PCC 6803 also produced TE of >95% in E. coli (Figure 2A). Similarly, the two weakest terminators in PCC 6803 (T ECK120010842 and T Bba_B0011) were also the weakest in E. coli. Notably, T L3S1P22 showed no detectable terminator activity in PCC 6803, but had a TE value of 73% in E. coli. Overall, the terminator library demonstrated a corresponding 8-fold reduction in normalized downstream reporter expression in PCC 6803 (Figure 3B). The TE values of 10 terminators differed more widely from those in E. coli (i.e., by 12-46%). Thus, the correlation of TE values between E. coli and PCC 6803 was modest (R² = 0.46) (Figure 3C). Removal of T L3S1P22 led to only a marginal improvement (R² = 0.53).
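The cross-species comparisons in this section reduce to a coefficient of determination between paired TE values. A minimal sketch using the squared Pearson correlation (the toy TE numbers below are invented, not the paper's data):

```python
def r_squared(x, y):
    """Squared Pearson correlation between two equal-length series of TE values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Perfectly linearly related TE series give R^2 = 1
r2 = r_squared([40.0, 60.0, 80.0, 99.0], [35.0, 55.0, 75.0, 94.0])
```

Note that R² only measures how well one species' TE values predict the other's around a linear trend; two species can each be internally reproducible (high R² across time points) while correlating poorly with each other, which is exactly the pattern reported here.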
Performance of the Terminator Library in Synechococcus elongatus UTEX 2973 and Comparison Between Species

Lastly, we evaluated our terminator library in the high-light tolerant strain UTEX 2973. UTEX 2973 generally grew faster than PCC 6803, but showed more variability in growth rates (Supplementary Figure S4A). This was likely due to a greater relative difference in light distribution within the growth incubator under the higher light levels used for culturing UTEX 2973, as strains in the same plate showed more similar rates of growth compared to those located at different positions within the incubator. As for PCC 6803, we measured fluorescence levels at 24, 48, and 72 h (Supplementary Figure S4B). Consistent with the observed differences in growth, the expression levels of eYFP were variable between strains at 24 h. However, this variation decreased over time. As for PCC 6803, mTagBFP fluorescence values for the UTEX 2973 strains showed a wide spread at each time point, while the mTagBFP:eYFP ratio for pCA1.376 was consistently higher, by 20 ± 5%, compared to pCA1.377. Furthermore, the expression levels of mTagBFP and eYFP for pCA1.337 were more variable over time in UTEX 2973, with large increases in both eYFP and mTagBFP fluorescence values observed at 48 h (Supplementary Figure S4B). The TE values over the three time points were similar for most strains, with R² values ranging from 0.964 to 0.978 (Supplementary Figure S4C), indicating that terminator behavior in UTEX 2973 was consistent between an OD 750 of 0.4 and 11 (Supplementary Table S3). Thus, as for PCC 6803, we also focused on reporting TE values at 48 h below. Eleven terminators had TE values of >95% in UTEX 2973 (Figure 4A and Supplementary Table S3), with T ECK120029600 producing a very high value of 99.9% and T Bba_B0061 producing the lowest value (29.7%). Six of the 10 strongest terminators in UTEX 2973 produced TE values of >95% in E. coli (Figure 2A), while seven of these terminators also produced TE values of >95% in PCC 6803 (Figure 3A).
The three weakest terminators in UTEX 2973 (T Bba_B0061, T ECK120030798, and T ECK120010820) were among the bottom ten ranked terminators in PCC 6803 and E. coli. T ECK120010820 achieved the same ranking (i.e., 3rd weakest terminator) in both UTEX 2973 and E. coli. Overall, the terminator library demonstrated a corresponding 10-fold reduction in normalized downstream reporter expression in UTEX 2973 (Figure 4B). Similarly to PCC 6803, the correlation of TE values between UTEX 2973 and E. coli was low (R² = 0.35) (Figure 4C). More surprisingly, the correlation of TE values between UTEX 2973 and PCC 6803 was even lower (R² = 0.12) (Figure 4D). We next compared the TE values for E. coli, PCC 6803 and UTEX 2973 to identify terminators that were consistently strong between different species (Supplementary Table S3). The overall strongest terminator was T ECK120029600, which had TE values of >99.5% across all three species. A further four terminators (T L3S2P21, T ECK120010850, T L3S2P11, and T rrnB) also had consistent cross-species TE values of >96%. For the two cyanobacterial species alone, T ECK120033736 and T psbA2 had TE values of >95.8%. The TE values for these seven strong terminators were also very consistent over time for PCC 6803 and UTEX 2973.

The Performance of the Seven Strongest Terminators Was Consistent Under Suboptimal Growth Conditions

To examine if terminator performance might be affected by the growth environment, we measured the TE values for the seven strongest terminators in PCC 6803 and UTEX 2973 grown under suboptimal conditions. Both species were cultured at 30°C in 300 µmol photons m−2 s−1, which is considered high light for PCC 6803 (typically grown at 100 µmol photons m−2 s−1) and a low temperature for UTEX 2973 (typically grown at 40°C) (Vasudevan et al., 2019). Both PCC 6803 and UTEX 2973 grew at similar rates and reached an OD 750 of 5.9 and 5.7 after 72 h, respectively (Supplementary Figure S5A).
In higher light, PCC 6803 grew faster than under typical conditions, while growth rates were reduced in UTEX 2973 due to the lower temperature. Fluorescence measurements for eYFP and mTagBFP in PCC 6803 were comparable to those under typical growth conditions (Supplementary Figure S5B). In contrast, fluorescence values were generally reduced at all time points in UTEX 2973 (Supplementary Figure S5C). TE values for each day were calculated as before (Supplementary Table S3), and the mean values for the three time points were compared (Table 2). Overall, all seven terminators retained TE values of >95.8% for both species under the suboptimal growth conditions, and T ECK120029600 remained the strongest terminator. Overall, our results indicated that the performance of these terminators was generally consistent and robust between the two growth conditions.

DISCUSSION

Here, we adapted a dual reporter tool for the CyanoGate MoClo Assembly system that provides a normalized quantification of terminator efficiency within and between species. The pDUOTK1-L1 vector is compatible with several available libraries and thus facilitates easy adoption and sharing of parts with the community (Andreou and Nakayama, 2018; Lai et al., 2018; Valenzuela-Ortega and French, 2019; Vasudevan et al., 2019), and is accessible to any lab currently using Golden Gate cloning. The robustness of our system was validated by comparing results in E. coli against previously published data (Chen et al., 2013). The pDUOTK1-L1 vector contains the broad host range replicative origin RSF1010, which has been shown to be functional in a wide diversity of prokaryotic species, including cyanobacteria from all five subsections (Mermet-Bouvier et al., 1993; Stucken et al., 2012; Bishé et al., 2019). Thus, pDUOTK1-L1 could help to make terminator characterization more accessible as promising new strains are discovered (Włodarczyk et al., 2019; Jaiswal et al., 2020; Nies et al., 2020).
To the best of our knowledge, this is the first study to compare the efficiencies of terminators between two different cyanobacterial species. We identified five strong terminators with consistent TE values in E. coli, PCC 6803 and UTEX 2973. These findings should help to inform future strategies for building gene expression systems or more advanced gene circuit designs. Besides the double terminator T rrnB, no unique features could be identified for any of the five strong terminators that behaved consistently between all three species (i.e., the hairpin loop length and GC content, and the adenine and uracil content of the A-tract and U-tract, respectively). Overall, our results showed that terminator performance was highly reproducible at different growth points for the same strain, but generally differed between the three species examined, and significant differences were observed between PCC 6803 and UTEX 2973 even though both are subsection I species (Castenholz et al., 2001). We also demonstrated that the performance of the seven strongest terminators was consistent in different growth conditions for PCC 6803 and UTEX 2973. Cyanobacterial RNAPs do differ in structure compared to other bacterial RNAPs [for a recent review see Stensjö et al. (2018)]. In addition, RNAP subunits also differ between cyanobacterial species [for a recent review see Srivastava et al. (2020)]. For example, the primary vegetative sigma factor (SigA) in PCC 6803 (slr0653) and UTEX 2973 (WP_071818124.1) have a shared identity and similarity of only 70.5 and 74.1%, respectively (Supplementary Figure S7). Furthermore, cyanobacteria lack transcription elongation factors commonly found in heterotrophic bacteria to restart elongation and for proofreading of transcripts. To compensate, cyanobacterial RNAPs have evolved additional proof-reading and elongation functionalities (Riaz-Bradley et al., 2020). These differences may account for the observed disparity in terminator performance between E.
coli and cyanobacteria. However, the differences between PCC 6803 and UTEX 2973 were intriguing, and could suggest that RNAP activities differ between cyanobacterial species and/or that other unknown factors are involved. Several methods and prediction tools exist for the identification and mapping of intrinsic terminators in different species (Carafa et al., 1990; de Hoon et al., 2005; Gardner et al., 2011; Naville et al., 2011; Fritsch et al., 2015; Millman et al., 2017). Traditionally, these approaches have relied on identifying sequence features associated with intrinsic terminators (e.g., the hairpin loop). Previous studies have suggested a relationship between terminator performance and the estimated Gibbs free energy of the extended hairpin (ΔG_A), the U-tract (ΔG_U) and, to a lesser extent, the hairpin loop (ΔG_H) (Cambray et al., 2013; Chen et al., 2013). In our study, we did not find a strong correlation between TE values and ΔG_A, ΔG_H, or the estimated Gibbs free energy of the complete terminator sequence (Supplementary Figure S6). Although our terminator library was relatively small, the differences in terminator behavior within and between species indicated that there may be more factors involved in determining intrinsic termination than can be attributed to the properties of individual structural components. For example, the U-tract appears dispensable for intrinsic termination in mycobacteria (Ahmad et al., 2020). Cutting-edge approaches utilizing RNA-seq methods have also been applied for the identification of previously unknown terminators in the E. coli genome, which go beyond what has been achieved with previous structural identification models (Ju et al., 2019). In addition, recent work has shown that terminator sequences can be designed as tunable control elements that can be "turned on" to attenuate gene transcription at low temperatures (Roßmanith et al., 2018).
With the growing evidence that the structural components of terminators may be malleable depending on species, future work should focus on understanding the combined contributions of terminator components, including those beyond transcriptional control (e.g., modulation of protein expression) for metabolic engineering (Curran et al., 2013; Ito et al., 2020). This may lead to better designs for strong synthetic terminators with consistent cross-species performance. As terminator research and cyanobacterial synthetic biology progresses, tools such as pDUOTK1-L1 will be useful for reliable and convenient determination of terminator efficiency across a broad host range.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

AUTHOR CONTRIBUTIONS

GG and AM: conceptualization and writing - original draft preparation. GG: performing the experiments. BW and AM: supervision. All authors: experimental design and writing - review and editing.

FUNDING

GG acknowledges funding support from the BBSRC EASTBIO CASE Ph.D. programme (BB/M010996/1). AM acknowledges funding from the UK Biotechnology and Biological Sciences Research Council (BBSRC) grant (BB/S020128/1). BW acknowledges funding support by the UK Research and Innovation Future Leaders Fellowship (MR/S018875/1) and the Leverhulme Trust research project grant (RPG-2020-241).

ACKNOWLEDGMENTS

Flow cytometry data were generated within the Flow Cytometry and Cell Sorting Facility in Ashworth, King's Buildings at the University of Edinburgh. The facility was supported by funding from Wellcome and the University of Edinburgh.
The geometry of evolved community matrix spectra

Random matrix theory has been applied to food web stability for decades, implying elliptical eigenvalue spectra and that large food webs should be unstable. Here we allow feasible food webs to self-assemble within an evolutionary process, using simple Lotka-Volterra equations and several elementary interaction types. We show that, as complex food webs evolve under 10^5 invasion attempts, the community matrix spectra become bi-modal, rather than falling onto elliptical geometries. Our results raise questions as to the applicability of random matrix theory to the analysis of food web steady states.

[Figure S1: Vulnerability of resident species in three types of food webs. a, Cumulative probability distribution of residence times. b, Cumulative probability distribution of extinction event size.]

Residence times (Fig. S1a) and fractional extinction event sizes (Fig. S1b) of three different food web types. Residence times in all three food webs fall off approximately like ∝ exp(−b t^(1/c)), though less accurately for large t. For the tree-like food web we observe b ≈ 11 and c ≈ 9, whereas for the two food webs with network loops we observe b ≈ 12.5 and c ≈ 8.

S3 Analytical spectrum of food webs with two species

The only feasible food web with two species is that of one producer and one species consuming the producer. The steady states of this food web are given by Eq. (S1) for the producer and consumer, respectively. Here β = β_21 and η = η_21. Inserting this in Eq. (6) and diagonalising yields the eigenvalues λ±. The food web is stable if the real parts of all eigenvalues are negative. Re(λ−) is always negative.
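The stability claim can be checked numerically for any concrete two-species model: build the community (Jacobian) matrix at the interior steady state and inspect the eigenvalue real parts. Since the paper's Eqs. (1), (6) and (S1) are not reproduced in this excerpt, the sketch below uses a generic logistic producer-consumer Lotka-Volterra model as a stand-in, so the parameter names (r, K, a, e, m) are ours rather than the paper's (k, α, β, η):

```python
import cmath

def interior_steady_state(r, K, a, e, m):
    """Nullcline intersection of dS1/dt = S1*(r*(1 - S1/K) - a*S2)
    and dS2/dt = S2*(e*a*S1 - m) (assumed stand-in model)."""
    s1 = m / (e * a)                      # consumer nullcline
    s2 = r * (1.0 - s1 / K) / a           # producer nullcline
    return s1, s2

def community_matrix_eigenvalues(r, K, a, e, m):
    """Eigenvalues of the 2x2 Jacobian evaluated at the interior steady state."""
    s1, s2 = interior_steady_state(r, K, a, e, m)
    j11, j12 = -r * s1 / K, -a * s1       # producer row
    j21, j22 = e * a * s2, 0.0            # consumer row
    tr, det = j11 + j22, j11 * j22 - j12 * j21
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0
```

For any feasible parameter set (both steady states positive) this model gives tr < 0 and det > 0, the textbook 2x2 stability condition, so both eigenvalue real parts come out negative, matching the statement that a feasible two-species web is always stable.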
Re(λ+) is only non-negative if the square root is real and equal to or greater than kα_2/(2βη). After some basic algebraic operations this criterion reduces to the restriction on the species parameters given in Eq. (S3). Eq. (S3) is further restricted by feasibility of the food web, which requires the steady states of Eq. (S1) to be positive. S*_1 is always positive, but S*_2 is only positive if βη > α_2/(1 − α_1/k). Inserting this in Eq. (S3) now yields a condition that can never be satisfied, and consequently a feasible food web with two species will always be stable. The eigenvalues are purely real if the argument of the square root in Eq. (S3) is greater than or equal to zero. Again using some basic algebra, this criterion transforms to a second-order polynomial in βη, which is zero at the two roots (βη)± and negative in the interval between them. (βη)− is always negative, since kγ > 0 for all α_1 < k. This root is not physically meaningful. Accordingly, the eigenvalues are purely real if and only if βη ≤ (βη)+.

S4 Omnivorous eigenvalue spectra of species richness 2-10

[Figure S3: Complex eigenvalue spectra of evolved omnivorous food webs with β = 0.75. Each panel represents the two-dimensional histogram in the complex plane. Species richness as labelled in panels. Note that the colour scale is logarithmic, with green marking the areas with largest likelihood of eigenvalues. Left row corresponds to the omnivorous food webs in Fig. 3.] As discussed in the main text, changing the invasion mechanics does not affect the spectrum notably.

[Table S2: The first three raw moments of the binomial distribution, n_i ∼ B(N, p_i). Adapted from [2].]

S5 Deriving the connectivity of omnivorous food webs

In this section we derive Eq. (4), that is, the connectivity of omnivorous food webs as a function of the number of species in the food web, N, given that no extinctions occur.
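The expectation derived in this section can be cross-checked by simulating the assembly rule directly (each invader is, with probability 1/3 each, a producer, a one-resource consumer, or a two-resource consumer; producers link to every resident producer). This is our own sketch of the bookkeeping, not the authors' code:

```python
import random

def simulate_connectivity(N, trials=20000, seed=1):
    """Monte-Carlo estimate of the average connectivity 2L / (N(N-1)) over
    random assembly histories with no extinctions. The web starts from a
    single producer; each later invader is (with probability 1/3 each) a
    producer, a one-resource consumer, or a two-resource consumer."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        n_p, links = 1, 0
        for i in range(2, N + 1):       # species i invades a web of i-1 residents
            kind = rng.randrange(3)
            if kind == 0:               # producer: links to every resident producer
                links += n_p
                n_p += 1
            elif kind == 1:             # consumer with one resource
                links += 1
            else:                       # consumer with two resources; the second
                links += min(2, i - 1)  # species can only ever find one resource
        total += 2 * links / (N * (N - 1))
    return total / trials
```

For N = 2 every history has exactly one link, so the connectivity is 1 regardless of the invader type, which matches the special handling of the second species in the derivation below.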
The connectivity of a food web is the total number of interaction links in the food web, multiplied by two (since every interaction appears twice in the community matrix), and divided by the total number of off-diagonal elements, N(N − 1). Here, we disregard the diagonal of the community matrix, since it represents self-regulation and is set to −d in the random matrix [1]. There are three types of species in an omnivorous food web in terms of number of interaction links: primary producers, with links to all other primary producers in the food web, consumers with one resource (1 link), and consumers with two resources (2 links). In addition, any of the three can be the resource of another species. To obtain the average number of interaction links stemming from each species type, we need to consider all possible configurations of the food web. Starting with the primary producers, we compute the average number of producer links, L_p (Eq. (S7)). Here, n_p denotes the number of producers in the food web, and n_p(n_p − 1) is the number of producer links in a given food web. Since all food webs start with a producer, the position of the first producer is fixed and only n_p − 1 producers can be rearranged among N − 1 species. For the same reason, the probability that an invasive species is a primary producer, 1/3, is raised only to the power of n_p − 1. 1 − 1/3 = 2/3 is the probability that an invasive species is not a producer, i.e. is a consumer with one or two resources. We also need to distinguish the consumer types when we consider the different configurations of food webs with N > 2. However, this is already accounted for by the probability 2/3. The probability of a specific configuration of consumers is (1/3)^(N−n_p). The fact that any consumer can be replaced by the other consumer type without affecting the configuration of producers adds a factor 2^(N−n_p). We can rewrite Eq.
(S7) in closed form using the raw moments of n_p from Tab. S2. Then we compute the average number of links of the consumer species, following mostly the same pattern as Eq. (S7). First we consider consumers of one resource, L_c1. Because the probability that the second species in the food web is a one-resource consumer is twice the probability that any other species is a one-resource consumer, we treat the two cases separately in Eq. (S10). The first sum represents the case where the second species to be added to the food web is a consumer of one resource, whereas the second sum represents the case where the second species is a producer. The factor in front of each sum is the probability of each case, respectively. In both sums we sum over 2n_c1, since each one-resource consumer contributes one interaction and every interaction appears twice in the community matrix. In the first sum the upper limit is N − 1. Here, the position of one of the one-resource consumers is fixed; hence only n_c1 − 1 one-resource consumers can be rearranged among N − 2 species. In the second sum the upper limit is N − 2, because the first two species in the food web are producers, and therefore n_c1 one-resource consumers can be rearranged among N − 2 species. 1/3 is the probability that an invasive species is a one-resource consumer in food webs of N ≥ 2, and is therefore raised to n_c1 − 1 in the first sum and n_c1 in the second. As in Eq. (S7), we do not need to explicitly consider the different configurations of producers and two-resource consumers. We rewrite Eq. (S10) as Eq. (S12), where we have used Tab. S2 to replace the sums. Lastly, we compute the average number of links stemming from the consumers of two resources, L_c2, where the sum only goes to N − 2 since two-resource consumers are only added to food webs of N ≥ 2. Finally, we combine Eq. (S9) and Eqs.
(S12)-(S13), and divide by N(N − 1) to obtain the connectivity.

S6 The effect of β on eigenvalue spectra

There is a peak around 0 on the imaginary axis, as expected, since most spectra contain a significant fraction of purely real eigenvalues. As β increases, so does the width of the imaginary distribution, whereas the overall shape remains the same.

nTotal is a global variable that keeps track of the total number of species in the food web. addAttempt is the iteration during which a given species invaded the food web. density, decay and level represent S_i, α_i and l_i, respectively. resources and consumers contain the interaction parameters w.r.t. the species' resources and consumers, respectively. That is, resources contains β_i1 η_i1, β_i2 η_i2, ..., β_in η_in, and consumers contains η_1i, η_2i, ..., η_ni. All η_ij and η_ji representing interactions that are not present in the food web are set to zero. isProducer is a Boolean variable that is true if the given species is a primary producer and false otherwise. Lastly, the class Species contains the function computeDerivative(), which computes the derivative of the given species according to Eq. (2). The class Producer contains the additional global variable nProducer, which is the total number of primary producers in the food web. Furthermore, growth represents k_i, and computeDerivative() computes the derivative according to Eq. (1). The class FoodWeb, shown in Fig. S6b, is mainly created to keep track of indices during the integration. feasible and stable are true if the food web during a given invasion is feasible or linearly stable, respectively, and false otherwise. prevIteration is an integer that describes how the food web behaved after the previous invasion, and is used to optimise the detection of non-convergent food webs. If the food web converged to the steady states after the previous invasion, prevIteration is set to one.
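The class layout described above can be sketched in Python as follows. This is a structural illustration only: the derivative bodies are generic Lotka-Volterra-style placeholders, not the paper's exact Eqs. (1)-(2), and the attribute types are our assumptions.

```python
class Species:
    """Consumer species; structure mirrors the description above.
    The derivative is a placeholder, not the paper's exact Eq. (2)."""
    nTotal = 0  # global counter of species in the food web

    def __init__(self, addAttempt, density, decay, level):
        self.addAttempt = addAttempt   # invasion iteration
        self.density = density         # S_i
        self.decay = decay             # alpha_i
        self.level = level             # trophic level l_i
        self.resources = {}            # j -> beta_ij * eta_ij
        self.consumers = {}            # j -> eta_ji
        self.isProducer = False
        Species.nTotal += 1

    def computeDerivative(self, densities):
        # gain from resources minus loss to consumers (placeholder form)
        gain = sum(be * densities[j] for j, be in self.resources.items())
        loss = sum(e * densities[j] for j, e in self.consumers.items())
        return self.density * (-self.decay + gain - loss)


class Producer(Species):
    """Primary producer with intrinsic growth rate k_i."""
    nProducer = 0

    def __init__(self, addAttempt, density, decay, level, growth):
        super().__init__(addAttempt, density, decay, level)
        self.growth = growth           # k_i
        self.isProducer = True
        Producer.nProducer += 1

    def computeDerivative(self, densities):
        # placeholder for Eq. (1): growth minus decay minus predation
        loss = sum(e * densities[j] for j, e in self.consumers.items())
        return self.density * (self.growth - self.decay - loss)
```

A FoodWeb class would then hold lists of these objects together with the bookkeeping flags (feasible, stable, prevIteration, prevExtinction) described in the text.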
If the food web did not reach the steady states within the time limit, prevIteration is set to two (linearly stable food webs) or nine (unstable food webs). Finally, prevExtinction represents the previous species to go extinct.

S8 Varying the sampling distributions of α and η

In the main text we draw decay and interaction rates from uniform distributions (see Tab. 1). Here, we run the simulation for omnivorous food webs, drawing α and η from Gaussian and exponential distributions, respectively.

S9 Holling type-II response

The simulation is extended to allow consumption rates following the Holling type-II functional response [3], Eq. (S14): r(η_ki, S_i) = η_ki S_i / (1 + h η_ki S_i), where S_i is the density of the resource species and η_ki is the link-specific interaction strength between the resource i and consumer k, as defined in Results and Discussion. The parameter h controls the significance of the type-II response. When h = 0, Eq. (S14) reduces to the type-I functional response used in Eqs. (1)-(2). With consumption rates following the Holling type-II functional response, the system of equations analogous to Eqs. (1)-(2) is no longer linear in S_i. Eq. (5) is therefore not applicable, and neither are the convergence criteria described in Materials and Methods. A simpler algorithm is therefore employed in order to compute the eigenvalue spectra here. The food web is now considered to be converged if it satisfies Eq. (S15), and the densities that satisfy Eq. (S15) are plugged in as S(t) in Eq. (6). The overall procedure is shown in Algorithm 2. Since the type-II response community matrix eigenvalues are computed only from the food webs that satisfy Eq. (S15), we only obtain the stable eigenvalues. Starting from h = 0, we run the simulation for h = 0.1, 0.2 and 0.5. The effect on the consumption rate can be seen in Fig. S9 for two values of η.

Figure S10: Complex eigenvalue spectra of evolved omnivorous type-II response food webs with h = 0.5.
Each panel represents the two-dimensional histogram in the complex plane. Species richness as labelled in panels, with β = 0.75.

At low interaction strengths the non-linear effects are negligible for most h, whereas for high interaction strengths the non-linearity is pronounced for all h. As we increase h, the average species richness decreases, and for h = 0.5 there are no convergent food webs of species richness higher than 7, though there are non-convergent food webs of up to 9 species. Fig. S10 shows the stable part of the eigenvalue spectra of food webs with species richness 2-7. The type-II spectra resemble the type-I spectra from Fig. S3, but are more skewed towards positive real values. It seems intuitive that interactions following the type-II response would act as a dampening, thereby stabilising the food webs and allowing for food webs of higher species richness. To get some insight into why this is seemingly not the case, we study the spectrum of a food web with two species, analogous to Sec. S3. The steady states of this food web are similar to the steady states of the type-I response, with β′ replacing β. From these we see that feasibility now requires the condition of Eq. (S18). Since β′ ≤ β for all h and α2, a higher η is required to satisfy the feasibility criterion of Eq. (S18), compared with that of the type-I response. Then we study the stability of this food web. First we compute the community matrix according to Eq. (6). Since C22 = 0 and C12 C21 < 0, the quadratic formula for the eigenvalues reduces to λ± = (C11 ± √(C11² + 4 C12 C21))/2, and we see that both eigenvalues are always stable when C11 < 0, because the square root is always smaller than or equal to |C11|. After some algebra we find the stability criterion, Eq. (S24). The RHS is always larger than 1 (for h ≤ 1), which is the upper limit on η. Food webs of species richness 2 are therefore also always stable when we introduce the Holling type-II response, given that h ≤ 1. However, they might be unstable for h > 1. We only use h < 1 in our simulations, and Eq.
(S24) is always satisfied with our choice of parameters. Yet the upper limit on η decreases with h, meaning the system somehow becomes "less stable" with h. It therefore seems reasonable that larger food webs with type-II response can indeed be less stable than their type-I counterparts.

Figure S11: Distribution of eigenvalues along the real axis.

Lastly, we plot the distributions of type-II eigenvalues along the real axis in Fig. S11. Despite a low number of eigenvalues from the simulation with h = 0.5, all distributions appear to follow approximately the same bi-modal form as in the case of h = 0. An exception is the pronounced peak near the lower limit for species richness 2 and h = 0.2. The origin of this peak is not investigated further.

S10 Matrix elements of the community matrix

From Eq. (6) we have the following mathematical expression for the community matrix: C_ij ≡ ∂Ṡ_i(S(t))/∂S_j evaluated at S = S*. Here S_m, m ∈ {1, . . . , n}, represents a resource species of S_k, and S_p, p ∈ {n_1 + 1, . . . , n}, represents a consumer species of S_k. Since Eq. (2) is linear in S_k, we have that C_kk = 0 for all k ∈ {n_1 + 1, . . . , n}. All matrix elements representing interactions that are absent in the present food web yield 0.
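The type-II consumption rate of Sec. S9 is simple to check numerically. The sketch below assumes the standard Holling form r = ηS/(1 + hηS), consistent with the stated reduction to the type-I rate at h = 0; it is our own illustration, not the paper's code.

```python
def consumption_rate(eta, s, h):
    """Holling type-II consumption rate: r = eta*S / (1 + h*eta*S).
    h = 0 recovers the linear type-I rate eta*S; larger h saturates
    the rate at high resource densities."""
    return eta * s / (1.0 + h * eta * s)

# type-I limit
print(consumption_rate(0.8, 2.0, 0.0))   # -> 1.6
# saturation for h > 0: the rate is reduced at high densities
print(consumption_rate(0.8, 2.0, 0.5))   # ≈ 0.889
```

The second call illustrates why a higher η is needed for feasibility under type-II: the realised consumption rate is strictly below its type-I value whenever h > 0.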
LRPL-VIO: A Lightweight and Robust Visual–Inertial Odometry with Point and Line Features

Visual–inertial odometry (VIO) algorithms that fuse various features, such as points and lines, are able to improve their performance in challenging scenes, but their running time increases severely. In this paper, we propose a novel lightweight point–line visual–inertial odometry algorithm, called LRPL-VIO, to solve this problem. Firstly, a fast line matching method is proposed based on the assumption that the photometric values of endpoints and midpoints are invariant between consecutive frames, which greatly reduces the time consumption of the front end. Then, an efficient filter-based state estimation framework is designed to fuse point, line, and inertial information. Fresh measurements of line features with good tracking quality are selected for state estimation using a unique feature selection scheme, which improves the efficiency of the proposed algorithm. Finally, validation experiments are conducted on public datasets and in real-world tests to evaluate the performance of LRPL-VIO, and the results show that it outperforms other state-of-the-art algorithms, especially in terms of speed and robustness.

Introduction

State estimation is crucial for unmanned mobile platforms, especially when operating in GPS-denied areas. Simultaneous localization and mapping (SLAM) algorithms have the ability to provide real-time pose estimation and build consistent maps; thus, they are a crucial technique for robots, self-driving cars, and augmented reality (AR) devices [1]. Pure visual SLAM algorithms [2][3][4], which use cameras as the sole sensor, are lightweight and low-cost and have gained popularity over the past decade. However, they lack robustness because of their sensitivity to illumination changes and motion blur.
Many researchers have found that combining a camera with an inertial measurement unit (IMU) offers complementary advantages [5]. IMUs output high-frequency, biased inertial measurements, while cameras produce images with rich information. Based on this, numerous visual–inertial odometry and SLAM systems have been designed to obtain accurate and robust pose estimation. According to the estimation strategy, they can be divided into two categories: optimization-based methods and filter-based methods. The former, such as OKVIS [6] and VINS-Mono [7], construct a factor graph with visual re-projection errors and IMU pre-integration errors to optimize poses and feature landmarks. The computational load is managed using a sliding window and marginalization to achieve real-time performance. The latter, such as MSCKF [8] and HybVIO [9], hold a state vector which consists of body states (position, speed, orientation, and inertial biases) and a fixed number of history poses. State propagation is finished on the basis of the IMU kinematic model, and the visual update provides multi-frame constraints to produce an accurate trajectory. However, the aforementioned algorithms rely solely on points for visual constraints, which can lead to divergence or failure in low-texture environments.
As line features are abundant in human-made environments, more and more VIO frameworks fuse both points and lines to improve their performance. PL-VIO [10] is the first optimization-based point–line visual–inertial odometry framework. Points, lines, and IMU pre-integration terms are integrated into the optimization window to recover trajectories and scene appearances. Hence, it can outperform its predecessor VINS-Mono in some large, difficult environments, at a severe cost in running time. To speed up the processing of line features, the effect of the hidden parameters in the LSD algorithm [11] was studied in PL-VINS [12]. The authors modified a proper set of parameters to balance the speed and quality of line feature extraction in the original LSD for pose estimation tasks. In this way, PL-VINS is capable of outputting estimated poses in real time. FPL-VIO [13] applied two methods to make the front end lightweight. It uses a fast line detection algorithm, FLD [14], instead of LSD to extract line features and BRIEF descriptors [15] of midpoints to perform line matching, which greatly reduces the running time of the front end. The authors in [16] presented a similar solution, choosing EDlines [17] with gamma correction for rapid detection of long line features. They tracked a certain number of points on each line, instead of the entire segment, using the sparse KLT algorithm for line matching. As a result, the time consumed by line features in the front end is reduced. However, the back end of these optimization-based methods is still a heavy module because of the repeated linearization of the visual and inertial error terms, which becomes worse after fusing both point and line features [10].
Since filter-based methods avoid the re-linearization, they are considered to be more efficient [5]. Trifo-VIO [18] is a stereo point–line VIO algorithm based on MSCKF. After state propagation, both point and line features are used for the visual update. However, the line features are parameterized using a 3D point and a normal vector in this system, which is an over-parameterized representation because a space line has only four degrees of freedom. Another MSCKF-with-lines framework is proposed in [19]. This system adopts the closest point method to represent line features and shows a good performance in real-world experiments. However, its front end uses LBD [20] to match line features; thus, its real-time performance is severely limited. A hybrid point–line MSCKF algorithm is proposed in [21]. Based on the sparse KLT algorithm, it tracks sampled points on the line between three consecutive frames in a predicting-matching way; thus, a new line can be recovered if the original one is lost. However, extra memory and operations are required in the hybrid framework, since line feature landmarks are preserved in the state vector.

Most SLAM and odometry algorithms run on small-sized devices with limited available resources. How to provide accurate and high-frequency pose estimation with low computational consumption for multiple-feature frameworks is still an open problem. To solve this, we propose a novel lightweight point–line visual–inertial odometry algorithm which can robustly track the poses of moving platforms. The main contributions of this paper are as follows: • A novel filter-based point–line VIO framework with a unique feature selection scheme is proposed to produce high-frequency and accurate pose estimation results. The whole system is fast, robust, and accurate enough to work in complex environments with challenges such as weak texture and motion blur.
• A fast line matching method is proposed in order to reduce the running time of the front end. The lines are matched using an endpoint-midpoint tracking approach and a complete prediction-tracking-rejection scheme, which ensures the matching quality at high speed. • Validation experiments on public datasets and in real-world tests are conducted to evaluate the proposed LRPL-VIO. The results prove the better performance of LRPL-VIO compared with other state-of-the-art systems (HybVIO [9], VINS-Mono [7], PL-VIO [10], and PL-VINS [12]), especially in terms of speed and robustness.

The rest of this paper is organized as follows. Section 2 describes our filter-based point–line VIO system. The proposed fast line matching method is detailed in Section 3. The experiment results are explained and presented in Section 4. Finally, conclusions and future work are discussed in Section 5.

Filter-Based Point-Line Visual-Inertial Odometry

While point-only visual–inertial odometry algorithms can produce accurate pose estimations in environments with constant illumination and rich texture, they often struggle, tending to diverge or fail, in more challenging scenes. Fusing multiple features is a good solution, but the whole system becomes heavy. In this paper, we design a lightweight and efficient point–line VIO system based on HybVIO [9] to tackle this issue. The working flowchart of LRPL-VIO is shown in Figure 1.

State Definition

Similar to most filters derived from MSCKF [8], the state vector in our system consists of the body states and a window of past poses. At timestamp k, the state vector is constructed so that p_k and q_k denote the current pose of the body, v_k is the velocity, and a further component is a vector related to the inertial biases. Only the diagonal elements of T_a,k are used for the multiplicative correction of the accelerometer. τ_k represents the IMU-camera time shift. A fixed-length window holds n_a poses of past moments.
Filter Propagation

The states are initialized as m_1|1 after obtaining the initial orientation q_0 from the first inertial measurement. The initial covariance matrix P_1|1 is a diagonal matrix. The system is propagated using each subsequent inertial measurement in the prediction steps of the core filter, where ε_k ∼ N(0, Q_k) is the Gaussian process noise. This propagation is finished in discrete time by a mechanization equation [22], where ∆t_k is the current time increment. The biased inputs of the gyroscope and the accelerometer are modeled with Gaussian noises, and g is the gravity vector. The rotation process represented by the quaternion is q_k(·)q*_k, and the quaternion is updated by the function Ω : R³ → R⁴ˣ⁴ [23]. The bias vector is propagated by a process modeled as Ornstein-Uhlenbeck random walks [24] to better match the characteristics of the IMU sensor.
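An Ornstein-Uhlenbeck bias step can be sketched as below. This is a generic exact discretization of a zero-mean OU process with a hypothetical parameterisation (correlation time tau, stationary standard deviation sigma); the paper's exact noise parameters may differ.

```python
import math
import random

def ou_step(b, dt, tau, sigma):
    """One exact discrete-time step of a zero-mean Ornstein-Uhlenbeck
    process, a common model for slowly varying IMU biases.

    b: current bias value; dt: time increment;
    tau: correlation time; sigma: stationary standard deviation.
    """
    phi = math.exp(-dt / tau)                      # deterministic decay
    noise_std = sigma * math.sqrt(1.0 - phi * phi) # keeps variance stationary
    return phi * b + noise_std * random.gauss(0.0, 1.0)
```

With sigma = 0 the step reduces to pure exponential decay toward zero, which is the mean-reverting behaviour that distinguishes an OU bias model from a plain random walk.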
Feature Selection In addition to feature detection and matching, visual update in filter-based VIO methods is another time-consuming module.Paying more attention to the most informative features is an efficient way of decreasing computational load.Another novelty of the proposed LRPL-VIO is that we do not use all the tracked features (both points and lines) but a subset of them to perform visual updates. For a visual feature j, its whole track is a set of pose indices i = i j min , . . ., i j max where i j min denotes its first detection frame and i j max denotes its last tracked frame.As the system moves, old poses are abandoned; thus, the oldest pose in the window denoted as b(i) may not be i j min anymore.We use b(i, j) = max(i j min , b(i)) to represent the oldest tracked frame in the window.Not all the measurements but a subset of them are used for triangulation and linearization: where i ′ < i is the newest frame used in the last update.In a word, we always choose the freshest information for efficiency. For a new received frame, we also select a subset of all available visual feature tracks (denoted as U(i)) to perform visual update at random from more-than-median ones where the implementation of L(i, j) are different for points and lines in LRPL-VIO.For points, they are evaluated by the tracking length: where y j is the pixel coordinate.For lines, they are less sensitive to tracking length change than points.Thus, we use the frame number as the scoring policy: which ensures the update accuracy even using a small number of line features. Feature Triangulation and Update The visual update is triggered track by track until the target number is reached: with where ξ S (x, denotes the triangulated landmark using its tracked feature measurements y j S .r(•) is the re-projection process and d(•) is the error calculation. 
Point Feature The point error is the difference between the re-projected landmark and tracked measurements: where the point triangulation is the minimization process of the re-projection error using the GN method.Since the Jacobian of p S with respect to x is available after the initial value is provided by a two-frame triangulation, the whole optimization process of Equation ( 15) needs to be differentiated to render the direct linearization of Equation ( 14) which avoids the null space projection motion and can be used for visual update. Line Feature The line error is defined as the distance between the endpoints of tracked measurements and the re-projected line: where l = [l 1 , l 2 , l 3 ] is the re-projected line.For a space line representation, the Plücker coordinate T is used in our system.On the basis of two camera poses (p (1) j ) and their corresponding measurements (e s,j , e e,j , e s,j , e e,j ), we can obtain the dual Plücker matrix of a line feature [30] as where π = (e w s,j − p j ) × (e w e,j − p j ), −p j (e w s,j × e w e,j ) are the measurement plane determined by two endpoints and the camera optical center.Triangulation depending on just two frames is not reliable enough; thus, we introduce a n-views method proposed in [31].Specifically, for n L measurements of a line L, we stack all relevant planes: and perform singular value decomposition of Equation ( 21) as svd(W) = [s, d, v].We can obtain two main planes π 1 and π 2 from the columns of v by checking two largest singular values.We use Equation ( 19) to obtain the initial value of L if the singular values are reasonable and perform a nonlinear optimization to further improve the accuracy of this triangulation.Based on the above methods, the linearization of Equation ( 17) is performed as and the null space projection motion [19] is unavoidable for visual update because the feature positions are not maintained in the state vector. 
Pose Augmentation and Stationary Detection

Every time a new camera frame is received, its predicted pose is inserted into the window and an old pose is removed. This process is performed as an EKF prediction step. The adjustment of d can be treated as an efficient strategy, and we follow [9] in combining a fixed-size FIFO of length n with a Towers-of-Hanoi scheme, where LSB(i) is the least-significant zero bit index of i. The maximum stride of the poses is then exponentially increased, and the update times of the old and new poses are properly set to different frequencies. When the moving platform stays still, the poses in the window quickly become the same due to Equation (23), which makes the VIO unstable. Thus, an unaugmentation step is performed if a stationary signal is received, which pops the newly inserted frame and keeps most of the old poses. We judge the stationary condition by the maximum pixel change of the tracked point features, where m_min is a fixed threshold. A ZUPT of the velocity [32] is also performed to correct the pose estimation results.
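The least-significant-zero-bit index used by the Towers-of-Hanoi scheme can be sketched as below (an illustrative helper, not the paper's code). Its output pattern is what makes the stride between kept poses grow exponentially.

```python
def lsb_zero(i):
    """Index of the least-significant zero bit of i, used by the
    Towers-of-Hanoi scheme to pick which stored pose slot to recycle."""
    idx = 0
    while i & 1:
        i >>= 1
        idx += 1
    return idx

print([lsb_zero(i) for i in range(8)])  # [0, 1, 0, 2, 0, 1, 0, 3]
```

Low indices recur every other frame (recent poses updated often), while high indices appear exponentially rarely (old poses kept for long baselines).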
Prediction: To counteract aggressive motions, inertial measurements between two camera frames are used to determine the initial positions of the points for tracking.Specifically, for two consecutive frames, I 1 and I 2 , a point transformation between them is: where v 1 and v 2 are pixel coordinates of the same point in these frames.λ 1 and λ 2 are the corresponding depth measurements.K is the intrinsic matrix which is considered as a static variable.The pose between I 1 and I 2 is represented by R 21 and t 21 .By taking the assumption that the translation t 21 between two consecutive frames is small enough to be ignored, λ 1 and λ 2 can be removed from Equation (28).Thus, a simplified version is: We obtain the rotation R 21 through gyroscope measurements integration and then the predicted positions of the points using Equation (29). Tracking: After the above stages, the line matching task becomes the tracking of the points, which is finished based on the photometric invariance assumption in LRPL-VIO.Take a single line endpoint as an example.With its original pixel coordinate (x, y) in I 1 , our idea is to find the target pixel coordinate (x + dx, y + dy) in I 2 to satisfy Equation (30): where I i (a, b) is the photometric value of the pixel (a, b) in I i .Apparently we can not obtain (dx, dy) using one equation; thus, another assumption that the movements of all pixels in a local window are the same is applied.That is, we have for all w pixels in the window.To solve Equation (31) Equation ( 32) is a typical least squares problem and can be solved in an iterative way with the initial values provided by Equation ( 29).In addition, the image pyramids are introduced to improve the tracking quality. 
Outlier Rejection: Once the points of a line feature are tracked, we first check the average photometric values of the two endpoints. In other words, an endpoint track is considered an inlier if the average photometric difference is below a threshold ε_I (Equation (34)). However, Equation (34) is not enough to reject outliers when there is a large repeated-texture area in the image. For this reason, an angle variation check is also performed if both endpoints pass Equation (34). Namely, if a line matching pair [(s_i, e_i), (s′_i, e′_i)] satisfies Equation (35), where θ_i and θ′_i are the angles of the line in consecutive frames, (s′_i, e′_i) is seen as a candidate line. Generally, endpoints have the potential to move out of view or fail to be tracked. Hence, after obtaining the first batch of candidate lines by checking endpoints, we take the tracked midpoints as new endpoints of the line features which failed to pass the above tests. For example, if [(s_i, e_i), (s′_i, e′_i)] is not an acceptable tracking result, it is replaced by a pair that uses the tracked midpoint as a new endpoint. Certainly, the replaced line pairs have to satisfy both Equations (34) and (35). This scheme is able to improve the tracking length of line features with no additional sampled points. Finally, an 8-point RANSAC is performed to further reject outliers among these candidates.

Matching: After all this, we build the matched line features by connecting the reserved endpoints and remove short ones, which are useless for pose estimation.

Dataset and Evaluation

To validate the necessity of fusing point-line features and the performance of our LRPL-VIO in different scenes, we conduct various experiments on three public academic datasets (EuRoC [33], UMA-VI [34], and VIODE [35]) and a collected real-world dataset. Four state-of-the-art algorithms (the point-based VINS-Mono [7] and HybVIO [9], and the point–line-based PL-VIO [10] and PL-VINS [12]) are selected for comparison.
For the evaluation criteria, we choose the root mean square error (RMSE) of the absolute trajectory error (ATE) to test the estimation accuracy of the different algorithms. For EuRoC, VIODE, and our collected dataset, which provide groundtruth poses during the whole running process, we use the evo [36] toolbox to compute the RMSE ATE between the whole estimated trajectory and the groundtruth poses. For the UMA-VI dataset, whose groundtruth poses are available only at the start and end segments of the whole running process, we use its Python tool to compute the RMSE ATE between these segments of the estimated trajectory and the groundtruth poses (the alignment error [34,37]). We report the average value over five runs.

A desktop computer with an Intel Core i7-9750H processor @2.60 GHz and 15.5 GB RAM is used as the main experiment platform, running Ubuntu 18.04 with ROS Melodic.

Accuracy

In this subsection, we conduct an accuracy experiment on the EuRoC [33] dataset. It was recorded by a micro aerial vehicle (MAV) in three different indoor scenes. The sequences in each scene are divided into three modes (easy, medium, and difficult) according to the image quality and MAV motion speed. The results are shown as follows.
Ablation Experiment

In order to validate the effectiveness of our LRPL-VIO with point–line fusion, the fast front end, and feature track selection, we first conduct an ablation experiment on five sequences of the EuRoC dataset, including MH_02_easy, MH_03_medium, MH_05_difficult, V1_03_difficult, and V2_02_medium. We replace the fast line matching method with the PL-VINS LBD matching module in our system (denoted as LRPL-VIO (LBD)) for matching comparison, and the line feature selection module is disabled (denoted as LRPL-VIO (All Line Track)) to prove its necessity. The results are shown in Table 1. First, it can be seen from Table 1 that the point–line fusion strategy brings more visual constraints for the VIO system; thus, LRPL-VIO produces more accurate trajectories than the point-only HybVIO (an 11% improvement on average). Second, the proposed fast line matching method finishes line matching more efficiently than LBD, with higher matching quality (LRPL-VIO obtains a lower RMSE ATE than LRPL-VIO (LBD) on all five sequences) and less running time (see Table 6). Finally, the feature track selection scheme avoids using all the tracked line features and their updated measurements; thus, the pose estimation accuracy is guaranteed (a 2% improvement on average) even when using a small number of features (at most 5 successful line updates for one frame in our implementation).

Accuracy Experiment

We use all 11 sequences of the EuRoC dataset to test the pose estimation accuracy of LRPL-VIO and compare it with four SOTA open-source algorithms. The results are shown in Table 2.
Compared with the two point-only methods, VINS-Mono and HybVIO, LRPL-VIO outperforms them on most sequences because of the successful point–line fusion. Using visual constraints from various features, visual–inertial navigation systems can perform pose estimation more accurately. The average RMSE of LRPL-VIO is more than 10% lower than theirs. With improved line matching quality using the proposed method and the feature selection scheme, line features are used in LRPL-VIO in a more efficient way. Thus, compared with the LBD-based PL-VIO and PL-VINS, we outperform them with a more than 7% lower average RMSE and less computational resource consumption (see Table 6).

Robustness

To further validate the robustness of the proposed LRPL-VIO, we select some challenging sequences from the following two datasets. The UMA-VI dataset [34] was recorded by a custom handheld visual–inertial sensor suite. The images, recorded in different scenes, are severely affected by many challenging factors, including low texture, illumination change, sun overexposure, and motion blur, which makes it a difficult dataset for VIO algorithms. The VIODE dataset [35] was recorded by a simulated unmanned aerial vehicle (UAV) in dynamic environments. The novelty of this dataset is that the UAV navigates the same path in four sub-sequences (none, low, mid, high) of each scene, and the only difference between them is the number of dynamic objects. The sequence features are listed in Table 3 and the results are shown in Table 4.
Table 3 lists the features of the selected challenging sequences. From Table 4, we can see that PL-VINS and LRPL-VIO perform successful pose estimation on all these challenging sequences. However, our method shows a better performance with a lower error on each sequence, which validates the better robustness of LRPL-VIO. We also provide the alignment error figures and heat maps of the estimated trajectories of PL-VINS and LRPL-VIO in Figure 2. For the alignment error figures, the smaller the translational error, the better the accuracy the VIO provides. For the heat maps, the difference between the estimated trajectory and the groundtruth poses is marked in different colors. On this basis, Figure 2 further validates the better robustness of LRPL-VIO over PL-VINS.

Real-World Performance

To test the performance of LRPL-VIO in real-world applications, we collected a custom dataset in a challenging indoor scene. A sensor suite with an Intel Realsense D455 camera (gray images, 30 Hz) and an Xsens MTi-680G IMU (inertial measurements, 200 Hz) is used as the collection platform. Two motion modes (normal and fast rotation) are applied to produce different evaluation sequences, which are shown in Figure 3a,b. The results are shown in Table 5. From Table 5, it can be seen that LRPL-VIO performs pose estimation more accurately than HybVIO in the experiments; the RMSE ATE of LRPL-VIO is 35.4% lower in Lab_Normal and 26.5% lower in Lab_FastRotation. Fusing various features brings more constraints; thus, the whole estimated trajectories of LRPL-VIO are closer to the groundtruth poses, which Figure 3c-j validates more intuitively.
Runtime

To evaluate the real-time performance of LRPL-VIO, we divide it into three main modules, including point processing (front end), line processing (front end), and VIO (back end), for convenience of comparison with PL-VIO and PL-VINS. The MH_04_difficult sequence of the EuRoC dataset is used for this test. The results are shown in Table 6. As shown in Table 6, the LBD matching and the heavy optimization back end are the most time-consuming modules of PL-VIO and PL-VINS. In contrast, the fast line matching method proposed in Section 3 brings our system high efficiency: the execution time of the line detection and tracking process of LRPL-VIO is much lower than theirs. In addition, our core pose estimation scheme is an efficient EKF with a unique feature selection scheme, which ensures that our total processing speed for a single frame is nearly three times faster than PL-VINS.

Conclusions and Future Work

In this paper, a novel point-line visual-inertial odometry is proposed to address positioning issues in complex environments such as weak texture and dynamic features. The short runtime of feature correspondence is maintained by a fast line matching method; thus, the whole system can work at a high frequency. A line feature selection scheme is utilized to further improve the efficiency of the core filter. Validation experiments on the EuRoC, UMA-VI, and VIODE datasets have shown the better performance and efficiency of our system against other SOTA open-source algorithms (HybVIO [9], VINS-Mono [7], PL-VIO [10], and PL-VINS [12]). In the future, we will try to introduce the structural constraints of 3D line features and plane features to further improve the accuracy.
Table 3 (selected challenging sequences): UMA-VI — class_csc2: low texture, indoor-outdoor change; parking_csc2: low texture, dark scene, illumination change; third_floor_eng: low texture, illumination change, fast motion. VIODE — cd3_high: dynamic objects; cn3_high: dark scene, dynamic objects. An effective point-line fusion strategy improves the robustness of visual-inertial odometry algorithms.

Figure 2. The pose estimation error of PL-VINS and LRPL-VIO on the UMA-VI and VIODE datasets. (a) The alignment error of PL-VINS in class_csc2. (b) The alignment error of PL-VINS in parking_csc2. (c) The RMSE ATE of PL-VINS in cd3_high. (d) The alignment error of LRPL-VIO in class_csc2. (e) The alignment error of LRPL-VIO in parking_csc2. (f) The RMSE ATE of LRPL-VIO in cd3_high.

Figure 3. The figures of the real-world experiments. (a) An example image of sequence Lab_Normal. (b) An example image of sequence Lab_FastRotation. (c) The 3D error map of HybVIO in Lab_Normal. (d) The X-Y plane of the 3D error map of HybVIO in Lab_Normal. (e) The 3D error map of HybVIO in Lab_FastRotation. (f) The X-Y plane of the 3D error map of HybVIO in Lab_FastRotation. (g) The 3D error map of LRPL-VIO in Lab_Normal. (h) The X-Y plane of the 3D error map of LRPL-VIO in Lab_Normal. (i) The 3D error map of LRPL-VIO in Lab_FastRotation. (j) The X-Y plane of the 3D error map of LRPL-VIO in Lab_FastRotation.

Table 1. The results of the ablation experiment, evaluated using RMSE ATE in meters.

Table 2. The results of the pose estimation accuracy test, evaluated using RMSE ATE in meters. * means failure; 1 means the best, 2 the second best.

Table 4. The results of the robustness experiment. For evaluation, the alignment error in meters is calculated on the UMA-VI dataset and the RMSE ATE in meters on the VIODE dataset. * means failure; 1 means the best, 2 the second best.

Table 5. The results of the real-world experiments, evaluated using RMSE ATE in meters. 1 means the best.

Table 6. The results of the runtime analysis, evaluated in milliseconds.
A Method of Effective Text Extraction for Complex Video Scene

Text information contains important information for video analysis, indexing, and retrieval. Effective and efficient text extraction has been a challenging topic in recent years. Focusing on this issue, a text extraction method for complex video scenes is proposed in this paper. Multiframe corner matching and heuristic rules are combined to detect text region candidates, which solves the issue of Harris corner filtration for complex video scenes and also improves the detection accuracy using multiframe fusion. Local texture description is then used for similarity evaluation judged by an SVM. Experimental results for 4 different types of 395-frame video images show the effectiveness of the proposed method compared with 5 existing text extraction methods.

Introduction

In recent years, image- and video-based multimedia information has been playing an increasingly important role in the fields of information exchange and services. Content-based retrieval is an important method to manage and search the massive amount of multimedia information [1]. In the field of content-based multimedia retrieval, the correct identification of text from images and video lays a strong foundation for achieving proper retrieval results. Therefore, how to extract text from a complex background becomes a crucial step for understanding and retrieving images and videos.
Generally, there are two main parts to text extraction: text region detection and text segmentation. The existing methods of text region detection can be divided into four categories [2]: edge-based detection, texture-based methods, connected-region-based methods, and machine learning methods. Edge-based detection [3] detects the edges of the image with an edge detection operator, then filters the edges or aggregates the candidate text regions, and finally filters out text regions by defining some heuristic rules. This method, although quite efficient, has weak robustness under the disturbance of complex backgrounds. Texture-based methods [4] judge whether pixel points or pixel blocks belong to text by using image texture features. Such methods can effectively detect character regions in complex backgrounds but have low operational efficiency, since they have to deal with differential operations over the whole image. Connected-region-based methods [5] use image segmentation or color clustering to extract same-colored text from the background. The premise of this approach is that the characters share the same color; however, when complex background regions in the image have a color similar to the text, the results are not satisfactory. Machine-learning-based methods [6] classify text blocks and non-text blocks by constructing a learning mechanism. Since such methods need to select samples to train the learning machine for classification [7], the similarity between training sample sets and test sample sets is often not high enough to produce ideal detection results.
Because the detected text region contains a complex background, the text needs to be split out of it for further applications. The prevailing text segmentation methods mainly use text color and partial spatial information, and can be roughly classified into the following three categories: threshold methods, unsupervised clustering methods, and methods based on a statistical model [8,9]. However, the above methods apply only to grayscale text blocks with simple backgrounds. When the background contains the same or similar color components as the text, misclassification occurs, and when the number of kernel functions of the statistical model is difficult to determine, text region extraction from complex video scenes does not perform efficiently.

Text in complex-background video images is superimposed directly on top of the image. Thus, detected text blocks typically contain some unpredictable complex image backgrounds, which causes obstacles for segmentation. The diversity of the color and texture of complex backgrounds makes it difficult to estimate the text color. Meanwhile, the background of a segmented text block only contains a small fragment of the whole original background image, so the information contained is limited due to the fragmentary texture, which cannot easily be described by model construction. Based on the above analysis, this paper proposes a text extraction method for complex video scenes. The proposed method includes two primary steps: coarse text region detection based on multiframe corner matching and heuristic rules, and video text region extraction under complex backgrounds based on texture and SVM, aiming to position the text region precisely. Specifically, multiframe corner matching is mainly used to solve the issue of Harris corner filtration for complex-background video scenes. The method is based on the relationship between consecutive
multiframe images, with the aid of the temporal redundancy of video text, using multiframe fusion to improve the accuracy of text detection. Heuristic rules are used to filter the candidate text regions; since the rules can change according to the type of scene, they enhance the efficiency of the algorithm and also reduce the false alarm rate to some extent. An LBP histogram is used to describe the local texture of the image, the similarity between images is then judged based on an SVM, and finally text in complex video scenes is extracted accurately.

In the experimental section, four different types of 395-frame video images, including movie, news, sport, and cartoon, are used for experiments, and five existing methods are compared with the proposed one under three evaluation criteria: text extraction accuracy, false alarm rate, and recall rate. Experimental results show the effectiveness of the proposed method. Based on a detailed algorithm analysis, the effect of each step on the whole-system accuracy and improvement suggestions are then provided.

2. Text Region Coarse Detection Based on Multiframe Corner Matching and Heuristic Rules

2.1. The Harris Corner Detection. A corner [10] is an important feature of image texture, usually defined as a high-curvature point on a boundary in the image. Corners can be found at edges and contours, at locations of violent variation of image brightness or at curvature maxima along image edges, and are independent of text features such as font color and font size.

The Harris operator is a grayscale-based point feature extraction operator proposed by C. Harris and M. J.
Stephens on the basis of the Moravec algorithm [11]. This operator is inspired by the autocorrelation function in signal processing, giving a matrix of autocorrelation coefficients whose eigenvalues are the first-order curvatures of the autocorrelation function. If both curvature values are high, the point is considered a corner point. The principle of corner detection in images can be described as follows: if an offset in any direction at a point causes significant grayscale changes, then the point is a corner point.

The specific steps of the Harris corner detection algorithm for video images are as follows:

(1) Convert the video image to grayscale:
Gray = 0.299R + 0.587G + 0.114B

(2) Calculate the correlation matrix M:
M = G(σ) ⊗ [ I_x², I_x I_y ; I_x I_y, I_y² ]
where I_x is the gradient of the image in the x direction, I_y is the gradient in the y direction, and G(σ) is the Gaussian template.

(3) Calculate the Harris corner response of each pixel:
R = det(M) − k · (tr(M))²
where det is the determinant of the matrix, tr is the trace of the matrix, and k is a default constant.

(4) Find the maximum point within the scope of the Gaussian window; if its Harris corner response is greater than the threshold, the maximum point is considered a corner point.

Since the Harris operator only involves first-order differences and filtering of the image grayscale, the calculation is quite simple; therefore, the whole process is highly automatic. One frame of the news video scene processed by Harris corner detection is shown in Figure 1.

2.2. Corner Filtering Based on Multiframe Corner Matching.
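As an illustration of steps (2) and (3), a pure-Python sketch of the Harris response is given below. This is not the paper's implementation: it uses central-difference gradients and a square box window in place of the Gaussian template, with k = 0.04, on a tiny synthetic image.

```python
def harris_response(img, k=0.04, win=1):
    """Harris corner response R = det(M) - k * tr(M)^2 for each interior
    pixel of a 2D grayscale image given as a list of lists.
    A (2*win+1)^2 box window approximates the Gaussian template."""
    h, w = len(img), len(img[0])
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            Ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            Iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    R = [[0.0] * w for _ in range(h)]
    for y in range(win + 1, h - win - 1):
        for x in range(win + 1, w - win - 1):
            a = b = c = 0.0  # a = sum Ix^2, b = sum Iy^2, c = sum Ix*Iy
            for dy in range(-win, win + 1):
                for dx in range(-win, win + 1):
                    gx, gy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    a += gx * gx; b += gy * gy; c += gx * gy
            det, tr = a * b - c * c, a + b
            R[y][x] = det - k * tr * tr
    return R

# A 13x13 image with a bright 5x5 square (rows/cols 4-8): the square's
# corner should respond much more strongly than a straight-edge midpoint.
img = [[255.0 if (4 <= y <= 8 and 4 <= x <= 8) else 0.0
        for x in range(13)] for y in range(13)]
R = harris_response(img)
print(R[4][4] > R[4][6])  # corner vs. top-edge midpoint → True
```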
Since all border points with high curvature in the image are judged as corners, the corner distribution image may contain background corners in addition to text corners. To solve this problem, isolated-corner filtering is generally used to reduce noise in the text extraction procedure [12]. For video images with complex backgrounds, simply using corner density filtering cannot overcome the interference of the complex background. To solve the corner filtering problem for complex-background video images, this paper proposes an effective method based on the relationship between consecutive frames and the temporal redundancy of video text, utilizing a multiframe integration strategy to improve the accuracy of text detection. Similar subtitle text in TV video appears in several adjacent continuous video frames, and the relative location of subtitle text is fixed, so the pixels within the text region do not change much; in contrast, the movement of the background leads to big changes in background pixels, which can be used for background filtering. In this paper, we propose a filtering method for video image corners based on multiframe corner matching under complex backgrounds. The specific procedure of the algorithm is described in the following steps.

Step 1. Detect the corners of two adjacent video frames with the Harris operator, respectively; the corner set of each frame consists of zero or several background corners and character corners. The corner sets of two adjacent frames t and t+1 can be presented as C_t and C_(t+1).

Step 2. Find the corner set C_com of corners appearing in both frames, C_com = C_t ∩ C_(t+1).

Step 3. For the corner set C_com, define a sliding window of size 15 × 15, and make the window slide in the x and y directions, respectively, with a sliding step length of 5.
Scan the entire corner distribution map and calculate the corner density inside the window. If the corner density is below a certain threshold, the corner at the sliding window center is removed. The set of filtered corners, denoted C_filter, can be presented as
C_filter = { p ∈ C_com | Den(W(p)) ≥ Den_threshold }
where p is a certain corner, Den(W(p)) is the corner density of the window centered at p, and Den_threshold is the threshold.

The corner filtering algorithm based on multiframe matching processes each pixel accurately during matching; therefore, the movement of objects between two adjacent video frames is detected even if an object has moved by only one pixel unit. Effectively, our method is able to solve the interference problem that makes text region detection vulnerable to complex backgrounds when only a single image is used.

2.3. Text Region Detection under the Guidance of Heuristic Rules. The corner filtering method based on multiframe corner matching performs well at filtering corners generated by the video background when a relatively static text lies over a moving background. For video subtitles with linear motion, we further define a modified algorithm and heuristic rules [13] to improve the accuracy of text region detection. In the modified algorithm, corners that satisfy the matching conditions are grouped as text candidate regions; for text with vertical motion, the method is similar. The pseudocode of the above algorithm is described in Algorithm 1.

After the modified method, we also define heuristic rules [13] to carry out further filtering of the candidate text regions. The heuristic rules are as follows:

(1) The minimum external rectangle of each candidate text region is calculated, and the heights of all minimum external rectangles are counted. Taking the most frequent height as the standard, candidate text regions with a height greater than the standard are filtered out.

(2) Candidate text regions whose minimum external rectangle has a height-to-width ratio below a certain threshold are eliminated.
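The two-frame intersection and window-density filtering described above can be sketched as follows. This is an illustrative reimplementation: the window size and density threshold here are small placeholder values, not the paper's 15 × 15 window with step length 5.

```python
def match_and_filter(corners_t, corners_t1, win=7, den_threshold=3):
    """Keep only corners present in both adjacent frames
    (C_com = C_t ∩ C_t+1), then drop corners whose local density inside
    a (2*win+1)-sized square window falls below den_threshold."""
    com = set(corners_t) & set(corners_t1)
    filtered = set()
    for (x, y) in com:
        density = sum(1 for (cx, cy) in com
                      if abs(cx - x) <= win and abs(cy - y) <= win)
        if density >= den_threshold:
            filtered.add((x, y))
    return filtered

# Text corners repeat across frames and cluster together; a moving
# background corner appears at a different place in each frame.
frame_t  = {(10, 50), (13, 50), (16, 51), (200, 120)}
frame_t1 = {(10, 50), (13, 50), (16, 51), (230, 140)}
print(sorted(match_and_filter(frame_t, frame_t1)))
```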
Through multiframe corner matching and heuristic-rule filtering, the results for Figure 1 are shown in Figure 2(a); the corresponding background-free corner map is shown in Figure 2(b).

With a morphological operation on the background-free corners, we can get the approximate outline of the text region. The external rectangle of the outline is then found, and the text region can finally be extracted from the original image. The results are shown in Figure 3. Heuristic rules have the advantage of improving the algorithm efficiency and also reducing the rate of false alarms, as the rules can change according to the style of the video images.

For the calculated LBP histogram, LIBSVM [15] is used for similarity measurement of the text region and finally gives the text extraction results. We choose the RBF (Radial Basis Function) as the kernel function of the SVM. Before SVM training, we need to select the parameters of the SVM model, namely, to determine the optimal parameter set {C, γ} of the Gaussian radial kernel function, where C denotes the error penalty factor and γ the Gaussian radial parameter. In our experiment, the training set is divided into 5 groups. By cross-validation of the training set among the 5 groups and using a grid-search method, we find the optimal parameter set in C ∈ {2^0, 2^1, . . ., 2^10

Text Region Extraction Results

According to the results in Figure 3(b), the text region is cropped and converted into a grayscale image; the result is shown in Figure 5(a). The resolution of the cropped text region is then adjusted to 500 × 100 and divided into 50 × 10 subregions, as shown in Figure 5(b). Classifying the test data shown in Figure 5 according to the training results, the classification results are shown in Figure 6. Results for other video images of the four different types (movie, news, sport, and cartoon) obtained by our proposed method are presented in Figure 7 (left: the coarse result; right: the accurate text extraction result).
As shown in Figure 7(a), in the final text region extraction results of the news video, neither the character "%" in the text "1%" at the bottom of the screen nor the time text region is marked. The program logo in the lower right corner of the movie video in Figure 7(b) also fails to be marked, because its fonts and text layout are quite different from traditional subtitles. In Figure 7(c), although the sport game video contains a TV station logo, program title, subtitles, and a variety of complex texts, our method can still successfully extract all the text regions. In Figure 7(d), the captions and background of the cartoon video are noticeably distinguished from each other; therefore, the text regions are also correctly extracted by our approach.

Comparison of the Algorithm Efficiency

In this section, we evaluate the performance of text region extraction in video images. In particular, the text region here refers to the subtitle text added during video production. There are four evaluation standards for video text extraction: text extraction accuracy, false alarm rate, recall rate, and character recognition accuracy [16]. Since our algorithm aims to extract the text regions in video images, we use the first three standards for performance evaluation. For comparison, we compared our method with five existing methods: Otsu's method [17], Sato's method [18], Lyu's edge-detection-based method [19], Song's method [20], and Shivakumara's method [21].
Taking text blocks as the unit, the text extraction accuracy, false alarm rate, and recall rate are defined as

Accuracy = CTR / (CTR + ICTR) × 100%
False Alarm Rate = FTR / (CTR + FTR) × 100%
Recall = CTR / TRVS × 100%

where CTR is the number of correctly extracted text regions, ICTR the number of incorrectly extracted text regions, FTR the number of falsely extracted text regions, and TRVS the number of text regions existing in the video scene. Experimental results for the four types of video data are shown in Tables 1, 2, and 3. The text extraction method for complex video scenes presented in this paper achieves an average accuracy rate of 92.7%, an average recall rate of 82.8%, and an average false alarm rate of less than 6.1%, and all three indicators are better than those of the five methods above. In contrast, Lyu's method [19] is comparatively better than the other four methods; however, on the text data used in our experiments, it achieves an average accuracy rate of 87.8%, a recall rate of 80.2%, and a false alarm rate of 6.7%, all below the results of our method. The statistics of the experimental results demonstrate that the precision and recall rate of the proposed algorithm are much higher than those of the five existing methods. From the results shown in Tables 1, 2, and 3, it can be found that the typesetting of titles and subtitles in news video is well standardized, with fewer text styles and rarely any artistic effects, while text and background show strong contrast and the text also lasts longer in the video, so text extraction in news is relatively easy, and both the accuracy rate and recall rate of text detection are high. Since the texts in movie and cartoon videos are mainly located in the lower part of the screen, with good text regularity, and the colors of cartoon backgrounds are much simpler than those of movies, the text extraction accuracy rate for cartoon is higher than for movie. Generally, sport video is much more complex, often containing banner texts which
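A minimal sketch of the three block-level metrics, written from the symbol definitions above; the counts used in the example are hypothetical, and the exact denominator for the false alarm rate is our assumption.

```python
def accuracy(ctr, ictr):
    """Correctly extracted regions over all judged extractions, in %."""
    return 100.0 * ctr / (ctr + ictr)

def false_alarm_rate(ctr, ftr):
    """Falsely extracted regions over all extracted regions, in %
    (assumed denominator)."""
    return 100.0 * ftr / (ctr + ftr)

def recall(ctr, trvs):
    """Correctly extracted regions over regions present in the scene, %."""
    return 100.0 * ctr / trvs

# Hypothetical counts for one video: 90 correct, 7 incorrect,
# 6 false extractions, 105 text regions actually in the scene.
print(round(accuracy(90, 7), 1),
      round(false_alarm_rate(90, 6), 1),
      round(recall(90, 105), 1))
```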
appear randomly, complex backgrounds, and more text types and effects, so its text extraction accuracy and recall rate are the lowest among the 4 video types.

Algorithm Detail Analysis

The proposed text extraction method for complex video scenes consists mainly of four parts: corner coarse detection based on Harris (denoted corner detection (CD)), corner filtering based on multiframe corner matching (denoted corner filtering (CF)), text region coarse detection under heuristic rules (denoted text regions coarse detection (TRCD)), and text region fine detection based on texture and SVM (denoted text regions fine detection (TRFD)). The performance of each step affects the final accuracy of the text extraction; therefore, this part uses a statistical method to analyze the detailed performance of the proposed algorithm.

The selected four types of video data (395 frames in total) are used for the detailed experiments. First, we manually annotate all the text regions of the 395 frames as the benchmark; the results of each step are then compared with this reference.

For corner detection and corner filtering, the detection accuracy is calculated as the percentage of corner points that fall in the benchmark text regions. For the two text region detection steps, the detection accuracy is calculated as the coincidence rate of the detected text regions and the benchmark. Experimental results are shown in Figure 8.
From the results, it can be seen that in the CD step, which is based on Harris, part of the detected corners fall outside the text regions; therefore the detection accuracy is not high, especially for the sport video data, which contain very complex backgrounds, where the average accuracy rate over 103 frames is only about 45%. Consequently, the corner filtering (CF) step is necessary. Comparing the experimental results of the four video types in Figures 8(a), 8(c), 8(e), and 8(g), it can be found that the proposed corner filtering step based on multiframe corner matching increases the corner detection accuracy rate by 27%, which ensures a good input to the text region detection steps. Comparing the next two steps, TRCD and TRFD: for TRCD based on heuristic rules, the average detection rate for the movie, news, sport, and cartoon video data is 83.5%, 76.2%, 63.7%, and 87.4%, respectively; the TRFD step, used for text region fine detection, achieves an average detection accuracy rate 5% higher than TRCD. By comparing the results of TRCD and TRFD in Figures 8(b), 8(d), 8(f), and 8(h), it can be found that the TRCD step has a greater effect on the total system accuracy than the TRFD step. Theoretically, the TRFD step uses the results of the TRCD step and extracts texture features for SVM learning; the learning results certainly affect the system accuracy, but the input of the SVM, i.e., the results of the TRCD step, has more influence on the whole learning system. Through the detailed step-by-step experiments on system performance in this part, we found that, in the whole algorithm system, corner filtering based on multiframe corner matching and text region coarse detection under heuristic rules are the two critical steps that affect the total system accuracy. In future work, we will focus on theoretical modification of these two steps in order to improve the performance of the whole
algorithm system.

Conclusion

For the issue of text extraction in complex video scenes, we proposed an effective video image text extraction method, which contains coarse text region extraction based on multiframe corner matching and heuristic rules, and precise text extraction under complex backgrounds based on texture and SVM. Multiframe corner matching was mainly used to solve the Harris corner filtering problem in video images with complex backgrounds. Heuristic rules could be flexibly used to filter the candidate text regions according to the style of the video scene, which improved the efficiency of the algorithm and decreased the false alarm rate. The local image texture description was classified by SVM, and accurate text extraction was finally achieved. The experimental results on four different video types (movie, news, sport, and cartoon), with 395 video frames, demonstrate that our method achieves an average extraction accuracy of 92.7%, an average recall rate of 82.8%, and an average false alarm rate of less than 6.1%, all three indicators being clearly superior to the five comparison methods. The experimental results show the effectiveness of the proposed method for text extraction in complex video scenes.

Figure 1: Results of Harris corner detection.

LBP [14] (Local Binary Pattern) is an operator used to describe local image texture features. Therefore, an LBP histogram can be used to describe the local image texture, further determine the similarity between images based on a similarity measurement function, and finally complete precise text region extraction. Based on the above analysis, this paper adopts the uniform LBP model to calculate the LBP histogram of the text region extraction results obtained in the previous section. In this way, the feature dimension can be reduced from the original 256 dimensions to 59 dimensions, which greatly reduces the data complexity while keeping the effective characteristics of the data.
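The 256-to-59-dimension reduction can be sketched as follows: a pattern is "uniform" when its circular 8-bit string has at most two 0/1 transitions, which yields 58 uniform patterns plus one catch-all bin for the rest. This is an illustrative implementation, not the paper's code.

```python
def lbp_code(img, y, x):
    """8-neighbor LBP code of pixel (y, x), clockwise from top-left."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[y][x]
    code = 0
    for bit, (dy, dx) in enumerate(offs):
        if img[y + dy][x + dx] >= center:
            code |= 1 << bit
    return code

def transitions(code):
    """Number of 0/1 transitions in the circular 8-bit pattern."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

# Map each 8-bit code to one of 59 bins: 58 uniform patterns + 1 catch-all.
uniform_codes = sorted(c for c in range(256) if transitions(c) <= 2)
bin_of = {c: i for i, c in enumerate(uniform_codes)}

def uniform_lbp_histogram(img):
    hist = [0] * 59
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[bin_of.get(lbp_code(img, y, x), 58)] += 1
    return hist

print(len(uniform_codes))  # → 58 uniform patterns
```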
} and γ ∈ {2^-11, 2^-5, . . ., 2^-1}; we then obtain the value 128 for C and the value 0.015625 for γ. Finally, using the parameters above, we generate the SVM classifier model by training on the entire training set.

4.1. Algorithm Description. The flowchart of the proposed text extraction method for complex video scenes is shown in Figure 4. Our method consists of two parts: one is coarse text region extraction based on multiframe corner matching and heuristic rules, and the other is text extraction based on texture and SVM; finally, the subtitle text in the video can be accurately extracted. In terms of implementation, firstly, according to the text region labeling results of the second section, we crop the text region in the original image, convert the color image into a grayscale image, then adjust the resolution of the cropped text region to 500 × 100 and divide it into 50 × 10 subregions. Finally, we compute the LBP histogram of each subregion and obtain a 59-dimensional LBP histogram as the test data. For the SVM training samples, we download video images containing Chinese subtitles from the Internet and obtain training text regions manually. The sample data is obtained using the above preprocessing, and class labels (text region: "1"; non-text region: "0") are marked manually. With all the above preparations, we start the training and obtain the results. For the SVM training step, the quantity and quality of the training samples directly determine the classification result; theoretically, within certain limits, the more training samples there are, the better the accuracy and robustness of the classification. Consequently, we select 400 video frames containing different Chinese subtitles of four different kinds (movie, news, sport, and cartoon) to complete the training.
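The grid search over {C, γ} can be sketched generically as below. The scoring function here is a toy stand-in for the 5-fold cross-validation accuracy that an actual LIBSVM run would return, and the γ grid is assumed to be a dense power-of-two grid.

```python
import math

def grid_search(c_grid, gamma_grid, score):
    """Return the (C, gamma) pair maximizing the given scoring function."""
    best, best_score = None, float("-inf")
    for c in c_grid:
        for g in gamma_grid:
            s = score(c, g)
            if s > best_score:
                best, best_score = (c, g), s
    return best, best_score

c_grid = [2 ** k for k in range(0, 11)]       # {2^0, ..., 2^10}
gamma_grid = [2 ** k for k in range(-11, 0)]  # assumed {2^-11, ..., 2^-1}

# Toy score peaked near C = 128, gamma = 2^-6 (stand-in for CV accuracy).
score = lambda c, g: -((math.log2(c) - 7) ** 2 + (math.log2(g) + 6) ** 2)
(best_c, best_g), _ = grid_search(c_grid, gamma_grid, score)
print(best_c, best_g)  # → 128 0.015625
```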
Figure 6(a): the white part represents the text region. The final text extraction results are shown in Figure 6(b), drawn as an external rectangle around the white part.

Figure 7: Text extraction results for four different types of video images: (a) news video image, (b) movie video image, (c) sport video image, (d) cartoon video image.

Figure 8: Algorithm detail analysis results. (a), (c), (e), and (g) show the corner detection rate and corner filtering rate for movie, news, sport, and cartoon, respectively. (b), (d), (f), and (h) show the text region coarse detection rate and fine detection rate for movie, news, sport, and cartoon, respectively.

Algorithm 1 notation: Coor(p, x) is the x-coordinate value of corner p; sort(C_com, Coor(p, x)) means sorting the corners in set C_com according to their x-coordinate values, and sort(C_com, Coor(p, y)) means sorting according to their y-coordinate values.
(1) According to the motion direction of the text, sort the corner set C_com of both video images by the x-coordinate values (for horizontal text motion) or the y-coordinate values (for vertical text motion):
C_com = sort(C_com, Coor(p, x)) for horizontal movement;
C_com = sort(C_com, Coor(p, y)) for vertical movement.
(2) If the number of corners with the same y-coordinate value and a fixed x-coordinate difference is greater than a certain threshold, and moreover the qualified corners are distributed concentratedly in the same region, then the corresponding point sets of the two adjacent video frames can be marked as text candidate regions.

Table 2: Text extraction false alarm rate of the four kinds of video images.

Table 3: Text extraction recall rate of the four kinds of video images.
Steering a Robotic Wheelchair Based on Voice Recognition System Using Convolutional Neural Networks

Many wheelchair users depend on others to control the movement of their wheelchairs, which significantly influences their independence and quality of life. Smart wheelchairs offer a degree of self-dependence and the freedom to drive their own vehicles. In this work, we designed and implemented a low-cost software and hardware method to steer a robotic wheelchair. Moreover, from our method, we developed our own Android mobile app based on Flutter software. A convolutional neural network (CNN)-based network-in-network (NIN) structure integrated with a voice recognition model was also developed and configured to build the mobile app. The technique was implemented and configured using an offline Wi-Fi network hotspot between the software and hardware components. Five voice commands (yes, no, left, right, and stop) guided and controlled the wheelchair through the Raspberry Pi and DC motor drives. The overall system was evaluated on an English speech corpus of isolated words, trained and validated with native Arabic speakers, to assess the performance of the Android OS application. The maneuverability performance of indoor and outdoor navigation was also evaluated in terms of accuracy. The results indicated an accuracy of approximately 87.2% for correct prediction of the five voice commands. Additionally, in the real-time performance test, the root-mean-square deviation (RMSD) values between the planned and actual nodes for indoor/outdoor maneuvering were 1.721 × 10^−5 and 1.743 × 10^−5, respectively.

Introduction

Many patients still depend on others to help them move their wheelchairs, and patients with limited mobility still face significant challenges when using wheelchairs in public and in other places [1].
Statistics also indicate that 9-10% of patients who were trained to operate power wheelchairs could not use them for daily activities, and 40% of limited mobility patients reported that it was almost impossible to steer and maneuver a wheelchair [2]. Moreover, it was reported that approximately half of the 40% of patients with impaired mobility could not control a powered wheelchair [3]. Furthermore, the same study determined that over 10% of patients that use traditional power wheelchairs not equipped with any sensors have accidents after 4 months [3]. However, using an electric wheelchair equipped with an automatic navigation and sensor system, such as a smart wheelchair, would be beneficial in addressing a significant challenge for several patients. The smart wheelchair is an electric wheelchair equipped with a computer and sensors designed to facilitate the efficient and effortless movement of patients [4][5][6][7]. These wheelchairs are considered safer and more comfortable than conventional wheelchairs because they introduce new control options, which include navigation systems (GPS) and other technologies, such as saving places on the user's map [8,9]. Various sensors can be used in smart wheelchairs, such as ultrasound, laser, infrared, and input cameras. These wheelchairs adopt computers that process input data from the sensors and produce a command that is sent to the motor to spin the wheels of the chair [10]. One of the most important developments in this field is the introduction of the joystick control system. This system drives the wheelchair via an intelligent control unit [11]. However, patients with impaired upper extremities cannot operate the joystick flexibly and smoothly. This leads to fatal accidents when situations require rapid action in motion. Therefore, the conventional joystick system needs to be replaced with advanced technologies [12]. 
The human-computer interface (HCI) is a method for controlling a wheelchair using a signal or a combination of different signals, such as the electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) [13][14][15]. Brain-computer interfaces (BCIs) are among the most researched HCIs; they translate brain signals into actions to control a device [16]. EEG-based BCI has some limitations, including low spatial resolution and a low signal-to-noise ratio (SNR). A hybrid BCI (hBCI) that combines EEG with EOG exhibited improved accuracy and speed. Despite these results, some limitations to the application of BCI systems still exist: EEG devices are relatively expensive, and bio-potential signals are affected by artifacts. Furthermore, although hBCI can address some of these challenges, it is not efficient and flexible in simultaneously controlling speed and direction [17][18][19]. Speech is the most important means of communication between humans. By employing a microphone sensor, speech can also be used to interact with a computer, making it a natural modality for HCI. Such sensors are used in quantitative voice recognition research with applications in a variety of areas, including wheelchair control and health-related applications. Therefore, the development of smart or intelligent wheelchairs based on voice recognition techniques has increased significantly [20]. For instance, Aktar et al. [21] developed an intelligent wheelchair system using a voice recognition technique with a GPS tracking model. The voice commands were converted into hexadecimal data to control the wheelchair in three different speed stages via a Wi-Fi module.
The system also used an infrared radiation (IR) sensor to detect obstacles and a mobile app to detect the location of the patient. Similarly, Raiyan et al. [22] developed an automated wheelchair system based on the Arduino and the Easy VR3 speech recognition module. In this study, the authors claim that the implemented system is less expensive and does not require any wearable sensor or complex signal processing. In a more advanced study, an adaptive neuro-fuzzy controller was designed to drive a powered wheelchair. The system implementation was based on real-time control signals generated by a voice-command classification unit, and the proposed system used a wireless sensor network to track the wheelchair [23]. Despite the highly sophisticated approaches presented by researchers in this area, high cost and accuracy in distinguishing, classifying, and identifying the patient's voice remain the most critical challenges. To overcome the lack of accuracy in distinguishing and classifying patients' speech, many researchers have used the convolutional neural network (CNN) technique [24,25]. This technique relies on converting voice commands into spectrogram images before feeding them into the CNN, and it has proven helpful for speech recognition accuracy. In this context, Huang et al. [26] proposed a method to analyze CNNs for speech recognition, in which the localized filters learned in the convolutional layer are visualized to inspect what the network learns automatically. The authors claim that this analysis identifies four domains in which CNNs have advantages over fully connected networks: distant speech recognition, noise robustness, low-footprint models, and channel-mismatched training-test conditions. In addition, Korvel et al. [27] analyzed 2D feature spaces for voice recognition based on CNNs, using the Lithuanian word recognition task to compare feature maps.
The results showed that the highest rate of word recognition was achieved using spectral analysis. Moreover, the Mel scale, spectral linear cepstra, and chroma were outperformed by cepstral feature spaces. Driving smart wheelchairs using voice recognition technologies with CNNs has attracted many researchers [28]. For instance, Sutikno et al. [24] proposed a voice control method for wheelchairs using long short-term memory (LSTM) and CNN. This method used SoX (Sound eXchange) and Sound Recorder Pro to achieve the objective, and its accuracy was above 97.80%. Another study was conducted by Ali et al. [29], who designed an algorithm for smart wheelchairs using CNN to help people with disabilities detect buses and bus doors. The method was implemented based on accurate localization information and used the CPU for fast detection. However, the use of CNNs in smartphones is still under development owing to the complex calculations required to achieve high-accuracy predictions [30]. This paper develops a new, powerful, low-cost system based on voice recognition and CNN approaches to drive a wheelchair for disabled users. The method proposes the use of a network-in-network (NIN) structure for mobile applications [31]. The system uses a smartphone to create an interactive user interface that can be easily controlled by sending a voice command via the mobile application to the system's motherboard. A mobile application, a voice recognition model, and a CNN model were developed and implemented to achieve the main goal of this study. In addition, all safety issues were considered during driving and maneuvering at indoor and outdoor locations. Results showed that the implemented system was robust in its time response and executed all orders accurately without delay. The paper is organized as follows: Section 2 illustrates the materials and methods used in this study. Section 3 addresses the experimental procedure. Section 4 shows the results of the study.
Section 5 discusses the results. Section 6 concludes this study. Finally, Section 7 outlines future work.

Materials and Methods

Figure 1 illustrates the architecture of the proposed system, which is divided into two stages. The first stage is the set of hardware devices used to control the movement of the wheelchair reliably. These devices include a standard wheelchair, an Android smartphone (Huawei Y9; CPU: octa-core, 4 × 2.2 GHz), DC electric motors, batteries, a relay module, a Raspberry Pi 4, and an emergency push button in case of an abnormal system response. The second stage focuses on the software development of the mobile application, the voice recognition model, and the CNN model. The software was designed and implemented to control the wheelchair using the five voice commands listed in Table 1. The main components for controlling the chair were connected via offline Wi-Fi.

In this work, the mobile app was built based on Flutter software [32,33]. The design process includes creating a user flow diagram for each screen, creating and drawing wireframes, selecting design patterns and color palettes, creating mock-ups, creating an animated app prototype, and designing final mock-ups to prepare the final screens for coding. The app appears in the application list and, after it has been opened, displays the enlisted words on which our model was trained. After the application is granted permission to use the microphone, it recognizes the spoken words and highlights them in the interface, as shown in Figure 1.

Voice Recognition Model Development

Each audio file signal is subjected to feature extraction to create a map that shows how the signal changes in frequency over time. The Mel frequency cepstral coefficients (MFCC) are widely used in speech analysis systems to extract this information [34].
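Before the step-by-step derivation, a minimal numpy sketch of such an MFCC front end may help. This is our own simplification, not the paper's code: function names and parameter values (frame length, hop, filter count) are illustrative assumptions.

```python
import numpy as np

def mel(f):
    """Mel scale B(f)."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_inv(m):
    """Inverse Mel scale B^-1(m)."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(x, fs=20000, n_fft=512, n_filt=26, n_ceps=13, beta=0.97):
    # Pre-emphasis: y(n) = x(n) - beta*x(n-1)
    y = np.append(x[0], x[1:] - beta * x[:-1])
    # Framing (25 ms frames, 10 ms hop) and Hamming windowing
    flen, hop = int(0.025 * fs), int(0.010 * fs)
    n_frames = 1 + (len(y) - flen) // hop
    frames = np.stack([y[i * hop:i * hop + flen] for i in range(n_frames)])
    frames *= np.hamming(flen)
    # Magnitude spectrum via FFT
    mag = np.abs(np.fft.rfft(frames, n_fft))
    # Triangular Mel filter bank between f_l = 0 and f_h = fs/2
    pts = mel_inv(np.linspace(mel(0.0), mel(fs / 2.0), n_filt + 2))
    bins = np.floor((n_fft + 1) * pts / fs).astype(int)
    fb = np.zeros((n_filt, n_fft // 2 + 1))
    for m in range(1, n_filt + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log filter-bank energies, then DCT for decorrelation -> cepstra
    loge = np.log(mag @ fb.T + 1e-10)
    n = loge.shape[1]
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), np.arange(n) + 0.5) / n)
    return loge @ dct.T  # (n_frames, n_ceps) feature map
```

Delta and delta-delta coefficients (the first and second time derivatives used in the paper) would be computed from consecutive rows of the returned feature map.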
The initial step in feature extraction is to pre-emphasize the signal by passing it through a one-coefficient digital filter (a finite impulse response (FIR) filter) to prevent numerical instability:

y(n) = x(n) − β x(n − 1),

where x(n) is the original voice signal, y(n) is the output of the filter, n is the sample index, and β is a constant such that 0 < β ≤ 1. To keep the samples in frames and reduce signal discontinuities, framing and windowing w(n) are employed, with

w(n) = α − (1 − α) cos(2πn/(N − 1)), 0 ≤ n ≤ N − 1,

where α is a constant and N is the frame length in samples. For spectral analysis, the fast Fourier transform (FFT) is applied to calculate the magnitude spectrum of each frame:

|X(k)| = |Σ_{n=0}^{N−1} y(n) w(n) e^{−j2πkn/N}|, k = 0, 1, ..., N − 1.

The spectrum is then processed by a bank of triangular filters H_m(k) placed uniformly on the Mel scale. If we consider f_l and f_h to be the lowest and highest frequencies of the filter bank in hertz, then the boundary points f[m] can be written as

f[m] = (N/F_s) B^{−1}(B(f_l) + m (B(f_h) − B(f_l))/(M + 1)),

where N is the size of the FFT, F_s is the sampling frequency, M is the number of filters, and B is the Mel scale, which is given by

B(f) = 2595 log10(1 + f/700).

To eliminate noise and spectral estimation errors, we apply an approximate homomorphic transform: the logarithmic energy operation log(Σ|·|²) and the inverse of the discrete cosine transform (DCT) are used in the final step of MFCC processing. The DCT provides a high degree of decorrelation, and the cepstral coefficients can be given as

c(i) = Σ_{m=1}^{M} log(S[m]) cos(πi(m − 0.5)/M),    (8)

where S[m] is the log energy output of the m-th Mel filter. To obtain the feature map, we take the first and second time derivatives of (8). This is applied to all the recordings that have been made; the database was thus created and used by the CNN.

CNN Implementation Model

Here, we adopted the network-in-network (NIN) structure as the foundational architecture for mobile application development [35,36]. NIN is a CNN technique that does not include fully connected (FC) layers and, in addition, can accept images of any size as inputs to the network by employing global pooling rather than fixed-size pooling.
This is useful for mobile applications because users may adjust the balance between speed and accuracy without changing the network weights. To accelerate the CNN computations, we adopt a multi-threading technique: the smartphone has four CPU cores, which allows a kernel matrix to be divided into four sub-matrices along the rows, so that four general matrix multiplication (GEMM) operations are carried out in parallel to obtain the output feature maps of the target convolution layer. Our method adopts cascaded cross-channel parametric pooling (CCCPP) to compensate for the elimination of the FC layers. Therefore, our CNN model consists of input and output layers, twelve convolution layers, and two consecutive layers, as shown in Figure 2.

DC Motor Control Drive

The drive wheels are powered by motors at the rear and front ends of the chair. The rear motors drive the rear wheels forward, and the front wheels (freewheels) accommodate the different chair movements. The two motors are connected to the driver via four power lines, and the motor speed was predefined at approximately 1 km/h. To move forward or backward, both wheels turn clockwise or anticlockwise, respectively. To turn right or left, one motor free-wheels and the other moves forward: to turn left, the left wheel uses the free gear and the right one moves forward, causing the wheelchair to turn to the left. The movement table of the wheelchair is presented in Table 2. All wheelchair movements are controlled by a relay module, which provides four relays rated for 15-20 mA at 5 VDC. Each relay has a normally closed (NC) and a normally open (NO) contact and is controlled by a corresponding pin of the microcontroller.
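The four-way kernel split for parallel GEMM described in the CNN subsection above can be sketched as follows. This is our own illustration of the idea (names are ours), using Python threads in place of the phone's native threading:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_conv_gemm(kernel_mat, input_cols, n_threads=4):
    """Split the kernel matrix into row blocks and multiply each block
    with the im2col input matrix in parallel, mirroring the four-core
    GEMM scheme described in the text. The stacked result equals the
    full product kernel_mat @ input_cols."""
    blocks = np.array_split(kernel_mat, n_threads, axis=0)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        parts = list(pool.map(lambda b: b @ input_cols, blocks))
    return np.vstack(parts)
```

Because each thread writes only its own output rows, no synchronization beyond the final join is needed, which is what makes the row-wise split attractive on a multi-core CPU.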
The relays are optically isolated, and each motor is controlled by two relays: one relay switches the supply on, while the other remains in its initial (off) position tied to ground, causing the energized motor to turn clockwise or anticlockwise depending on the on/off state of the relay pair. The command is sent by the microcontroller program, and the relay coils operate at 5 V. Figure 2 presents the complete electronic circuit diagram for the wheelchair movement. In this diagram, the polarity across a load can be reversed by the four-relay module. The load terminals are connected between the common poles of the two relays and the DC motor. The normally open terminals are connected to the positive supply, whereas the normally closed terminals of both relays are connected to a current driver circuit (ULN2033) that protects the controller pins from any abrupt sinking current. The current driver circuit can support approximately 500 mA, which is sufficient for the relay module. Furthermore, a diode is connected across each relay for protection from voltage spikes when the supply is disconnected. Figure 3 illustrates the mechanical assembly of the wheelchair. The wheelchair was purchased from the market, and no mechanical modifications were made to the basic design of the original chair; in our proposed design, an electro-mechanical motor is attached directly to the frame. A wheelchair's maneuverability depends on the position of the steering wheels, which significantly affects the space required for the chair to turn, including how the chair moves in narrow spaces. Owing to their small 360-degree turning circumference and tight turning radius (20-26 in), mid-wheel drives are the most maneuverable, making them excellent indoor wheelchairs. Table 3 summarizes the hardware specifications of all parts used in this work, and Figure 4 presents a flowchart of the complete wheelchair system.
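The command-to-relay movement logic described above (Table 2) can be captured in a small lookup table. This is a hypothetical sketch: the key and relay names are ours, not the paper's pin assignment, and the wiring shown only mirrors the prose (both wheels driven for forward/backward, one wheel freed for turns, everything off for stop).

```python
# Hypothetical mapping from voice command to the four relay states.
# True = relay energized (motor driven); False = relay off (motor freed).
RELAY_STATES = {
    "yes":   {"left_fwd": True,  "left_rev": False, "right_fwd": True,  "right_rev": False},  # forward
    "no":    {"left_fwd": False, "left_rev": True,  "right_fwd": False, "right_rev": True},   # backward
    "left":  {"left_fwd": False, "left_rev": False, "right_fwd": True,  "right_rev": False},  # left wheel freed
    "right": {"left_fwd": True,  "left_rev": False, "right_fwd": False, "right_rev": False},  # right wheel freed
    "stop":  {"left_fwd": False, "left_rev": False, "right_fwd": False, "right_rev": False},
}

def relay_command(word):
    """Return the relay pattern for a recognised voice command;
    any unrecognised word falls back to a safe stop."""
    return RELAY_STATES.get(word, RELAY_STATES["stop"])
```

On the actual hardware, each boolean would be written to the microcontroller pin driving the corresponding relay coil; the fail-safe fallback to "stop" matches the emergency-stop philosophy of the design.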
Experimental Procedure

We evaluated our system on an English speech corpus of isolated words, recorded at the Health and Basic Sciences Research Center, Majmaah University. The collection contains a total of 2000 utterances of five words, produced by 10 native Arabic speakers, and was recorded at a sampling rate of 20 kHz with 16-bit resolution. The data set was then augmented by creating extra speech signals: 2000 additional utterances were generated by changing pitch, speed, and dynamic range, adding noise, and shifting forward and backward in time. The new dataset (original plus augmented) contains 4000 utterances and is divided into two parts: a training set (training and validation) with 80% of the samples (3200) and a test set with the remaining 20% (800). To evaluate the accuracy and quality of prediction of the proposed system, we calculate the F-score as

F = 2PR/(P + R),

where P and R represent precision and recall, respectively, given by

P = T_p/(T_p + F_P),  R = T_p/(T_p + F_N).

Here, T_p is the number of true positives, F_P the number of false positives, and F_N the number of false negatives. To evaluate the correct prediction of each voice command during classification, the percentage difference (%d) was used:

%d = |V_1 − V_2| / ((V_1 + V_2)/2) × 100,

where V_1 and V_2 represent the first and second observations in the comparison, respectively. We also evaluated the real-time performance of indoor/outdoor navigation. This test (Video S1) describes the navigation performance when the user controlled the wheelchair via voice commands along a path around and inside the mosque at coordinates 24.893374, 46.614728.

Results

In this work, the audio files were recorded and the model trained on five words to test the application performance until it reached the required prediction ratio.
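The evaluation metrics described in the experimental procedure can be computed directly. This is a minimal sketch (our own helper names); the percentage-difference form used here is the standard definition relative to the mean of the two observations.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision P, recall R and F-score from true/false positive and
    false negative counts, as defined in the text."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f = 2 * p * r / (p + r)
    return p, r, f

def percent_difference(v1, v2):
    """Percentage difference between two observations v1 and v2."""
    return abs(v1 - v2) / ((v1 + v2) / 2) * 100
```

For example, a command with 80 true positives, 10 false positives, and 20 false negatives yields a precision of about 0.889 and a recall of 0.8.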
These words were chosen mainly for their ease of pronunciation and common use in Arab countries, and for the significant variation among their phonemic forms. Figures 5-9 illustrate the recognition results for the voice commands "yes", "no", "left", "right", and "stop". Each figure includes the sound waveform (a), the two-dimensional long-term spectrum with frequency band (b), the spectrogram (c), and the voice command prediction ratio in the mobile app (d). Table 4 summarizes the resizing and normalization phases for each voice command. The program also displays the predicted weight of the spoken word to the user; the spoken command always has a greater weight than the other words, indicating that an incorrect decision is unlikely during the classification process. Based on these results, the confusion matrix was calculated as shown in Table 5. The accuracy of the voice command "yes" was approximately 87.2% true prediction among the five voice commands. For the classification tasks, we adopted the terms true positives, true negatives, false positives, and false negatives. Tables 6 and 7 present the calculations of the voice-command prediction ratio, accuracy, and precision. In terms of the percentage difference when comparing one command with the others, the example displays "STOP" against the other commands; the results indicate only a slight possibility of making an incorrect choice during classification. The difference between the percentages of true and false predictions is markedly high, exceeding 150%, which indicates a negligible probability of wrong predictions, as presented in Table 7. The real-time performance of indoor/outdoor navigation was evaluated in public places, as shown in Figure 10, which presents the planned route versus the actual route (outbound navigation). Table 8 presents the coordinate nodes of the planned and actual paths while navigating.
The root-mean-square deviation (RMSD) was adopted to quantify the differences between the planned and actual nodes in this experiment; the RMSD was 1.721 × 10^−5 for the latitude and 1.743 × 10^−5 for the longitude coordinates.

Discussion

The objective of this study was to design and implement a low-cost yet powerful system to drive a powered wheelchair using a built-in voice recognition app on a smartphone. The design aims to give disabled people substantial independence and, consequently, to improve their quality of life. The proposed smart wheelchair extends the capabilities of the conventional joystick-controlled design by introducing novel smart control systems, such as voice recognition technology and GPS navigation. Owing to the significant advances in smartphones, together with mature voice recognition technology and wireless headphones, voice recognition for controlling wheelchairs has become widely adopted [4,29,30]. In general, the proposed system is characterized by the ease of installing the electrical and electronic circuits, low economic cost, and low energy consumption. Figure 4 shows the simple structure of the electronic circuit connections inside the installed protection case. The design is highly effective and low cost in terms of the materials and techniques used, and it can be configured, customized, and subsequently transferred to the end-user. The average response time for processing a single task is approximately 0.5 s, which is sufficient to avoid accidents. All programs and applications in this smart wheelchair can operate offline without Internet access, and the proposed program works with high accuracy under conditions of external noise.
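The RMSD used above to compare planned and actual coordinate nodes is computed per axis (latitude or longitude). A minimal sketch, with our own function name:

```python
import math

def rmsd(planned, actual):
    """Root-mean-square deviation between planned and actual coordinate
    nodes along one axis (e.g. all latitude values of the route)."""
    assert len(planned) == len(actual), "node lists must align"
    n = len(planned)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(planned, actual)) / n)
```

Running this once on the latitude column and once on the longitude column of Table 8 would reproduce the two reported values (1.721 × 10^−5 and 1.743 × 10^−5).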
This study investigated the robustness of the voice recognition model by examining the percentage difference between true and false predictions. The experimental results exhibited a significantly high difference between the percentage values of the different categories, indicating a very low probability of wrong predictions; the difference between the true and false predictions was over 150% (Table 7). The second experiment evaluated the performance of indoor and outdoor navigation: the user controlled the chair via voice commands, and the RMSD was employed to quantify the navigation errors. In general, speech recognition modules on Android have become widely used in recent years, and there are many free or commercially licensed software packages on the market suitable for our proposed model, such as the Google Cloud Speech API, Kaldi, HTK, and CMUSphinx [37][38][39]. However, wheelchairs require further study in terms of statics, motion, and moment of inertia to make the system suitable for different users. In addition, the current voice recognition model does not implement a speaker identification algorithm; identifying the speaker could improve the safety of wheelchair users by accepting instructions only from the authorized person. Comparing our study with others in terms of efficacy, reliability, and cost, we believe that our design has overcome many complexities. For example, a recent study by Abdulghani et al. implemented and tested an adaptive neuro-fuzzy controller to track powered wheelchairs based on voice recognition. To achieve robust accuracy, that design requires a wireless network in which the wheelchair is a node.
Furthermore, that controller depends on real-time data obtained from obstacle avoidance sensors and a voice recognition classifier to function appropriately and efficiently [28]. A different study used an eye- and voice-controlled human-machine interface to drive a wheelchair; in this technique, the authors combined a voice-controlled mode with a web camera capturing real-time images to achieve congenial and reliable controller performance [22].

Conclusions

In this study, a low-cost and robust method for a voice-controlled wheelchair was designed and implemented using an Android smartphone app connected to the microcontroller via an offline Wi-Fi hotspot. The hardware consisted of an Android smartphone (Huawei Y9; CPU: octa-core, 4 × 2.2 GHz), DC electric motors, batteries, a relay module, a Raspberry Pi 4, and an emergency push button in case of an abnormal system response. The system controlled the wheelchair via a mobile app built with Flutter software. A built-in voice recognition model was developed in combination with the CNN model to train and classify five voice commands (yes, no, left, right, and stop). The experimental procedure used a total of 2000 utterances of five words produced by 10 native Arabic speakers. The maneuverability, accuracy, and performance of indoor and outdoor navigation were evaluated in the presence of various disturbances, and the normalized confusion matrix, accuracy, precision, recall, and F-score of all voice commands were calculated. Results from real experiments demonstrated that the accuracy of voice command recognition and wheelchair maneuvering was high. Moreover, the calculated RMSD between the planned and actual nodes during indoor/outdoor maneuvering was small.
Importantly, the implemented prototype has many benefits, including its simplicity, low cost, self-sufficiency, and safety. In addition, the system has an emergency push button to ensure the safety of the disabled individual and of the system itself.

Future Work

The system can be extended with GPS location technology so that the user can create their own path, i.e., build a manual map. Ultrasound sensors can be introduced for safety purposes: the system would then override and ignore the user's command whenever the chair approaches an obstacle that could lead to an accident. We also plan to investigate users' preference for a voice-controlled interface versus a brain-controlled interface. Moreover, a speaker identification algorithm can be added to the voice recognition model to ensure the safety of the disabled person by accepting commands only from a specific user.
Study of the lowest tensor and scalar resonances in the τ → πππν_τ decay

In this note we present a new parametrization of the hadronic current for the decay τ → πππν_τ derived from the chiral Lagrangian with explicit inclusion of resonances. We have included scalar, vector, and axial-vector resonances, and, for the first time, the lowest tensor resonance (f₂(1270)) is included as well. Both single- and double-resonance contributions to the hadronic form factors are taken into account. To satisfy the correct high-energy behaviour of the hadronic form factors, constraints on the numerical values of the vertex constants are obtained.

Introduction

Hadronic decay modes of the τ-lepton give information about the hadronization mechanism and resonance dynamics in the energy region where pQCD methods are not applicable. In recent years, substantial progress has been achieved in the simulation of the process τ → 3πν_τ. The progress [1] was related to a new parametrization of the hadronic current based on the Resonance Chiral Lagrangian (RChL) and to the recent availability of the unfolded distributions from the preliminary BaBar analysis [2] for all invariant hadronic masses in the three-prong mode. The lowest-energy scalar resonance was added phenomenologically and, as a result, the corresponding hadronic current does not reproduce the correct chiral low-energy behaviour, and the π⁰π⁰π⁻ and π⁻π⁻π⁺ amplitudes do not satisfy the isospin relation [3]. Comparison with the data has also shown a hint of the missing tensor resonance (f₂(1270)). The goal of this note is to outline a consistent model for the τ-lepton decays into three pions, based on RChL with scalar (J^PC = 0⁺⁺) and tensor (J^PC = 2⁺⁺) resonances, that fulfills the high-energy QCD and low-energy chiral limits for the hadronic form factors. A detailed description of the model and the calculation of the hadronic form factors will be presented in [4].

Three-pion hadronic current
Axial-vector form factors related to the scalar and tensor resonances

The most general Lorentz-invariant current for the decay τ⁻ → π⁻⁽⁰⁾(p₁) π⁻⁽⁰⁾(p₂) π⁻⁽⁺⁾(p₃) ν_τ is parametrized by the form factors F₁, F₂ and F_P; due to Bose symmetry the hadronic form factors are related by F₂(s₁, s₂, q²) = F₁(s₂, s₁, q²). The longitudinal form factor F_P is suppressed by m²_π/q² compared to F₁,₂ and in this note we neglect it. In Fig. 1 we show the three relevant diagrams that must be taken into account: (a) the direct production; (b) the intermediate π⁻ production; and (c) the double-resonance production through the intermediate a₁ axial-vector resonance. To calculate the corresponding diagrams we use the RChL approach [5] for the vector and axial-vector (A) resonances combined with the Lagrangian describing the interaction of a tensor (T) multiplet with pions [6]. Moreover, we add operators with two resonances, where for the axial-vector field A_{αβ} we apply the antisymmetric tensor representation [5], S is the scalar field, and the tensor multiplet is T_{μν} = (f₂)_{μν}/√2 · diag(1, 1, 0) (we assume ideal mixing in the tensor nonet and that the f₂(1270) resonance is pure uū + dd̄). The π⁰π⁰π⁻ and π⁻π⁻π⁺ amplitudes obey the isospin relation [3], which constrains the couplings. The three-pion form factor generated by the intermediate σ-resonance is built from the A → Sπ form factor together with the propagator of the σ-resonance and its decay into ππ. To include the σ-f₀(980) splitting and the non-zero widths of the resonances we follow [7], where φ_S is the scalar mixing angle; for the f₀ parameters we use the numerical values M_{f₀} = 980 MeV and φ_S = −8° [7]. As a first approach we also consider the Breit-Wigner function for the σ-propagator in our numerical study. Schematically, the form factor related to the intermediate tensor resonance state, Eq. (4), is written in terms of non-singular functions H_i(q², s₁, s₂).
We would like to stress that for q 2 = M 2 A and s 3 = M 2 f 2 our expression (4) reproduces the corresponding contribution of Eq. (A.3) of [8] and that in [9]. However, for an arbitrary off-shell momentum of the intermediate tensor resonance we have a more general momentum structure of the hadronic current, which also ensures the right low-energy behaviour and the transversality of the matrix element in the chiral limit. As a result it brings three additional functions H 2,3,4 (q 2 , s 1 , s 2 ) in (4) (see [4] for discussion). The hadronic form factors (2) and (4) have been implemented in the Monte Carlo Tauola [1]. To get the model parameters, the one-dimensional spectra dΓ/ds 1 , dΓ/ds 3 and dΓ/dq 2 with the hadronic form factors (2) and (4) in addition to [10] have been fitted to the preliminary π − π − π + BaBar data [2]. The results are presented in Fig. 2 (as an example we present the result for the Breit-Wigner σ-meson propagator). For the first approach we have fixed the tensor-resonance parameters to their PDG values. The difference between the data and the theoretical distributions is less than 5−7%, except for the low- and high-energy tails, where the statistics is low. The inclusion of the tensor-resonance contribution in the fit and the study of the fit stability and systematic uncertainties are in progress. Figure 2. The τ − → π − π − π + ν τ decay invariant mass distribution. The preliminary BaBar data [2] are presented by points and the line corresponds to the model.
Thermodynamic Characterization of Rhamnolipid, Triton X-165 and Ethanol as well as Their Mixture Behaviour at the Water-Air Interface Multi-component mixtures of surfactants, increasingly often including biosurfactants, are used in many industrial fields as well as in medicine and pharmacy. Thus, in our study mixtures of rhamnolipid (RL), ethanol (ET) and Triton X-165 (TX165) were applied. For these mixtures the surface tension of aqueous solutions with a constant concentration and composition of ET and RL as well as a variable concentration of TX165 was measured. Based on the obtained results and the literature data, thermodynamic analyses of the adsorption process of ET, RL, TX165, the binary mixtures ET + RL, ET + TX165 and RL + TX165 as well as the ternary mixtures RL + ET + TX165 at the water-air interface were made. This analysis allowed us to propose a new equation for calculation of the total ethanol concentration at the water-air interface using the Guggenheim-Adam adsorption isotherm. The constants in the Langmuir and Szyszkowski equations for each component of the studied mixtures as well as the composition of the mixed monolayer at the water-air interface were also successfully analysed, based on the contribution of particular surface-active compounds to the water surface tension reduction as well as on the Frumkin adsorption isotherm. Introduction The use of surfactants in various branches of industry as well as in everyday life is so important that it is difficult to imagine the modern economy without them. It should be remembered that classic surfactants are products of chemical synthesis [1]. Although they are not a direct threat to human life, they can cause allergies. Additionally, surfactants discharged in sewage can penetrate ground and surface waters, accumulating in the environment [2,3].
The accumulation of surfactants in large quantities and their low degradation become a real threat to fauna and flora. In addition, surfactant adsorption enables their penetration into cell membranes and causes pathological changes in living organisms. As a result, more and more attention is paid to the protection of the environment, and attempts are made, among others, to eliminate harmful compounds. For this reason, research has aimed at developing new compounds characterized by complete biodegradation in the environment. Moreover, obtaining them from renewable raw materials or waste products is of significant importance. These types of compounds include biosurfactants, which attract more and more interest [4][5][6][7][8]. Unfortunately, the high cost of their production is one of the important barriers to their widespread use in practice. One way to apply biosurfactants in practice, despite the high cost of their production, is to mix them with classic surfactants, such as Tritons, which still find practical application [9,10]. Among biosurfactants, rhamnolipid (RL) is of special importance, and among Tritons it is Triton X-165 (TX165). Rhamnolipid is characterized by large surface activity, low CMC and a number of biological activities such as antiviral and anticancer activity as well as prevention of biofilm formation [11][12][13][14][15][16][17][18][19][20]. In fact, rhamnolipid is a mixture of up to 26 different compounds [21]. Therefore one can find differing opinions on its adsorption and aggregation activity in the literature [22,23]. It should be noted that most of the available papers deal with monorhamnolipid. Besides the above-mentioned properties, rhamnolipids are characterized by antiviral activity against herpes simplex virus (HSV). They are effective in the treatment and alleviation of psoriasis and chronic wounds, including burns, and advantageously minimize scarring.
According to recent studies rhamnolipid can be applied in oncology [20]. In turn, Triton X-165, like other surfactants in this group, is neutral and non-toxic as well as compatible with anionic and cationic surfactants [24]. Compared to ionic surfactants, the nonionic Tritons are considered physiologically inert. For this reason, and because of their low CMC, they are considered among the safest drug-delivery compounds regardless of dilution in the human body [25]. Taking into account such characteristic functional properties of rhamnolipid and Triton X-165, their mutual influence on the adsorption and aggregation activity as well as the related composition of micelles and of the surface layer at the water-air interface was examined [26]. It was stated, among others, that the composition of the mixed monolayer at the water-air interface can be deduced from the surface tension isotherms of individual RL and TX165, and that there is a synergetic effect in the reduction of the water surface tension by the mixed monolayer and in micelle formation at a given composition of the RL and TX165 mixtures. A positive mutual influence of RL and ethanol (ET) on their adsorption properties is also commonly observed, which matters where binary mixtures prove insufficient in some practical applications. It should be added that ethanol has been known as a disinfectant for many years, and the coronavirus pandemic has revived that interest. Thus, the mixture of RL + TX165 with ET can be interesting not only from the theoretical point of view of the adsorption of multi-component mixtures at the water-air interface, but also for its practical applications. As a matter of fact, one can find in the literature many papers dealing with the adsorption of ethanol mixtures with various surfactants at the water-air interface [27][28][29][30].
However, there are no papers in which the adsorption at this interface of a mixture of a biosurfactant with a classical surfactant in the presence of ethanol is considered. The thermodynamic interpretation of the adsorption of surfactants and ethanol at the water-air interface in the whole range of their concentrations is difficult, among others, because the Gibbs surface excess concentration of ethanol is not equal to its total concentration in the mixed monolayer, and because both in the surface region and in the bulk phase ethanol and water must be treated as a mixture of solvents rather than as a solution. The differences in the interpretation of the surface behaviour of ethanol at the water-air interface, both in mixtures with surfactants and in their absence, may result from the above. Therefore the aim of our studies was the thermodynamic characterization of the behaviour of the RL + ET + TX165 mixtures at the water-air interface based on the surface tension data of the aqueous solutions of these mixtures at a constant RL + ET concentration, at different compositions and variable concentrations of TX165. Some Physicochemical Properties of Water, RL, ET and TX165 To understand the behaviour of the ternary mixture of RL, ET and TX165 at the water-air interface, some physicochemical properties of its components and of water must be known. The analysis of the bond length between O and H, the angle between the -OH bonds and the average distance between the water molecules indicates that a water molecule at the temperature of 293 K can be inscribed in a regular cube with the edge length of 3.11 Å. Thus, the contactable area of a water molecule with other molecules is 9.7 Å 2 . This value is close to the contact area determined by Groszek [31] based on the water vapour adsorption on the quartz surface, which is equal to 10 Å 2 . This means that about 16.6 × 10 −6 mole of water is needed to cover 1 m 2 of surface with its monolayer.
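The coverage figure quoted above follows from a few lines of arithmetic; a sketch, with Avogadro's number and the 10 Å² contactable area as the only inputs:

```python
# How many moles of water cover 1 m^2 if one molecule occupies ~10 A^2.
N_A = 6.022e23                       # Avogadro's number, 1/mol
area_per_molecule_A2 = 10.0          # contactable area per molecule, A^2
area_per_molecule_m2 = area_per_molecule_A2 * 1e-20  # 1 A^2 = 1e-20 m^2

molecules_per_m2 = 1.0 / area_per_molecule_m2
moles_per_m2 = molecules_per_m2 / N_A
print(f"{moles_per_m2:.3e} mol/m^2")  # ~1.66e-05 mol/m^2, i.e. 16.6e-6 mole
```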
The value of the water molecule contactable area equal to 10 Å 2 is very often used in the thermodynamic consideration of surfactant adsorption at the water-air interface [26,32]. This adsorption decreases the water surface tension. According to van Oss et al. [33][34][35] water is a bipolar liquid and its surface tension results from the Lifshitz-van der Waals and Lewis acid-base intermolecular interactions. The Lewis acid-base interactions lead to hydrogen bond formation between the water molecules. Thus, the water surface tension can be divided into the Lifshitz-van der Waals (LW) component and the acid-base (AB) one, which results from the electron-acceptor and electron-donor parameters. As a result of surfactant adsorption at the water-air interface, the surface tension of water is reduced, especially its AB component [26,32]. It was proved that the adhesion work of the aqueous solutions of many hydrocarbon surfactants to the PTFE surface does not depend on their concentration and is equal to the water adhesion work [36,37]. From the Young-Dupré equation and the Fowkes approach to the interfacial tension [38], it follows that in this case the LW component of the surface tension of the surfactant solutions and of water is the same and equal to 26.85 mN/m at 293 K (Table 1). This value is considerably higher than that determined from the water-n-alkane interfacial tension [38,39]. As a matter of fact, for water it was assumed that the electron-acceptor and electron-donor parameters of the acid-base component of its surface tension are the same [33][34][35]. It should be remembered that the surface tension parameters of other liquids and solids are consistent with this assumption. These parameters for the ethanol surface tension ( γ LV ) are not the same (Table 1). However, the LW component of the ethanol surface tension is close to that of water determined from the water-n-alkane interfacial tension [29,30,39]. Table 1.
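The Young-Dupré plus Fowkes step used above can be sketched numerically. The PTFE surface tension and the water contact angle below are illustrative literature-style assumptions, not values taken from this paper:

```python
import math

def adhesion_work(gamma_L, theta_deg):
    """Young-Dupre equation: W_a = gamma_L * (1 + cos(theta))."""
    return gamma_L * (1.0 + math.cos(math.radians(theta_deg)))

def lw_component(W_a, gamma_S_LW):
    """Fowkes rule for an apolar solid: W_a = 2*sqrt(gamma_S_LW * gamma_L_LW),
    solved for the liquid's LW component."""
    return (W_a / 2.0) ** 2 / gamma_S_LW

# Illustrative inputs (assumed, not from the paper):
gamma_water = 72.8   # mN/m, water surface tension at 293 K
theta_ptfe = 108.0   # deg, typical water contact angle on PTFE
gamma_ptfe = 20.2    # mN/m, PTFE surface tension (purely LW)

W_a = adhesion_work(gamma_water, theta_ptfe)
print(lw_component(W_a, gamma_ptfe))  # estimate of the LW component, mN/m
```

With these rough inputs the estimate lands in the high-20s to low-30s mN/m range, the same ballpark as the 26.85 mN/m figure quoted in the text.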
Components and parameters of the ET, TX165 and RL surface tension (γ LV ) at 293 K, the maximal concentration at the water-air interface (Γ max ), the limiting concentration at the water-air interface (Γ 0 ) and the limiting area occupied by one water, TX165, ET and RL molecule (A 0 ). The ET molecule volume calculated using the bond lengths and the angles between them as well as the average distance between molecules in the bulk phase at 293 K is close to 97 Å 3 and to the value obtained from the ET density, which is equal to 97.3 Å 3 . As follows from the calculations, the ethanol molecule can be put in a regular cube with the edge equal to 4.6 Å [30]. This points out that one ET molecule can replace two water molecules in the interface monolayer. Unlike for ET and water, the surface tension of RL and TX165 depends on the orientation of their molecules towards the air phase (Table 1). If their molecules are oriented with the hydrophobic group towards the air phase, then we treat the surface tension as the surface tension of the tail. However, at the orientation of the RL and TX165 molecules with the hydrophilic group towards the air, this tension is called the surface tension of the head (Table 1). The tail surface tension of TX165 and RL is close to the LW component of ET and water determined from the water-n-alkane interfacial tension. However, these values are considerably smaller than LW for water determined from the contact angle of water on hydrophobic solids [40]. Based on the analysis of the chemical bond lengths, the angles between them and the average distance between the molecules at 293 K, it appears that the RL and TX165 molecules cannot be inscribed in a single cube; separate cuboids must be used for the head and the tail of each molecule [26]. It follows from this analysis that the contactable area of the RL and TX165 molecules at their perpendicular orientation towards the interface is equal to 69.09 Å 2 and 35.7 Å 2 , respectively.
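The 97.3 Å³ figure from the ET density can be reproduced directly; a sketch using standard handbook values for the molar mass and density of ethanol near 293 K:

```python
N_A = 6.022e23      # Avogadro's number, 1/mol
M_ET = 46.07        # g/mol, molar mass of ethanol
rho_ET = 0.789      # g/cm^3, density of ethanol near 293 K (handbook value)

# Volume per molecule in cm^3, then in A^3 (1 cm^3 = 1e24 A^3)
v_cm3 = M_ET / (rho_ET * N_A)
v_A3 = v_cm3 * 1e24
edge_A = v_A3 ** (1.0 / 3.0)   # edge of the equivalent cube

print(round(v_A3, 1), round(edge_A, 2))  # ~97.0 A^3 and edge ~4.6 A
```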
At the parallel orientation of the RL molecule towards the air phase the contactable area of its tail is equal to 87.3 Å 2 and that of the head to 72.1 Å 2 [26]. In the case of TX165 at the parallel orientation of its molecule at the interface, the contactable area of the tail is equal to 52.12 Å 2 and that of the head to 101.4 Å 2 . Taking into account the contactable areas of water and ET, it can be stated that in the bulk phase one molecule of ET can be surrounded by 12 molecules of water. The tail of TX165 can be surrounded by about 20 water molecules, and the head can be bound by strong hydrogen bonds with about 40 molecules and by weak ones with another 40 water molecules. In the case of RL its tail can be surrounded by about 30 molecules of water and the head by 28. The number of water molecules in contact with the tail and the head of a surfactant determines its tendency to adsorb at the water-air interface. In turn, the monolayer formed at the water-air interface reduces the water surface tension. In fact, in the case of ET the water surface tension is reduced to that of ET because ET is infinitely miscible with water (Figure S1). One would expect that the minimum surface tension of the RL and TX165 aqueous solutions should be close to the surface tension of their tails. In the case of RL, the minimum surface tension of its aqueous solution is not much higher than LW for water obtained from the contact angle on the surface of hydrophobic solids. However, in the case of TX165 there is a great difference between the minimal surface tension of its solution and LW for water (Figure S1c) (Table 1). This may be due to the fact that the number of water molecules surrounding the head of the TX165 molecule is much greater than that of water molecules surrounding its tail.
The surface tension isotherms of the aqueous solutions of ET, RL and TX165 can be described by an exponential function of the second order, which has the form: γ = y 0 + A 1 exp(−C/t 1 ) + A 2 exp(−C/t 2 ), where y 0 , A 1 , A 2 , t 1 and t 2 are constants and C is the concentration. These constants are probably connected with the particular intermolecular interactions between the water molecules and the surface-active ones. It was stated earlier that the y 0 constant is related to the LW intermolecular interactions and the other constants to the acid-base ones. The possibility of describing the surface tension isotherm is very useful for determination of the Gibbs surface excess concentration (Γ). The surface tension isotherms of the aqueous solutions of RL, ET and TX165 can also be described by the Szyszkowski equation, which has the form [1]: γ 0 − γ = RT Γ max ln(C/a + 1), where γ 0 is the solvent surface tension, R is the gas constant, T is the absolute temperature, Γ max is the maximal Gibbs surface excess concentration and a is a constant. This equation can be applied for determination of the constant a related to the maximal Gibbs surface excess concentration. Additionally, the surface tension isotherm of the aqueous solution of ET can be described by the Connors equation [30,41], which expresses the solution surface tension through the surface tension of the alcohol (γ S ), two empirical constants α and β and the mole fraction of ET in the bulk phase (X b ET ). In the case of the ET aqueous solution, despite the possibility of describing the surface tension isotherm by a well-defined mathematical function, it is difficult to determine the real Gibbs surface excess concentration at the water-air interface, and even more so the total concentration of ET in the surface layer. For this reason one can find conflicting opinions about this issue in the literature [30]. ET is a surface-active agent which, as mentioned above, is infinitely miscible with water.
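Combining the Szyszkowski form γ0 − γ = RTΓmax ln(C/a + 1) with the Gibbs adsorption isotherm Γ = −(C/RT)(dγ/dC) yields the Langmuir-type isotherm Γ = Γmax C/(C + a). A numerical sketch of this consistency check; all parameter values below are illustrative assumptions, not fitted values from this paper:

```python
import math

R, T = 8.314, 293.0          # J/(mol K), K
G_MAX = 5.0e-6               # mol/m^2, illustrative maximal surface excess
A = 1.0e-4                   # mol/dm^3, illustrative Szyszkowski constant
GAMMA0 = 0.0728              # N/m, water surface tension at 293 K

def szyszkowski(C):
    """Surface tension from the Szyszkowski equation."""
    return GAMMA0 - R * T * G_MAX * math.log(C / A + 1.0)

def gibbs_excess(C, dC=1e-9):
    """Gibbs isotherm G = -(C/RT)*(dgamma/dC) via a central difference."""
    dgdC = (szyszkowski(C + dC) - szyszkowski(C - dC)) / (2.0 * dC)
    return -(C / (R * T)) * dgdC

C = 5.0e-4
# The two expressions should agree (Langmuir form of the isotherm):
print(gibbs_excess(C), G_MAX * C / (C + A))
```

Note that the concentration units cancel in C·(dγ/dC), so the check is insensitive to whether C is expressed per dm³ or per m³.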
Similarly to classical surfactants, it forms aggregates at a certain concentration in water; however, they cannot be treated as typical micelles [29,30]. Moreover, unlike for typical surfactants, at concentrations above the critical aggregation concentration (CAC) a decrease in the surface tension of the solution is still observed [30]. The number of ET moles in 1 dm 3 also changes as a function of its concentration. Depending on the ET concentration, it is treated in practice as a co-surfactant and/or a co-solvent [1,42]. This means that the aqueous solution of ET must be treated thermodynamically in a different way from the aqueous solutions of typical surfactants. In the case of a non-ideal solution in which the solute concentration is small, the chemical potential (µ) of a component of the solution is defined asymmetrically. For the solute it can be written: µ = µ* + RTln(X b f*). In the case of a mixture of solvents the chemical potential is defined symmetrically and can be expressed as: µ = µ 0 + RTln(X b f 0 ), where µ* and µ 0 are the standard chemical potentials, which depend only on temperature and pressure, RTln(X b f*) and RTln(X b f 0 ) are the chemical potentials of mixing, X b f* = a* and X b f 0 = a 0 are the activities, f* and f 0 are the activity coefficients and X b is the mole fraction of the solute. It is known that in the concentration range from zero to 0.01 mol/dm 3 most surfactants are present in the bulk phase in the monomeric form, which determines their concentration in the bulk phase [1]. In this range of surfactant concentrations it can be assumed with a small error that X b = C/ω (C is the concentration of the surfactant and ω is the number of water moles in 1 dm 3 ) and f* ≅ 1. Indeed, in the considered concentration range of the surface-active agent, ET fulfills such conditions if its chemical potential is defined asymmetrically. In such a case the Gibbs surface excess concentration can be written as Γ = −(C/nRT)(dγ/dC), where n is a constant which depends on the type of the surface-active agent and for ET is equal to 1.
The studies by Chodzińska et al. [30] proved that in the ET concentration range from 0 to 0.01 mol/dm 3 the Γ values calculated using C and X b do not differ much. However, the difference increases with increasing ET concentration. They concluded that the most reliable values of the Gibbs surface excess concentration of ET at the solution-air interface can be obtained from the Gibbs equation written in terms of the ET activity a 0 [26,32]. Indeed, in the case of ET the Γ values are not equal to its total concentration in the surface layer at the solution-air interface. Moreover, they do not tend to zero as a 0 tends to 1. To determine the total concentration of ET at the solution-air interface (Γ tot ), the values of Γ should be recalculated as the Guggenheim-Adam ones (Γ GA ). For this purpose the equation from [43] involving the average molar volume of the ET aqueous solution (V S ) was applied, with V S = X b W V W + X b A V A , where V W and V A are the molar volumes of water and ET, respectively. Taking into account that the Γ GA values for ET go to zero as the ET molar fraction approaches unity, it was possible to determine Γ tot using the expression Γ tot = Γ GA + C × h [30], where h is the ET molecule length. It appeared that the Γ tot values calculated in this way are not linear in C above the concentration corresponding to the maximum of Γ GA (Figure 1a). It is possible that, due to the fact that not the whole ET molecule is in the air phase, the h value for ET, equal to 4.6 Å, should be slightly smaller. The problem of the Γ tot determination based on the Γ GA values can be solved in another way. Taking into account the limiting contactable areas of the water (A 0 W ) and ET (A 0 A ) molecules as well as the numbers of water (ω) and ET (n A ) moles in 1 dm 3 , it is possible to calculate the surface area occupied by the water and ET molecules covering the surface with a monolayer.
Next, it is possible to determine the two-dimensional concentration of ET in the monolayer (Γ s ) corresponding to its concentration in the bulk phase (Equation (11)). Introducing this expression into Equation (10) instead of C × h one obtains Equation (12). As follows from Figure 1a, the ET total concentration at the solution-air interface determined from Equation (12) is quite realistic. This equation can also be applied for the aqueous solutions of surfactants if Γ is used instead of Γ GA (Equation (13)), where the index S refers to the surfactants. The values of Γ tot calculated for RL and TX165 are not much higher than those of Γ (Figure 1b,c), which confirms that the Gibbs surface excess concentration of the surfactants is practically equal to the total one. The total concentration of the surfactants, and also of ET, can be determined using the Frumkin equation (Equation (14)), in which γ W , the water surface tension, appears. It appeared that the isotherm of the ethanol concentration in the surface layer calculated from Equation (14) is similar to that determined based on Equation (12) (Figure 1a). There is also agreement between the isotherms of the RL and ET concentrations in the surface region determined from Equations (13) and (14) (Figure 1b,c). For RL and TX165 there is also agreement between the Γ values calculated from Equations (6) and (14).
Figure 1. A plot of the ET Gibbs surface excess concentration (Γ) (curve 1 corresponds to the values calculated from Equation (8)) as well as a plot of the ET total concentration (Γ tot ) (curves 2-4 correspond to the values calculated from Equations (10), (12) and (14), respectively) vs. its concentration (C ET ) (a), and a plot of the RL (b) and TX165 (c) Gibbs surface excess concentration (Γ) (curve 1 corresponds to the values calculated from Equation (6)) as well as its total concentration (Γ tot ) (curves 2 and 3 correspond to the values calculated from Equations (12) and (14)) vs. the logarithm of the surfactant concentration (logC). Knowing the total two-dimensional concentrations of ET, RL and TX165, it is possible to determine the fraction of the surface occupied by their molecules (X S ) (Equation (15)). In fact, X S differs from the surfactant molar fraction X S M = Γ S /(Γ W + Γ S ) (Equation (16)). The fractions X S and X S M are applied for determination of the chemical potential in the surface region. In the Langmuir adsorption isotherm equation the value of X S is associated with the constant a [1]. Using the X S values the chemical potential in the surface region can be defined (Equation (17)). In the equilibrium state the chemical potential of a given compound in the surface region is equal to that in the bulk phase. Based on Equations (4) and (17) one can obtain Equation (18), where ∆G 0 ads is the standard Gibbs free energy of adsorption. As mentioned above, at small surfactant concentrations Equation (18) assumes the form of Equation (19), where C/X S = a (a is the adsorption constant). In order to examine in which concentration range of ET, RL and TX165 in their aqueous solutions the constant a has the same value, the C/X S values were calculated from Equation (15) using their total concentrations determined from Equation (12) as well as from the Frumkin equation (Equation (14)).
In the case of TX165 the contactable area of its molecules at the perpendicular and parallel orientations was calculated. The values of a 0 /X S were also determined for ET. It proved that for ET the values of C/X S and a 0 /X S are not constant and depend on C (Figure S2), regardless of whether they were calculated using the X S values determined based on the Frumkin isotherm or on the equation proposed by us (Equation (12)). This confirms that probably only in the range of ET concentrations from 0 to 0.01 mol/dm 3 can the values of C/X S be constant. Unfortunately, in this range of ET concentrations it is difficult to measure reliable values of the surface tension. It should be mentioned that above C = 1 mol/dm 3 the changes of C/X S and a 0 /X S as a function of C are almost linear. From these dependences one can deduce that f 0 is not equal to unity and increases with increasing C. It is interesting that the ET concentration in the surface region determined from the Frumkin equation (Equation (14)), in the range of C in the bulk phase from zero to the value corresponding to the maximal Gibbs surface excess concentration at the solution-air interface, fulfils the linear form of the Langmuir equation [1]: C/Γ = C/Γ max + a/Γ max (Equation (20)). The value of the constant a determined from Equation (20) is similar to that of C/X S at low ET concentrations in the aqueous solution. As a matter of fact, in such a case the ∆G 0 ads of ET calculated from Equation (19) based on C/X S and a is similar (Table S1). It should be mentioned that the a 0 /X S values are close to those determined from Equation (2) in the whole range of ET concentrations. For the aqueous solutions of RL and TX165 the values of Γ obtained both from the Frumkin equation and from our equation fulfil Equation (20). The constants a obtained from Equation (20) are similar to C/X S , which is constant in the range of TX165 and RL concentrations in which they are present in the bulk phase in the monomeric form (Figure S2).
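The linear Langmuir analysis described above, plotting C/Γ against C so that the slope gives 1/Γmax and the intercept gives a/Γmax, can be sketched on synthetic data; all numbers below are illustrative assumptions:

```python
# Synthetic Langmuir-type adsorption data: G = G_max * C / (C + a).
G_MAX_TRUE, A_TRUE = 4.0e-6, 2.0e-4      # mol/m^2 and mol/dm^3 (illustrative)
C = [1e-5, 5e-5, 1e-4, 5e-4, 1e-3, 5e-3]
G = [G_MAX_TRUE * c / (c + A_TRUE) for c in C]

# Linear form: C/G = C/G_max + a/G_max -> least-squares line through (C, C/G).
x, y = C, [c / g for c, g in zip(C, G)]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

G_max_fit = 1.0 / slope          # maximal surface excess concentration
a_fit = intercept * G_max_fit    # Langmuir (Szyszkowski-type) constant a
print(G_max_fit, a_fit)          # recovers the generating parameters
```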
In the case of TX165, X S was calculated using the limiting contactable area of the TX165 molecule at the perpendicular and parallel orientations of its tail. From the calculation of X S using the contactable area of TX165 at the perpendicular orientation it appeared that the maximal value of X S is smaller than 0.5 and the area occupied by the water molecules is larger than the contactable area of the TX165 molecule tail. There are different possibilities of TX165 molecule orientation at the water-air interface. As mentioned above, the hydrophilic oxyethylene groups are strongly hydrated. Moreover, it is possible that H 3 O + ions are joined with the oxyethylene group [44,45]. For this reason the long hydrophilic part of the TX165 molecule cannot be oriented perpendicularly towards the water-air interface at the perpendicular orientation of the tail. This way of orientation increases the area occupied by one TX165 molecule at the interface. There is also another way of TX165 molecule orientation: the tail is oriented parallel to the water-air interface and the head perpendicularly. This way of TX165 molecule orientation also increases its contactable area. However, the two ways of TX165 molecule orientation give almost the same values of C/X S , which are close to that of a determined from Equation (20) as well as from the Szyszkowski equation (Table S1). The same dependences as for TX165 take place in the case of RL (Figure S2) (Table S1). However, the maximal fraction of the surface occupied by the RL molecules is close to that for its perpendicular orientation. Indeed, the values of ∆G 0 ads calculated from Equation (19) using the values of the constant a determined from the linear form of the Langmuir equation, from the Szyszkowski equation and from C/X S are similar for a given surfactant.
Surface Behaviour of ET + RL + TX165 Mixtures The behaviour of the RL + ET + TX165 mixture at the solution-air interface was considered based on the surface tension measurements of the aqueous solutions of this mixture at a constant concentration of the RL mixture with ET and a variable TX165 concentration from zero to above its CMC. To better understand the behaviour of this mixture at the solution-air interface, the surface tension isotherms of the binary mixtures of the components present in the ternary one were also considered. For these considerations the surface tension isotherms of the aqueous solutions of the TX165 mixtures with RL as well as of RL with ET were taken from the literature [26,46]. The surface tension isotherms of the aqueous solutions of the TX165 mixtures with ET were determined by surface tension measurements as a function of the TX165 concentration at a constant ET concentration (Figures 2 and S3). The constant concentration of the RL and ET mixture applied for the measurements of the surface tension of the RL + ET + TX165 mixture solutions at variable TX165 concentration was selected based on the individual behaviour of RL and ET at the solution-air interface. The chosen ET concentrations were equal to 1.07, 3.74, 6.69 and 10.27 mol/dm 3 . These concentrations in the bulk phase were close to the ET concentration corresponding to its unsaturated monolayer at the solution-air interface (C unsat ET ), to the maximum Gibbs surface excess concentration (C max ET ), to the critical aggregation concentration (CAC) and to a value higher than the CAC, respectively. In the case of RL the constant concentrations were equal to 0.01 (1.98 × 10 −8 mol/dm 3 ), 0.5 (9.92 × 10 −7 mol/dm 3 ), 5 (9.92 × 10 −6 mol/dm 3 ) and 20 mg/dm 3 (3.96 × 10 −5 mol/dm 3 ).
The RL concentrations in the bulk phase equal to 0.01, 0.5 and 5 mg/dm³ corresponded to the unsaturated monolayer at the water-air interface (C_RL^unsat), to the first concentration at which the saturated RL layer was formed (C_RL^f,sat) and to a concentration smaller than the CMC but larger than C_RL^f,sat, respectively. The RL concentration equal to 20 mg/dm³ is close to its CMC [47,48]. As mentioned above, the surface tension of the TX165 and RL tails does not differ much from the LW component of the ET surface tension. However, their ability to reduce the water surface tension by the formed adsorption layer is different. The adsorption tendency of ET, RL and TX165 depends, among others, on the number of water molecules that can contact them and on the energy effect of this contact. The ET molecules in the aqueous environment can contact each other as well as water molecules. Due to the small difference between the surfaces of the water and ET molecules, the energy effect of their contact is not great, as evidenced by the small absolute value of the Gibbs free energy of adsorption. It is different in the case of the behaviour of water molecules with those of RL and TX165. The changes of energy of the aqueous solution of RL and TX165 result from the orientation of water molecules relative to the tail and head of the surfactant molecules. The orientation of the water molecules around the heads of the RL and TX165 molecules causes a decrease in the solution energy, which depends on the number and strength of hydrogen bonds. The number of water molecules hydrogen-bonded to the head of the RL molecule is much smaller than the number of water molecules connected to the TX165 head, as mentioned earlier. The number of water molecules that can contact the tail of the RL molecule is also much smaller than in the case of the TX165 molecule.
However, as mentioned above, the ratio of water molecules surrounding the head of the TX165 molecule to those surrounding the tail is much higher than in the case of RL. Probably for this reason, the effect of the water surface tension reduction by the adsorption of RL molecules at the water-air interface is greater than that of TX165. Moreover, RL is a weak organic acid, and repulsive electrostatic interactions can play a role in the adsorption of its molecules at the water-air interface. However, due to the strong bond between the oxyethylene group and H3O+, the head of the TX165 molecule can also become ionic. In this case weak repulsive intermolecular interactions can occur. In the case of the RL and TX165 mixture, synergism in the reduction of the water surface tension was present, but it was smaller than expected [26].
The mutual influence of the RL + ET and TX165 + ET mixtures on the reduction of the water surface tension is different from that of the mixture of RL with TX165 (Figures 2 and S3a,b). If ET is treated as a co-solvent, then there is formed an aqueous-ET solution of RL, TX165 or the RL + TX165 mixture. In such a case the RL or TX165 molecules adsorb at the (water + ET)-air interface, whereas the mixture of RL with TX165 adsorbs at the water-air one. The tendency of RL and TX165 molecules to adsorb at the (water + ET)-air interface depends on the competition of water and ET molecules in the bulk phase for contact with the tail and head of the RL or TX165 molecules [26]. If the ET molecules substitute for the water molecules surrounding only the head of the RL or TX165 molecules, then the tendency of RL or TX165 to adsorb at the interface should increase. If the water molecules surrounding the tails of RL or TX165 are displaced by ET molecules, it should decrease. Due to the fact that more water molecules surround the head of the TX165 molecule than that of RL, ET has a greater effect on the adsorption of TX165 than on that of RL. This conclusion is confirmed by the surface tension isotherms of the mixtures of RL with ET and of TX165 with ET at the ET concentration equal to C_ET^unsat (Figures 2 and S3b). However, at ET concentrations higher than the CAC [30], the surface tension of the aqueous solutions of the RL + ET as well as ET + TX165 mixtures is close to that of the aqueous ET solution (Figures 2, S1a and S3b). In this case, it is difficult to assess the mutual influence of ET and RL, as well as of ET and TX165, on the water surface tension reduction. However, the adsorption of RL and TX165 at the (water + ET)-air interface is not excluded because the surface tension of the RL and TX165 tails is similar to that of ET [26,30].
In the case of the aqueous solutions of the RL + ET + TX165 mixture with the constant concentrations of RL and ET equal to C_RL^unsat and C_ET^unsat, respectively, and a variable TX165 concentration, ET has a greater effect on the water surface tension reduction than RL (Figures 3-6). For this arrangement, the minimum surface tension is smaller than that of individual TX165 [49]. The surface tension isotherms for this system can be described not only by the second-order exponential function but also by the Szyszkowski equation (Figures 3-6).
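As a rough illustration of the Szyszkowski description mentioned above, the sketch below evaluates the equation in its common form γ = γ_w − RTΓ∞ ln(1 + C/a). The parameter values (Γ∞, a) are purely illustrative placeholders, not the fitted values from the paper.

```python
import numpy as np

# Sketch of the Szyszkowski isotherm, gamma = gamma_w - R*T*Gamma_inf*ln(1 + C/a).
# Parameter values below are illustrative, not fitted ones from the paper.
R, T = 8.314, 293.0          # gas constant (J mol-1 K-1), temperature from the text (K)
GAMMA_WATER = 72.8e-3        # surface tension of water at 293 K (N/m)

def szyszkowski(c, gamma_inf, a):
    """Surface tension (N/m) vs. surfactant concentration c (mol/dm3).
    gamma_inf: limiting surface excess (mol/m2); a: adsorption constant (mol/dm3)."""
    return GAMMA_WATER - R * T * gamma_inf * np.log(1.0 + c / a)

c = np.logspace(-7, -3, 50)                     # TX165-like concentration range
gamma = szyszkowski(c, gamma_inf=3e-6, a=2e-5)  # hypothetical parameters
```

The curve starts at the water surface tension at C = 0 and decreases monotonically, which is the qualitative shape of the isotherms described in the text.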
Points 1-4 correspond to the measured values; curves 1′-4′ and curves 1′′, 2′′ correspond to the values calculated from Equations (1) and (2), respectively. With the increase of the ET concentration to or above the CAC and of RL to a concentration close to its CMC, the surface tension of the RL + ET + TX165 aqueous solution does not change much as a function of the TX165 concentration, and its values do not differ much from the surface tension of the individual ET aqueous solution (Figures 3-6 and S1a) [30]. However, the fact that the values of the surface tension of the aqueous solutions of the RL + ET + TX165 mixture at the constant ET concentration equal to 10.27 mol/dm³ are similar to the surface tension of an individual ET aqueous solution does not prove that there is no adsorption of RL and TX165 at the solution-air interface. As mentioned above, the surface tension of the RL and TX165 tails is close to that of ET. Therefore, it is difficult to determine directly the lack of adsorption of RL and TX165 at a high ET concentration based only on the surface tension. Our previous studies [26,32] proved that the composition of the adsorption monolayer can, at the first approximation, be predicted from the surface tension isotherms of the aqueous solutions of the individual mixture components. Thus, it is possible to explain the presence in the adsorption monolayer of not only ET molecules but also RL and TX165 at an ET concentration at which the surface tension of the mixture solution is close to that of ET itself. According to this suggestion, the relative molar fractions of the particular components of the mixture, X_R, can be expressed by X_R^ET = π_ET/(π_ET + π_RL + π_TX165), X_R^RL = π_RL/(π_ET + π_RL + π_TX165) and X_R^TX165 = π_TX165/(π_ET + π_RL + π_TX165), respectively.
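The relative molar fractions defined above can be computed directly from the surface pressure contributions; the π values in the example below are hypothetical placeholders, not measured ones.

```python
# Relative molar fractions of the mixed monolayer from surface pressure
# contributions, as defined in the text: X_R(i) = pi_i / (pi_ET + pi_RL + pi_TX165).

def relative_fractions(pi_et, pi_rl, pi_tx165):
    """Return the relative molar fraction of each component in the monolayer."""
    total = pi_et + pi_rl + pi_tx165
    return {"ET": pi_et / total, "RL": pi_rl / total, "TX165": pi_tx165 / total}

# Illustrative surface pressures in mN/m (hypothetical values):
x_r = relative_fractions(pi_et=20.0, pi_rl=8.0, pi_tx165=12.0)
```

By construction the three fractions sum to unity, so they can be read as the relative composition of the mixed adsorption layer.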
Based on the relative molar fractions of ET, RL and TX165 in the mixed monolayer, their contribution to the water surface tension reduction can be calculated from Equation (21). Taking into account the contribution of the particular components to the reduction of the water surface tension, it was possible to determine the concentration of all components of the binary and ternary mixtures in the mixed surface layer using the Frumkin equation (Equation (14)) (Figures S4a-S10d). In the case of TX165 it was also possible to calculate its surface excess concentration from the Gibbs isotherm equation (Equation (6)). It appeared that the Gibbs surface excess concentration of TX165, in the systems in which it could be determined, is close to that calculated from the Frumkin equation. It follows that Equation (21) is useful for the determination of the concentration of all components of the binary and ternary mixtures at the solution-air interface. The calculations indicate that the same values of the surface tension of the aqueous solution of the binary and ternary mixtures as that of the aqueous solution of individual ET do not prove the absence of RL and TX165 in the surface region. The sum of the Frumkin adsorption isotherms of the binary and ternary mixtures suggests that the packing of the mixed monolayers is greater than that of the single monolayers of the particular components of the mixture, or that TX165, RL or the RL + TX165 mixture is adsorbed not at the water-air but at the (water + ET)-air interface (Figures S4-S10). From the Frumkin isotherms of the surface concentration of the particular components of the mixed monolayer it was also possible to determine the adsorption constants for all studied systems, as well as the standard Gibbs free energy of adsorption, using Equations (15) and (19). Such calculations were made both for the binary and ternary systems (Figures S11-S14).
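The Gibbs-isotherm route to the TX165 surface excess mentioned above (the paper's Equation (6), not reproduced in this excerpt) amounts to differentiating γ with respect to ln C, Γ = −(1/RT) dγ/d ln C for a nonionic surfactant. A numerical sketch on a synthetic Szyszkowski isotherm, with illustrative parameters:

```python
import numpy as np

# Numerical Gibbs surface excess, Gamma = -(1/RT) * d(gamma)/d(ln C), for a
# nonionic surfactant. The isotherm below is a synthetic Szyszkowski curve
# with illustrative parameters, used only to check the procedure against
# its analytic Langmuir-type result Gamma = GAMMA_INF * C / (A + C).
R, T = 8.314, 293.0
GAMMA_INF, A = 3e-6, 2e-5              # hypothetical values (mol/m2, mol/dm3)

c = np.logspace(-8, -2, 400)           # log-uniform concentration grid
gamma = 72.8e-3 - R * T * GAMMA_INF * np.log(1.0 + c / A)

def gibbs_surface_excess(c, gamma):
    """Surface excess Gamma(C) by differentiating gamma with respect to ln C."""
    return -np.gradient(gamma, np.log(c)) / (R * T)

excess = gibbs_surface_excess(c, gamma)
# At high concentration the numerical excess should plateau near GAMMA_INF.
```

This illustrates the statement in the text that, where it could be evaluated, the Gibbs surface excess agrees with the isotherm-based (Frumkin) value: for a Szyszkowski isotherm the two descriptions are analytically consistent.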
The adsorption constants were compared to those calculated from the linear Langmuir equation as well as from the Szyszkowski equation (Figures S11 and S12). The C/X^S values for TX165, both in the binary and ternary mixtures, are constant only in the range of its low concentration and are almost the same independently of the constant concentrations of ET and RL (Figures S11a and S12a). However, the C/X^S values for RL and ET practically do not depend on the TX165 concentration but depend to a large extent on the values of their constant concentrations and, in the case of the ET and RL mixture, also on its composition (Figures S11b,c and S12b,c). The values of C/X^S for TX165, RL and ET differ a little from the adsorption constants obtained from the Szyszkowski and linear Langmuir equations; in the case of TX165 this holds only in the range of its low concentration. However, it was not possible to determine the adsorption constant for all studied systems using the linear Langmuir and Szyszkowski equations. Moreover, it should be emphasized that in some cases, both for the binary and ternary mixtures, the total values of X^S are greater than unity. This confirms the above-mentioned conclusion that the layer of RL, TX165 or the RL + TX165 mixture can be formed at the (water + ET mixed solvent)-air interface. Thus, the surface region can be treated as the sum of two monolayers, the ET one and the mixed RL + TX165 one. Such behaviour of ET, RL and TX165 explains why the sum of the surface fractions occupied by the particular components of the ternary system is greater than unity. Taking into account the adsorption constants, the standard Gibbs free energy of adsorption of ET, RL and TX165 was calculated from Equation (19) (Figures S13 and S14) (Table S1).
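The free-energy calculation just mentioned can be sketched as follows. Since the paper's Equation (19) is not reproduced in this excerpt, the sketch uses one common convention, ΔG⁰_ads = RT ln(a/ω_w) with ω_w the molarity of water, which may differ from the paper's exact form; the adsorption constant value is hypothetical.

```python
import math

# Sketch only: Equation (19) of the paper is not reproduced here, so a commonly
# used convention is shown instead, dG0_ads = R*T*ln(a / omega_w), where a is
# the adsorption constant (mol/dm3) and omega_w the molarity of water.
R, T = 8.314, 293.0
WATER_MOLARITY = 55.5  # mol/dm3

def delta_g_ads(a):
    """Standard Gibbs free energy of adsorption (J/mol) from the constant a."""
    return R * T * math.log(a / WATER_MOLARITY)

dg_tx165 = delta_g_ads(2e-5)   # hypothetical adsorption constant for TX165
```

Under this convention a smaller adsorption constant gives a more negative ΔG⁰_ads, i.e. a stronger tendency to adsorb, which matches the qualitative discussion of the adsorption tendencies in the text.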
As follows from the calculations, both RL and ET influence the tendency of TX165 to adsorb only to a small extent, and the absolute value of ΔG⁰_ads both for the binary and ternary systems is close to that of individual TX165 at its low concentration (Figures S13a and S14a) (Table S1). In the case of RL and ET, the ΔG⁰_ads values depend on their mutual individual constant concentrations as well as on that of TX165 (Figures S13b,c and S14b,c).

For the surface tension measurements there were prepared four series of the aqueous solutions of the TX165 + ET mixtures and sixteen series of the aqueous solutions of the RL + ET + TX165 mixture. The series of the aqueous solutions of the TX165 + ET mixtures had a constant ET concentration equal to 1.07, 3.74, 6.69 or 10.27 mol/dm³ and a TX165 concentration varied from 0 to 4 × 10⁻³ mol/dm³. The series of the aqueous solutions of the RL + ET + TX165 mixture included a constant concentration of the ET + RL sum and a variable TX165 concentration, as mentioned above. The constant sum concentrations of ET + RL were prepared from all possible combinations of ET at the concentrations equal to 1.07, 3.74, 6.69 and 10.27 mol/dm³ and RL at the concentrations equal to 0.01 (1.98 × 10⁻⁸ mol/dm³), 0.5 (9.92 × 10⁻⁷ mol/dm³), 5 (9.92 × 10⁻⁶ mol/dm³) and 20 mg/dm³ (3.96 × 10⁻⁵ mol/dm³), respectively.

Methods

The surface tension (γ_LV) of the aqueous solution of the ET + TX165 mixture at a constant ET concentration and variable TX165 concentration, as well as of the aqueous solution of the RL + ET + TX165 mixture at a constant concentration of the RL + ET mixture and variable TX165 concentration, was measured by the Krüss K9 tensiometer (Krüss, Hamburg, Germany) according to the platinum ring detachment method (du Noüy's method) at 293 K. Before the surface tension measurements of the studied aqueous solutions, the tensiometer was calibrated based on measurements of the surface tension of water and methanol.
For each TX165 concentration in each series of the solutions, the surface tension measurements were repeated at least ten times. The standard deviation was ±0.1 mN/m, and the uncertainty of the surface tension measurements was in the range from 0.3% to 0.7%.

Conclusions

Based on the obtained results and their thermodynamic analysis, a number of conclusions can be drawn. The surface tension isotherm of the aqueous solution of each particular component of the ET + RL + TX165 mixture can be described by an exponential function of the second order. However, taking into account the exponential function and the Gibbs surface excess isotherm, it is impossible to obtain the real surface excess concentration in the whole range of concentrations in the bulk phase. More realistic seems to be the Guggenheim-Adam isotherm of the excess concentration of ET at the water-air interface in the whole range of its concentrations in the bulk phase. We have proposed a simple equation for the calculation of the total ET concentration at the water-air interface, taking into account the isotherm of the excess ET concentration calculated from the Guggenheim-Adam equation. The total concentration of ET calculated by our equation is in accordance with that obtained from the Frumkin equation. On the basis of our equation it was proved that the Gibbs surface excess concentration of RL and ET at the water-air interface is practically equal to the total one. The total concentration allows one to determine the fraction of the surface area occupied by ET, RL and TX165 at the water-air interface in their individual solutions as well as in the solutions of the binary and ternary mixtures of these compounds. The ratio of the mole fraction of ET, RL and TX165 to the fraction of the surface area occupied by them is close to the adsorption constant determined from the linear Langmuir and Szyszkowski equations in a certain range of ET, RL and TX165 concentrations.
Based on the surface fractions occupied by ET, RL and TX165, it was deduced that RL and TX165 can be adsorbed at the mixed solvent (water + ET)-air interface. This proved that adsorption of RL and TX165 can take place even when the surface tension of the aqueous solution of the binary or ternary systems including ET is close to that of the individual ET solution. At its low concentration, ET influences the standard Gibbs free energy of adsorption of RL and TX165 only to a small extent.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules28134987/s1. Table S1. The constants of adsorption (a) from the Szyszkowski and linear Langmuir equations as well as the Gibbs standard free energy of adsorption. Figure S1. A plot of the surface tension (γ_LV) of the aqueous solution of ET as a function of its concentration (C_ET) (a), and of the surface tension of the aqueous solutions of RL (b) and TX165 (c) as a function of the logarithm of their concentration (logC). Figure S2. A plot of C/X^S, a and a_ET·X^S for ET vs. its concentration (C_ET). Figure S3. A plot of the surface tension (γ_LV) of the aqueous solution of the RL + TX165 mixture (a) at the constant RL concentration equal to 0.01 mg/dm³, 0.5 mg/dm³, 5 mg/dm³ and 20 mg/dm³ vs. the logarithm of the TX165 concentration (logC_TX165), as well as a plot of the surface tension of the aqueous solution of the RL + ET mixture (b) at the constant ET concentration equal to 1.07 mol/dm³, 3.74 mol/dm³, 6.69 mol/dm³ and 10.27 mol/dm³ vs. the logarithm of the RL concentration (logC_RL). Figure S4. A plot of the Frumkin concentration at the solution-air interface (Γ) of RL, ET and their sum at the constant ET concentration equal to 1.07 mol/dm³ (a), 3.74 mol/dm³ (b), 6.69 mol/dm³ (c) and 10.27 mol/dm³ (d) vs. the logarithm of the RL concentration (logC_RL). Figure S5.
A plot of the Gibbs concentration (Γ) of TX165 as well as the Frumkin concentration (Γ) of TX165, RL and their sum at the constant RL concentration equal to 0.01 mg/dm³ (a), 0.5 mg/dm³ (b), 5 mg/dm³ (c) and 20 mg/dm³ (d) vs. the logarithm of the TX165 concentration (logC_TX165). Figure S6. A plot of the Gibbs concentration (Γ) of TX165 as well as the Frumkin concentration (Γ) of TX165, ET and their sum at the constant ET concentration equal to 1.07 mol/dm³ (a), 3.74 mol/dm³ (b), 6.69 mol/dm³ (c) and 10.27 mol/dm³ (d) vs. the logarithm of the TX165 concentration (logC_TX165). Figure S7. A plot of the Gibbs concentration (Γ) of TX165 as well as the Frumkin concentration (Γ) of TX165, RL, ET and their sum vs. the logarithm of the TX165 concentration (logC_TX165) at the constant RL concentration equal to 0.01 mg/dm³ and the constant ET concentration equal to 1.07 mol/dm³ (a), 3.74 mol/dm³ (b), 6.69 mol/dm³ (c) and 10.27 mol/dm³ (d). Figure S8. A plot of the Gibbs concentration (Γ) of TX165 as well as the Frumkin concentration (Γ) of TX165, RL, ET and their sum vs. the logarithm of the TX165 concentration (logC_TX165) at the constant RL concentration equal to 0.5 mg/dm³ and the constant ET concentration equal to 1.07 mol/dm³ (a), 3.74 mol/dm³ (b), 6.69 mol/dm³ (c) and 10.27 mol/dm³ (d). Figure S9. A plot of the Gibbs concentration (Γ) of TX165 as well as the Frumkin concentration (Γ) of TX165, RL, ET and their sum vs. the logarithm of the TX165 concentration (logC_TX165) at the constant RL concentration equal to 5 mg/dm³ and the constant ET concentration equal to 1.07 mol/dm³ (a), 3.74 mol/dm³ (b), 6.69 mol/dm³ (c) and 10.27 mol/dm³ (d). Figure S10. A plot of the Gibbs concentration (Γ) of TX165 as well as the Frumkin concentration (Γ) of TX165, RL, ET and their sum vs.
the logarithm of the TX165 concentration (logC_TX165) at the constant RL concentration equal to 20 mg/dm³ and the constant ET concentration equal to 1.07 mol/dm³ (a), 3.74 mol/dm³ (b), 6.69 mol/dm³ (c) and 10.27 mol/dm³ (d). Figure S11. A plot of C/X^S and a for TX165 (a), RL (b) and ET (c) vs. the logarithm of the TX165 concentration (logC_TX165). Figure S12. A plot of C/X^S and a for TX165 (a), RL (b) and ET (c) vs. the logarithm of the TX165 concentration (logC_TX165). Figure S13. A plot of the standard Gibbs free energy of adsorption (ΔG⁰_ads) calculated from Equation (19) for TX165 (a), RL (b) and ET (c) vs. the logarithm of the TX165 concentration (logC_TX165). Figure S14. A plot of the standard Gibbs free energy of adsorption (ΔG⁰_ads) calculated from Equation (19) for TX165 (a), RL (b) and ET (c) vs. the logarithm of the TX165 concentration (logC_TX165).
The effect of pre-analytical treatment on the results of stoichiometric measurements in invertebrates

Growing interest in the application of stoichiometric approaches to community ecology has resulted in an increasing number of studies examining invertebrate body composition. Our experiments demonstrate various sources of possible error related to the use of pre-analytical procedures. We examined the effects of different preservatives (ethanol and formaldehyde) used in pitfall traps, time of preservation (2 weeks or 3 days) and drying method (vacuum drying at 50 °C and freeze-drying) on the determination of body composition in invertebrates representing taxa often used in such studies: earthworms and five species of insects (adults or larvae). The contents of C, N, S, P, Fe, Zn, Cu, Mn, Ca, Mg and K in each animal were measured. The use of solvents (ethanol or formaldehyde) in pitfall traps and for preservation significantly affects the body composition and stoichiometry of earthworms, even during short exposure times. Insects (both adults and larvae) were affected only during a 2-week exposure; 3 days of exposure did not significantly change their chemical composition. Vacuum-oven drying of animals at 50 °C does not affect their body composition relative to freeze-drying.

Introduction

Growing interest in the application of stoichiometric approaches to community ecology has resulted in numerous studies of invertebrate body composition. Animals for such studies are often collected in pitfall traps filled with killing and preserving fluids such as ethanol or a formaldehyde solution (Braun et al. 2009; Gibb and Oseto 2006; Knapp 2012). These chemicals prevent animals from escaping, kill them quickly and effectively preserve them for further investigations. However, preservation can also affect the chemical body composition: some elements can be washed out as a result of the partial dehydration of organisms; nevertheless, some authors continue to use ethanol to kill animals for elemental analysis. Among the numerous papers on the impact of pitfall traps on collected animals, only a few address this problem.
Zödl and Wittmann (2003) studied the effects of pitfall traps on metal concentrations in selected invertebrates. Braun et al. (2009, 2012) studied the consequences of the composition and concentration of the reagents used in pitfall traps and for the preservation of firebugs (Pyrrhocoris apterus Fallén) for elemental analysis, while Knapp (2012) investigated the impact of preservative fluid and storage conditions on the estimation of body mass in carabid beetles [Anchomenus dorsalis (Pontoppidan)]. Prior to elemental analyses, samples have to be dried. Drying can take place at high temperatures in standard driers, in vacuum driers (which allow samples to dry at lower temperatures than standard ones) or by dehydration at low temperatures (freeze-drying; Couture et al. 2010; Cross et al. 2003). The most commonly used temperatures for drying animals are 60 °C and 50 °C (Bertram et al. 2006, 2008; Kagata and Ohgushi 2007; Kay et al. 2006; Larsen et al. 2009, 2011; Marichal et al. 2011; Villanueva et al. 2011; Woods et al. 2004). Numerous papers that report on the chemical body composition of invertebrates do not provide complete information on the sampling and analytical preparation procedures used (Hambäck et al. 2009). The drying procedure may change elemental composition (e.g., due to the vaporization of some compounds). Preservation methods differ in their effects on particular invertebrate taxa and developmental stages (e.g., soft-bodied, high-water-content animals, such as annelid worms, and chitinized arthropods, such as beetle imagines). The goal of our experiments was to identify sources of error resulting from pre-analytical procedures in physiologically different species of invertebrates. We employed commonly used, commercially available animals for our analysis, including earthworms, representing an important and abundant group of soil animals used in ecotoxicological tests, and various insect species at the adult and larval stages.
The animals differed in their sensitivity to desiccation, water and fat content and their degree of chitinization (Finke 2002, 2007; Finke and Winn 2004). We tested two killing agents routinely used in the traps: ethanol and formaldehyde. We excluded ethylene glycol from the study because this agent forms a sticky layer on the animals' body that may affect both body mass and composition, especially when it is used in the form of automobile antifreeze (Braun et al. 2009, 2012; Knapp 2012). This property, therefore, makes ethylene glycol unsuitable for preserving animals for chemical analysis. The aims of the study were to answer the following questions:

1. How do different preservation methods influence the concentrations of C, N, S, P, Fe, Ca, Mg, Mn, K, Zn and Cu in invertebrates?
2. How do the effects observed in soft-bodied invertebrates (i.e., earthworms) differ from those observed in invertebrates with chitinized bodies (e.g., insects)?
3. How does the time of preservation affect the body composition of invertebrates?
4. How do drying methods affect the determination of invertebrate body dry mass and composition?

Experiment description

To assess the impact of different preservatives (ethanol and formaldehyde) on invertebrate body composition, the impact of preservation time and the impact of the drying method on the determination of the body composition of invertebrates, we conducted three experiments.

Long exposure simulation (2 weeks of pitfall trap exposure)

Each of the above-listed animal species was divided into three groups treated in the following ways:

- group 1 (reference): frozen (at −20 °C, after being taken from the culture), to determine the body composition of untreated animals (N = 5);
- group 2: stored in 4 % formaldehyde for 2 weeks (N = 5);
- group 3: stored in 70 % ethanol for 2 weeks (N = 5).

Before the experiment, the earthworms were kept on moist blotting paper for 48 h to clean the gut.
Before weighing, the animals preserved in ethanol or formaldehyde were gently dried on blotting paper. Thereafter, the animals were dried in a vacuum drier at 50 °C for 48 h.

Short exposure simulation

Based on the results of the previous experiment, the short exposure simulation was performed on selected groups of animals. Only adult D. veneta, the species most strongly influenced by the preservation methods, and M. domestica larvae, the insect species least influenced by the preservation methods, were used in this experiment. Animals were treated as in the previous experiment, but only for 3 days: (1) frozen (N = 9), (2) stored in 4 % formaldehyde (N = 9) and (3) stored in 70 % ethanol (N = 9). Earthworms were kept before the experiment for 48 h on moist blotting paper to clean the gut. After treatment, the animals were dried in a vacuum drier at 50 °C for 48 h.

Drying method simulation

Again, only earthworms (N = 5) and house fly larvae (N = 8) were used. The animals were treated as described for the short exposure simulation, then subdivided into two groups: one dried in a vacuum drier at 50 °C for 48 h and the other freeze-dried for 7 days.

Analytical procedures

Dried animals were powdered in a porcelain mortar before analysis. Because body size differs among the species studied, different numbers of individuals were taken per sample: one (earthworms, Zophobas larvae, cockroaches, crickets), two (fly larvae) or three (Tenebrio larvae), so that the total sample mass (not lower than 0.1 g dry mass) would allow all studied elements to be analyzed in one individual or in one sample. The contents of C, N, S, Fe, Zn, Cu, Mn, Ca, Mg and K were measured in the long-exposure simulation, and P, Fe, Zn, Cu, Mn, Ca and Mg were analyzed in the short-exposure simulation. Total C and total N contents were determined using a Perkin-Elmer CHN analyzer. The metal content was determined by atomic absorption.
Dried samples were digested in 5 ml of boiling, concentrated (65 %) nitric acid (Suprapur, Merck). When the fumes were white and the solution was completely clear, the sample was evaporated, cooled to room temperature and made up to 30 ml with deionized water. The digested samples were analyzed for Zn, Fe, Ca, Mg and K by flame AAS (Perkin-Elmer AAnalyst 800) and for Mn and Cu by graphite furnace AAS (Perkin-Elmer AAnalyst 800). Five blank samples of nitric acid accompanied every analytical run. Phosphorus concentrations were determined in the nitric acid digested samples by the colorimetric method using a flow injection analyzer (FIA-System, MLE GmbH). The analytical precision of all analyses was confirmed against certified standard material (NCS ZC81001, pork liver; NCS ZC73016, chicken).

Statistical analysis

For comparisons of body composition between taxa (only frozen animals), we used a one-way ANOVA separately for each micro- and macroelement. If the data did not meet normality and homogeneity of variance (C:N, C:S), we used a nonparametric test (Kruskal-Wallis). To compare the elemental concentrations in animals preserved using the various methods, we used a one-way ANOVA separately for each taxon, preservation time and element. For short-term preservation, only D. veneta and M. domestica larvae were used. Pooling two or three individuals in one sample may mask the individual variation of the results, but in statistical comparisons of averages this effect is compensated by the reduced number of degrees of freedom (the confidence limits of the means remain unaffected). To analyze the differences between groups of samples preserved using the three methods and two exposure times with regard to all analyzed elements, we performed a principal component analysis (PCA) on the correlation matrices. We only used the data for earthworms and fly larvae and only the elements analyzed at both long and short exposure times (i.e., only microelements).
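The PCA step described above can be sketched in a few lines: standardizing each element's concentrations and eigendecomposing the resulting correlation matrix, whose component scores are then compared between treatment groups. The data matrix below is random and only illustrates the shapes involved; the PCA is implemented directly rather than with the packages the authors used (STATISTICA, CANOCO).

```python
import numpy as np

# Sketch: PCA on the correlation matrix of element concentrations, as used in
# the analysis described above. The input matrix here is random placeholder
# data (samples x elements), not the measured concentrations.

def pca_on_correlation(x):
    """Return (scores, explained_variance_ratio) of a PCA on the correlation
    matrix: columns are standardized before eigendecomposition."""
    z = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)
    corr = np.cov(z, rowvar=False)            # equals the correlation matrix of x
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]         # sort components by variance
    return z @ eigvecs[:, order], eigvals[order] / eigvals.sum()

rng = np.random.default_rng(1)
x = rng.normal(size=(36, 7))                  # e.g. 36 samples x 7 microelements
scores, ratio = pca_on_correlation(x)
# scores[:, 0] and scores[:, 1] are what would then enter a two-way ANOVA
# (preservation method x exposure time), as described in the text.
```

Using the correlation rather than the covariance matrix puts all elements on the same footing, which matters here because the concentrations span very different scales (e.g. Fe vs. Cu).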
We then conducted a two-way ANOVA on the scores of the first and second principal component axes to test for significant differences between exposure times and preservation methods and to test for interaction effects between exposure time and preservation method. The effect of the drying and preservation method on body mass loss was analyzed only for earthworms and fly larvae. We compared the initial fresh body masses of the experimental animals using a one-way ANOVA. The differences in the loss of fresh body mass during 2 weeks of preservation between animals preserved in ethanol and in formaldehyde were analyzed using a t test. To analyze the effect of both the preservation and the drying method on body mass loss, a two-way ANOVA was used. Results are presented as mean ± SE. Statistical analyses were performed using the statistical packages STATISTICA 10 and CANOCO 5 (Ter Braak and Šmilauer 2012).

Body composition and stoichiometric relations of experimental animals

Only frozen individuals (control samples) were used for comparisons of body composition between taxa.

Macroelements

The animals studied differed in both the concentrations of macroelements and their stoichiometric relations (Table 1). Earthworm (D. veneta) tissues were richer in nitrogen than those of all other animals studied, whereas the lowest concentration of nitrogen was found in Z. morio larvae. Z. morio larvae differed clearly from all other animals in carbon content. The concentration of sulfur was the highest in earthworms. C:N and C:S ratios were the lowest in earthworms and the highest in Z. morio larvae and T. molitor larvae (Table 1).

Microelements

Some microelements differed significantly in their concentrations in the species studied (Table 2). The earthworms contained much higher concentrations of iron and calcium, T. molitor larvae contained higher concentrations of magnesium, and T. molitor and G. assimilis contained higher concentrations of copper than any other animals studied here (Table 2).
Impact of preservation methods and time of preservation on body composition

Long exposure simulation

Macroelements

The preservation method affected the measurement of macroelement content in animal bodies and their stoichiometric relations (Fig. 1; results of ANOVA in Table 3). For earthworms, significant differences were found in carbon (higher concentration in formaldehyde-preserved individuals) and nitrogen (higher concentration in ethanol-preserved individuals). For insects, statistically significant differences between samples preserved in different ways were observed for carbon content (B. dubia, higher …).

Microelements

Microelement contents in the earthworms were strongly affected by the preservation method (ANOVA results in Table 3). Significant differences were observed in all elements except manganese (Table 3). Potassium was the most easily washed out during preservation in all animals studied (except B. dubia) (Fig. 2). Statistically significant differences between preservation methods were also found for iron in T. molitor larvae (Fig. 2); however, the difference was only significant between animals preserved in ethanol and formaldehyde.

Short exposure simulation

Macroelements

In this simulation, only the phosphorus concentrations were measured.

Microelements

The results of the microelement analyses in D. veneta are similar to those in the long-exposure simulation (Fig. 2). Significant differences were found between treatment groups in all elements except manganese and copper (Table 3). In M. domestica larvae (Fig. 2), a significant difference was found only in magnesium content between ethanol-exposed and frozen individuals; however, such effects were not found in the long-term preservation experiment (Table 3).

Preservation time

The effect of preservation time on the earthworms and fly larvae was examined with respect to all the microelements analyzed in long and short exposures.
In both species, the samples preserved in formaldehyde for long time periods clustered separately on the PCA plot from those exposed for short time periods (Fig. 3). This effect for alcohol is only visible in fly larvae (Fig. 3).

Table 3 ANOVA table for differences between elemental concentrations in animals preserved using various methods: frozen (M), preserved in ethanol (E) and in formaldehyde (F). Values indicate significance probability levels (p). For p < 0.05, the results of Tukey's HSD post hoc tests of differences between treatments are given (≠ denotes a significant difference between the groups indicated).

A two-way ANOVA performed on the scores from the first and second principal component axes provides a more detailed picture of the effect of different preservation methods on the elemental content of samples.

Earthworms

Along the first (horizontal) axis, a significant interaction was detected between the preservation mode and time (preservation p < 0.00001, time p < 0.00001, interaction p < 0.00001) (Fig. 3). Samples preserved in formaldehyde differed greatly from frozen samples in the content of Mg, K and Ca (the variables with the highest loadings on the first axis). Among formaldehyde samples, a clear difference was visible between samples preserved for long and short times. Among the samples frozen and preserved in ethanol, there was no detectable effect of exposure time on elemental composition (Fig. 3). The second axis shows the differentiation of samples according to preservation method. The samples preserved in ethanol differed (higher concentrations of Zn and Cu) from the samples preserved in formaldehyde as well as from the frozen samples (preservation p < 0.00001, time ns, p = 0.05, interaction ns, p = 0.18).

Fly larvae

Samples preserved for long time periods differed from samples preserved for short time periods (first axis: preservation ns, p = 0.96, time p < 0.00001, interaction ns, p = 0.26; Fig. 3).
Long-term preserved samples contained less Mg and Ca but more Cu. The second axis separates samples according to preservation method (preservation p < 0.002, time ns, p = 0.09, interaction p < 0.0007). Samples preserved in formaldehyde for long time periods had little K and Mn.

Earthworms

Before the experiments, the body masses of the earthworms did not differ significantly between experimental groups (Table 4). During preservation, the earthworms lost body mass, but the difference in the wet mass of animals preserved in ethanol and in formaldehyde was not significant (t 16 = −1.7, p = 0.1) (Table 4). After drying, the dry body masses (% of fresh mass) of the earthworms preserved before drying were significantly lower than those of the earthworms frozen before drying in all experimental groups (Table 4). A two-way ANOVA indicated an insignificant effect of drying method (F 1,33 = 0.56, p = 0.46) and a highly significant effect of preservation method (F 2,33 = 22.32, p < 0.0001). Frozen individuals differed from both the ethanol- and formaldehyde-preserved individuals (Tukey HSD p < 0.001), while no difference was observed between individuals preserved in ethanol and in formaldehyde (Tukey HSD p = 0.89). The interaction between preservation method and drying method was insignificant (F 2,33 = 1.20, p = 0.31).

Fly larvae

Body mass loss during preservation was much lower in the fly larvae than in the earthworms (Table 4), but the difference between ethanol- and formaldehyde-preserved individuals was significant (t 26 = 9.3, p < 0.0001). After drying, the fly larvae lost much less of their mass than the earthworms, and neither the preservation method nor the drying method affected the dry masses of individuals in any group (two-way ANOVA: drying method F 1,39 = 0.82, p = 0.37; preservation method F 2,39 = 0.53, p = 0.59) (Table 4). The interaction between preservation method and drying method was insignificant (F 2,39 = 0.34, p = 0.71).
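The balanced two-way design used here (preservation method × drying method) can be sketched as follows. The data, factor levels and replicate counts below are hypothetical illustrations, not the study's values; for a balanced design the sums of squares partition the total variation exactly:

```python
import numpy as np

def two_way_anova(data):
    """Balanced two-way ANOVA. `data` has shape (a, b, n):
    a levels of factor A, b levels of factor B, n replicates per cell."""
    a, b, n = data.shape
    grand = data.mean()
    mA = data.mean(axis=(1, 2))                  # factor A level means
    mB = data.mean(axis=(0, 2))                  # factor B level means
    mAB = data.mean(axis=2)                      # cell means
    ss_a = b * n * np.sum((mA - grand) ** 2)
    ss_b = a * n * np.sum((mB - grand) ** 2)
    ss_ab = n * np.sum((mAB - mA[:, None] - mB[None, :] + grand) ** 2)
    ss_err = np.sum((data - mAB[:, :, None]) ** 2)
    ss_tot = np.sum((data - grand) ** 2)
    ms_err = ss_err / (a * b * (n - 1))
    f_a = (ss_a / (a - 1)) / ms_err
    f_b = (ss_b / (b - 1)) / ms_err
    return ss_a, ss_b, ss_ab, ss_err, ss_tot, f_a, f_b

# Hypothetical dry-mass data: 3 preservation methods x 2 drying methods x 7 reps
rng = np.random.default_rng(1)
data = rng.normal(50, 5, size=(3, 2, 7))
ss_a, ss_b, ss_ab, ss_err, ss_tot, f_a, f_b = two_way_anova(data)
```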
Discussion

The results of our study show that the use of solvents (ethanol or formaldehyde) in pitfall traps as killing agents and for preservation significantly affects the body composition and stoichiometry of earthworms, even during short exposure times. Insects (both adults and larvae) were affected only during a long (2-week) exposure; a 3-day exposure did not significantly change their chemical composition. Preservation time may play an important role in the migration of elements from animal bodies. When using pitfall traps to sample invertebrates for chemical analyses, the time of exposure must be reduced; collecting animals from the traps every day is recommended. The effect of preservation in ethanol on body composition appears to be weaker than that of formaldehyde. Drying of animals at a low temperature (50 °C) in a vacuum does not affect their body composition compared with freeze-drying. The results of the present study are compatible with previous studies (Braun et al. 2009, 2012; Zödl and Wittmann 2003), although the methods and animals used were different. Among the animals studied, insects (both adults and larvae) appeared to be more resistant than oligochaetes to the impact of preservation on body composition. The short preservation time (3 days) did not significantly affect their element contents. The main factors determining the susceptibility of body chemistry to preserving liquids are the permeability of the body cover, the mobility of elements in the body tissues and the duration of exposure. Insects, like arachnids, are adapted to terrestrial environments, and their body cover prevents water loss in dry air, whereas oligochaetes (earthworms and enchytraeids), which inhabit the soil, escape dry conditions by moving into deeper soil layers or entering diapause (Edwards and Bohlen 1996; Lavelle and Spain 2005).
Additionally, oligochaetes excrete large amounts of mucus, which allows them to keep their body moist even under dry environmental conditions. Another group of terrestrial animals that appears to lose elements quite easily during preservation is isopods, as shown by Zödl and Wittmann (2003). However, 2 weeks of exposure to formalin in pitfall traps affected the metal concentrations in only one (Zn) or two (Cu) of the four populations sampled by Zödl and Wittmann (2003). Because of the differential permeability of the integument, animals preserved in ethanol or formaldehyde lose their body liquids at different rates. Because body liquids may contain certain amounts of elements (e.g., nitrogen, potassium, sodium) and, in oligochaetes, products of nitrogen metabolism (urea, NH 3 , uric acid) that are excreted with mucus (Laverack 1963), their loss alters the chemical composition of the body. After 3 days of preservation, the earthworms had lost over 40 % of their wet body mass, while the fly larvae had lost less than 10 % (Table 4). The proportion of water lost during preservation in the earthworms observed here is significantly higher than that reported for Lumbricus terrestris L. by Satchell (1971) and Raw (1962; cited after Satchell 1971). This inconsistency may be explained by differences in water loss between species (Satchell 1971; Wetzel et al. 2005). Finally, the different proportion of the preservation agent relative to the mass of the preserved organism may provide another possible explanation. The dry masses of the fly larvae did not differ significantly between treatment groups (frozen, preserved in ethanol or formaldehyde; Table 4) or between drying methods (freeze-dried vs. 50 °C vacuum-dried; Table 4). In contrast, the dry masses of the earthworms preserved in solvents were significantly lower than the dry masses of the frozen earthworms (Table 4).
The body mass loss during preservation is mainly due to the excretion of mucus, a reaction to stress in earthworms. The excretion of mucus may affect the body composition of earthworms in two ways: some elements may become more concentrated in the dry mass of particular tissues, while other elements may have lower concentrations as a result of the loss of mucus (Scheu 1991). It is often difficult to judge which solvent (ethanol or formaldehyde) more strongly influences body composition (Figs. 1, 2; Table 3). The overall patterns in the changes in invertebrate body stoichiometry due to exposure to preservation agents can be visualized using multidimensional analysis. The PCA plot on the two major axes suggests that the effect of formaldehyde is more pronounced (Fig. 3). Another important factor is the time of exposure (Wetzel et al. 2005). For the insects studied (except cockroaches), during 2 weeks of preservation, potassium declined rapidly, as this element is very mobile and can be easily washed out, but the concentrations of other elements were not affected (Table 3). Three days of exposure did not result in any significant difference in potassium concentration in the fly larvae (Table 3). In contrast, the preservation of earthworms in both ethanol and formaldehyde resulted in significant changes in the concentrations of almost all the elements studied, even after only 3 days of preservation. The potassium concentration in the earthworms preserved for 2 weeks in ethanol amounted to only 1/8 of that in the freeze-dried individuals, whereas after 3 days of preservation in ethanol the earthworms contained approximately 1/3 of the amount in the freeze-dried ones (Figs. 1, 2). Our results show that preservation time plays an important role in the migration of elements from animal bodies.
However, collecting insects for chemical analyses in pitfall traps filled with preservation solvent may be employed if the time of exposure is reduced to a minimum (collecting animals from the traps every day), but oligochaetes should be collected alive. Our results suggest that there is no effect of drying method (vacuum-drying at 50 °C vs. freeze-drying) on the results of invertebrate body composition analyses. Experiments by Zödl and Wittmann (2003) suggested that heat drying (at 105 °C) did not affect the concentration of heavy metals in invertebrate bodies; however, the authors noted that in the case of volatile elements (Hg, As) high temperature may result in erroneous estimations. Some authors have used even higher temperatures (70-130 °C; Alves et al. 2010; Braun et al. 2009; Jelaska et al. 2007; Visanuvimol and Bertram 2011). This may result in the evaporation of some organic compounds (e.g., lipids), causing a loss of dry mass and spuriously increasing the concentrations of other elements. Drying temperatures above the point of protein denaturation (approximately 70 °C) may cause a release of volatile forms of nitrogen; we therefore recommend freeze-drying or heat-vacuum drying at lower temperatures.

Conclusions

Our results show that the methods of handling animals before chemical analyses (sampling, pitfall traps, method of preservation, time of preservation and drying) may significantly affect the results of body composition and stoichiometry analyses of invertebrates. However, pitfall traps with an appropriate solvent may be used for sampling animals for chemical analyses if the exposure time is minimized. We recommend using ethanol as a solvent and limiting trap exposure to 1 day.
Exactly two things to learn from modeling scope ambiguity resolution: Developmental continuity and numeral semantics

Behavioral data suggest that both children and adults struggle to access the inverse interpretation of scopally-ambiguous utterances in certain contexts. To determine whether the causes of both child and adult difficulty are similar, we extend an existing computational model of children's scope ambiguity resolution in context. We find that the same utterance-disambiguation mechanism is active in both children and adults, supporting the theory of developmental continuity. Moreover, because adult behavior requires an exact semantics for numerals, we also provide empirical support for this theory of linguistic representation.

Introduction

Consider a scenario where two out of three horses jump over a fence. Is the utterance in (1) a reasonable description?

(1) Every horse didn't jump over the fence.
a. ∀ ¬ (surface scope): None of the horses jumped over the fence.
b. ¬ ∀ (inverse scope): Not all of the horses jumped over the fence.

Adults typically endorse the every-not utterance as true, while children typically do not (Musolino, 1998; Lidz and Musolino, 2002; Musolino and Lidz, 2006; Musolino, 2006; Viau et al., 2010). This utterance is scopally ambiguous, involving multiple quantifiers (i.e., every and n't). Children's behavior is non-adult-like at five years old: though the inverse interpretation in (1b) is true, five-year-olds still do not endorse the utterance. Now, consider a scenario with only two horses, one of which successfully jumps. Is the two-not utterance in (2) a reasonable description?

(2) Two horses didn't jump over the fence.
a. ∃ 2 ¬ (surface scope): There are two horses that didn't jump.
b. ¬ ∃ 2 (inverse scope): It's not the case that there are two horses that jumped.

Most adults would not endorse the utterance, despite the inverse interpretation in (2b) being true (Musolino and Lidz, 2003); that is, it is not the case that two horses jumped (only one did).
This pair of findings underscores that not endorsing a scopally-ambiguous utterance when only the inverse interpretation is true occurs in both children and adults in different contexts. We might therefore wonder about continuity in the development of scope ambiguity resolution: is the cause of child utterance non-endorsement in an every-not scenario qualitatively similar to the cause of adult non-endorsement in the two-not scenario? If so, this similarity supports developmental continuity: children use the same mechanism as adults when understanding ambiguous utterances in context. The only difference would be that adults are better-equipped to deploy this mechanism, owing perhaps to increased domain-general knowledge and/or cognitive capacities, or to language-specific experience. In contrast, if the underlying causes are different for child and adult utterance non-endorsement, this would suggest developmental discontinuity: children are engaging in a fundamentally different process as they understand ambiguous utterances. So, the development of adult-like behavior would involve acquiring a new mechanism for resolving ambiguity. To choose between these accounts, we must understand utterance (non-)endorsement behavior. To that end, Savinelli et al. (2017) articulated a computational model of ambiguity resolution within the Rational Speech Act (RSA) framework (Goodman and Frank, 2016). The model demonstrated the central role of pragmatic factors over processing factors in explaining children's non-adult-like behavior in every-not contexts like (1). Here, we extend this same model to capture two-not utterance endorsement behavior in adults, identifying the factors that yield the experimentally-observed patterns of behavior. We begin by reviewing the scope ambiguity resolution findings from Savinelli et al. (2017), together with the experimental results that informed the design of the computational model.
Next, we consider the experimental findings from Musolino and Lidz (2003), where adults seem to behave like children in specific contexts. We then extend the model from Savinelli et al. (2017) to capture these new data, and demonstrate support for developmental continuity, with the same utterance-disambiguation mechanism active in both children and adults. Importantly, the complete range of experimentally-observed behavior can only be captured if adults represent two with an exact interpretation, an unexpected finding that informs the debate on numeral semantics.

Previous work: Modeling every-not

In the basic truth-value judgment task (TVJT) meant to assess children's scope disambiguation behavior, children first watch a scene acted out and hear a puppet produce a scopally-ambiguous utterance; then they are asked whether they would endorse the utterance as a true description of the scenario. Children typically do not endorse the ambiguous every-not utterance in the critical context where the surface interpretation is false but the inverse interpretation is true (e.g., a NOT-ALL scenario where two out of three horses jumped over a fence). This behavior has been interpreted as children failing to access the inverse scope interpretation that would make the utterance true. Interestingly, various alterations to the task setup have yielded more adult-like behavior in children, with higher rates of endorsement for the every-not utterance. These experimental manipulations highlight at least three core factors (two pragmatic, one processing) that underlie children's behavior in the TVJT: (i) pragmatic: expectations about the experimental world (e.g., how likely successful outcomes are), (ii) pragmatic: expectations about the Question Under Discussion (QUD; e.g., were all outcomes successful?), and (iii) processing: the accessibility of the inverse scope (i.e., the ease with which the logical form is either derived or accessed in real time).
To capture and independently manipulate the contributions of each of these factors, Savinelli et al. (2017) modeled ambiguity resolution for every-not utterances within the Bayesian RSA framework (Goodman and Frank, 2016). They found that when it comes to understanding non-adult-like behavior in the TVJT, there is likely a stronger role for the pragmatics of context management (as realized in prior beliefs about world state and QUD) than for grammatical processing (as realized in the prior on scope interpretations), although there may be a role for both. So, children's failure to endorse scopally-ambiguous every-not utterances in NOT-ALL contexts likely stems from their beliefs about the experimental world (e.g., whether actors are a priori likely to succeed) and about the topic of conversation (e.g., whether the conversational goal is to determine if all the actors succeeded), rather than an inability to grammatically derive or access the inverse scope interpretation in real time. Perhaps most interesting was the prediction that the highest rates of utterance endorsement (i.e., adult-like behavior) occur when resolving the scope ambiguity is irrelevant for communicating successfully about the NOT-ALL world. This occurs when expectations about the world state favor total success, or when the QUD asks if all? of the actors succeeded. In either case, both scope interpretations serve to inform a listener, either that the a priori likely total-success world state does not hold or that the answer to the all? QUD is no. The explanation for utterance non-endorsement (i.e., non-adult-like behavior) is similar: Savinelli et al. (2017)'s model predicts the lowest rates of utterance endorsement in NOT-ALL scenarios when neither interpretation is useful for successful communication, either because the interpretation is false (surface) or because beliefs about the pragmatic context render the interpretation uninformative (inverse).
Thus, the TVJT utterance non-endorsement data previously used to demonstrate children's difficulty with inverse scope calculation in fact require no disambiguation at all if the goal is informative communication. Instead, children simply need the ability to manage the pragmatic context so they can recognize the potential informativity of these ambiguous utterances. Notably, considerations of pragmatic context have long played a role in the design and interpretation of the TVJT (e.g., Crain et al., 1996). Savinelli et al. (2017) take the extra step of formally articulating specific pragmatic factors and the role they play in children's apparent difficulty with ambiguous utterances in the TVJT.

3 Experimental two-not results

Musolino and Lidz (2003) (ML2003) demonstrated that adults are sensitive to some of the same experimentally-manipulated factors as children when it comes to endorsing scopally-ambiguous utterances. Like us, ML2003 were interested in developmental continuity: is child and adult ambiguity resolution behavior in context qualitatively similar? To investigate this, they conducted three TVJTs. The goal of the first TVJT was to determine which interpretation adults preferred when they endorsed a scopally-ambiguous utterance in context. For example, adults heard "Cookie Monster didn't eat two pizza slices" in a context where both interpretations were true, such as Cookie Monster eating one of three available pizza slices (surface: it's not the case he ate two = true; inverse: there are two he didn't eat = true). Importantly, they were then asked to explain why they endorsed the utterance so that their preferred scope interpretation could be inferred. For example, if their answer referred to Cookie Monster eating only one slice, then it was assumed that they accessed the surface interpretation (surface: he only ate one, so it's not the case he ate two).
However, if their answer referred to the two slices Cookie Monster did not eat, then it was assumed that they accessed the inverse interpretation (inverse: there are two he didn't eat). All participants endorsed the utterance, and their explanations indicated a strong surface scope bias (75% surface, 7.5% inverse, 17.5% unclear from explanation). ML2003 interpreted this finding as evidence that adults prefer the surface scope interpretation when both interpretations are true in context. It could then be that children's non-endorsement behavior, if due to a preference for the surface scope interpretation, is driven by a stronger version of this same preference. In the second TVJT, adults heard an utterance like (2) (e.g., Two frogs didn't jump over the rock) in two different contexts. The first context included two actors (e.g., frogs), with one actor successfully completing the action (e.g., frog 1 jumping over the rock while frog 2 does not). In this 1-OF-2 context, the surface interpretation is false (only frog 2 did not jump, so it is false that two frogs didn't jump), but the inverse interpretation is true (only frog 1 did jump, so it is indeed not the case that two frogs jumped). Yet, adults had low endorsement (endorsement rate: 27.5%). In the second context, there were four actors. For example, four frogs attempted to jump over a rock; two jumped (frog 1 , frog 2 ) and two did not (frog 3 , frog 4 ). In this 2-OF-4 context, the surface interpretation of the scopally-ambiguous utterance is true because frog 3 and frog 4 did not jump. However, the inverse interpretation is false because frog 1 and frog 2 did indeed jump. Here, adults had an endorsement rate of 100%. ML2003 interpreted this asymmetry of endorsement between the two contexts as a strong surface scope preference in adults.
According to this explanation, non-endorsement occurs in the 1-OF-2 context because only the inverse scope is true; in contrast, endorsement occurs in the 2-OF-4 context because only the surface scope is true. That is, both these patterns would result because adults favor the surface interpretation. While we find this account compelling, we note that there are other differences between the two contexts that might lead to the observed asymmetry. For example, it could be that the seemingly benign change from two to four total actors affects the pragmatic context. Another variable is the potential ambiguity present in the numeral semantics, which only occurs in the 2-OF-4 context. In either case, exploring the effects of these factors in a formal model of TVJT behavior can clarify the process underlying utterance disambiguation. Returning to the question of continuity, while the observable behavior appears qualitatively the same in children and adults (i.e., a non-endorsement preference when only the inverse scope is true), it remains unclear whether the underlying cause of this behavior is the same. To evaluate this, ML2003 conducted a third TVJT with adults in 1-OF-2 contexts, involving an experimental manipulation from Lidz and Musolino (2002) that children are known to be sensitive to. This manipulation is implemented as an explicit linguistic contrast clause before the scopally-ambiguous utterance, such as the bolded material in (3).

(3) Two frogs jumped over the fence but two frogs didn't jump over the rock.

Adults responded the same way as the children from Lidz and Musolino (2002), shifting to strong endorsement in the 1-OF-2 context (endorsement rate: 92.5%; cf. 27.5% endorsement without the explicit contrast). Yet, as ML2003 note themselves, it is not obvious why the adult endorsement rate increases when the linguistic contrast is present.
According to ML2003, the linguistic contrast creates the positive expectation necessary to make the negation in the later clause felicitous (Wason, 1965; Musolino and Lidz, 2003). However, it remains unclear how exactly the context creates the positive expectation. There are multiple ways this information could impact the context. For example, the positive expectation could arise because of a change either in the pragmatic factor of world knowledge or in the pragmatic factor of the QUD. Specifically, the affirmative statement could alter the listener's beliefs about how successful frogs are known to be in the experimental world. This affirmative statement also potentially changes the listener's expectations about the QUD: because both frogs were successful before, the topic of conversation might now be focused on whether both frogs were successful again. Both these effects could generate a context that makes the negated clause more informative. Without knowing the factors responsible for endorsement behavior, it is difficult to determine whether the same factors are operating in both children and adults, and whether the underlying representation of two matters. Computational modeling can help determine why these two behavioral patterns occur: (i) adult sensitivity to the pragmatic contrast manipulation, and (ii) asymmetry in endorsement behavior between 1-OF-2 and 2-OF-4 contexts in the absence of that pragmatic contrast. In the next section, we extend Savinelli et al. (2017)'s model of utterance disambiguation to handle these empirical data.

4 Modeling two-not

Savinelli et al. (2017)'s model of ambiguity resolution is conceived within the Bayesian Rational Speech Act (RSA) framework (Goodman and Frank, 2016), which views language understanding as a social reasoning process. A pragmatic listener L 1 interprets an utterance by reasoning about a cooperative speaker S 1 who is trying to inform a literal listener L 0 about the world.
The model is a "lifted-variable" extension in which the ambiguous utterance's literal semantics gets parameterized by interpretation-fixing variables (e.g., the relative scope of the quantificational elements; Bergen et al., 2012; Lassiter and Goodman, 2013; Scontras and Goodman, 2017). Hearing an ambiguous utterance, the pragmatic listener L 1 reasons jointly about the true state of the world (e.g., how many frogs successfully jumped), the scope interpretation speaker S 1 had in mind (i.e., surface, inverse), as well as the likely QUD that the utterance addresses (e.g., did all frogs succeed?). To generate testable predictions, participant TVJT behavior is modeled as a pragmatic speaker S 2 's (relative) endorsement of an utterance about an observed situation (cf. Degen and Goodman, 2014; Tessler and Goodman, 2016). That is, this model predicts whether a speaker S 2 would endorse the scopally-ambiguous utterance as a description of the observed state. S 2 decides this by reasoning about whether a pragmatic listener L 1 (who is reasoning about a speaker S 1 reasoning about a literal listener L 0 ) would arrive at the correct world state after hearing the utterance. We take world states w ∈ W to consist of a collection of n individuals (e.g., frogs), each of which either succeeds or fails at the relevant task (e.g., jumping over a rock). The world success baserate b suc determines the probability that an individual will succeed. We assume a simple truth-functional semantics where an utterance u denotes a mapping from world states to truth values (Bool = {true, false}). We parameterize this truth function so that it depends on the scope interpretation i ∈ I = {inverse, surface}, [[u]] i : W → Bool. We consider two alternative utterances u ∈ U: the null utterance (i.e., saying nothing at all, and so choosing not to endorse the utterance) and the scopally-ambiguous utterance amb (e.g., "Two frogs didn't jump over the rock").
To fix the utterance semantics, we must consider potential ambiguity introduced by the numeral in cases where the number of relevant individuals n exceeds the numeral's value. For example, consider the positive utterance "Two frogs jumped over the rock." If we assign an exact (=) semantics to two, the sentence will be true only when two frogs succeeded. If we assign an at-least (≥) semantics, the sentence will be true when two or more frogs succeeded. In worlds with only two frogs, the = vs. ≥ distinction makes no difference: the sentence will be true in the world where both frogs succeed, and false in all other worlds. However, in a world with four frogs, the numeral semantics will define different truth-functional mappings. With the = semantics, the sentence is true in any world where two frogs (but not more) succeed. With the ≥ semantics, the sentence is true in a larger set of worlds, where two or more frogs succeed. To evaluate the potential contribution of utterance semantics to the 1-OF-2 vs. 2-OF-4 asymmetry, we consider two different sets of utterance alternatives, one with amb = and another with amb ≥ . So, U = = {null, amb = } and U ≥ = {null, amb ≥ }. The utterance semantics is given in (4). We consider five potential QUDs q ∈ Q, three from the original Savinelli et al. (2017) model: (i) "What happened with the frogs?" (what-happened?), (ii) "Did all the frogs succeed?" (all?), and (iii) "Did none of the frogs succeed?" (none?). We also consider two additional QUDs specific to the two-not utterance: (iv) "Did exactly two frogs succeed?" (two = ?), and (v) "Did at least two frogs succeed?" (two ≥ ?). To capture the notion that communication proceeds relative to a specific QUD q, L 0 must infer not only the true world state w, but also the value of the QUD applied to that world state, [[q]](w) = x. Speaker S 1 chooses an utterance u in proportion to its utility in communicating about the true world state w with respect to the QUD q, [[q]](w) = x.
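The behavioral consequence of choosing = vs. ≥ can be checked directly. In this minimal sketch, world states are represented by the number of successful frogs k; the function names are illustrative, not from the paper:

```python
def two_exact(k):
    """'Two frogs jumped', exact (=) reading: exactly two succeed."""
    return k == 2

def two_at_least(k):
    """'Two frogs jumped', at-least (>=) reading: two or more succeed."""
    return k >= 2

# n = 2: the two readings agree on every world state (k = 0, 1, 2)
agree_n2 = all(two_exact(k) == two_at_least(k) for k in range(3))

# n = 4: the readings diverge on the worlds where 3 or 4 frogs succeed
diverge_n4 = [k for k in range(5) if two_exact(k) != two_at_least(k)]
```

This is exactly why the numeral semantics can only matter in the 2-OF-4 context: with two frogs the two truth functions are extensionally identical.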
Thus, the speaker maximizes the probability that L 0 arrives at the intended x from u. This selection is implemented via a softmax function (exp) and free parameter α, which controls how rational the speaker is in utterance selection. Utterance interpretation happens at the level of the pragmatic listener L 1 , who interprets an utterance u to jointly infer the world state w, the interpretation i, and the QUD q. We model ambiguity resolution as pragmatic inference over an underspecified utterance semantics (i.e., the interpretation variable i). To do this, L 1 inverts S 1 's model, and so the joint probability of w, i, and q is proportional to the likelihood of S 1 producing utterance u given world state w, interpretation i, and QUD q, as well as the priors on w, i, and q:

P L 1 (w, i, q|u) ∝ P S 1 (u|w, i, q) · P(w) · P(i) · P(q)

To model the utterance endorsement implicit in TVJT, we need an additional level of inference. Pragmatic speaker S 2 observes the true world state w and selects u by inverting the L 1 model, thus maximizing the probability that a pragmatic listener would arrive at w from u by summing over possible interpretations i and QUDs q for world w. To generate model predictions for adult sensitivity to the pragmatic contrast manipulation and the 1-OF-2 vs. 2-OF-4 asymmetry, we fix various model parameters. For 1-OF-2 data, we set the number of individuals n to 2; for 2-OF-4 data, we set n to 4. The S 1 speaker rationality parameter α > 0 is set to 2.5 (i.e., the same value as in the every-not simulations in Savinelli et al., 2017). The priors P(w) and P(q) correspond to expectations for the discourse context (i.e., likely world states or QUDs). In the default case, we set these priors to be uniform over their possible values, with the individual success baserate b suc set to 0.5 and the relevant QUDs having equal probability. The interpretation prior P(i) corresponds to how easy it is to access the inverse scope interpretation.
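The full chain of inferences (L 0 → S 1 → L 1 → S 2 ) can be sketched in plain Python. This is a condensed sketch of our own, not the authors' implementation (models of this kind are typically written in a probabilistic programming language), using the default settings just described: n = 2, α = 2.5, b suc = 0.5, uniform priors on i and q, and the exact numeral semantics. The two? QUDs are omitted since they are redundant when n = 2.

```python
import itertools

N = 2            # number of frogs (1-OF-2 context)
ALPHA = 2.5      # speaker rationality (paper's default)
B_SUC = 0.5      # per-individual success baserate (paper's default)
SEM = "exact"    # numeral semantics: "exact" (=) or "atleast" (>=)

WORLDS = list(itertools.product([True, False], repeat=N))  # per-frog outcomes
INTERPS = ["surface", "inverse"]
UTTS = ["null", "amb"]             # amb = "Two frogs didn't jump over the rock"
QUDS = {                           # two? QUDs omitted: redundant when N = 2
    "what-happened?": lambda w: w,
    "all?": lambda w: sum(w) == len(w),
    "none?": lambda w: sum(w) == 0,
}

def p_world(w):
    s = sum(w)
    return B_SUC ** s * (1 - B_SUC) ** (len(w) - s)

def meaning(u, w, i):
    if u == "null":
        return True
    op = (lambda k: k == 2) if SEM == "exact" else (lambda k: k >= 2)
    return op(len(w) - sum(w)) if i == "surface" else not op(sum(w))

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def l0(u, i, q):
    """Literal listener: distribution over QUD values x, given u true under i."""
    scores = {}
    for w in WORLDS:
        if meaning(u, w, i):
            x = QUDS[q](w)
            scores[x] = scores.get(x, 0.0) + p_world(w)
    return normalize(scores)

def s1(w, i, q):
    """Speaker: softmax choice of u by how well L0 recovers [[q]](w)."""
    utils = {u: l0(u, i, q).get(QUDS[q](w), 0.0) ** ALPHA for u in UTTS}
    return normalize(utils)

def l1(u):
    """Pragmatic listener: joint posterior over (w, i, q); uniform P(i), P(q)."""
    post = {(w, i, q): s1(w, i, q)[u] * p_world(w)
            for w in WORLDS for i in INTERPS for q in QUDS}
    return normalize(post)

def s2_endorsement(w):
    """Relative endorsement of amb vs. silence, given the observed world w."""
    utils = {u: sum(p for (w1, _, _), p in l1(u).items() if w1 == w) ** ALPHA
             for u in UTTS}
    return normalize(utils)["amb"]
```

For instance, `s2_endorsement((True, False))` gives the model's relative endorsement of amb in the 1-OF-2 world (one success, one failure) under these defaults; raising `B_SUC` or concentrating the QUD prior on all? corresponds to the prior manipulations explored in the simulations.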
In the default case, P(inverse) = P(surface) = 0.5. Importantly, to better understand utterance endorsement behavior with scopally-ambiguous utterances, we can independently manipulate the values of the priors on W, Q, and I, and observe their impact on utterance endorsement.

Results

Recall the empirical phenomena we are trying to capture: (i) the dramatic increase in endorsement rates in the 1-OF-2 context when an explicit contrast is present, and (ii) the stark asymmetry in utterance endorsement rates between 1-OF-2 and 2-OF-4 contexts in the absence of that explicit contrast. We report results for each in turn.

The explicit contrast effect for 1-OF-2

Following Savinelli et al. (2017), we attempt to capture the increase in ambiguous utterance endorsement rates by systematically manipulating the pragmatic and processing factors, as implemented in the relevant priors. For the world state prior (Figure 1, left), we manipulate baserate b suc , which determines an actor's chance of success. Holding the QUD and scope priors at their default values, we see a marked increase in endorsement of the ambiguous utterance in the 1-OF-2 context as prior beliefs about frog success increase. Utterance endorsement is at its lowest (33%) when prior knowledge suggests that frogs are particularly unlikely to succeed; endorsement is at its highest (86%) when frogs are very likely to succeed. For the QUD prior (Figure 1, center), we selectively favor specific QUDs by assigning a 0.9 probability to the favored QUD and dividing the remaining probability equally among the others. Since the two? QUDs are equivalent to the all? QUD in the 1-OF-2 context, we omit them there. Holding the other priors at their default values, endorsement rates increase from favoring the none? QUD (35%) to favoring the what-happened? QUD (46%) to favoring the all? QUD (64%).
For the scope prior (Figure 1, right), we manipulate the prior probability of the inverse interpretation while holding the other factors at their default values. We see an increase in utterance endorsement as the probability of inverse increases, from a low of 40% to a high of 57%. Each manipulation qualitatively captures the response pattern from ML2003, and replicates the results of Savinelli et al. for every-not. However, as observed by Savinelli et al., the pragmatic factors controlling world and QUD beliefs have a much more pronounced effect than the processing factor controlling scope access; the model's world prior baserate manipulation comes closest to capturing the experimentally-observed effect of explicit contrast manipulation (i.e., 27.5% base endorsement vs. 92.5% endorsement with the explicit contrast). We can amplify the effect of the world baserate manipulation by allowing it to interact with the other factors. As discussed in Section 3, the early success explicit contrast manipulation possibly affects two aspects of the disambiguation calculus: it could increase expectations for success and shift the topic of conversation to whether total success was achieved again. Figure 2 plots the interaction of the world and QUD priors, together with the effect of scope. The low-endorsement baseline (27.5%) most likely results from low expectations for success (b suc = 0.1) and QUD uncertainty (QUD: uniform), together with a moderate to low probability of accessing the inverse scope (P(inv) = 0.1 or 0.5). From this baseline, we implement the effect of the explicit contrast manipulation by increasing success expectations (b suc = 0.9) and shifting the topic of conversation to whether total success occurred (QUD: all?). This manipulation results in a dramatic increase in utterance endorsement, irrespective of scope. 
To summarize, if the explicit contrast clause impacts a listener's beliefs about the frogs' chance of success (increasing b suc ) or the QUD (favoring all?), then the model predicts the endorsement rate should increase. Notably, both of these manipulations make the two-not scopally-ambiguous utterance more informative for a listener. In the case of the world state manipulation, two-not, under either scope interpretation, informs the listener that her prior beliefs about total frog success do not hold. Similarly with the QUD manipulation favoring all?, both scope interpretations answer this question in the negative (i.e., it is not the case that all (two) frogs succeeded).

The 1-OF-2 vs. 2-OF-4 asymmetry

If the factors identified for capturing the experimentally-observed effect of the explicit contrast manipulation are on the right track, then the same baseline parameter values that yield low endorsement in the 1-OF-2 context, namely low expectations for success (b suc = 0.1) and QUD uncertainty (QUD: uniform), should nonetheless yield the high endorsement observed in the 2-OF-4 context. To model the 2-OF-4 context, we change the number of actors n to 4 and additionally manipulate whether the exact (=) or at-least (≥) semantics applies, as they diverge when there are more than two actors in the context (see section 4). This decision impacts both the utterance semantics and the relevant set of QUDs (e.g., if the ≥ semantics gets used, then the two ≥ ? QUD is included in the set of potential QUDs). As shown in Figure 3, we do indeed predict high endorsement with the same parameter value baseline, but only with exact utterance semantics and a low probability of accessing the inverse scope (P(inv) = 0.1). In this case, we find an endorsement rate of 92%.

Discussion

Our model of ambiguity resolution in context captures the effect of the explicit contrast manipulation observed in adults in ML2003; notably, the same approach also captured this effect in children (Savinelli et al., 2017). This parallelism (sensitivity to the pragmatic context in both children and adults across different contexts) suggests that the same disambiguation mechanism is active in both children and adults.
Adults seem better able to charitably interpret less supportive pragmatic contexts (i.e., the original every-not scenarios); yet, there remain scenarios (i.e., certain two-not contexts) where even adult abilities are exceeded. We interpret the common underlying mechanism as support for developmental continuity in scope ambiguity resolution, with no qualitative shift required. In addition to supporting the developmental continuity hypothesis, this model also suggests why manipulations like the explicit contrast clause work. The pragmatic variables capture the explicit contrast manipulation because they create a situation where the ambiguous two-not utterance is still informative despite the ambiguity. When the utterance provides the listener with information that diverges from her prior beliefs, the ambiguous two-not utterance becomes more informative, more useful, and therefore more endorsable. The model also seamlessly captures ML2003's results from the 2-OF-4 context: with the very same parameter values that yield low endorsement rates for 1-OF-2 contexts, the model predicts the high endorsement observed for 2-OF-4 contexts. The only change is increasing the number of relevant individuals from two to four. This exploration of the 1-OF-2 vs. 2-OF-4 contexts allows us to refine our understanding of the potential sources of child and adult behavior. Savinelli et al. (2017)'s findings suggested that pragmatic factors alone are capable of capturing the non-adult-like behavior in children and the extension in the current model captures the explicit contrast effect in adults; however, the processing factor of scope (in particular, disfavoring the inverse scope) is needed to account for ML2003's 2-OF-4 results. This finding supports ML2003's conclusion, namely that adults have a strong preference for surface interpretations of two-not utterances. 
Combined with the appropriate pragmatic context, that preference has the potential to drive the endorsement asymmetry between the 1-OF-2 and 2-OF-4 contexts. Whether this surface interpretation preference in two-not contexts is also something children share remains an open empirical question; experimental results for every-not do not answer this question definitively (Viau et al., 2010; Savinelli et al., 2017).

Importantly, the present model requires one more ingredient to account for the 1-OF-2 vs. 2-OF-4 difference in adult behavior: an exact numeral semantics (in contrast to an at-least semantics; cf. Geurts, 2006; Breheny, 2008; Spector, 2013; Kennedy, 2015). While the underlying utterance semantics is not something easy to manipulate in an experiment, it is exactly the kind of variable we can systematically explore in a computational model. By doing so here, we are able to show the necessity of an exact semantics in generating the observed adult behavior. This provides empirical support, coming from computational modeling, for theories about the semantics of numerals. In particular, the only way to account for the observed adult behavior is if adults interpret utterances with two as meaning exactly two.

To sum up, these findings underscore the complexity of information involved in interpreting scopally-ambiguous utterances, including the literal semantics of the utterances involved, processing factors that affect interpretation accessibility, pragmatic factors that affect the potential informativity of the utterance, and the recursive social reasoning between speakers and listeners. Here, we find evidence for the impact of both pragmatic and processing factors, and in particular how a specific confluence of values for these factors yields the observed adult utterance endorsement behavior in multiple contexts.
The fact that pragmatic factors can have such a pronounced effect on their own accords with previous computational findings about the cause of children's utterance endorsement behavior in context, thereby highlighting the developmental continuity in pragmatic reasoning from childhood to adulthood. Moreover, the fact that the processing factor of scope access is crucial for explaining adult behavior in certain contexts motivates experimental work with children to see if their behavior is likewise affected by this processing factor in similar contexts. The fact that only the exact utterance semantics is capable of yielding the observed behavior provides empirical support in favor of this theory of representation for numerals. More broadly, we have demonstrated how computational modeling can help us refine our theories about different aspects of language, including theories of language understanding, language development, and language representation.
Reducing Violence in Riyadh’s Emergency Departments: The Critical Role of Healthcare Providers

Emergency department staff are at high risk of experiencing violence and aggression from patients and visitors, which can have negative impacts on healthcare providers in the ED. The aim of this study was to explore the role of healthcare providers in addressing local violence in Riyadh EDs and investigate their preparedness for managing violent incidents. We used a descriptive, correlational design with survey methodology to collect data from a convenience sample of nurses, ED technicians, physicians, and advanced practice providers in Riyadh city’s EDs. To examine the associations, we used an analysis of variance (ANOVA) for unadjusted relationships and an analysis of covariance (ANCOVA) for adjusted associations. Measures included a demographic survey, and clinicians responded to an online survey. A total of 206 ED staff participated in the questionnaire, and 59% reported experiencing physical violence during an ED shift, with 61% of incidents being caused by relatives. Additionally, 32% of the participants witnessed workplace violence. Our findings revealed that male healthcare workers, physicians, and those working in the governmental sector were at the highest risk of experiencing violence. We also found a statistically significant association between the rate of patients seen in the ED and the frequency of assault (physical or verbal) in the ED. Our results suggest that the rate of workplace violence in Riyadh EDs is high, and more efforts are needed to protect the health and well-being of healthcare providers. Senior management should take a position against ED domestic violence and reinforce managerial and healthcare provider resources by adopting policies and procedures that protect healthcare workers’ safety.
This study provides valuable insights into the nature and prevalence of violence in Riyadh EDs and highlights the critical role of healthcare providers in reducing violence in EDs.

Introduction

According to the International Labour Organization (ILO), workplace violence is defined as "any action, incident or behavior that departs from reasonable conduct in which a person is assaulted, threatened, harmed, injured in the course of, or as a direct consequence of, their work." Violent and aggressive acts committed by patients and visitors in emergency departments (EDs) remain a worldwide problem, and ED staff are particularly vulnerable to such workplace violence [1][2][3]. However, when initiating a career in healthcare, most healthcare providers do not anticipate that there may be concerns about their wellbeing every working day [4]. ED healthcare providers have reported experiencing a range of negative emotions, such as fear, confusion, anger, depression, guilt, embarrassment, helplessness, and disappointment due to violence [1,5]. The most frequently studied effects of such violence include a decline in job satisfaction and an elevated risk of burnout [6]. Moreover, instances of violent behavior encountered by ED staff are frequently not reported systematically. In qualitative research, employees have characterized workplace violence as a commonplace occurrence and an expected aspect of their job [7,8]. Risk factors for violent incidents in emergency departments include, for example, situations where patients or their family members may be in a highly emotional state, leading them to become aggressive or violent. Some patients may have mental health or substance abuse issues, which can increase their risk for violent behavior. Emergency department staff may be at increased risk for violence because they are often the first point of contact for patients in crisis.
Some patients may feel frustrated or disrespected by the healthcare system, leading them to take out their anger on staff. Crowded and overworked emergency departments can also contribute to a high-stress environment, which can lead to incidents of violence [9][10][11][12]. Besides physical injuries, acts of violence (including verbal abuse) can result in severe negative effects on the mental health and well-being of healthcare workers, leading to a higher risk of burnout [3,[13][14][15][16][17][18]. However, despite these risks, few violence prevention measures are currently in place, leaving employees feeling ill equipped to handle violent situations. To increase awareness of this issue, it is crucial to gain an understanding of the prevalence and impact of violence in emergency care.

Patient/visitor violence and aggression in the ED occur nearly every day, with few evidence-based interventions that decrease the incidence. However, exposure to violence and aggression varies by position. Wong et al. observed that self-reported exposure to violent episodes was higher for technicians, nurses, and officers than for other healthcare professionals. Somani and colleagues [19] assessed the effectiveness of training in de-escalation and multicomponent interventions to decrease violence and hostility in the ED. There is a developing consensus that multicomponent interventions, including all stakeholders and the use of community advisory boards, are necessary to combat violence and aggression in the ED [9,19,20]. Patient and visitor violence and aggression are significant issues in healthcare settings that can threaten the safety and well-being of healthcare providers. There is a need for effective strategies to prevent and manage such incidents, which can help employees feel better prepared and more secure in their workplace.
The aim of this study was to investigate the role of healthcare providers in addressing patient and visitor violence and aggression, as well as the attitude of their healthcare facilities toward these incidents. The novelty of our work lies in exploring these issues in the specific context of emergency departments in Riyadh, Saudi Arabia, where little research has been conducted on this topic. Our findings provide insights into the unique challenges faced by healthcare providers in this context and offer practical implications for improving the prevention and management of patient and visitor violence and aggression in emergency departments.

Materials and Methods

In this study, we utilized a descriptive, correlational design with a survey methodology to investigate the occurrence of patient/visitor violence and aggression against ED clinicians in Riyadh city's EDs. Our sample consisted of nurses, ED technicians, physicians, and advanced practice providers who were conveniently recruited. To collect data, we used a demographic survey, and clinicians responded to an online survey. To evaluate the associations between different variables, we employed statistical analyses including analysis of variance (ANOVA) for unadjusted relationships and analysis of covariance (ANCOVA) for adjusted associations. We used correlation coefficients to measure the strength and direction of the relationship between variables. Specifically, we calculated Pearson's correlation coefficient for continuous variables and Spearman's rank correlation coefficient for ordinal variables. We also performed ANCOVA while controlling for potential confounding variables.

Sample and Procedures

To gather information about patient/visitor violence and aggression toward emergency department clinicians in Riyadh city, the present study utilized an online survey with a cross-sectional design.
The survey was distributed to all ED staff, who received an informational flyer outlining the study's purpose, procedures, participation conditions, and data management. Participants were encouraged to share the survey link with their emergency medicine colleagues using the snowball sampling technique. Follow-up reminders were sent to boost response rates. Data were collected from various EDs in Riyadh, Saudi Arabia, between September and November 2022. Study participants had to be at least 21 years old, work in EDs of hospitals or emergency services, and engage in direct interactions with patients and their families as part of their job. The sample included nurses, ED technicians, paramedics, physicians, and other healthcare providers.

Measures

In addition to collecting sociodemographic information, the online survey employed a comprehensive approach to capture data on various variables related to violent incidents. These included the frequency and nature of the incidents, the targets of aggression, and the measures taken to address it. The survey also assessed the stress and physical and psychological impact of such incidents, as well as the level of support from supervisors and colleagues. Additionally, the survey evaluated personal coping mechanisms, the ability to continue working, potential role changes, and mental health outcomes. Moreover, the survey explored the level of preparation provided by the workplace, and the availability of support after violent incidents, such as preventive measures, de-escalation training, reporting systems, and follow-up care. By covering these diverse measures, the survey aimed to provide a comprehensive evaluation of the experiences of emergency department clinicians who have encountered patient/visitor violence and aggression.

Statistical Tools

Statistical analyses were performed using SPSS v.28. The basic features of the data were described using frequency and percentage distributions.
Spearman's rank coefficient of correlation was used to assess the association between ordinal variables. Spearman's rho is a nonparametric measure of rank correlation, denoted by the Greek letter ρ, which measures the strength and direction of the association between two ranked variables. This approach is appropriate for both continuous and discrete ordinal variables (Lehman, 2005). To examine the association between variables and sociodemographic factors, Pearson's chi-squared test (χ 2 ) was used. This statistical test is applied to sets of categorical data to test the independence of two variables, expressed in a contingency table. Independence means that knowing the value of the row variable does not change the probabilities of the column variable (and vice versa). Another way of looking at independence is to say that the row percentages (or column percentages) remain constant from row to row (or column to column). The strength of the correlation was evaluated using the following descriptors: very weak (0.0-0.19), weak (0.20-0.39), moderate (0.40-0.59), strong (0.60-0.79), and very strong (0.80-1.0).

Ethical Considerations

This study was approved by King Saud University Research Centre Institutional Review Board (Ref No.: KSU-HE-23-043), and informed consent to participate in this study was taken from each participant before answering the questionnaire.

Descriptive Data

The sample size comprised 206 individuals, as 34 participants were excluded due to incomplete answers. In terms of demographic data, 53.9% of the survey responders worked for the governmental sector, 38.8% worked for the Ministry of Health (MOH), and 7.3% worked for the private sector. Physicians represented 47.6% of the sample, 27.7% were nurses, 14.1% were paramedics, and 10.7% were categorized as other healthcare providers. In terms of gender, 56.8% were male and 43.2% were female.
Regarding experience, 38.8% had 1-5 years of experience, 26.2% had 6-10 years of experience, and 35% had more than 10 years of experience working in emergency departments (Table 1).

Local Violence in the Emergency Department

The participants responded to several questions about local violence in the ED; the highest percentage (27.2%) of the total sample examined 20-50 patients during their ED shifts. Our results showed that 58.7% of the participants had been physically assaulted during their work in the ED, 31.6% had witnessed another assault, and 9.7% had not witnessed or experienced assault. Of the total sample, 48.1% of the participants were physically or verbally assaulted 2-5 times during their work in the ED. The assault was perpetrated by patients' family members or friends in 61.3% of the cases. The hospital administration responded to assaults in 35.5% of the cases, and 72.8% of the hospitals had clear policies and regulations to deal with assaults. Finally, assaults affected 41.7% of the total sample (physically or emotionally), and 84.5% indicated that violence in the ED could affect patient care (Table 2).

Frequency of Assault in Relation to Patient Exposure

The Spearman rho results revealed the association between the rate of seeing patients during the emergency shift and the frequency of assault (physical or verbal) in the ED. The results indicated that there was a statistically significant association between the rate of patients seen during the emergency shift and the frequency of assault (physical or verbal) in the ED (r = 0.238, p < 0.01). This correlation coefficient indicated a positive association between the two variables, with higher patient volumes associated with a greater frequency of assault. While an r-value of 0.238 may not be considered a strong association in all contexts, it was considered statistically significant at the p < 0.01 level, indicating a meaningful relationship between these variables.
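As an illustration of the two statistics used in these analyses, both can be reproduced on toy data in a few lines of Python. This is purely illustrative: the study's analyses were run in SPSS v.28, and the numbers below are invented examples, not the survey data.

```python
# Toy re-implementations of the two test statistics used in the analysis.

def spearman_rho(x, y):
    """Spearman's rank correlation for samples without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(x)
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def chi_square(table):
    """Pearson's chi-squared statistic for a contingency table
    (list of rows), without continuity correction."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat
```

For example, `spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])` returns 0.8; by the descriptors listed earlier, a coefficient of 0.238 like the one reported counts as weak (0.20-0.39) yet can still reach statistical significance in a sample of this size.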
Being Assaulted in Relation to Demographic Variables

The chi-square test results for an association between demographic variables and being assaulted (physically or verbally) or witnessing any assault while working in the ED indicated that there was a statistically significant association according to sector, profession, and gender (p < 0.05). The governmental sector had a higher percentage of assaults during work in the ED (63.1%). Compared with other healthcare providers, physicians had a higher percentage of assault during their work in the ED (71.4%). Additionally, 66.7% of the male responders indicated that they had been physically assaulted, versus 48.3% of female responders. There was no statistically significant association between being assaulted (physically or verbally) or witnessing an assault while working in the ED and years of experience (p > 0.05) (Table 2, Figure 1).

Table 3 shows the test results of chi-square tests, which showed a statistically significant association between profession and gender (p < 0.05). Physicians reported a higher percentage of assaults (6 to 10 times during their work in the ED (23.5%)). In terms of gender, male healthcare workers were more likely than their female counterparts to have experienced physical assault 6 to 10 times (20.5% for males versus 7.9% for females).
Contrastingly, there was no statistically significant association between the number of assaults in the emergency room and sector or experience (p > 0.05) (Table 3, Figure 2).

Figure 1. Responses by sector, profession, gender, and experience to the question "Have you ever been (physically or verbally) assaulted or witnessed another assault during your work in the emergency department?"

Figure 2. Responses by sector, profession, gender, and experience to the question "How many times have you been (physically or verbally) assaulted during your work in the emergency department?"
Discussion

Numerous studies have thoroughly documented that healthcare providers are frequently exposed to physical and verbal abuse by patients and their relatives [3,[10][11][12]. High prevalence rates of verbal and physical abuse were also observed in this survey among ED healthcare workers (90.3%). This study also found a high prevalence of male participants who had been assaulted in the emergency department (53%), and most of them had 1-5 years of working experience in the ED (39%). In this study, there was a moderate association between being assaulted or witnessing an assault and being male, a physician, and working in the governmental sector. Nevertheless, according to different meta-analyses, gender, professional status, and closer interaction with patients and visitors may all contribute to female nurses encountering more physical abuse [3,14]. In a recent local study that examined the prevalence of healthcare workers' exposure to violence in the Eastern Province of Saudi Arabia, it was found that health practitioners (46.9%) working in primary care centers were commonly exposed to different forms of abuse, including physical abuse. The study concluded by emphasizing that there was relatively little awareness and education on how to manage and report violence in healthcare institutions, thus stressing the need to establish a national program to track and prevent workplace violence [21]. Only a few review papers particularly addressed the extent of physical violence committed by patients or visitors in EDs against healthcare staff. According to a meta-analysis, 19.3% of medical professionals globally reported experiencing workplace violence committed by patients or visitors [3]. In our study, more than half of the healthcare workers had been assaulted either verbally or physically, 32% had witnessed someone else being assaulted, while 10% did not experience any domestic violence in the ED.
In a cross-sectional multi-institutional study, emergency nurses were the group most frequently exposed to violence (87.4%), and 62% of the violent encounters were perpetrated by relatives of the patients [19]. Additionally, and as noted in two Saudi-based studies, one in Riyadh and one in Abha, some of the risk factors for violence in the Saudi healthcare setting included overcrowding, long wait times, culture and personality issues, understaffing, and most importantly, the lack of an encouraging environment for healthcare workers to submit official violence reports [22,23]. Several researchers have investigated the extent to which the experience of violent events at work may increase the chances of burnout, including fear, anger, and depression [1,6,20,24-32]. A recent study found that non-physical violence, mainly verbal aggression, was associated with emotional exhaustion, cynicism, and reduced professional efficacy [15]. Likewise, the participants in our study agreed that the assault had an emotional or physical impact on them, and it also impacted patients' treatment and care. On the other hand, hospital administrations responded to assaults in only 36% of cases. Regarding hospital policies and regulations that deal with workplace violence, 73% of participants had clear instructions and policies implemented in their healthcare system, which had a strong positive correlation (p = 0.01). However, only 39% of senior employees who responded to the survey indicated that the top management level had clearly taken a stand against violence. The significant role of healthcare professionals and hospital policies regarding violent situations was one of the study's important considerations. This study showed that in the local environment, male doctors and both sexes in other medical professions working in the ED suffered from an increased risk of verbal and physical abuse. Such situations may escalate and potentially cause harm.
Thus, lower-level management employees may find it challenging to implement preventive measures or foster an open culture of conversation if the problem does not seem to be a priority for higher-level management. This deserves attention because preventing violence is a crucial management activity that greatly aids in establishing a secure work environment. This can be achieved by fostering an encouraging environment to report abuse incidents, conducting frequent risk assessments, and implementing preventative training and awareness programs. According to a study conducted by Chen et al. in 2015 on workplace violence in Chinese hospitals, the prevalence of verbal and physical abuse was higher among female healthcare professionals [33]. Our study, on the other hand, found that male healthcare professionals were at a higher risk of experiencing verbal and physical abuse in the ED. This difference in results could be attributed to the cultural and contextual variations between China and Saudi Arabia. In interpreting our results, it is evident that healthcare professionals and hospital policies play a significant role in managing and preventing violent situations. In Riyadh EDs, healthcare professionals, especially male doctors and both sexes in other medical professions, are at a heightened risk of experiencing verbal and physical abuse. Such incidents can quickly escalate and potentially cause harm, thus underscoring the need for preventive measures. It is important to note that lower-level managers may find it challenging to implement preventive measures or foster an open culture of conversation if preventing violence is not a priority for higher management levels. Therefore, it is crucial to establish a secure work environment by creating a culture of reporting abuse incidents, conducting frequent risk assessments, and implementing preventive training and awareness programs [34,35].
This approach could potentially reduce the prevalence of workplace violence in Riyadh EDs and create a safer work environment for healthcare professionals. On a behavioral level, this entails providing behavioral training for staff members and managers that includes strategies and procedures, such as de-escalation instructions and self-defense methods. These can assist healthcare providers in improving their ability to handle dangerous or critical circumstances safely and competently [16,36-38]. Our hospital has taken steps to implement such training programs and has seen positive results in reducing the incidence of workplace violence. Additionally, it is worth mentioning that the Kingdom of Saudi Arabia, represented by the Ministry of Health's Legal Affairs, has indicated that it will spare no effort to protect the country's healthcare workers against abuse and will take the necessary legal measures to secure their rights. In fact, it is underlined in the Saudi Judicial System that "the right of all abused staff will be protected, indicating that verbal and physical abuse against health practitioners is a crime punished by law, with imprisonment up to 10 years and a fine up to one million riyals" [39]. Our hospital has also taken steps to work with local authorities to ensure that any incident of workplace violence is reported and investigated promptly and that perpetrators are held accountable for their actions. By implementing these solutions, our hospital has been successful in reducing the incidence of workplace violence in the emergency department. We believe that other hospitals facing similar challenges can benefit from our experiences and implement similar strategies to protect their healthcare workers and create a safer workplace for all. Limitations It is necessary to explain a few of our study's limitations. Our research used an anonymous online survey. No causal connections could be found because of the cross-sectional design.
The respondents were recruited for the survey using a link that was provided by Google Forms. As a result, response rates could not be determined, and a selection bias may have resulted from this limitation. However, this made it possible to conduct the survey all around Riyadh city. Lastly, the survey of violent episodes in the ED in our cross-sectional study covered a recall period of 3 months, which is quite short; extending it over a longer period could produce more robust results. Conclusions The rate of workplace violence in Riyadh EDs is high, and male healthcare workers, physicians, and those working in the government sector were at the highest risk of violence. Furthermore, there was an association with the rate of patients seen in the ED. We conclude that managing workplace violence is a difficult problem in the healthcare system and that management staff plays a crucial role in preventing, de-escalating, and dealing with violent incidents. Future studies should assess how senior management can take a position against ED violence and reinforce the resources for managerial personnel and healthcare providers through the adoption of policies, procedures, and preventative training programs that protect healthcare workers' health and well-being. Informed Consent Statement: The information included the study's purpose, the voluntary nature of their participation, strict confidentiality, and secure data storage. The survey was anonymous and all respondents agreed to participate. Written consent was obtained from participants who completed the online questionnaire. Data Availability Statement: The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Layer-by-layer siRNA/poly(L-lysine) Multilayers on Polydopamine-coated Surface for Efficient Cell Adhesion and Gene Silencing For tissue engineering applications, small interfering RNA (siRNA) is an attractive agent for controlling cellular functions and differentiation. Although polyionic condensation of nucleic acids with polycations has been widely used for gene delivery, siRNA is not strongly associated with cationic carriers due to its low charge density and rigid molecular structure. An excess amount of cationic carriers is often used for siRNA condensation, though they can induce severe cytotoxicity. Here we introduce the self-assembly of siRNA with mild polyelectrolytes into multilayers for efficient gene silencing during cell proliferation. The multilayers were prepared through the sequential layer-by-layer deposition of siRNA and poly-L-lysine (PLL) on a polydopamine-coated substrate. The cells grown on the siRNA/PLL multilayers exhibited a remarkable inhibition of the expression of target genes as compared to the use of scrambled siRNA. The gene silencing efficiency depends on the number of siRNA layers within a multilayer. This result indicates that siRNA/PLL multilayers can potentially be utilized for efficient surface-mediated siRNA delivery. Polydopamine (PDA) coatings contain catechol-based moieties that can strongly bind to a wide range of organic and inorganic surfaces. PDA coating can be implemented simply by dip-coating in an alkaline solution of dopamine 29 . The surface of a PDA coating can serve as an anchor for the loading of functional groups, including amine- and thiol-bearing compounds, via Michael addition or Schiff-base reactions 34,35 . In addition, catechol has a redox potential of +530 mV vs. a normal hydrogen electrode at pH 7, which makes PDA attractive for electrochemical applications [36-39].
For example, the PDA layer can mediate the on-surface reduction of metal precursor ions into solid nanostructures because their redox potentials are relatively higher than that of catechol. This property also allows PDA to strongly bind to various inorganic surfaces. Therefore, the PDA-coated layer can serve as an effective platform when surface modification needs to be independent of the properties of the underlying materials. Recently, PDA-coated substrates showed effective immobilization of stable pDNA complexes for surface-mediated gene delivery 40 . In this study, we employed LBL self-assembly to prepare a siRNA/PLL multilayer on the PDA-coated substrate for surface-mediated siRNA delivery. The surface of the siRNA/PLL multilayer induced effective cell adhesion, spreading and proliferation without any severe cytotoxicity. Notably, the cells grown on the siRNA/PLL multilayers exhibited remarkable inhibition of the expression of target genes. Interestingly, the gene silencing effect is correlated with the number of siRNA layers within a siRNA/PLL multilayer. Results and Discussion Preparation of siRNA/PLL multilayers loaded on the PDA-coated substrates. The siRNA/PLL multilayer, consisting of siRNA and PLL on the PDA-coated substrate, was prepared via LBL, which facilitated effective cell adhesion and proliferation as well as a gene silencing effect (Fig. 1). The number of siRNA/PLL bilayers is denoted by "n" in the (siRNA/PLL) n multilayer. The surface of the glass was coated with PDA via the oxidative self-polymerization of dopamine to build up the siRNA/PLL multilayer. During the polymerization of dopamine, it is well known that the PDA-coated surface changes color from transparent to dark due to catechol oxidation 41 . The surface of the glass became dark after the PDA coating process, which indicated successful PDA coating on the glass substrate (Fig. S1).
The resultant PDA-coated layer can serve as a substrate for the sequential adsorption of siRNA and PLL via electrostatic self-assembly because of the adhesive functional groups on the surface of PDA coatings. Notably, the last layer of the (siRNA/PLL) n multilayers was coated with PLL because PLL can facilitate efficient cell adhesion, spreading and proliferation as well as protect siRNA from enzymatic degradation, resulting in enhanced surface-mediated gene silencing. Characterization of siRNA/PLL multilayers. The surface morphology of the (siRNA/PLL) n multilayers was observed using scanning electron microscopy (SEM). As shown in Fig. 2, the surface of the (siRNA/PLL) n multilayer became much rougher as the number of siRNA/PLL bilayers increased. However, the pristine glass surface did not show a roughened substrate under the same experimental conditions (Fig. S2). Interestingly, the surface of the (siRNA/PLL) 1 multilayer exhibited an even distribution of aggregated particles with average sizes of approximately 900 nm, which may be attributed to the formation of siRNA/PLL complexes during the LBL self-assembly process. This result suggests that siRNA molecules can be adsorbed in the form of siRNA/PLL complexes within the siRNA/PLL multilayer. Furthermore, the thickness of the (siRNA/PLL) n multilayers steadily increased with the number of siRNA/PLL bilayers, which indicates the successful formation of (siRNA/PLL) n multilayers on the PDA-coated glass surface (Fig. S3). To further verify whether siRNA was effectively incorporated within the (siRNA/PLL) n multilayer, we used fluorescently-labeled siRNA for the fabrication of the (siRNA/PLL) n multilayer (Fig. 3). The confocal laser scanning microscopy image of the siRNA/PLL multilayers showed scattered red fluorescent dots, suggesting the successful adsorption of siRNA molecules within the (siRNA/PLL) n multilayer. Notably, the fluorescence intensity of the red dots became much higher as the number of layers increased.
The result indicates that the sequential deposition of siRNA and PLL through LBL self-assembly can effectively form the (siRNA/PLL) n multilayer on the PDA-coated surface. In addition, the loading efficiencies of siRNA were 91.3 ± 2.8%, 93.4 ± 3.1% and 92.2 ± 2.7% for n = 1, 3 and 6 of the (siRNA/PLL) n multilayers, respectively. This result indicates that the loading amounts of siRNA could be precisely controlled by adjusting the number of siRNA layers in the (siRNA/PLL) n multilayer. Notably, siRNA embedded in the (siRNA/PLL) 6 multilayers was not released after incubation in phosphate-buffered saline (PBS) for 3 or 5 days, which can facilitate surface-mediated intracellular uptake and gene silencing (Fig. S4). Cytotoxicity and gene silencing efficiency of siRNA/PLL multilayers. The cytotoxicity of the prepared (siRNA/PLL) n multilayer was determined by measuring the viability of HeLa-GFP cells grown on the various (siRNA/PLL) n multilayers (Fig. 4). No significant cytotoxicity was observed up to 8 siRNA/PLL bilayers, (siRNA/PLL) 8 . However, the (siRNA/PLL) 10 multilayer decreased the cell viability to 85.2% ± 5.3%, which might be related to its higher positive surface charge. It has been known that the cytotoxicity of cationic polymers mainly depends on their high surface charge density, surface hydrophobicity and composition 42 . In addition, there was no significant difference in adherent cell density between pristine PDA-coated glass surfaces and (siRNA/PLL) n multilayers after a short incubation of 3 h: 2.91 × 10 4 ± 0.31 cells/cm 2 (n = 0), 3.02 × 10 4 ± 0.25 cells/cm 2 (n = 2), 2.83 × 10 4 ± 0.24 cells/cm 2 (n = 4) and 2.91 × 10 4 ± 0.15 cells/cm 2 (n = 8) for the (siRNA/PLL) n multilayers on the PDA-coated substrates (Fig. S5). This result indicates that the surface of the (siRNA/PLL) n multilayers did not affect cell adhesion as compared to that of pristine PDA-coated glass substrates.
According to the results described above, we investigated the gene silencing efficiency of the (siRNA/PLL) n multilayers (n = 1, 3 and 6) that did not show any cytotoxicity. The gene silencing of the (GFP siRNA/PLL) n multilayer was evaluated from the fluorescence intensity of HeLa-GFP cells loaded on the various (GFP siRNA/PLL) n multilayers using confocal laser scanning microscopy. Confocal images showed that the GFP fluorescence of the cells grown on all of the (GFP siRNA/PLL) n multilayers was significantly decreased compared to the PDA-coated substrate (Fig. 5a). This result suggests that siRNA/PLL multilayers provide efficient cell proliferation and surface-mediated gene silencing at the same time. Furthermore, the remarkable reduction in the GFP fluorescence of the cells grown on all of the (GFP siRNA/PLL) n multilayers occurred without significant cytotoxicity, which indicates that the gene silencing effects were derived from RNAi activity. Notably, the observed reduction in GFP fluorescence depended upon the number of siRNA/PLL bilayers due to the increased amount of siRNA adsorbed to the surface. To confirm that the observed suppression of GFP fluorescence resulted from the degradation of the target-specific GFP mRNA, we measured the level of intracellular mRNA expression using reverse transcriptase-polymerase chain reaction (RT-PCR) for the HeLa-GFP cells grown on the (siRNA/PLL) n multilayers (Fig. 5b). The band intensity of the GFP/β-actin mRNA was 106.54 ± 3.25% for the cells loaded on the PDA-coated substrates, while the cells loaded on the (GFP siRNA/PLL) n multilayers showed 90.12 ± 2.31% (n = 1), 64.54 ± 5.56% (n = 3) and 32.56 ± 3.21% (n = 6). However, the (scrambled siRNA/PLL) n multilayers exhibited no significant degradation of the target mRNA regardless of the layer number under the same experimental conditions.
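Normalizing those band intensities to the PDA-only control gives the effective mRNA knockdown at each layer number; a minimal sketch of that arithmetic, using the percentages quoted above:

```python
# GFP/β-actin band intensities from the text (percent of a reference),
# with the PDA-only substrate (106.54%) taken as the untreated control.
control = 106.54
band = {1: 90.12, 3: 64.54, 6: 32.56}

for n, rel in band.items():
    # Knockdown is the fractional reduction relative to the control level.
    knockdown = 100.0 * (1.0 - rel / control)
    print(f"(siRNA/PLL){n}: ~{knockdown:.0f}% GFP mRNA knockdown vs control")
```

This works out to roughly 15%, 39% and 69% knockdown for n = 1, 3 and 6, which makes the layer-number dependence of the silencing effect explicit.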
It should be noted that the target-specific gene silencing efficiencies were consistent with the corresponding reduction in the GFP fluorescence of the cells on the (siRNA/PLL) n multilayer (Fig. 4). Interestingly, (GFP siRNA/PLL) n multilayers coated with PLL as the last layer showed enhanced gene silencing as compared to siRNA-coated multilayers, which might be related to the protection of siRNA by the PLL outer coating from serum degradation (Fig. S6). These results indicate that the silencing of GFP expression in the cells loaded on the siRNA/PLL multilayer was directly induced by the degradation of the target GFP mRNA via RNAi processing. Conclusions We developed a polyelectrolyte multilayer of siRNA and PLL through LBL self-assembly on a PDA-coated substrate for effective cell adhesion and gene silencing. The surface of the substrates was readily coated with PDA through the oxidative self-polymerization of dopamine. The resulting siRNA/PLL multilayer on the PDA-coated substrate exhibited efficient target-specific gene silencing through RNAi activity. Also, the gene silencing efficiency increased with the number of siRNA layers in the multilayer. This approach should possess considerable potential for surface-mediated gene delivery. Experimental Section Materials. All oligonucleotides were purchased from Bioneer Corp. (Daejeon, Republic of Korea). PLL (0.01% w/v, 150-300 kDa) and dopamine hydrochloride were obtained from Sigma-Aldrich (St. Louis, MO, USA). Round cover glasses with a diameter of 18 mm were purchased from Marienfeld-superior (Germany). Coating of Polydopamine on the Glass Surface. A glass was thoroughly washed several times with deionized water to eliminate any contamination. The cleaned glass was immersed in the dopamine solution (2 mg/mL dopamine hydrochloride in 10 mM Tris buffer at pH 8.5).
After 48 h of incubation, the resultant product, PDA-coated glass, was rinsed 3 times with deionized water and heated at 120 °C for 1 h. Construction of siRNA/PLL Multilayers. A siRNA stock solution was prepared at a concentration of 5 μg/mL in 10 mM Tris buffer (pH 7.5). One milliliter of the siRNA solution was dropped on the PDA-coated substrate and incubated for 30 min at room temperature. After washing with deionized water, one milliliter of the PLL solution was dropped on the siRNA-adsorbed PDA-coated substrates for 30 min at room temperature. The sequential deposition of siRNA and PLL was repeated. Determination of Loading Efficiency. After the siRNA adsorption process, unattached siRNA was recovered by ethanol precipitation 43 . Briefly, 3 M sodium acetate (pH 5.2) and then 100% ethanol were added to the solution containing unattached siRNA, followed by incubation at −80 °C for 30 min. The pellets were carefully dissolved in diethyl pyrocarbonate-treated deionized water. The loading efficiency of siRNA was determined at 260 nm using a Nanodrop ® ND-1000 spectrophotometer (Wilmington, DE, USA). Gene Silencing. The resultant (siRNA/PLL) n multilayer on the PDA-coated glass was placed carefully at the bottom of the wells of a 12-well plate. HeLa cells expressing GFP (HeLa-GFP cells) were seeded on the surface of the siRNA/PLL multilayers at a density of 1.5 × 10 5 cells per well in a serum-deficient medium. After 6 h of incubation, the transfected cells were washed with phosphate-buffered saline and further incubated in fresh culture medium for 48 h. The total RNA was extracted from the cell lysates using TRI reagent (Ambion, Inc., USA) and directly transcribed to cDNA using an Omniscript RT-PCR kit (Qiagen, USA) according to the manufacturer's instructions. The resulting cDNA was amplified using Taq polymerase and its specific primer sets following a previous report 18 .
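The loading-efficiency calculation implied by that protocol (input siRNA minus the unbound fraction quantified at 260 nm) can be sketched as follows. The A260-to-concentration conversion of ~40 ng/µL per absorbance unit is the common approximation for RNA, and all numeric inputs are illustrative, not values from the paper.

```python
def sirna_loading_efficiency(a260_unbound, volume_ul, input_ng):
    """Percent of input siRNA retained in the multilayer, estimated from
    the A260 of the recovered unbound fraction.
    Assumes ~40 ng/uL of RNA per A260 unit (a common approximation;
    the exact factor depends on sequence)."""
    unbound_ng = a260_unbound * 40.0 * volume_ul   # mass of siRNA not adsorbed
    return 100.0 * (input_ng - unbound_ng) / input_ng

# Illustrative numbers: 5000 ng of siRNA applied; the unbound fraction,
# redissolved in 50 uL, reads A260 = 0.2 on the spectrophotometer.
eff = sirna_loading_efficiency(a260_unbound=0.2, volume_ul=50, input_ng=5000)
print(f"loading efficiency ≈ {eff:.1f}%")
```

With these hypothetical readings the estimate lands around 92%, in the same range as the ~91-93% efficiencies reported in the text.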
To visualize the suppression of GFP fluorescence, the HeLa-GFP cells were analyzed using confocal laser scanning microscopy (LSM 510, Carl Zeiss). Briefly, the cells grown on the multilayers were washed with PBS and fixed with 3.7 wt% formaldehyde for 10 min at room temperature. The nuclei of the cells were stained using 4′,6-diamidino-2-phenylindole (DAPI, 1.5 mg/mL) for 5 min. Cell Cytotoxicity. HeLa-GFP cells were seeded on the surface of the siRNA/PLL multilayers at a density of 1.5 × 10 5 cells per well in a 12-well plate. After 6 h of incubation, the transfected cells were washed with PBS and further incubated in a fresh culture medium. After 48 h of incubation, the number of viable cells was analyzed using an EZ-Cytox Enhanced Cell Viability Assay Kit (Daeillab Service Co. Ltd., Seoul, Republic of Korea). Characterization of siRNA/PLL Multilayers. The surface of the various (siRNA/PLL) n multilayers was observed using scanning electron microscopy (SEM, S-4800, Hitachi Ltd., Japan). The specimens were sputter-coated with platinum for 120 seconds. The loading of siRNA was confirmed by preparing a polyelectrolyte multilayer with TAMRA-labeled siRNA, followed by visualization using confocal laser scanning microscopy (LSM 510, Carl Zeiss). Statistical Analyses. All data are presented as the mean ± standard deviation of n independent measurements. Statistical significance was determined at p < 0.05 using Student's t-test.
The influence of the tangential velocity of inner rotating wall on axial velocity profile of flow through vertical annular pipe with rotating inner surface In the oil and gas industries, understanding the behaviour of a flow through an annulus gap in a vertical position, whose outer wall is stationary whilst the inner wall rotates, is a significantly important issue in drilling wells. The main emphasis is placed on experimental (using an available rig) and computational (employing CFD software) investigations into the effects of the rotation speed of the inner pipe on the axial velocity profiles. The measured axial velocity profiles, in the cases of low axial flow, show that the axial velocity is influenced by the rotation speed of the inner pipe in the region of almost 33% of the annulus near the inner pipe, and influenced inversely in the rest of the annulus. The position of the maximum axial velocity is shifted from the centre to be nearer the inner pipe by increasing the rotation speed. However, in the case of higher flow, as the rotation speed increases, the axial velocity is reduced and the position of the maximum axial velocity is skewed towards the centre of the annulus. There is a reduction of the swirl velocity corresponding to the rise of the volumetric flow rate. Introduction In the oil and gas industries, drilling fluids play a central role in well drilling. Following the success of the first rotary drilling well, the technology of well drilling became significantly important. Nowadays, much attention has been paid to developing the technology of drilling and drilling fluids. One of the most important functions of drilling fluids is cutting removal from the borehole; the cuttings generated by the bit must be removed immediately to achieve an effective drilling process [1].
Several factors influence the carrying capacity of the drilling fluids, such as annulus velocity, plastic viscosity and yield point of the mud, and slip velocity of the generated cuttings. For a cutting to reach the surface, the slip velocity must be lower than the average annular velocity. As an approximate guide, the minimum annular velocities for hole sizes of 15, 12.5, 10.625, 8.75, 7.875 and 6 inches are 80, 90, 110, 120, 130 and 140 ft/min, respectively [2]. This research aims to investigate the characteristics of the tangential and axial velocities of flow in a vertical concentric annulus, whose outer cylinder is stationary and inner cylinder is rotating. In order to accomplish this aim, an experiment and numerous calculations have been performed. Furthermore, data collected from these approaches will be compared to investigate the effect of swirl velocity on the axial velocity profile, and how well the computation predicts this flow: -Experimental approach: experimental data will be collected by using a rig that is available in the Fluid Dynamics Lab at Newcastle University. -Numerical approach: calculated data based on the Computational Fluid Dynamics (CFD) software Gambit and FLUENT. This approach is widely accepted for engineering problems, as it is reliable, cost effective and less time consuming. Computational fluid dynamics (CFD) The last two decades have seen very rapid growth in the understanding of computational fluid dynamics, or CFD modeling, which is extensively used to predict and analyze the behaviour of fluid flows. "CFD is the analysis of the systems involving fluid flow, heat transfer and associated phenomena such as chemical reactions by means of computer-based simulation" [3]. Since CFD is powerful, it spans a wide range of industrial and non-industrial applications, and an example of these applications is the flow through an annulus pipe.
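The transport criterion above (the annular velocity must exceed the cuttings' slip velocity, with the quoted minimums as a rule of thumb) can be sketched as a simple check. The table below assumes the 10.623-inch size quoted in the text is the standard 10 5/8-inch (10.625 in) hole, and the velocities passed in the example are hypothetical.

```python
# Rule-of-thumb minimum annular velocities (ft/min) by hole size (inches),
# as quoted in the text [2].
MIN_ANNULAR_VELOCITY = {
    15: 80, 12.5: 90, 10.625: 110, 8.75: 120, 7.875: 130, 6: 140,
}

def cuttings_will_lift(hole_size_in, annular_velocity, slip_velocity):
    """True if a cutting is expected to reach the surface: the annular
    velocity (ft/min) must exceed the slip velocity and also meet the
    guide minimum for the hole size."""
    guide = MIN_ANNULAR_VELOCITY[hole_size_in]
    return annular_velocity > slip_velocity and annular_velocity >= guide

# Hypothetical example: an 8.75-inch hole with a 40 ft/min cutting slip velocity
print(cuttings_will_lift(8.75, 125, 40))  # meets both criteria
print(cuttings_will_lift(8.75, 100, 40))  # fails the 120 ft/min guide value
```

The net upward transport velocity of a cutting is simply the annular velocity minus the slip velocity, which is why the first condition alone guarantees lifting in principle; the guide minimum adds a practical safety margin.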
"[a]nnular pipe flow is important in engineering applications such as heat exchangers, gas-cooled nuclear reactors and drilling operations in the oil and gas industry" [4]. They studied a turbulent concentric annular pipe flow for two radius ratios (R1/R2 = 0.1 and 0.5) using direct numerical simulation. Experimental and numerical studies have been carried out on fully developed turbulent flow through concentric annuli (Knudsen and Katz, 1950; Brighton and Jones, 1964; Quarmby, 1967; Rehme, 1974). The radial positions of zero shear stress and maximum velocity were the main focus of these studies. It was reported by Knudsen and Katz (1950), Brighton and Jones (1964), and Quarmby (1967) that the radial location of maximum velocity coincides with the location of zero shear stress. However, Rehme (1974) remarked that the position of zero shear stress is closer to the inner wall than that of maximum velocity. LDV experiments in concentric annuli were conducted by Nouri et al. (1993) and Escudier et al. (1995) [4]. Many studies have been done on flow through tubes and annular pipes, but far too little attention has been paid to flow through an annular pipe with rotation of the inner wall. However, a study of flow through an annulus with a rotating inner cylinder was done by Nouri and Whitelaw (1994) [5]. The latest study on this problem was done by Essiwi (2006), to investigate the validity of CFD modeling for oil well drilling fluid flows; he considered the axial and swirl velocities as they affect the process of lifting the generated cuttings [2]. Experimental model The experimental measurements were established by using the available rig (fig 2.1) to measure the axial and swirl velocities of the flow through the vertical annular pipe whilst rotating the inner one. The velocities will be measured by using a Laser Doppler Velocimeter (LDV) at different distances from the inner pipe.
The measurements were taken at 1.4 m from the bottom of the module, because at this location the axial velocity is fully developed [2]. The axial flow through the annulus gives axial Reynolds numbers in the range of Turbulent model In this study, a two-equation model is used, the standard k − ε model, and for simplification the geometry is solved as a two-dimensional axisymmetric simulation. Fluid flows are usually unsteady, three-dimensional and involve fluids that are to some degree compressible. Many simplifying assumptions are frequently made, for example that the flow is steady or restricted to fewer than three dimensions, or that it is practically incompressible [6]. The standard k − ε model is a semi-empirical model which has become the workhorse of practical engineering flow calculations in the time since it was proposed by Launder and Spalding (1972) [7]. The model is based on model transport equations for the turbulent kinetic energy, k, and its dissipation rate, ε. The turbulent viscosity is computed from these scalars. The k − ε formulation is derived using a high Reynolds number hypothesis; also, the near-wall treatment is based on the application of wall functions, rather than solving the governing equations inside the boundary layer. distribution and zero gauge pressure were set in the pressure outlet panel. The outer wall and the bottom were set as stationary walls with no-slip conditions. The inner wall was set as a moving wall with a specified rotation speed (0, 75, 150, 225 and 300 rev/min). The working fluid is water with software default parameters (density of 998.2 kg/m3 and viscosity of 1.003 × 10-3 kg/m·s). The magnitudes of the measured axial velocity are from -0.012 to 0.01 m/s, but as this indicates a net outflow, the measured and computed profiles are close to each other near the outer wall up to 40% of the annulus gap, but then they diverge towards the inner pipe.
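The axial Reynolds number for an annulus is conventionally based on the hydraulic diameter D_h = D_outer − D_inner. A minimal sketch follows, using the water properties quoted in the text but an illustrative geometry and flow rate, since the rig's actual dimensions are not given in this excerpt.

```python
import math

def annulus_axial_reynolds(q_m3s, d_outer, d_inner, rho=998.2, mu=1.003e-3):
    """Axial Reynolds number for flow through a concentric annulus,
    based on the hydraulic diameter D_h = D_outer - D_inner.
    Defaults are the water properties quoted in the text."""
    area = math.pi / 4.0 * (d_outer**2 - d_inner**2)  # annular cross-section, m^2
    u_bulk = q_m3s / area                             # bulk axial velocity, m/s
    d_h = d_outer - d_inner                           # hydraulic diameter, m
    return rho * u_bulk * d_h / mu

# Illustrative only: 100 mm outer / 50 mm inner diameter, 0.5 L/s axial flow
re = annulus_axial_reynolds(q_m3s=0.5e-3, d_outer=0.100, d_inner=0.050)
print(f"Re ≈ {re:.0f}")
```

For these illustrative inputs the Reynolds number comes out in the low thousands, around the laminar-turbulent transition, which is consistent with the use of a turbulence model for the higher flow rates studied.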
The behaviour of the computed axial velocity profiles as the rotation speed of the inner pipe increases. Profiles of measured axial velocity in the cases of zero volumetric flow rate with different rotation speeds of the inner pipe (150, 225 and 300 rev/min) are shown in figure 4.5. The axial velocity is directly proportional to the rotation speed, and the absolute magnitude of the axial velocity in the inner region of the annulus gap is higher than in the outer region. The positions of the zero value of the axial velocity in the three cases are skewed towards the outer wall, rather than the middle of the annulus, at about 0.024 m away from the outer wall. Again, the rotation speed of the inner pipe generates a re-circulating flow in the annulus. In the inner half of the annulus, it is obvious that the axial velocity is inversely proportional to the rotation speed. On the other hand, in the outer half, the axial velocity reduces as the rotation speed increases up to 75 rev/min, and then the axial velocity increases. Completely different from the profiles in figure 4.6, in figure 4.7 the profile of the axial velocity becomes flatter as the rotation speed increases. In general, from figures 4.6 and 4.7, for rotation speeds of 225 rev/min and above, the axial velocity decreases as the rotation speed of the inner pipe increases.
Axial profiles show the value of zero at the outer wall, which corresponds to the on-slip condition at the outer wall and the value of the inner pipe tangential velocity. A sudden increase in the swirl velocity is shown in the region very close to the outer wall, and then the axial velocity steadily increases towards the inner pipe and a rapid increase is again shown nearer the inner wall. 5. Comparisons between computational and experimental results showed that the calculations predicted the qualitative features of the axial and swirl velocity profiles to be satisfactory. 6. Computed axial velocity profiles show good agreement with corresponding measurements in the case of a stationary inner pipe. Similar agreements are shown by the swirl profiles for rotation speeds 75rev/min. On the other hand, for high rotation speeds, and in the region above 30% away from the outer wall, except very close the inner pipe, the computed swirl profiles show smaller values than the measured one. 7. The computed axial velocity profiles do not indicate the maximum measured axial velocity when the rotation speed increases. Also, a discrepancy between the measured and computed profiles of the axial velocity is probably due to the measurements errors, rather than the predictive inabilities of the calculations.
v3-fos-license
2020-03-04T03:02:35.297Z
2019-10-01T00:00:00.000
211750495
{ "extfieldsofstudy": [ "Business" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://journal.afebi.org/index.php/aar/article/download/221/119", "pdf_hash": "8b87809f06d1ffbacaff9ec39d4a60976cda88ad", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1503", "s2fieldsofstudy": [ "Business" ], "sha1": "3866e91d0ea93f532ca0624e2ec244e7813b9700", "year": 2019 }
pes2o/s2orc
MEDIATION ROLE OF AUDIT GOING CONCERN OPINION ON CORRELATION OF AUDIT TENURE AND MARKET PERFORMANCE This study aims to obtain empirical evidence of the mediating role of going concern audit opinion on the relationship of audit tenure with company market performance. This research is explanatory. The subjects in this study were companies listed on the Indonesia Stock Exchange from 2007 to 2017. The sample in this study was 245 companies. The type of data used is quantitative data. The statistical analysis method uses path analysis with SPSS 13. The results of the study show that going concern audit opinion has a significant negative role in mediating the relationship between audit tenure and company market performance. These results indicate that going concern audit opinion contains information that is useful to users of financial data in making financial decisions. INTRODUCTION The audit opinion is an observable result of a series of audit processes, issued in the name of the public accountant's office in line with the issuance of the audited client's financial statements (Francis, 2011). Research on audit opinion is important because the audit opinion is the auditor's communication tool for presenting findings from the audit process, and many parties rely on that information (Arens, Elder, & Mark, 2014). In the audit opinion there is an explanation of whether the financial statements are presented fairly (Geiger & Raghunandan, 2002; Hay, Knechel, & Willekens, 2014). The audit opinion for a company can show that the auditor has a message relating to the going concern status of the company concerned (Gibson, 2013). In carrying out audits of financial statements, the auditor must provide assurance that the financial statements are free of material misstatements, whether caused by fraud or error. At the end of the audit phase, the auditor explains whether the financial statements have been fairly prepared in all material respects.
Therefore, the audit opinion is important information, because many users depend on the information described in the report (Arens et al., 2014). An explanatory paragraph about going concern is useful information. Previous studies related to the going concern explanatory paragraph have examined the relationship between stock price reactions and going-concern audit opinions. Company stock returns are used to build expectations of future returns. The expected return is compared with the actual return around the period when the going concern audit opinion is released. If the actual return differs significantly from the expected return, this illustrates that the going-concern audit opinion brings information that is useful to the market. As explained in studies conducted by Dodd (1984), Fleak (1994), Soltani (2000), and O'Rilley (2009), audit opinions that contain an entity's going concern opinion negatively affect stock returns. On the other hand, if the actual return does not differ significantly from the expected return, the going concern audit opinion is considered not to provide useful information for investors. As explained in studies conducted by Martinez et al. (2004) and Tahinakis et al. (2016), audit opinions do not have a significant effect on stock returns. Audit quality can be measured by the likelihood that the auditor will issue a going concern opinion, on the assumption that audit quality decreases if the auditor does not issue an opinion about the continuity of the business when the company goes bankrupt (Knechel & Vanstraelen, 2007). Audit tenure is the length of the relationship between the client and the auditor, which can be measured by the number of years the auditor conducts the client's audit (V. E. Johnson, Khurana, & Reynolds, 2002; Myers et al., 2003). The longer the auditor's tenure, the more the quality of audited financial statements will increase, because the auditor's understanding of the client's business and accounting system will be better.
Therefore, investors will feel less uncertainty in their investment decisions (Ghosh & Moon, 2005; Su, Zhao, & Zhou, 2015). In making investment decisions, investors pay attention to audit tenure as a factor that influences stock return movements. Longer audit tenure can improve the quality of audit work, and hence stock returns will be stable (Jorjani & Safari Gerayeli, 2018). This statement is supported by previous research conducted by Callen and Fang (2016), which explains that auditor tenure is negatively related to the risk of falling stock prices in the following year. The results of this study show four findings: 1) Audit tenure has a significant positive effect on the going-concern audit opinion. 2) Going concern audit opinion has a significant negative effect on market performance. 3) Audit tenure has a significant positive effect on market performance. 4) Going concern audit opinion has a significant negative effect in mediating the relationship between audit tenure and company market performance. The contribution of this study is to provide empirical evidence that longer audit tenure does not reduce the tendency of auditors to provide going concern audit opinions to clients. Longer audit tenure is seen as an influence that can increase the knowledge and technical capabilities of the auditors concerned, so that it can increase market confidence, demonstrated by increasing market performance along with increasing audit tenure. When an auditor with a long audit tenure publishes a going concern audit opinion, the market will give a negative reaction to that information, indicated by the significant negative influence of going concern audit opinion in mediating the relationship between audit tenure and market performance.
This shows that the going concern audit opinion does contain information that is beneficial for users of financial data in making financial decisions; it is used by the market as a main guideline for financial decisions, which leads to negative market reactions when a going concern opinion is issued by an auditor with a long audit tenure. This is a reflection of the confidence of users of information that the audit has been carried out in a reasonable manner. LITERATURE REVIEW Auditors have an important role in building public trust in the published financial statements of the company. This statement is explained in an audit theory called the Inspired Confidence Theory (1932), which holds that auditors have an important role in building public trust in the company's published financial statements. An audit is a process that is believed to provide assurance on the financial statements presented by management regarding the condition of the company, which reflects the use of the resources owned by the company concerned (Limperg, 1985). The Inspired Confidence Theory explains two features that influence the independent auditor's opinion, namely the technical ability and the individual characteristics of the auditor concerned. Technical capabilities are built by elements of experience, while the individual characteristics of auditors consist of empathy, responsiveness, and assurance (Brunelli, 2018). Audit tenure is the length of the relationship between the client and the auditor, which can be measured by the number of years the auditor carries out the client's audit (Myers et al., 2003). In the context of an audit, audit experience can be obtained from the interaction between the auditor and his client, which can be seen from the audit tenure.
The audit opinion is a result that can be observed from a series of audit processes, issued in the name of an accountant's office in line with the issuance of audited client financial statements, where the information can affect clients and users of financial statement information and ultimately leads to economic consequences (Tritschler, 2013). In the audit opinion there is an explanation of whether the financial statements are presented fairly (Geiger & Raghunandan, 2002; Hay, Knechel, & Willekens, 2014). Additionally, the audit opinion for a company can indicate that the auditor has a message relating to the going concern status of the company concerned (Gibson, 2013). Going concern audit opinion is one of the independent auditor's communication tools to the public. Blay (2011) explains that markets interpret audit opinions with a going concern paragraph as important risk communication. This communication is part of the information published to the public when the company in question publishes its annual report (Blay et al., 2011). A going concern opinion will bring economic consequences, especially for company shareholders, namely for the company's market performance (Brunelli, 2018). Market performance is the behavior of a security or asset in the marketplace (O.B. Dictionary). The most common way to measure the market performance of a company is to use the return received by shareholders (stock return) (Koller et al., 2010). If in the stock market the number of buyers of certain shares is greater than the number of sellers, then the stock price will rise along with demand (Soenen, 2003). The purpose of this study is to obtain empirical evidence of the role of going-concern audit opinion in mediating the relationship of audit tenure with market performance.
HYPOTHESIS DEVELOPMENT Audit Tenure and Going Concern Audit Opinion Based on the Inspired Confidence Theory (1932), the audit opinion is considered able to give confidence to the owner of the company because the opinion is composed by an independent party that has no affiliation with the company. Based on the Inspired Confidence Theory, audit tenure is an important aspect that affects company market performance, because audit tenure is a representation of the experience that the auditor has with clients in similar types of industries. A longer audit tenure means a longer relationship between the auditor and his client, and the more technical knowledge the auditor has regarding the client's business cycle. This theory also explains that the presentation of information in an audit opinion depends on the technical capabilities possessed by the auditor concerned, and in the end the audit opinion provided will be a source of information for the owner of the company in the decision-making process (Limperg, 1985). As explained above, audit tenure is one element of knowledge in the realm of experience. Through adequate experience, auditors can acquire intensive knowledge from the series of audit tasks they have carried out (Bonner & Lewis, 1990). Experience can come from the interaction between the auditor and his client, which can be seen from the audit tenure. The quality of audit work can increase along with the audit tenure period, as auditors acquire a better understanding of client systems, client businesses, and the industrial environment (Dunham, 2002). Audit tenure can be measured by the number of years in which the company uses the services of the auditor concerned (Myers et al., 2003). A brief audit tenure can reduce the tendency to have specific knowledge relating to clients. Knowledge gained from the audit process for certain clients can be used to audit clients with similar businesses (Hay et al., 2014).
Some previous studies explained that a long audit tenure did not reduce audit quality through a reduced tendency to provide going-concern audit opinions (Geiger & Raghunandan, 2002; Jackson et al., 2008; Knechel & Vanstraelen, 2007). H1: Audit tenure has a positive relationship with going-concern audit opinion. Going Concern Audit Opinion and Market Performance The Inspired Confidence Theory (1932) explains that the audit is carried out to build public trust. The trust of the community rests on audit opinions resulting from the tests carried out while the audit process is in progress (Limperg, 1985). The audit opinion is a result that can be observed from a series of audit processes, where the information can affect clients and users of financial statement information and ultimately leads to economic consequences (Francis, 2011). Based on the explanation of the Inspired Confidence Theory (1932), when the independent auditor has doubts about the company's ability to continue its business (going concern), the auditor must disclose such uncertainty in the audit opinion. In an efficient market, returns from company shares will be influenced by the expectations of users of information relating to the company's prospects. When unexpected information is presented in the audit report, it can have an impact on the price of the security overall (Brunelli, 2018). Previous research that examined the relationship of going concern audit opinion and stock prices was presented by Fleak (1994), who explained that there was a significant negative relationship between going concern opinions and stock price movements (Fleak & Wilson, 1994). Soltani (2000) also researched the relationship of going concern audit opinion to stock prices. The results of this study are consistent with the research of Fleak (1994), showing a significant negative relationship between abnormal stock returns and going-concern audit opinions (Soltani, 2000).
H2: Going concern audit opinion has a negative relationship with company market performance. Audit Tenure and Market Performance The Inspired Confidence Theory (1932) explains that the audit is carried out to build public trust. The audit function itself leads to public trust. The trust of the community is built by their views on the auditor's ability in conducting audits. In the context of an audit, experience is an element that builds the capabilities possessed by the auditor. The experience gained by auditors from conducting audits in the various companies they have handled can be a provision for handling companies in similar industries (Limperg, 1985). Audit tenure is the length of the relationship between the client and the auditor, which can be measured by the number of years the auditor audits the client (Myers et al., 2003). The Inspired Confidence Theory (1932) explains that auditor experience can be obtained from the interaction between auditors and audited companies. A longer audit tenure can improve the auditor's understanding of the client's business and accounting system. Therefore, investors will feel less uncertainty in their investment decisions (Ghosh & Moon, 2005; Su et al., 2015). In making investment decisions, investors pay attention to audit tenure as a factor that influences the movement of market performance. Longer audit tenure can improve the quality of audit work, and therefore market performance will be stable (Jorjani & Safari Gerayeli, 2018). The quality of audit work will increase along with the audit tenure period, as auditors obtain a better understanding of the client system, client business, and industrial environment (Dunham, 2002).
The correlation of audit tenure with company market performance as measured by stock returns has been discussed in previous studies: Ghosh and Moon (2005) suggested that longer audit tenure is considered to improve the quality of the information presented, shown by the increase in the company's stock rating. Ghosh and Moon's (2005) statement is supported by the Su (2015) study, which explains that a long audit tenure is considered capable of increasing information credibility so that it can increase stock prices (Su et al., 2015). H3: Audit tenure has a positive relationship with company market performance. Audit Tenure and Market Performance through Going Concern Audit Opinion The Inspired Confidence Theory (1932) explains that audits are carried out to build public trust (Limperg, 1985). The audit opinion is the only result of the audit process that can be observed, and the report can affect clients and users of financial statement information, which ultimately drives economic consequences (Tritschler, 2013). The Inspired Confidence Theory also explains that an audit opinion is influenced by the technical capabilities possessed by the auditor concerned, and this can be reflected in the size of the public accounting firm and the audit tenure (Limperg, 1985). Audit tenure can affect company market performance because it is considered to reduce information asymmetry, which can increase the credibility of the financial statement information needed by investors in making investment decisions that ultimately affect company market performance. A longer audit tenure is a benchmark of the abilities and knowledge possessed by the auditor concerned, who is better able to reveal the actual financial condition of the company, especially when there is a problem of business continuity in the company concerned (Dunham, 2002; Knechel & Vanstraelen, 2007).
Hence it can be concluded that when an accounting firm with a longer audit tenure issues a going concern explanatory paragraph, the market will perceive this as negative information that can reduce the company's market performance. H4: Going concern audit opinion has a negative role in mediating the correlation between audit tenure and the company's market performance. RESEARCH METHODOLOGY The research approach used in this study is a quantitative approach. The approach used to test the research hypotheses is path analysis using multiple regression, a technique that can be used to analyze the relationship between the dependent variable and several independent variables (Hair, Black, & Babin, 2010). The subjects of this study were all companies listed on the Indonesia Stock Exchange from 2007 to 2017. The sample of this study comprised 245 companies that had shown a negative trend in financial ratios over that period. The aim of this research is to examine the role of going concern audit opinion in mediating audit tenure and company market performance. Correlation Analysis The variables used in this study are the independent variable audit tenure (AT), the intervening variable going concern audit opinion (GO), the dependent variable market performance (MP), and the control variables firm size (COMP SIZE), company age (AGE), leverage (LEV) and return on equity (ROE). The analysis carried out in this study was used to determine the level of interrelation of the variables used in the path analysis. The results of the correlation analysis between the main variables in the research model show a direct and significant relationship at the 1% level, so these variables are feasible for use in the path analysis.
Path Analysis Referring to the research conducted by Dawn Iacobucci (2012), path analysis with the mediation of a categorical variable is done by estimating three regression models as follows: I. The effect of X (exogenous) on Y (endogenous) / X → Y is estimated using multiple linear regression analysis. II. The effect of X (exogenous) on Z (intervening) / X → Z is estimated using logistic regression analysis. III. The effect of X (exogenous) and Z (intervening) on Y (endogenous) / X & Z → Y is also estimated using regression analysis. From the explanation above, in estimating the paths in Models I and III it is necessary to test the classical assumptions of the regression model, while in Model II the feasibility of the logistic model is tested. Table: Variable Model. Based on the summary of the path analysis estimation results, the proof of the research hypotheses can be explained as follows: 1. Hypothesis I Hypothesis I of the study states that audit tenure has a positive effect on going concern audit opinion. This hypothesis is proven through the logistic regression model (Model II), which obtained a Wald value of 23.069 with a significance of 0.000. Based on the positive logistic regression coefficient and the significance value of the Wald statistic, the influence of audit tenure on going concern audit opinion is positive and significant, with a significance value of 0.000 < 0.05. Thus Hypothesis I of the research is proven. 2. Hypothesis II Hypothesis II of the study states that going concern audit opinion has a negative effect on market performance. This hypothesis is proven through Model III, which obtained a t value of -2.058 with a significance of 0.041. Based on the negative t value and its significance, the influence of going concern audit opinion on market performance is negative and significant, with a significance value of 0.041 < 0.05. 3.
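The three-model structure described above (X → Y, X → Z, and X & Z → Y) can be sketched in a few lines. The study estimates Model II with logistic regression because the mediator is binary; for a compact, dependency-light illustration the sketch below fits all three paths with ordinary least squares on synthetic data, so the variable names (AT, GO, MP) and the coefficient signs mirror the text but none of the numbers are from the study.

```python
# Hedged sketch of the three path-analysis models, on synthetic data.
# AT = audit tenure (X), GO = going concern opinion (Z, here continuous
# for simplicity rather than binary/logistic), MP = market performance (Y).
import numpy as np

rng = np.random.default_rng(0)
n = 500
AT = rng.normal(size=n)                         # exogenous X
GO = 0.6 * AT + rng.normal(size=n)              # mediator Z (positive a-path)
MP = 0.4 * AT - 0.5 * GO + rng.normal(size=n)   # outcome Y (negative b-path)

def ols(y, *xs):
    """Least-squares fit with an intercept; returns [intercept, coefs...]."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

c_total = ols(MP, AT)[1]              # Model I:   X -> Y (total effect c)
a_path = ols(GO, AT)[1]               # Model II:  X -> Z (a)
cprime, b_path = ols(MP, AT, GO)[1:]  # Model III: X, Z -> Y (c', b)

print(f"a = {a_path:.2f}, b = {b_path:.2f}, c = {c_total:.2f}, c' = {cprime:.2f}")
```

With these synthetic coefficients the indirect effect a·b is negative while the direct path c' stays positive, the same sign pattern the study reports for audit tenure, going concern opinion, and market performance.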
Hypothesis III Hypothesis III of the study states that audit tenure has a positive effect on market performance. This hypothesis is proven through Model III, which obtained a t value of 4.501 with a significance of 0.000. Based on the positive t value and its significance, the influence of audit tenure on market performance is positive and significant, with a significance value of 0.000 < 0.05. 4. Hypothesis IV The mediation test in this study was based on the theory presented by Baron & Kenny (1986). The results of the logistic regression analysis indicate that the influence of audit tenure on going concern opinion is significant (AT → GO: 0.000 < 0.05). In the other model, the multiple linear regression analysis concludes that going concern opinion has a significant influence on market performance (GO → MP: 0.041 < 0.05). Thus, based on the theory of Baron & Kenny (1986), it can be concluded that there is an influence of audit tenure on market performance through going concern audit opinion. The mediating nature of the relationship between audit tenure and market performance through going concern opinion is concluded to be partial mediation. This is because the direct effect of audit tenure on market performance in the multiple linear regression Model III remains significant after entering the going concern opinion mediation variable (AT → MP: 0.001 < 0.05). DISCUSSION The discussion of the results of hypothesis testing on the overall path model can be summarized as follows: 1. The first hypothesis, that audit tenure has a significant positive effect on going concern audit opinion, is proven with a significance level of < 0.05.
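The Baron & Kenny (1986) decision rule applied in the Hypothesis IV test can be written as a small helper: mediation requires both the X → Z and Z → Y paths to be significant, and the significance of the remaining direct path decides full versus partial mediation. The p-values in the usage line mirror the ones reported in the text (AT → GO: 0.000, GO → MP: 0.041, AT → MP: 0.001); the function itself is an illustrative sketch, not part of the study's SPSS workflow.

```python
# Sketch of the Baron & Kenny (1986) mediation classification.
def mediation_type(p_x_to_m: float, p_m_to_y: float,
                   p_x_to_y_with_m: float, alpha: float = 0.05) -> str:
    """Both paths through the mediator must be significant for mediation;
    a still-significant direct path makes it partial rather than full."""
    if p_x_to_m >= alpha or p_m_to_y >= alpha:
        return "no mediation"
    return "partial mediation" if p_x_to_y_with_m < alpha else "full mediation"

# p-values as reported in the text for AT -> GO, GO -> MP, AT -> MP:
print(mediation_type(0.000, 0.041, 0.001))  # -> "partial mediation"
```

Applied to the reported p-values, the rule yields partial mediation, matching the study's conclusion.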
The results of the statistical analysis in this study support the explanation of the Inspired Confidence Theory (1932) that to carry out audits properly, adequate technical capabilities are needed (Limperg, 1985). Experience is an element of technical capability that affects the quality of audit work. The experience auditors gain from the companies they have handled can improve the technical capabilities they possess. In the context of an audit, this experience can be obtained from the interaction between the auditor and his client. The length of the auditor's interaction with his client is called the audit tenure. A longer audit tenure can improve the auditor's understanding of the client's business and accounting system. Auditors with a longer audit tenure are perceived to be able to provide good quality audit work. An audit is considered to have good quality if the auditor concerned is able to express the company's true financial condition, especially when there is a going concern problem (Ghosh & Moon, 2005; Knechel & Vanstraelen, 2007). The results of testing the first hypothesis in this study support the research of Geiger and Raghunandan (2002), Jackson et al. (2008) and Knechel and Vanstraelen (2007), which describe a positive relationship between audit tenure and going-concern audit opinion. A significant positive relationship between audit tenure and going concern audit opinion means that a longer audit tenure enables the auditor to have better knowledge of the client's condition, and therefore increases the likelihood of the issuance of a going concern audit opinion for companies experiencing financial difficulties. 2. The second hypothesis, that going concern audit opinion has a significant negative effect on market performance, is proven with a significance level of < 0.05. The results of the statistical analysis in this study support the explanation of the Inspired Confidence Theory (1932) that the audit is conducted to build public trust.
The trust of the community rests on audit opinions resulting from the tests carried out while the audit process is in progress (Limperg, 1985). The audit opinion is a result that can be observed from a series of audit processes, where the information can affect clients and users of financial statement information and ultimately leads to economic consequences (Francis, 2011). When an independent auditor has doubts about the company's ability to continue, the auditor must disclose this in the audit opinion. The issuance of a going-concern audit opinion is a reaction to companies that have experienced financial difficulties. Information that was previously private to the company then moves into the public arena. This information can trigger different reactions among parties interested in the company, one of which is the shareholders. In an efficient market, the company's market performance will be influenced by the expectations of information users about the company's prospects. When unexpected information is presented in an audit opinion, it can have an impact on the company's overall market performance (Brunelli, 2018). The results of testing the second hypothesis support the research of Dodd (1984), Fleak (1994), Soltani (2000) and O'Rilley (2009), which describe a negative relationship between going-concern opinions and company market performance. This means that the going-concern audit opinion is valuable to users of financial information. Users of information view a going-concern audit opinion as implying that the company is experiencing sustainability problems, which lowers expectations of the company's prospects and ultimately lowers the company's market performance. 3. The third hypothesis, that audit tenure has a significant positive effect on market performance, is proven with a significance level of < 0.05.
The results of the statistical analysis in this study support the explanation of the Inspired Confidence Theory (1932) that the audit is carried out to build public trust. The audit function itself leads to public trust. The trust of the community is built by their views on the auditor's ability in conducting audits. In the context of an audit, experience is an element that builds the capabilities possessed by the auditor. The experience gained by auditors from conducting audits in the various companies they have handled can be a provision for handling companies in similar industries (Limperg, 1985). The auditor's experience can be obtained from the interaction between the auditor and the audited company. The period of interaction between the auditor and the company is called the audit tenure. A longer audit tenure can improve the auditor's understanding of the client's business and accounting system. Therefore, investors will feel lower uncertainty in their investment decisions, reflected in stable market performance (Dunham, 2002; Ghosh & Moon, 2005; Jorjani & Safari Gerayeli, 2018; Su et al., 2015). The results of testing the third hypothesis support the research of Su et al. (2015) and Callen and Fang (2016), which show a positive relationship between audit tenure and company market performance. The positive relationship between audit tenure and market performance shows that users of financial information believe that the information presented by auditors with longer audit tenure is reliable. When users of financial information have confidence in the information, the company will be more in demand, which can ultimately improve the company's market performance. 4. The fourth hypothesis, that audit tenure has a significant negative effect on market performance through going-concern audit opinion, is proven with a significance level of < 0.05. The Inspired Confidence Theory explains that audits are carried out to build public trust (Limperg, 1985).
The audit opinion is the only result of the audit process that can be observed, and the report can affect clients and users of financial statement information, which ultimately drives economic consequences (Tritschler, 2013). The Inspired Confidence Theory also explains that the giving of an audit opinion is influenced by the technical capabilities of the auditor concerned, which can be reflected in the audit tenure (Limperg, 1985). Audit tenure can affect the company's market performance because it is assessed to reduce information asymmetry, which can increase the credibility of the financial statement information needed by investors in making investment decisions that ultimately affect the company's market performance. A longer audit tenure is a measure of the ability and knowledge possessed by the auditor concerned, who is better able to reveal the true financial condition of the company, especially when there is a going concern problem in the company concerned (DeAngelo, 1981; Dunham, 2002; Knechel & Vanstraelen, 2007). Hence it can be concluded that when an accounting firm with a longer audit tenure publishes an opinion with a going concern explanatory paragraph, the market will perceive this as negative information that can reduce the market performance of the company concerned. CONCLUSION Based on the results of the research and the discussion previously explained, the conclusions that can be drawn from this study are as follows: audit tenure has a positive and significant impact on the company's market performance at a significance level of 5%. From these results it can be concluded that the market views audit tenure as a benchmark: a longer audit tenure enables the auditor to better carry out the audit work, because he has greater knowledge than an auditor with a shorter audit period.
But when the auditor issues a going-concern audit opinion, the market responds to it as negative information, which ultimately reduces the company's market performance. This negative response is itself a product of market confidence in the auditor's work: precisely because the market trusts the results of the audit, a going-concern opinion is read as credible bad news.
A novel IgE epitope-specific antibodies-based sandwich ELISA for sensitive measurement of immunoreactivity changes of peanut allergen Ara h 2 in processed foods

Background: Peanut is an important source of dietary protein for human beings, but it is also recognized as one of the eight major food allergens. Binding of IgE antibodies to specific epitopes in peanut allergens plays an important role in initiating peanut-allergic reactions, and Ara h 2 is widely considered the most potent peanut allergen and the best predictor of peanut allergy. Therefore, Ara h 2 IgE epitopes can serve as useful biomarkers for predicting IgE-binding variations of Ara h 2 and peanut in foods. This study aimed to develop and validate an IgE epitope-specific antibodies (IgE-EsAbs)-based sandwich ELISA (sELISA) for detection of Ara h 2 and measurement of Ara h 2 IgE-immunoreactivity changes in foods.

Methods: DEAE-Sepharose Fast Flow anion-exchange chromatography combined with SDS-PAGE gel extraction was applied to purify Ara h 2 from raw peanut. Hybridoma and epitope-vaccine techniques were employed to generate a monoclonal antibody against a major IgE epitope of Ara h 2 and a polyclonal antibody against 12 IgE epitopes of Ara h 2, respectively. ELISA was carried out to evaluate the target binding and specificity of the generated IgE-EsAbs. Subsequently, the IgE-EsAbs-based sELISA was developed to detect Ara h 2 and its allergenic residues in food samples. The IgE-binding capacity of Ara h 2 and peanut in foods was determined by competitive ELISA. The dose-effect relationship between the Ara h 2 IgE epitope content and Ara h 2 (or peanut) IgE-binding ability was further established to validate the reliability of the developed sELISA in measuring IgE-binding variations of Ara h 2 and peanut in foods.

Results: The obtained Ara h 2 had a purity of 94.44%. Antibody characterization revealed that the IgE-EsAbs recognized the target IgE epitope(s) of Ara h 2 and exhibited high specificity.
Accordingly, an IgE-EsAbs-based sELISA using these antibodies was able to detect Ara h 2 and its allergenic residues in food samples, with high sensitivity (a limit of detection of 0.98 ng/mL), accuracy (a mean bias of 0.88%), precision (relative standard deviation < 16.50%), specificity, and recovery (an average recovery of 98.28%). Moreover, the developed sELISA could predict IgE-binding variations of Ara h 2 and peanut in foods, as verified by using sera IgE derived from peanut-allergic individuals.

Conclusion: This novel immunoassay could be a user-friendly method to monitor low levels of Ara h 2 and to preliminarily predict the in vitro potential allergenicity of Ara h 2 and peanut in processed foods.

Introduction

Food allergy is a growing global health concern, affecting up to 10% of the general population (1). One of the most common and severe food allergies is peanut (Arachis hypogaea) allergy, an immunoglobulin E (IgE)-mediated food allergy with a prevalence of 1%-3% in developed countries (2). Peanut allergy tends to be lifelong, and sub-milligram levels of peanut protein can elicit objective reactions in the most sensitive patients (3). Since there is currently no approved curative treatment for this condition, complete avoidance of peanut proteins is the standard of care. This, however, is often difficult to achieve given the widespread use of peanut as a food ingredient and the possible absence of detectable peanut in foods labeled with precautionary (advisory) allergen labeling statements for peanut (4, 5). In addition, peanut allergenicity mainly depends on its IgE epitopes. In the last decade, food processing has been increasingly recognized as a method to enhance food tolerance, but the effect of food processing on the structure and allergenicity of peanut proteins is highly variable and therefore difficult to predict (6). Therefore, reliable methods to detect peanut allergenic epitopes and measure changes in the IgE-binding ability of peanut in processed foods are warranted.
Analytical methods currently used to detect peanut allergens, such as real-time polymerase chain reaction (7), reversed-phase high-performance liquid chromatography (RP-HPLC) (8), liquid chromatography coupled with mass spectrometry (9), enzyme-linked immunosorbent assay (ELISA) (10-12), and lateral flow immunoassay (10), lack the ability to specifically detect allergenic epitopes of the allergens. Traditionally, measurement of IgE-binding capacity variations of peanut allergens is based on patients' IgE antibodies (13-15). However, the limited and variable sera available from peanut-allergic patients make standardization of such a detection method very difficult for commercial purposes. Hence, there is a need for more efficient and simpler analytical methods that detect minute traces of peanut allergens and reveal changes in the IgE-immunoreactivity of peanut allergens in foods.

One analytical method that can be used for allergen detection and is characterized by high specificity and sensitivity, low cost, and simplicity is ELISA. Recently, an ELISA based on IgE epitope-specific antibodies (IgE-EsAbs) was successfully used for the prediction of IgE-immunoreactivity variations of milk in food samples (16). This technique aims to detect specific IgE epitopes in the allergen, which play vital roles in triggering the allergic cascade and hence may be used to preliminarily predict in vitro food potential allergenicity (17, 18). One of the most widely characterized allergens in peanut is Ara h 2, which has been shown to be the most potent allergen and the best predictor of peanut allergy (19, 20). Therefore, IgE epitopes in Ara h 2 could serve as reliable biomarkers for measurement of potential changes in IgE-immunoreactivity of Ara h 2 in foods. Based on this, we hypothesized that an ELISA based on IgE-EsAbs directed against Ara h 2 could be used to accurately detect the IgE epitope content of Ara h 2, thereby revealing the IgE-binding changes of Ara h 2 and peanut
in processed foods in a cost-efficient and simple manner (18).

In this study, our objective was to develop an IgE-EsAbs-based sandwich ELISA (sELISA) for detecting allergenic residues of Ara h 2 and evidencing changes in the IgE-immunoreactivity of Ara h 2 in foods (Figure 1). Briefly, a monoclonal antibody against the major IgE epitope of Ara h 2 and a polyclonal antibody against twelve IgE epitopes of Ara h 2 were generated for use as capture and detection antibodies in the assay (Figures 1A, B). Next, the IgE-EsAbs-based sELISA was used to detect Ara h 2 and its allergenic residues in food samples, and results were compared to those obtained using sera IgE derived from peanut-allergic individuals (Figures 1C, D).

Materials and methods

Materials and reagents

DEAE Sepharose Fast Flow, HisTrap HP affinity column (1 mL), and HiTrap Protein A HP affinity column (1 mL) were purchased from GE Healthcare (Uppsala, Sweden). Prestained protein marker and 3,3′,5,5′-tetramethylbenzidine (TMB) were obtained from Thermo Fisher Scientific (Rockford, USA). Complete Freund's adjuvant, incomplete Freund's adjuvant, gelatin from cold-water fish, α-lactalbumin, β-lactoglobulin, casein, goat anti-rabbit HRP-IgG, rabbit anti-mouse HRP-IgG, and biotin-labeled goat anti-human IgE (Bio-IgE) were purchased from Sigma (St. Louis, USA). IgE epitope peptides (purity ≥ 95%, RP-HPLC) were synthesized by GL Biochem (Shanghai, China). Food samples were purchased from local supermarkets. Peanut allergy patients' sera were provided by the First Affiliated Hospital of Gannan Medical University and approved by the Gannan Medical University Research Ethics Committee (Reference number 2021105, 8 March 2021); details are shown in Supplementary Table S1. All reagents were analytical grade and solutions were prepared using ultra-pure water throughout the experiments.
Purification of peanut allergen Ara h 2

Ara h 2 was isolated from raw peanut protein extract according to the methods described in Hu et al. (21), with minor modifications. Briefly, raw peanut seeds were ground into peanut butter and defatted three times with acetone containing 0.07% β-mercaptoethanol at a 1:5 (w/v) ratio while being stirred at 25°C for 2 h. After centrifugation (12,000 × g for 10 min at 4°C), the precipitate was collected and air-dried. Next, the protein from the defatted powder (20.0 g) was extracted by addition of 100 mL Tris-HCl buffer (50 mmol/L, pH 7.2), followed by incubation at 25°C for 2 h while stirring. After centrifugation, the supernatant (peanut protein extract) was collected and Ara h 2 was subsequently isolated from the supernatant by DEAE Sepharose Fast Flow anion exchange chromatography followed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). In short, the chromatographic column (1.6 cm × 50 cm) was equilibrated with Tris-HCl (50 mmol/L, pH 7.2) and subsequently loaded with 10 mL peanut protein extract, after which the loaded column was washed with equilibrating buffer containing 0.04 mol/L NaCl. The proteins were further eluted using 600 mL of a 0.04–0.2 mol/L NaCl gradient in equilibrating buffer. After dialysis and lyophilization of the collected eluates, the eluates were subjected to SDS-PAGE and the Ara h 2 fraction was excised from the SDS-PAGE gel. The purity of Ara h 2 was analyzed by ImageJ software.

Generation of a monoclonal antibody against an IgE epitope of Ara h 2

A mouse monoclonal antibody (mAb, 2K9-1) against the peptide sequence NH2-DRRCQSQLER-COOH (B3), selected based on the sequence of the most dominant IgE epitope of Ara h 2 (22, 23), was prepared by Abmart (Shanghai, China) and used as the capture antibody in the IgE-EsAbs-based sELISA.
Generation of a polyclonal antibody against IgE epitopes of Ara h 2

IgE-EsAbs for use as the detection antibody in the IgE-EsAbs-based sELISA were obtained following inoculation of rabbits with a multiepitope-based vaccine (a recombinant protein) comprising a T cell epitope, IgE epitopes, and linkers, as detailed below.

Construction of an expression system for recombinant tAra h 2

A tandem containing twelve IgE-binding epitopes of Ara h 2 (tAra h 2) was designed as described previously (24). In short, twelve IgE epitopes of Ara h 2 (B1-B12) were selected as part of the tandem based on the epitope mapping results of Stanley et al. (22) and Shreffler et al. (25), and one dominant T cell epitope (AA94-113) of Ara h 2 was selected (26-28). The epitope sequences are shown in Supplementary Table S2. To construct the tAra h 2, the T cell epitope and B1-B12 were situated at the N-terminus and C-terminus respectively, and four glycines (GGGG) were inserted as a linker between two adjacent epitopes. Next, the gene sequence of tAra h 2 was custom-synthesized and cloned into the pET-28a(+) expression plasmid. Following confirmation of successful cloning by DNA sequencing, the plasmids were transformed into E. coli BL21 (DE3) pLysS cells by Chinapeptides (Shanghai, China).

Expression and purification of recombinant tAra h 2

Expression of recombinant tAra h 2 by E. coli BL21 (DE3) pLysS cells was induced by incubating the cells with 0.6 mmol/L isopropyl-β-D-thiogalactoside (IPTG) at OD600nm ≈ 0.6 at 26°C for 4 h. After centrifugation (12,000 × g for 10 min at 4°C), the cell pellet was resuspended in 10 mmol/L phosphate-buffered saline (PBS, pH 7.2) and the cells were subsequently lysed by ultrasonication. After centrifuging again, the recombinant tAra h 2 in the supernatant was purified on a HisTrap HP column according to the manufacturer's instructions, and the purity of recombinant tAra h 2 was analyzed by ImageJ software.
Production and purification of a tAra h 2-specific polyclonal antibody

The animal study was approved by the Gannan Medical University Animal Care Committee, under the guidelines of the China Council for Animal Care (SYXK-Gan 2018-0004, China). Two 8-week-old male New Zealand white rabbits were purchased from the Ganzhou Institute of Animal Husbandry (SCXK-Gan 2018-0009, China). After collecting negative serum from the auricular vein, the rabbits were subcutaneously immunized with 1 mg recombinant tAra h 2 (2 mg/mL) emulsified with complete Freund's adjuvant in a total volume of 1 mL as a priming dose. Subsequently, the rabbits received three 1 mL booster injections containing the same dose of antigen emulsified in incomplete Freund's adjuvant at 2-week intervals for the production of tAra h 2-specific polyclonal antibody (pAb-tAra h 2). One week after the last immunization, blood samples were taken from the carotid artery and clotted overnight at 4°C. The serum was isolated by centrifugation at 4,500 × g for 10 min at 4°C. Then, the IgG (pAb-tAra h 2) was purified on a HiTrap Protein A HP column according to the manufacturer's instructions, and the obtained pAb-tAra h 2 was stored at −80°C until use.

Characterization of the generated monoclonal and polyclonal antibodies

Target binding and specificity of the generated monoclonal and polyclonal antibodies for use in the IgE-EsAbs-based sELISA were evaluated as detailed below.
Analysis of the affinity constant of the monoclonal antibody

The affinity constant (Kaff) of 2K9-1 for Ara h 2 was analyzed by indirect ELISA as described previously (29). Briefly, a microtiter plate was pre-coated overnight at 4°C with three different concentrations of Ara h 2 (0.5 µg/mL, 1 µg/mL, and 2 µg/mL), after which the wells were washed three times with PBS containing 0.05% Tween-20 (PBST). Next, the wells were blocked with 3% gelatin in PBS for 1 h at 37°C. After washing, serial concentrations (2,000 ng/mL, 1,000 ng/mL, 500 ng/mL, 250 ng/mL, 125 ng/mL, 62.5 ng/mL, 31.25 ng/mL, and 15.625 ng/mL) of 2K9-1 were added and incubated for 1 h at 37°C. The wells were washed and subsequently incubated with 100 µL of rabbit anti-mouse HRP-IgG (diluted 1:10,000 in PBS) for 1 h at 37°C. After washing again, 100 µL of TMB substrate for HRP was added and incubated for 15 min at 37°C, followed by addition of 50 µL of 2 mol/L sulfuric acid and immediate measurement of optical density at 450 nm (OD450nm) using a microplate reader (Varioskan LUX; Thermo Fisher Scientific, USA). The Kaff of 2K9-1 was then calculated from the antibody concentrations giving half-maximal OD at each coating concentration.

Analysis of the titer of the polyclonal antibody

The titers of tAra h 2-specific antibodies in the collected rabbit serum were determined by indirect ELISA. Microplates were coated with 100 µL of recombinant tAra h 2 (1 µg/mL) overnight at 4°C. After washing three times with PBST, each well was blocked with 250 µL of 3% gelatin in PBS for 1 h at 37°C. After washing, a dilution series of rabbit serum (100 µL/well) was added and incubated for 1 h at 37°C. Next, the wells were washed and subsequently incubated with 100 µL of goat anti-rabbit HRP-IgG (diluted 1:5,000 in PBS) for 1 h at 37°C. After washing, the wells were incubated with 100 µL of TMB solution for 15 min at 37°C, after which 50 µL of sulfuric acid (2 mol/L) was added to stop the color development and the OD450nm was measured using a microplate reader.
The serum antibody titer was defined as the maximum dilution factor that yielded P/N > 2.1 and P > 0.2 (n = 3), in which P and N represent the OD450nm of positive and negative serum, respectively.

Evaluation of antibody binding to IgE epitope(s) of Ara h 2

Binding of 2K9-1 and pAb-tAra h 2 to the target IgE epitope(s) of Ara h 2 was assessed by competitive ELISA (cELISA), as described previously (30). In short, the plates were coated with 100 µL of purified Ara h 2 (0.25 µg/mL) overnight at 4°C. After washing and blocking, the wells were incubated with 50 µL of varying concentrations of IgE epitope peptide (0.25, 0.5, or 1 µg/mL for 2K9-1; 0.25, 1, or 4 µg/mL for pAb-tAra h 2) and 50 µL of a fixed concentration of antibody (31.25 ng/mL for 2K9-1; 2 µg/mL for pAb-tAra h 2) for 1 h at 37°C. After washing, the wells were incubated with 100 µL of rabbit anti-mouse HRP-IgG (diluted 1:10,000 in PBS, for 2K9-1) or goat anti-rabbit HRP-IgG (diluted 1:5,000 in PBS, for pAb-tAra h 2) for 1 h at 37°C, and subsequently washed again. Next, the wells were incubated with 100 µL TMB solution for 15 min at 37°C, followed by addition of 50 µL of 2 mol/L sulfuric acid and immediate measurement of optical density as detailed above.

Evaluation of antibody specificity

The cross-reactivity (CR) of 2K9-1 and pAb-tAra h 2 with various allergens was analyzed by cELISA. First, protein as a source of allergens was extracted from different foods. Proteins from egg, soybean, oat, and wheat were extracted by our previously reported method (29). Cashew, macadamia, pistachio, chestnut, almond, sesame, and walnut were first powdered and subsequently defatted using acetone (1:10, w/v). Proteins were then extracted from 1 g defatted powder by addition of 20 mL Tris-HCl (50 mmol/L, pH 8.0, containing 2% Tween-20) and subsequent incubation for 4 h at 25°C while stirring. After centrifugation (12,000 × g for 10 min at 4°C), the supernatant was collected for use in the cELISA.
Development of the IgE-EsAbs-based sELISA for Ara h 2 detection

The microtiter plate was coated with 100 µL of 2K9-1 (capture antibody, 1 µg/mL) and incubated overnight at 4°C. After washing three times with PBS containing 0.2% Tween-20 (PBST), the wells were blocked with 250 µL of 3% gelatin in PBST and incubated for 1 h at 37°C. The wells were washed and 100 µL of Ara h 2 (or food samples, or blocking buffer as control) was added, followed by incubation for 2 h at 37°C. After washing again, 100 µL of pAb-tAra h 2 (detection antibody, 4 µg/mL) was added and incubated for 1 h at 37°C. After removal of unbound pAb-tAra h 2 by washing, 100 µL of goat anti-rabbit HRP-IgG (diluted 1:5,000) was added to the wells and incubated for 0.5 h at 37°C. After washing, 100 µL of TMB substrate solution was added and color was developed for 20 min at 37°C. Color development was terminated using 50 µL of 2 mol/L sulfuric acid, after which the OD450nm was measured using a microplate reader. To reduce non-specific adsorption, the Ara h 2, food samples, pAb-tAra h 2, and goat anti-rabbit HRP-IgG were diluted with blocking solution (3% gelatin in PBST).

Evaluation of the sensitivity, accuracy, precision, and specificity of the IgE-EsAbs-based sELISA for Ara h 2 detection

The limit of detection (LOD) and limit of quantitation (LOQ), accuracy, and precision of the developed IgE-EsAbs-based sELISA were estimated using the Eurachem guidance on validating analytical methods (31). LOD and LOQ were computed as the concentration of Ara h 2 corresponding to the mean of ten blank values plus three or ten standard deviations (SD), respectively. The accuracy was checked by analyzing the bias (%), defined as the difference (%) between the Ara h 2 concentration detected by the developed sELISA and the actual concentration of Ara h 2.
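The Eurachem-style LOD/LOQ and bias definitions above can be sketched numerically. The blank ODs and the linear low-range calibration parameters below are hypothetical stand-ins, not values from this study; only the mean + 3 SD / mean + 10 SD convention and the bias formula come from the text.

```python
import statistics

# Hypothetical blank ODs (n = 10); the study's raw blank readings are not given.
blank_od = [0.052, 0.049, 0.055, 0.051, 0.048, 0.053, 0.050, 0.047, 0.054, 0.051]

mean_b = statistics.mean(blank_od)
sd_b = statistics.stdev(blank_od)  # sample standard deviation

# Hypothetical linear low-range calibration: OD = slope * conc + intercept
slope, intercept = 0.008, 0.050  # OD per (ng/mL), OD

# Map the blank-based signal thresholds onto the concentration axis:
lod = (mean_b + 3 * sd_b - intercept) / slope    # mean blank + 3 SD
loq = (mean_b + 10 * sd_b - intercept) / slope   # mean blank + 10 SD

def bias_pct(detected, actual):
    # Bias (%) = relative difference between detected and actual concentration
    return (detected - actual) / actual * 100

print(f"LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
```

With real blank readings and the assay's own calibration curve, the same two lines reproduce the reported 0.98 ng/mL LOD and 3.91 ng/mL LOQ.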
The precision of the proposed sELISA was assessed by testing the relative SD of repeatability (RSDr, intra-day) and reproducibility (RSDR, inter-day) at a series of Ara h 2 concentrations. Repeatability and reproducibility were determined by analyzing Ara h 2 at different concentrations within 1 day (n = 5) and on five different days (n = 3), respectively. Results were computed as follows: RSDr or RSDR (%) = SD/mean × 100%.

Evaluation of the applicability of the IgE-EsAbs-based sELISA

A spike/recovery experiment was performed to investigate the capacity of the IgE-EsAbs-based sELISA to accurately detect Ara h 2 in samples with complex matrices. First, proteins were extracted from different foods. Proteins from boiled peanut, roasted peanut, and fried peanut were extracted as described above for raw peanut. Proteins from cookie, bread, and dry baked cake were extracted by first powdering the food, followed by addition of 20 mL Tris-HCl (50 mmol/L, pH 8.0, containing 2% Tween-20) to the powder (1 g) and agitation for 4 h at 25°C. Samples were then centrifuged (12,000 × g for 10 min at 4°C) and the supernatants were collected. Protein extracts from beverages were obtained by centrifugation, followed by collection of supernatants. Protein extracts of peanuts and beverages were spiked with 0, 0.25, or 2.0 mg/mL Ara h 2, and those of cookie, bread, and dry baked cake were spiked with 0, 0.25, or 2.0 mg/g Ara h 2. Samples were analyzed using the IgE-EsAbs-based sELISA, and the recovery was calculated as follows: Recovery (%) = (A2 − A0)/A1 × 100%, where A0 represents the detected concentration of a sample without spiked Ara h 2, A1 the concentration of Ara h 2 used for spiking, and A2 the detected concentration of a sample spiked with Ara h 2.
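The spike/recovery formula above translates directly into code; the sample concentrations in the example call are illustrative, not measurements from the study.

```python
def recovery_pct(a0, a1, a2):
    """Recovery (%) = (A2 - A0) / A1 * 100, where A0 is the detected
    concentration of the unspiked sample, A1 the spiked Ara h 2
    concentration, and A2 the detected concentration after spiking."""
    return (a2 - a0) / a1 * 100

# Illustrative example: an extract reading 0.10 mg/mL before spiking and
# 0.345 mg/mL after a 0.25 mg/mL Ara h 2 spike:
print(f"recovery = {recovery_pct(0.10, 0.25, 0.345):.1f}%")
```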
Assessment of IgE-binding capacity of food samples

The IgE-binding capacity of Ara h 2 and peanut in food samples was determined by cELISA. The microplate was coated with 100 µL of Ara h 2 or raw peanut extract (RPE) at 2 µg/mL and incubated overnight at 4°C. After washing three times with PBS containing 0.1% Tween-20 (PBST), the wells were blocked with 3% gelatin in PBST and incubated for 1 h at 37°C. After washing, equal volumes (50 µL) of food samples and pooled sera (diluted 1:10 for Ara h 2; diluted 1:30 for RPE) were added and incubated for 1 h at 37°C. After washing three times, 100 µL of Bio-IgE (diluted 1:2,500) was added and incubated for 1 h at 37°C. After washing again, 100 µL of HRP-streptavidin (diluted 1:60) was added and incubated for 1 h at 37°C. The subsequent procedures were in accordance with the cELISA described above. To reduce non-specific adsorption, food samples, pooled sera, Bio-IgE, and HRP-streptavidin were diluted in blocking solution (3% gelatin in PBST).

Statistical analysis

Data are reported as mean ± SD. Statistical analyses were performed using SPSS 17.0 (SPSS Inc., Chicago, USA) and statistical significance was assessed using Tukey's pairwise comparisons of ANOVA. Differences were considered significant when *p < 0.05 and **p < 0.01.

Results and discussion
Purification of Ara h 2

The raw peanut protein extract was fractionated into three major peaks (a, b, and c) using anion exchange chromatography under linear gradient elution (Figure 2A). Then, the eluted fractions were analyzed by SDS-PAGE (Figure 2B). The eluates of peak "b" contained two distinct bands with molecular masses ranging from 18 to 20 kDa (Figure 2B, lanes 3-8), corresponding to Ara h 2.01 and Ara h 2.02, respectively (21). The purity of Ara h 2 in the eluates, however, was only between 46.61% and 80.95% (Figure 2B, lanes 4-7) as a result of co-elution of Ara h 6 (15 kDa), which has high homology with Ara h 2 and therefore similar physical and chemical properties (20, 32). To improve the purity of Ara h 2, the eluates between positions "4" and "7" in Figure 2A were collected, dialyzed, lyophilized, and subsequently subjected to SDS-PAGE. Ara h 2 protein extracted from the SDS-PAGE gel showed a purity of 94.44% (Figure 2C), and the obtained Ara h 2 was identified by mass spectrometry (Supplementary Figure S1). These results indicate that high-purity Ara h 2 was obtained by the employed two-step purification method.

Expression and purification of recombinant tAra h 2

The amino acid and gene sequences of the designed tAra h 2 are shown in Supplementary Figure S2. Sequencing revealed that the constructed expression plasmid pET28a(+)-tAra h 2 contained the full gene sequence of tAra h 2 in the expression strain E. coli BL21 (DE3) pLysS (Supplementary Figure S3, located at 225-725 bp), indicating that the expression strain was successfully constructed.
To test whether recombinant tAra h 2 could be expressed by the expression strain, cells were incubated with 0.6 mmol/L IPTG at 26°C to induce expression. Following induction, a major band with an apparent molecular weight slightly below 25 kDa was observed, particularly after 4 h of induction (Figure 3A). The band presumably corresponding to recombinant tAra h 2 appeared at a greater molecular weight than the expected molecular mass (∼18.03 kDa). This phenomenon is consistent with other reported His-tag fusion proteins (33-35). Thus, these results indicate that the recombinant tAra h 2 was successfully expressed.

Following induction of expression by incubation with 0.6 mmol/L IPTG at 26°C for 4 h, the cells were harvested by centrifugation. The pellet was sonicated, and the recombinant tAra h 2 in the supernatant was purified on a HisTrap HP column. As shown in Figure 3B, most of the recombinant tAra h 2 was bound to the column after loading the supernatant (lanes 1 and 2), and no protein remained after non-specific elution (lane 3). The His-tagged protein bound to the HisTrap HP column was eluted using different concentrations of imidazole (Figure 3B, lanes 4-7), and recombinant tAra h 2 was obtained at a purity of 88.56% (Figure 3B, lane 6).

Production and characterization of Ara h 2-specific antibodies for use in the IgE-EsAbs-based sELISA

Immunological characterization of capture antibody 2K9-1

The Kaff of mouse monoclonal antibody 2K9-1 against Ara h 2 was analyzed by indirect ELISA. The concentrations of 2K9-1 at half of the maximum absorbance in the plates coated with 2, 1, and 0.5 µg/mL of Ara h 2 were 36.37, 37.85, and 43.44 ng/mL, respectively. Consequently, the average Kaff was calculated as 1.69 × 10⁹ L/mol (Figure 4A).
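The Kaff formula itself did not survive extraction, but the half-maximal antibody concentrations above reproduce the reported average when run through the classical two-coating-concentration relation of Beatty et al. (1987), Kaff = (n − 1) / (2(n·[Ab′]t − [Ab]t)). This is a hedged reconstruction: the formula and the assumed 150 kDa IgG molar mass are not stated in the text.

```python
# Beatty-style affinity estimate: Ab1, Ab2 are molar antibody concentrations
# at half-maximal OD for coating antigen concentrations Ag1 > Ag2, n = Ag1/Ag2.
MAB_MW = 150_000  # g/mol, typical IgG; assumed, not stated in the text

def to_mol_per_l(c_ng_ml, mw=MAB_MW):
    # ng/mL -> g/L is a factor of 1e-6; divide by molar mass for mol/L
    return c_ng_ml * 1e-6 / mw

def kaff(ag1, ab1_ng_ml, ag2, ab2_ng_ml):
    n = ag1 / ag2
    return (n - 1) / (2 * (n * to_mol_per_l(ab2_ng_ml) - to_mol_per_l(ab1_ng_ml)))

# Reported half-maximal 2K9-1 concentrations per coating concentration:
plates = [(2.0, 36.37), (1.0, 37.85), (0.5, 43.44)]  # (ug/mL coat, ng/mL mAb)
pairs = [(plates[0], plates[1]), (plates[0], plates[2]), (plates[1], plates[2])]
estimates = [kaff(hi[0], hi[1], lo[0], lo[1]) for hi, lo in pairs]
mean_kaff = sum(estimates) / len(estimates)
print(f"mean Kaff = {mean_kaff:.2e} L/mol")  # ~1.69e9, matching the reported value
```

Averaging the three pairwise estimates gives 1.69 × 10⁹ L/mol, which matches the figure reported in the text and supports this reading of the elided formula.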
The ability of 2K9-1 to bind its target IgE epitope (B3) of Ara h 2 was assessed by cELISA. The results show that inhibition increased with increasing peptide concentration (Figure 4B), indicating that 2K9-1 binds its target IgE epitope of Ara h 2. In addition, the epitope B3 can be recognized by sera IgE from most peanut-allergic patients and has been identified as the most dominant IgE epitope of Ara h 2 (22, 23). This suggests that the epitope remains stable after processing and gastrointestinal digestion. As a result, this epitope can serve as a dependable biomarker, and the prepared 2K9-1 can serve as an efficient tool for detecting Ara h 2 and measuring its IgE-binding changes in foods.

The specificity of 2K9-1 for Ara h 2 was additionally determined by cELISA (Figure 4C). The IC50 of Ara h 2 was 4.58 µg/mL. 2K9-1 showed no binding to cow's milk proteins (α-lactalbumin, β-lactoglobulin, and casein) or to proteins from sesame, pistachio, almond, macadamia, cashew, soybean, wheat, oat, and egg at a protein concentration of 128 µg/mL. However, slight CR was observed with walnut and chestnut proteins. The IC50 of walnut proteins was 110.38 µg/mL, corresponding to a CR of 4.15%. For chestnut proteins, an inhibition rate of 32.13% was observed at a concentration of 128 µg/mL. This inhibition rate is similar to that of Ara h 2 at 2 µg/mL (31.91%). Hence, it can be estimated that the CR with chestnut proteins was ∼1.56%. This cross-reactivity might be due to proteins homologous to Ara h 2.

Immunological characterization of detection antibody pAb-tAra h 2

For the production of polyclonal antibodies against recombinant tAra h 2 (pAb-tAra h 2), rabbits were inoculated with the purified recombinant tAra h 2 four times. Following inoculation, the titer values of the antisera were determined as 40,000 and 160,000 for rabbits A and B, respectively (Figure 4D). Therefore, the serum from rabbit B was selected for purification of pAb-tAra h 2 using the HiTrap Protein A HP column.
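The reported walnut figure is consistent with the usual IC50-ratio definition of cross-reactivity; the text does not state the formula explicitly, so treating CR as IC50(Ara h 2)/IC50(competitor) is an inference checked against the numbers above.

```python
def cross_reactivity_pct(ic50_target, ic50_other):
    # CR (%) = IC50 of the target allergen / IC50 of the competitor * 100
    return ic50_target / ic50_other * 100

# Values from the text: Ara h 2 IC50 = 4.58 ug/mL, walnut IC50 = 110.38 ug/mL
cr_walnut = cross_reactivity_pct(4.58, 110.38)
print(f"walnut CR = {cr_walnut:.2f}%")  # ~4.15%, matching the reported value

# Chestnut never reached 50% inhibition, so the text instead equates
# inhibition at 128 ug/mL chestnut with that of 2 ug/mL Ara h 2:
cr_chestnut = 2 / 128 * 100  # ~1.56%
```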
As recognition of IgE epitopes of native Ara h 2 by pAb-tAra h 2 is critical for successfully detecting Ara h 2 allergenic residues and measuring potential changes in IgE-immunoreactivity of Ara h 2 in foods (18, 30), the binding of the purified pAb-tAra h 2 to the twelve selected IgE epitopes of Ara h 2 was analyzed by cELISA (Figure 4E). The results show that pAb-tAra h 2 recognized all selected IgE epitopes, and inhibition increased with increasing epitope peptide concentration. These findings suggest that the content of Ara h 2 IgE epitopes in foods can be detected by pAb-tAra h 2.

Performance evaluation of the IgE-EsAbs-based sELISA

Using the abovementioned Ara h 2-specific capture and detection antibodies, an IgE-EsAbs-based sELISA for Ara h 2 detection was set up and tested for sensitivity, accuracy, precision, and specificity as detailed below.

Sensitivity evaluation and comparative analysis of the IgE-EsAbs-based sELISA

The sensitivity of the developed IgE-EsAbs-based sELISA was evaluated by assessment of the lowest detectable Ara h 2 concentration. The assay showed a LOD and LOQ of 0.98 ng/mL (0.98 ppb) and 3.91 ng/mL (3.91 ppb), respectively. Generation of a calibration curve (Figure 5A) further revealed a linear working range of 0.125-16 µg/mL (r² = 0.9938).
Comparative analysis showed that the IgE-EsAbs-based sELISA has a lower LOD than most other analytical methods used for Ara h 2 detection (Supplementary Table S3). Most importantly, our IgE-EsAbs-based sELISA can specifically recognize IgE epitopes of Ara h 2, which makes it able to detect Ara h 2 allergenic residues, with the potential to measure Ara h 2 IgE-binding variations in processed foods (18, 30). As shown in Supplementary Table S3, the only analytical method that detects Ara h 2 IgE epitopes with a significantly lower LOD is the rat basophilic leukemia (RBL-2H3) immune cell-based biosensing platform, with a LOD of 0.1 fmol/L (∼0.002 ppb) (39). However, given that this sensor-based analytical technique requires cell culture, IgE antibodies to trigger an immunoreaction, and specialized knowledge, the IgE-EsAbs-based sELISA may be a more suitable method when lower costs and less complexity are desired.

The accuracy, precision, and specificity of the IgE-EsAbs-based sELISA

Assay accuracy and precision were evaluated by assessment of intra-assay and inter-assay variation, using five replicates of Ara h 2 varying in concentration from 0.125 µg/mL to 16 µg/mL. The average bias of the intra-assay was 0.88%, and the mean RSDr and RSDR were 8.02% (4.13%-12.56%) and 10.68% (3.35%-16.50%), respectively (Supplementary Table S4). These results suggest that the IgE-EsAbs-based sELISA has high accuracy and precision. Assay specificity was evaluated by analyzing the CR with various food allergens at allergen concentrations ranging from 0.125 µg/mL to 8 µg/mL. A minor CR was observed for proteins of cashew, macadamia, pistachio, almond, and walnut, but not for any of the other nine food allergens (Figure 5B). These results indicate that the IgE-EsAbs-based sELISA is applicable for Ara h 2 detection with high specificity.
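The RSDr/RSDR figures above follow the plain SD/mean ratio given in the Methods; the replicate readings in this sketch are hypothetical, not the study's data.

```python
import statistics

def rsd_pct(values):
    # RSDr or RSDR (%) = SD / mean * 100 (sample standard deviation)
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical intra-day replicates (n = 5) at one Ara h 2 level, in ug/mL:
replicates = [2.01, 1.88, 2.10, 1.95, 2.06]
print(f"RSDr = {rsd_pct(replicates):.2f}%")
```

Applying the same function to the inter-day means (n = 3 per day over five days) yields RSDR.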
The applicability of the IgE-EsAbs-based sELISA for Ara h 2 detection in food samples

To assess the suitability of the IgE-EsAbs-based sELISA for detection of Ara h 2 in samples with a complex matrix, recovery experiments were conducted using samples extracted from various foods. As shown in Table 1, Ara h 2 was detected in all tested peanut-containing food samples. Analysis of spiked food samples demonstrated recoveries ranging from 79.00% to 120.78%. These results suggest that the developed immunoassay is a suitable method for the detection of Ara h 2 in food samples.

Validation of the IgE-EsAbs-based sELISA for measurement of Ara h 2 IgE-binding variations in food samples

The IgE-EsAbs-based sELISA was tested for its capability to measure potential changes in IgE-immunoreactivity of Ara h 2 and peanut in various processed foods using sera IgE. The IgE-binding ability was quantified by competitive ELISA using pooled sera from peanut-allergic individuals. Ara h 2 immunoreactivity variations in different foods are illustrated in Figure 6A: the IC50 of RPE, boiled peanut extract, roasted peanut-1 extract, roasted peanut-2 extract, and fried peanut extract were found at dilution factors of 3236.88 (5.62 µg/mL protein), 381.62 (7.13 µg/mL protein), 1158.16 (3.73 µg/mL protein), 868.81 (4.60 µg/mL protein), and 440.40 (17.44 µg/mL protein), respectively. Taking into account that Ara h 2 comprises about 10% of total peanut proteins (40), the IC50 of Ara h 2 in these extracts corresponds to ∼0.56 µg/mL, 0.71 µg/mL, 0.37 µg/mL, 0.46 µg/mL, and 1.74 µg/mL, respectively, which is lower than the IC50 of the purified Ara h 2 (1.87 µg/mL, Figure 6B). This deviation might be due to the presence of Ara h 6 and Ara h 7 in these extracts, which have high homology with Ara h 2 (20) and may thus cross-react with patients' sera, resulting in lower IC50 values. Compared with the IC50 of RPE, the IC50 of roasted peanut-1 extract and roasted peanut-2 extract were lower, while the IC50 of
boiled peanut extract and fried peanut extract were higher. This indicates that roasting enhances Ara h 2 IgE-immunoreactivity, while boiling/frying reduces it, which is consistent with other reports (6, 13, 41). The IC50 of beverage-1, beverage-2, and cookie were at dilution factors of 553.04, 123.18, and 28.36, respectively, indicating that the Ara h 2 IgE-immunoreactivity in these sample extracts was different. Finally, for bread and dry baked cake, slight inhibition was observed at dilution factors lower than 4, despite these foods being labeled to contain no peanuts. This slight inhibition may be explained by the possibility that the pooled sera used to assess the inhibition contained serum of an individual that was allergic to other food allergens alongside peanut, leading to a cross-reaction at low sample dilutions (42). Alternatively, it may be explained by the relatively high concentration (i.e., 2%) of Tween-20 present in the buffer used for protein extraction, which can suppress the antigen-antibody reaction (43).
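As a worked check of the arithmetic above, the sketch below converts the reported IC50 dilution factors into protein concentrations using the extract concentrations given in the Table 1 footnote, and applies the ~10% Ara h 2 share of total peanut protein cited from ref. 40. All input numbers are taken from the text; the variable names are ours.

```python
# Worked check of the IC50 conversions reported in the text (extract protein
# concentrations from the Table 1 footnote; dilution-factor IC50s from Fig. 6A).
# A dilution-factor IC50 is converted to the protein concentration present at
# that dilution, then scaled by the ~10% Ara h 2 share of total peanut protein.
extracts = {
    # name: (total protein, mg/mL; IC50 as a dilution factor)
    "raw":       (18.20, 3236.88),
    "boiled":    (2.72, 381.62),
    "roasted-1": (4.32, 1158.16),
    "roasted-2": (4.00, 868.81),
    "fried":     (7.68, 440.40),
}
ARA_H2_FRACTION = 0.10  # Ara h 2 ≈ 10% of total peanut protein (ref. 40)

for name, (protein_mg_ml, ic50_dilution) in extracts.items():
    protein_at_ic50 = protein_mg_ml * 1000 / ic50_dilution  # µg/mL at the IC50
    ara_h2_at_ic50 = protein_at_ic50 * ARA_H2_FRACTION      # µg/mL of Ara h 2
    print(f"{name}: {protein_at_ic50:.2f} µg/mL protein, "
          f"~{ara_h2_at_ic50:.2f} µg/mL Ara h 2")
```

Running this reproduces the paired values quoted in the text (e.g., 5.62 µg/mL protein and ∼0.56 µg/mL Ara h 2 for RPE).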
The IgE-binding variations of peanut in different food samples as measured using sera IgE from peanut-allergic patients are shown in Figure 6C. The IC50 of RPE, boiled peanut extract, roasted peanut-1 extract, roasted peanut-2 extract, and fried peanut extract were observed at dilution factors of 6462.21 (2.82 µg/mL protein), 914.47 (2.97 µg/mL protein), 1776.33 (2.43 µg/mL protein), 1734.93 (2.31 µg/mL protein), and 1270.52 (6.04 µg/mL protein), respectively. These findings indicate that roasting enhances human IgE-immunoreactivity of peanut, while boiling/frying reduces it, which is consistent with the findings on Ara h 2 IgE-immunoreactivity described above and with those of previous reports (40, 44, 45), suggesting that Ara h 2 could serve as a useful biomarker for predicting IgE-binding changes of peanut. The IC50 of beverage-1, beverage-2, and cookie were observed at dilution factors of 802.81, 4497.0, and 69.84, respectively, indicating that the IgE-binding ability of peanut in these sample extracts was different. Finally, similar to the observations on Ara h 2 IgE-immunoreactivity described above, slight inhibition was observed for samples of bread and dry baked cake at low dilution factors (4 or lower; Figures 6A, C).
To validate the reliability of our developed sELISA in measuring IgE-binding variations of Ara h 2 and peanut in foods, the relationship between Ara h 2 IgE-binding ability and peanut IgE-binding ability, and the dose-effect relationship between the Ara h 2 IgE epitope content and Ara h 2 (or peanut) IgE-binding ability, were established (Figure 6D). The detected Ara h 2 concentration (C(RPE)) and the IgE-binding ability of RPE (IC50(RPE)) were used as positive controls. Regarding the relationship between Ara h 2 IgE-binding ability (Figure 6D, blue line) and peanut IgE-binding ability (Figure 6D, red line): although significant differences were observed between the ratios of IC50(roasted peanut-1, fried peanut, and beverage-1) to IC50(RPE) for Ara h 2 and the corresponding ratios for peanut, they followed a similar trend, except for beverage-1. These results further indicate that Ara h 2 can serve as a reliable marker for predicting peanut IgE-binding capacity. In addition, as indicated by the dose-effect relationship between the Ara h 2 IgE epitope content (Figure 6D, green line) and Ara h 2 (or peanut) IgE-binding ability (Figure 6D, blue or red line), only fried peanut, beverages, and cookie showed significant differences, but they followed a similar trend, except for beverage-1. Therefore, these findings highlight that there is a good dose-effect relationship between the Ara h 2 IgE epitope content and Ara h 2 (or peanut) IgE-binding ability, indicating that the developed immunoassay can reliably reveal and measure potential changes in immunoreactivity of Ara h 2 and peanut in food samples and overcome the shortcomings of the IgE-binding capacity test, which depends heavily on sera IgE (limited and variable) from peanut-allergic patients (6, 13, 41).
In addition, the allergenicity of peanut allergens in food products can be established by basophil/mast cell degranulation and skin prick testing. Studies have shown that the results of IgE-binding experiments are usually in good agreement with results obtained by basophil/mast cell degranulation assays or skin prick testing (13, 45, 46), which indicates that IgE-binding capacity can preliminarily predict potential peanut allergenicity (18). Therefore, the good dose-effect relationships obtained in this study suggest that our developed IgE-EsAbs-based sELISA could be used as a preliminary test to predict in vitro the potential allergenicity of Ara h 2 and peanut in processed foods. A more complete validation should be performed in further studies.

Conclusion

This study describes the development and validation of a novel IgE-EsAbs-based sELISA for detection of Ara h 2 and measurement of its immunoreactivity variations in foods. First, it was demonstrated that the monoclonal and polyclonal antibodies generated for use as capture and detection antibodies in the assay, respectively, could specifically recognize the target IgE epitope(s) of Ara h 2. Using these antibodies, the IgE-EsAbs-based sELISA exhibited high sensitivity (LOD = 0.98 ng/mL), specificity, and recovery (79.00%−120.78%) for Ara h 2 in food samples. Moreover, immunoreactivity changes of Ara h 2 in various food samples as tested by the IgE-EsAbs-based sELISA were consistent with those evaluated using sera IgE derived from peanut-allergic individuals. Together, these findings indicate that the developed immunoassay could serve as a sensitive, accurate, and relatively simple method for detecting Ara h 2 and measuring IgE-binding changes of Ara h 2 and peanut in food samples.
FIGURE. Schematic illustration of the development and validation of the IgE-EsAb-based sELISA for detection of Ara h 2 and prediction of peanut IgE-immunoreactivity in foods. (A) Generation of monoclonal and polyclonal antibodies specifically against IgE epitope(s) of Ara h 2 for use as capture and detection antibodies in the immunoassay, respectively. (B) Schematic representation of the IgE-EsAb-based sELISA approach for detection of Ara h 2. (C) Assessment of IgE-binding capacity in food samples using sera IgE from peanut-allergic individuals for use in assay validation. (D) Assay validation by comparing the results obtained using the IgE-EsAb-based sELISA with those obtained using sera IgE. Results compared are the relationship between Ara h 2 IgE-binding ability and peanut IgE-binding ability, and the dose-effect relationship between the Ara h 2 IgE epitope content and Ara h 2 (or peanut) IgE-binding ability.

where n = [Ag]/[Ag'], [Ag] and [Ag'] are two different coating concentrations of Ara h 2, and [Ab]t and [Ab']t are the concentrations (in mol/L) of 2K9-1 at which 50% of the maximum OD450nm values were obtained for plates coated with [Ag] and [Ag'], respectively.

FIGURE. Purification of Ara h 2 by a two-step method. (A) Chromatogram of raw peanut protein extract using DEAE-Sepharose Fast Flow anion-exchange chromatography. (B) SDS-PAGE patterns of Ara h 2 purified by anion-exchange chromatography. M: markers; the remaining lanes contain fractions from the anion-exchange chromatography profile. (C) SDS-PAGE patterns of isolated Ara h 2 from the SDS-PAGE gel. M: markers; the other lane: purified protein from the gel. The two bands of Ara h 2, (a) and (b), are indicated by arrows. Letters a-c: three major peaks.
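The definitions above ("where n = [Ag]/[Ag'] …") appear to belong to the standard noncompetitive-ELISA affinity-constant expression of Beatty and co-workers; the equation itself was lost in extraction. Assuming that is the formula the figure legend used, it reads:

```latex
% Noncompetitive-ELISA affinity constant (Beatty et al. form);
% assumed to be the expression the stripped figure equation referred to.
K_{\mathrm{aff}} = \frac{n - 1}{2\,\bigl(n\,[\mathrm{Ab}']_{t} - [\mathrm{Ab}]_{t}\bigr)},
\qquad n = \frac{[\mathrm{Ag}]}{[\mathrm{Ag}']}
```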
FIGURE. SDS-PAGE analysis of recombinant tAra h 2. (A) Expression of recombinant tAra h 2 under different induction conditions. M: markers; the remaining lanes show incubation without IPTG and induction with IPTG for various times. (B) Purification of recombinant tAra h 2 by HisTrap HP. M: markers; the remaining lanes show the supernatant of E. coli lysates after centrifugation, the flow-through protein of the column, non-specific elution with imidazole in PBS, and specific elution with increasing concentrations of imidazole in PBS. The recombinant tAra h 2 is indicated by arrows.

FIGURE. Immunological characterization of mAb 2K9-1 and pAb-tAra h 2. (A) Affinity constant of 2K9-1 to Ara h 2. The dashed lines mark the concentrations of 2K9-1 at 50% of the largest absorbance in plates coated with different concentrations of Ara h 2. (B) Binding ability of 2K9-1 to the IgE epitope. (C) Cross-reactivity of 2K9-1 with food allergens. (D) The titers of antisera against recombinant tAra h 2. (E) Binding ability of pAb-tAra h 2 to IgE epitopes. (F) Cross-reactivity of pAb-tAra h 2 with food allergens. Data are presented as mean ± SD.

FIGURE. Performance analysis of the developed IgE-EsAb-based sELISA. (A) Calibration curves of the IgE-EsAb-based sELISA for Ara h 2 detection. (B) Analysis of the specificity of the developed IgE-EsAb-based sELISA by testing the cross-reactivity with food allergens, with the blocking buffer serving as negative control. Data are expressed as mean ± SD. Statistically significant differences are indicated by * and **.
FIGURE. The capacity of the IgE-EsAb-based sELISA to measure peanut IgE-binding variations in foods, assessed by cELISA using pooled sera from peanut-allergic individuals. (A) IgE-binding capacity of Ara h 2 in food samples. (B) IgE-binding capacity of the purified Ara h 2. (C) IgE-binding capacity of peanut in food samples. (D) Relation between Ara h 2 IgE epitope content, Ara h 2 IgE-binding capacity, and peanut IgE-binding capacity. Data are shown as mean ± SD. C(S) and C(RPE) represent detected Ara h 2 IgE epitope contents in food samples and raw peanut extract, respectively. IC50(S) and IC50(RPE) denote the IgE-binding capacity of food samples and raw peanut extract, respectively. Statistically significant differences between Ara h 2 IgE-binding capacity and peanut IgE-binding capacity are indicated by a and b, between Ara h 2 IgE epitope content and Ara h 2 IgE-binding capacity by c and d, and between Ara h 2 IgE epitope content and peanut IgE-binding capacity by e and f. ND, not detected.

TABLE. Detection and recovery analysis of Ara h 2 concentrations in (spiked) food samples. a These extracts contain protein exclusively sourced from peanut, at concentrations of 18.20 mg/mL for raw peanut extract, 2.72 mg/mL for boiled peanut extract, 4.32 mg/mL for roasted peanut-1 extract, 4.00 mg/mL for roasted peanut-2 extract, and 7.68 mg/mL for fried peanut extract. b These extracts contain proteins from different sources, i.e., peanut and milk for beverage-1; peanut, oat, almond, hazelnut, and walnut for beverage-2; wheat, peanut, egg, almond, milk, and soybean for cookie; wheat, egg, and milk for bread; wheat and egg for dry baked cake. Concentrations of peanut protein in these extracts were not determined. c Applicable to peanut extracts and beverages. d Applicable to cookie, bread, and dry baked cake.
Secobarbital-mediated inactivation of cytochrome P450 2B1 and its active site mutants: partitioning between heme and protein alkylation and epoxidation

Secobarbital (SB) is a relatively selective mechanism-based inactivator of cytochrome P450 2B1 that partitions between epoxidation and heme and protein modification during its enzyme inactivation. The SB-2B1 heme adduct formed in situ in a functionally reconstituted system has been spectrally documented and structurally characterized as N-(5-(2-hydroxypropyl)-5-(1-methylbutyl)barbituric acid)protoporphyrin IX. The SB protein modification has been localized to 2B1 peptide 277-323, corresponding to the active site helix I of cytochrome P450 101. The targeting of heme and this active site peptide suggests that the 2B1 active site topology could influence the course of its inactivation. To explore this possibility, the individual SB epoxidation, heme and protein modification, and corresponding molar partition ratios of the wild type and seven structural 2B1 mutants, site-directed at specific substrate recognition sites and known to influence 2B1 catalysis, were examined after Escherichia coli expression. These studies reveal that Thr-302 is critical for SB-mediated heme N-alkylation, whereas Val-367 is a critical determinant of 2B1 protein modification, and Val-363 is important for SB epoxidation. SB docking into a refined 2B1 homology model coupled with molecular dynamics analyses provides a logical rationale for these findings.

The sedative hypnotic secobarbital (SB) is an olefinic barbiturate that selectively inactivates rat liver cytochrome P450 2B1. Such inactivation is mechanism-based, entailing the partitioning of the drug between prosthetic heme and protein modification and epoxidation (Fig. 1).
We have recently isolated the SB-modified heme and, using a variety of mass spectrometric techniques, structurally characterized it as the N-(5-(2-hydroxypropyl)-5-(1-methylbutyl)barbituric acid)protoporphyrin IX adduct (1). HPLC peptide mapping of lysyl endopeptidase C (Lys-C) digests of the corresponding SB-modified protein, coupled with micro Edman degradation, has led to the identification of the Lys-C peptide 277-323 as the target of the modification (1). This peptide includes the domain corresponding to the distal helix I of P450 101, an evolutionarily highly conserved region that brackets the heme, nestles the heme-iron-bound oxygen, and contacts the substrates (2-4), and thus is an active site feature. The precise residue that is modified is unknown, but this peptide contains several nucleophilic residues that would be good candidates. Nevertheless, identification of the SB-modified peptide as an active site domain suggests that some residues are sufficiently close to the oxidatively activated SB moiety to intercept it at least part of the time, thereby rationalizing the observed partitioning between heme and protein alkylation and epoxidation (Fig. 1) (1). Several structural mutants of cytochromes P450 (P450s), including cytochrome P450 2B1 (P450 2B1), site-directed at putative substrate recognition sites (SRS) (4, 5), have been designed, heterologously expressed in Escherichia coli, and functionally characterized with respect to several substrates and/or inhibitors (6-16). As with other P450 structural mutants (6-11), the studies with 2B1 mutants have shown that certain site-directed mutations not only profoundly alter the extent of the catalytic competence of the enzymes, but also the regio- and stereoselectivity of these metabolic reactions and the susceptibility of each structural mutant to mechanism-based inactivation by chloramphenicol and its analogs (15, 16).
Such structure-function relationships suggest that specific SRS alterations can profoundly affect active-site events. However, for a given mutant, the findings markedly differed with each substrate employed, underscoring the relative importance of each individual substrate-active site fit (16). In the studies described below, we have used these site-directed mutants to examine whether mutations of certain SRS residues influence SB:cytochrome P450 2B1 active site interactions and consequently its partitioning between heme and protein modification and epoxidation.

[2-14C]SB was synthesized with minor modifications and its structure was confirmed by 1H NMR and mass spectrometric analyses as described previously (1, 17). Its specific activity was 0.34 Ci/mol, and its radiochemical purity was >97%.

Expression of P450 2B1 and Its Mutants

Selected mutants (I114V, F206L, V363A, V363L, V367A, and G478S) previously expressed in COS cells from the pBC12BI vector were constructed in the pKK 233-2 expression vector. Plasmids harboring the P450 2B1 wild type and mutant cDNAs were transformed into Topp3 cells. E. coli were grown and harvested at peak expression of each P450, and CHAPS-solubilized membranes were prepared as described previously (15).

Purification of P450 2B1 and P450 Reductase

P450 2B1 and cytochrome P450 reductase were purified from liver microsomes of phenobarbital-pretreated male Sprague-Dawley rats by the methods of Waxman and Walsh (18) and Shephard et al. (19), respectively. P450 and heme content was determined as described previously (20).

SB-mediated Inactivation of P450 2B1 and Its Mutants

P450 2B1 or a P450 2B1 mutant (0.5 nmol) was incubated with P450 reductase (1 nmol), DLPC (60 nmol), catalase (280 units), EDTA (1 µmol), [14C]SB (1 µmol), and NADPH (1 µmol) in 1 ml of 50 mM Hepes buffer (pH 7.5), containing 15 mM MgCl2 and 20% glycerol, at 37°C for 15 min.
At the end of this incubation, a 10-µl aliquot of the incubation mixture containing 10 pmol of P450 was used to determine pentoxyresorufin O-deethylase (PROD) activity, as detailed (1). In some cases, when parallel formation of the SB-heme adduct or SB-epoxide was monitored, aliquots of the incubation mixture were removed at different intervals and chilled immediately in dry ice to stop the reaction.

HPLC Determination of SB-Epoxide

The SB-epoxide used as the authentic standard was chemically synthesized by the method of Harvey et al. (21), purified by HPLC on a silica column using an isocratic solvent elution system consisting of hexane/ethyl acetate (3:2, v/v), and its identity was confirmed by positive liquid secondary ionization mass spectrometry analysis. The SB-epoxide formed during incubation was extracted with ethyl acetate, dried under N2, and separated by reversed phase HPLC on a C18 column using a solvent of 40% acetonitrile/0.1% TFA/H2O (1 ml/min) with UV detection at 210 nm. The HPLC epoxide peak was confirmed both by comparison of its retention with that of the authentic chemically synthesized SB-epoxide and by its [14C]SB-derived radioactivity, when [14C]SB was included as the substrate in the reaction.

Determination of SB-Heme Adduct in the Reconstituted System by Difference Visible Absorption Spectroscopy

The incubation mixtures were identical to the one described above, except that NADPH was omitted from the control. After incubation, the control (−NADPH sample) was used to obtain a baseline between 400 and 500 nm in an SLM-Aminco 2000™ UV-Vis spectrophotometer. The contents of the sample cuvette were then replaced with aliquots of the +NADPH incubation mixture, and the two cuvettes were rescanned using the −NADPH sample as the reference. The reversibility of the SB-heme adduct was examined as follows.
The absolute spectra or the difference spectra (with the −NADPH incubation as reference) of the SB/NADPH-inactivated P450 2B1 were recorded at various intervals over 24 h. The sample was either scanned directly or scanned after removal of NADPH from the incubation, either by the addition of NADP (at a 10-fold excess of the NADPH present in the reconstituted system) to competitively inhibit the NADPH-dependent reaction, or after dialysis against Hepes buffer containing 0.5% bovine serum albumin (which would also remove excess SB), in order to prevent potential reinactivation of the enzyme after reversal of the SB-heme adduct and regeneration of the enzyme. In some instances, this was further ensured by bubbling CO into the sample. The extent of enzyme regeneration was established by monitoring the relative PROD activity of the incubations. The extent of P450 2B1 structural regeneration after reversal of the SB-heme adduct was also assessed by the relative amounts of heme and SB-heme adduct present after concurrent HPLC and/or MALDI monitoring, at the start and end of SB inactivation, as well as after the procedures used to reverse the SB-heme adduct.

Determination of SB-Heme Adduct and Irreversible [14C]SB-Protein Binding

Wild type P450 2B1 or each of its mutants was incubated in the presence or absence of NADPH as detailed above. The inactivation reaction was stopped with TFA (final concentration, 10%), and 50 µl of liver microsomes (containing 1 nmol of P450) from untreated rats were added to the mixture as carrier hemoprotein. The SB-heme adducts were extracted with two equivalent volumes of TFA/2-butanone (10%), and the organic phase was removed on a rotary evaporator.
The residue, dissolved in acetonitrile/acetic acid/H2O (4:3:3, v/v/v), was analyzed by reversed phase HPLC on a C8 column, using a solvent system consisting of solvent A (0.1% TFA/H2O) and solvent B (90% acetonitrile/0.1% TFA/H2O) and a linear gradient elution from 45% B to 75% B over 30 min at a flow rate of 1 ml/min, with detection at 415 nm. Fractions were collected every 2 min and subjected to scintillation counting. Irreversible [14C]SB binding to the protein was determined in incubations similar to those described above, except that GSH (2 mM) was included to trap the reactive electrophilic epoxide that escapes the active site. The protein was precipitated by 5% H2SO4/MeOH, washed extensively with organic solvents as described previously (16), and dissolved in NaOH, and aliquots were subjected to scintillation counting and protein determination.

Mass Spectrometric Analysis of the SB-Heme Adduct

The reaction mixture containing the inactivated P450 2B1 (100 pmol) was mixed with 200 µl of 10% TFA/butanone, and the organic phase was concentrated by speed vacuum. Average masses of the sample were determined using a PerSeptive Biosystems Voyager Linear MALDI/TOF mass spectrometer (PerSeptive Biosystems, Framingham, MA) equipped with a nitrogen laser (337 nm) and operated in the linear mode. The sample was crystallized with α-cyanohydroxycinnamic acid (10 mg/ml in 50% acetonitrile/0.1% TFA/H2O).

Docking of SB into the Active Site of the P450 2B1 Model

The P450 2B1 structure was obtained previously using a consensus strategy and verified with the Profiles-3D program, as described earlier (16). The SB structure was constructed using the Builder module of InsightII (Biosym/MSI, San Diego, CA) and verified with the Cambridge Structural Database (CSD) against crystal structures of similar compounds. The structures were displayed on a Silicon Graphics workstation.
Energy minimization and molecular dynamics calculations were carried out with the Discover simulation program (Biosym/MSI, San Diego, CA) with the consistent valence force field. The parameters for heme and ferryl oxygen were those described by Paulsen and Ornstein (23, 24).

Docking Conditions for SB Epoxidation

SB was placed in the active site of the P450 2B1 model, with the internal carbon of the olefinic SB side chain placed 4.5-4.9 Å from the heme iron, aligned with the Fe and the S of Cys-436, resulting in the double bond paralleling the heme. This distance allows for van der Waals contacts between the ferryl oxygen and the internal carbon, and thus apparently leads to SB oxidation at this carbon. The substrate can be oriented with the "si" or "re" face toward the heme. Since the SB molecule can assume numerous conformations, it was docked into the active site of the P450 2B1 model using molecular dynamics. For these simulations, both the C and H atoms of the SB double bond were fixed, while the rest of the molecule was allowed to move. Initially, the docked substrate was minimized with the Discover program, using the steepest descent algorithm and a harmonic potential, with a non-bond cutoff of 8 Å, to a maximum gradient of 1 kcal/mol/Å. The dielectric constant used was 1.0; no Morse or cross-terms were included (16, 22). Subsequently, SB was subjected to one cycle of molecular dynamics using the leap-frog algorithm. The system was equilibrated for 0.1 ps, and the simulations were continued for 1 ps at 300 K using 1-fs time steps. This was followed by minimization using the steepest descent gradient, as described above. Finally, minimization was performed on side chains of P450 2B1 residues that contact the substrate (distance less than 5 Å), using the steepest descent method until the gradient was less than 1 kcal/mol/Å. The non-bond interaction energy between the docked SB and the protein was calculated using the Docking module of the InsightII package.
Both electrostatic and van der Waals interactions were evaluated using a cutoff of 10 Å. The potential energy of SB was also calculated.

Docking Conditions for SB-Heme Adduct Formation

The SB product, with the OH group at the internal carbon of the SB double bond, was placed above heme ring A (the major adduct detected in our studies), at a distance that would allow for C-N(A) bond formation (that is, between the terminal olefinic C of the SB product and N(A) of the heme). The chirality of the SB product at the internal C is S, since it was formed as a result of "re" face oxidation. Minimization and molecular dynamics of SB docked in an orientation consistent with heme adduct formation were performed as described for SB epoxidation. However, in the case of the SB-heme adduct, only the terminal olefinic C was fixed. Final minimization was carried out on the side chains of the residues contacting the substrate, as described earlier. The energy of interactions was also evaluated.

RESULTS AND DISCUSSION

SB Epoxidation by P450 2B1 and Its Mutants

The SB epoxide formed during the incubation was monitored by HPLC with UV detection at 210 nm. A peak with a retention time similar to that of the authentic SB-epoxide, eluting at 5.5 min, was detected in the NADPH-supplemented incubations but not in the control (NADPH-devoid) incubations (Fig. 2). The identity of the SB-epoxide was further confirmed by its relative [14C]SB-derived radioactivity when [14C]SB was included in both the NADPH-supplemented and control incubations (Fig. 2). SB-epoxide formation by the wild type E. coli-expressed enzyme was comparable to that of the purified, functionally reconstituted rat liver enzyme (i.e., 50 and 59 nmol/nmol of P450/15 min, respectively). The mutant 2B1 T302S, F206L, I114V, V363L, V367A, and G478S enzymes catalyzed SB epoxidation to a roughly comparable extent to the wild type 2B1, while V363A exhibited relatively lower activity (approximately 34%) (Table I).
This activity profile was quite different from that of the corresponding PROD activity, wherein I114V retained 100% of the activity, but T302S retained only about 50%; F206L and V363A, 30%; and V363L, V367A, and G478S, less than 2% of the activity (Table I). These findings thus indicate that specific mutations in the various SRS regions differentially affect P450 2B1-dependent metabolism of SB and pentoxyresorufin, thereby implicating individual differences in the corresponding substrate-active site fits.

Detection of SB-Heme Adduct by Visible Electronic Absorption Spectroscopy and Mass Spectrometry

After P450 2B1 was inactivated by SB in a functionally reconstituted system, an electronic absorption difference spectrum for the NADPH-supplemented versus the NADPH-devoid incubation was obtained with an absorbance maximum at 445 nm and a trough at 418 nm (Fig. 3). A 445 nm absorbance was also obtained when the SB-inactivated P450 2B1 incubation mixture was scanned against aliquots of this same mixture taken at time 0 and placed in ice (not incubated at 37°C). This spectral absorption was dependent on the olefinic moiety of the drug, because amobarbital, the saturated analog of SB, failed to destroy the enzyme or to yield a corresponding spectrum when it replaced SB in the incubation mixture. It is conceivable that this spectrum reflects the presence of an SB-N-heme (Fe3+) adduct in this in vitro system, because it is similar to that of purified iron-complexed N-ethylporphyrins, which exhibit absorbance maxima at 442 nm (25). Furthermore, consistent with this possibility, the absolute spectra of phenylacetylene- or 3-alkylsydnone-inactivated P450 2B1 preparations that yield N-modified heme adducts also exhibit a time-dependent increase in a 445 nm shoulder (26, 27).
It is interesting that no corresponding spectra are detected after inactivation of P450 2B1 by allylisopropylacetamide, or of other P450s by 3,5-dicarbethoxy-2,6-dimethyl-4-ethyl-1,4-dihydropyridine, in spite of the fact that much larger quantities of N-alkylated porphyrins are formed and isolated from such inactivation systems. To our knowledge, this is the first spectral documentation of a P450 heme-drug adduct in situ. The ability to form SB-heme adducts, as reflected by the magnitude of the corresponding 445 nm absorbance, varied significantly among the different P450 2B1 mutants, indicating either that they were not equally susceptible to SB-heme alkylation or that their SB-heme adduct did not persist sufficiently long after formation to be detected (Fig. 3, Table I). The SB-heme adduct was also detected in the TFA/butanone extracts of the SB-inactivated P450 2B1 incubation mixture (Fig. 4). MALDI/TOF mass spectrometric analyses of the extracts yielded a mass (MH+) of 818.0 Da, in good agreement with the mass of 816.9 Da expected for the hydroxy-SB-protoporphyrin IX adduct formed after Fe3+ removal from the SB-modified P450 2B1 heme. No other SB-modified heme-derived species were detected in the TFA/butanone extracts (Fig. 4). The relative intensity of this SB-protoporphyrin adduct signal increased over the 15-min incubation period, in parallel with the decrease in PROD activity of the inactivated enzyme (Fig. 5). Once formed, however, unlike the transient heme adducts generated through N-alkylation of chloroperoxidase by terminal alkenes and alkynes (30), the SB-heme adduct was not reversible. Under similar conditions and within the temporal limits of P450 stability, it could not be reverted back to the parent unmodified heme, which would have restored the native chromophore and functional activity of the enzyme (not shown).
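The Fig. 5 time course quantifies adduct formation as a MALDI intensity ratio against a co-extracted internal standard (DLPC, MH+ 624), which corrects for shot-to-shot and extraction variability. A minimal sketch of that normalization follows; the intensity values are illustrative placeholders, not measured data.

```python
# Internal-standard normalization as described for Fig. 5: SB-heme adduct
# formation is tracked as the ratio of the adduct MALDI signal (MH+ 818)
# to the co-extracted DLPC signal (MH+ 624). All intensities below are
# illustrative placeholders, not measured values from the paper.
timepoints_min = [0, 5, 10, 15]
adduct_intensity = [0.0, 120.0, 210.0, 260.0]    # a.u. at m/z 818 (illustrative)
dlpc_intensity = [500.0, 480.0, 510.0, 495.0]    # a.u. at m/z 624 (illustrative)

ratios = [a / d for a, d in zip(adduct_intensity, dlpc_intensity)]
for t, r in zip(timepoints_min, ratios):
    print(f"{t:>2} min: 818/624 ratio = {r:.3f}")
```

Because the DLPC is carried through the same extraction as the adduct, the ratio rises monotonically as the adduct accumulates even if absolute signal intensities drift between spectra.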
Preliminary 1H NMR analyses of the SB-heme adducts obtained from SB-treated rats, whose liver P450 2B1 had been induced by phenobarbital pretreatment, revealed that the majority of the adducts contained SB on the N(A) pyrrole ring of the heme (not shown). This result is consistent with the known topology of the P450 2B1 active site, wherein the N(B) and N(C) pyrrole rings are believed to be largely masked by the protein (28, 29).

HPLC Determination of the SB-Protoporphyrin Adducts Formed in the Functionally Reconstituted P450 2B1 System

HPLC analyses of the TFA/butanone extracts of the SB-inactivated and control incubation mixtures showed a peak with 415 nm absorbance and a retention time of 27-29 min that contained [14C]SB-derived radioactivity (Fig. 6). MALDI/TOF mass spectrometric analyses of this peak confirmed it to be the hydroxy-SB-protoporphyrin IX adduct, with a mass (MH+) of 817.7 Da (not shown). The electronic absorption spectrum of its zinc complex revealed the characteristic absorbances at 430, 541, 582, and 628 nm for an N-alkylated heme species. On the basis of its specific [14C]SB radioactivity, the SB-heme adduct could be quantified in the concentration range of 0.1-2 nmol with a recovery of 94.3 ± 1.8% (n = 4).

SB-mediated Heme and Protein Modification of P450 2B1 and Its Mutants

The above approach was used to quantitate the relative extent of SB-heme adduct formation and protein modification of the wild type 2B1 and its structural mutants. The wild type P450 2B1 formed 1.49 nmol of SB-heme adduct per nmol of P450 inactivated in 15 min. The SB-mediated heme N-alkylation of the T302S and V363L mutant enzymes was dramatically decreased, whereas that of the I114V mutant was slightly increased, relative to the corresponding wild type value (Table I).
On the other hand, the other structural mutants (F206L, V363A, V367A, and G478S) still retained greater than 50% of their wild type susceptibility to SB-heme N-alkylation, as revealed by their relative yields of SB-heme adducts. It is noteworthy that these values, albeit lower than that of the wild type, are equivalent to the values observed with the purified rat liver 2B1. Thus, it is unclear whether these lower values reflect poor recycling of available heme and/or poor catalytic efficiency of the enzyme, since the formation of additional SB-heme adducts from the available heme depends both on structural reassembly and on fresh inactivation cycles. Furthermore, the differences in SB-heme adduct formation appear to be dictated by the structural differences at the active site rather than by adventitious heme availability. Accordingly, the T302S membranes contained just as much heme as the wild type, if not more, whereas, although less heme was available in the V363L preparation, its heme content was comparable to that of the G478S preparation, which yielded substantially higher levels of the SB-heme adduct (Table I). When SB-induced protein modification of the wild type 2B1 and its structural mutants was examined, we found that the values for the V363L, G478S, and T302S mutants were, respectively, about 60, 50, and 30% lower than that of the wild type P450 2B1 (Table I).

FIG. 5. The time course of the SB-heme adduct formation and the loss of PROD activity during SB-mediated P450 2B1 inactivation. The formation of the SB-protoporphyrin IX was expressed as the ratio of the intensity of its MALDI mass signal (MH+: 818) to the DLPC mass signal (MH+: 624) (Fig. 4). For experimental details, see "Materials and Methods." The DLPC was extracted from the incubation along with the SB-heme adduct.
Since these mutations reside on SRS-5, SRS-6, and SRS-4, respectively, it appears that the structural intactness of each of these domains is an important determinant of SB-induced protein modification. The critical importance of the SRS-5 domain is further underscored by the finding that the SRS-5 V367A mutant was relatively less susceptible to protein modification during SB-mediated inactivation. Such poor SB modification of the V367A protein does not appear to be due to impairment of its functional capacity, since it was quite capable of concurrently supporting SB epoxidation and of incurring heme N-alkylation. Thus, the low susceptibility of this particular mutant to protein modification is mechanistically different from its resistance to chloramphenicol-mediated inactivation, wherein the observed negligible covalent chloramphenicol-protein binding was attributed to its low chloramphenicol-metabolizing capacity (15). Indeed, most likely, the poor SB-induced protein modification of V367A in the SRS-5 domain is due to structural perturbations of its active site microenvironment. It is interesting in this regard that, just 4 residues upstream from the Val-367 site on SRS-5, mutation of Val-363 to Leu, but not to Ala, also impairs SB-induced protein modification, indicating that extension of the Val residue by a -CH2 unit in the Leu residue is sufficient to interfere with SB-induced protein modification. It is noteworthy that the stoichiometry of the inactivation (moles of [14C]SB bound to heme and protein per mol of P450 inactivated) was apparently greater than 1 for the membrane-bound wild type and some mutant enzymes.
Because the heme content of the solubilized membranes from wild type P450 2B1 and its structural mutants I114V, F206L, T302S, V363A, and V367A was greater than 3 nmol per nmol of P450, whereas V363L and G478S exhibited a corresponding value of about 2 nmol of heme per nmol of P450 (Table I), it appears that this adventitious heme could replace the alkylated heme to restore the enzyme activity through sequential futile inactivation/reconstitution cycles, as long as sufficient SB and NADPH are present (31). On the other hand, in purified functionally reconstituted P450 2B1 systems, wherein no adventitious heme is present other than that of the added catalase, the moles of [14C]SB bound to heme and protein are ≈0.83 and 0.20, respectively, per mol of P450 inactivated, with the stoichiometry of the inactivation event ≈1, and a partition ratio for SB-epoxidation to P450 inactivation of the order of ≈59:1. The partition ratio for a given mechanism-based inactivator of an enzyme is a measure of its inactivating efficiency, i.e. the relative number of productive turnover cycles per inactivating event (32). The lower this ratio, the more potent the inactivator. Usually, these ratios are used to compare the relative inactivating potentials of various suicide inactivators for a given enzyme. As in the present case, they can also be exploited to gain some insight into the topological influence that individual structural mutations within the active site of the target enzyme P450 2B1 might exert on the partitioning of SB into productive (SB-epoxide) and destructive (heme N-alkylation and protein modification) pathways. On the whole, the values for individual partition ratios calculated for SB-epoxidation relative to either SB-induced heme or protein modification observed with the wild type 2B1 or each of its site-directed structural mutants showed similar quantitative trends, with some notable exceptions (Table I).
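The stoichiometry and partition-ratio arithmetic for the purified reconstituted system can be restated numerically. A minimal sketch using the values reported above (≈0.83 and 0.20 mol of [14C]SB bound to heme and protein per mol of P450 inactivated, and a partition ratio of ≈59:1):

```python
# Sketch of the stoichiometry and partition-ratio arithmetic described in
# the text, using the reported values for the purified 2B1 system.

heme_adduct_per_p450 = 0.83     # mol [14C]SB bound to heme per mol P450 inactivated
protein_adduct_per_p450 = 0.20  # mol [14C]SB bound to protein per mol inactivated
epoxide_per_p450 = 59.0         # productive turnovers per inactivating event

# Total labeling stoichiometry of one inactivation event:
stoichiometry = heme_adduct_per_p450 + protein_adduct_per_p450
print(round(stoichiometry, 2))  # ~1, as reported

# Partition ratios relative to each destructive pathway:
ratio_vs_heme = epoxide_per_p450 / heme_adduct_per_p450
ratio_vs_protein = epoxide_per_p450 / protein_adduct_per_p450
```

Expressed this way, a lower ratio means the inactivator diverts a larger share of turnovers into the destructive pathway, which is how the mutant-by-mutant comparisons in Table I are read.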
That is, such analyses revealed that the T302S mutant largely favored productive SB metabolism over the destructive pathway, irrespective of whether the ratio is expressed on the basis of SB-epoxide formed to heme- or protein-modified enzyme. The reverse appears to be true of the V363A mutant, which largely favored destructive events over productive metabolism. The salient exception is the SB partitioning catalyzed by the V367A mutant, which apparently is quite competent in SB-epoxidation and heme N-alkylation but not protein modification. This, to a much lesser extent, is also true of the G478S mutant, indicating the importance of these SRS-5 and SRS-6 regions in dictating the specific course of the inactivation event. A very comparable profile emerged when the partition ratios were expressed on the basis of epoxide formed to the cumulative destruction events (heme and protein modification) (Table I). However, the ratio of the V367A mutant so expressed provided no clue to its distinctive recalcitrance to protein modification (Table I). SB Docking into the Active Site of the P450 2B1 Model-To assist us in a reasonable interpretation of the above findings, we docked SB in a refined molecular model of P450 2B1 constructed using a consensus strategy and based on the crystallographic structures of P450s 101, 102, and 108 (16). That model has been shown to be consistent with the results from site-directed mutagenesis of cytochromes P450 2B and provided plausible explanations for alterations in regio- and stereospecificity of steroid hydroxylation in various mutants of P450 2B1 (16).
Molecular modeling of SB in the active site of this P450 2B1 model revealed that, of the several nucleophilic amino acid residues in the SB-modified helix I peptide fragment (residues 277-323), possibly four could be singled out as potential candidates for SB-mediated alkylation, on the basis of their relative distances to the SB bound in the active site, their side-chain orientation within the active site, and the shielding effect of other residues. Of these, Thr-302 was the best candidate, with Ser-294, Thr-305, and Thr-306 plausible, but less likely. The likelihood of Thr-302 as the target residue is further strengthened by our finding that HPLC-peptide mapping of pepsin digests of [14C]SB-inactivated 2B1 (which would reduce the size of the [14C]SB-modified peptide) yielded a 14C-labeled peptide whose MALDI-MS analyses gave a mass of 835.9, consistent with that of an SB-adduct of peptide G299TETSS304. This finding thus confines the site of modification to the Thr-302-containing hexapeptide domain of the previously identified, 46-residue-long, [14C]SB-modified Lys-C peptide 277-323. Simulations with either the S or R isomer of SB gave similar results, indicating that the SB stereochemistry did not influence its binding, a result expected from the high flexibility of the SB side chain. Furthermore, both si and re orientations were possible, but the si orientation seemed to be energetically more favorable, considering the docking energy and the potential energy of the substrate. Docked in this orientation, SB can directly interact with Thr-302, Val-363, Val-367, and Gly-478, whereas residues Ile-114 and Phe-206 are in the vicinity of the docked substrate, but not sufficiently close for direct interaction (Fig. 7). The first four residues are located in regions that are highly conserved in the four known crystal structures, while residues 114 and 206 are in regions less structurally conserved.
However, in P450 2B enzymes, all six positions were shown to control steroid hydroxylation (15, 16). In the case of SB, modeling results are consistent with the data indicating that mutation of residues 302, 363, 367, and 478 affected epoxidation and/or partitioning of the reactive intermediate(s) (Table I). Interestingly, in contrast to the other mutants, the T302S mutant exhibited a marked increase in SB epoxidation, a result that may be rationalized by its ability to enable SB to assume another binding orientation with the re face toward the heme. This orientation would not be possible with Thr at position 302, since its methyl group would create van der Waals overlaps with SB. Structural characterization of the SB-modified heme adducts isolated from SB-treated rats indicated that the major component is the N A isomer. Docking of the oxidized SB cation radical above heme ring A to enable its terminal olefinic carbon to bond with pyrrolic N A revealed that this SB product interacted closely with Thr-302 and Val-363 (Fig. 8). The decrease in heme adduct formation with the T302S mutant could be related to the increased mobility of the SB product in the active site pocket. It is likely that the side chain methyl group of Thr helps to stabilize the SB intermediate in an orientation that allows heme-adduct formation. A similar decrease with the V363L mutant, on the other hand, appears to be due to van der Waals overlaps with the oxidized SB product, a result of the one-CH2-unit greater length of the Leu residue side chain. Conclusions-The above findings reveal that specific active site residues located in putative SRS domains may play important roles in dictating the trajectory of SB metabolism during its mechanism-based inactivation of P450 2B1. Accordingly, the SRS-1 domain does not appear to play a significant role, given that critical mutations in this region failed to appreciably affect any of the SB metabolic parameters examined.
Active site residue Thr-302 on SRS-4 is a critical determinant of the SB partitioning into both productive SB-epoxide formation and enzyme inactivation. On the other hand, Val-367 on SRS-5 apparently is a critical determinant of 2B1 protein modification, since its mutation to Ala drastically reduces this component without altering either SB epoxidation or heme N-alkylation. Finally, Val-363, also on SRS-5, appears to critically control both epoxidation and heme N-alkylation. Replacement of this residue with the one-CH2-unit-shorter Ala markedly reduces SB epoxidation but not heme N-alkylation or protein modification, whereas its replacement with the one-CH2-unit-longer Leu appears to suppress its catalytic activity. This is implied by the finding that the mutant shows decreases in all three parameters examined. Thus, the lower heme N-alkylation of the V363L mutant can be attributed to its lower SB-metabolizing capacity as well as to van der Waals overlaps imposed by the larger Leu residue. The lower heme N-alkylation of the T302S mutant, on the other hand, appears to be due not only to a more efficient conversion of the SB intermediate to the epoxide from both the re and si orientations, but also to the increased mobility of the oxidized SB product, features enabled by the Ser substitution that eliminates the steric constraints normally posed by the -CH3 group of Thr-302.
Myeloid Cell Crosstalk Regulates the Efficacy of the DNA/ALVAC/gp120 HIV Vaccine Candidate

Vaccination with DNA-SIV + ALVAC-SIV + gp120 alum results in inflammasome activation, high levels of IL-1β production, emergency myelopoiesis, and the egress of CXCR4+ CD14+ pre-monocytes from bone marrow. Previously we have shown that this vaccine-induced innate monocyte memory is associated with decreased risk of SIVmac251 acquisition. Because IL-1β also promotes the propagation of monocytic myeloid-derived suppressor (M-MDSC)-like cells, here we extended our analysis to this negative regulator subset, characterizing its levels and functions in macaques. Interestingly, we found that the DNA prime engages M-MDSC-like cells and that their levels are positively associated with the frequency of CD14+ classical monocytes and negatively with the levels of CD16+ monocytes, correlates of decreased and increased risk of SIV acquisition, respectively. Accordingly, M-MDSC frequency, arginase activity, and NO were all associated with decreased CD8+ T cell responses and a worse vaccination outcome. DNA vaccination thus induces innate immunity by engaging three subsets of myeloid cells, M-MDSCs, CD14+ innate monocyte memory, and CD16+ monocytes, each playing a different role in protection. The full characterization of the immunological space created by myeloid cell crosstalk will likely provide clues to improve the efficacy of HIV vaccine candidates.

INTRODUCTION

Immature myeloid cells with a potent inhibitory effect on immunity, including granulocytes, macrophages, and dendritic cells, have been described in humans, macaques, and mice. In recent years, myeloid-derived suppressor cells (MDSCs) have emerged as a major immunosuppressive non-lymphoid population, often linked to immune evasion and unfavorable disease outcome in tumors and infections including HIV (1, 2).
MDSCs are a highly heterogeneous population that includes cells morphologically and phenotypically similar to monocytes (monocytic M-MDSCs) and neutrophils (polymorphonuclear PMN-MDSCs) (3). While the nomenclature and phenotypes used to categorize these cell populations vary, human MDSCs are generally defined as cells negative for the expression of MHC class II HLA-DR and positive for CD33 and CD11b expression. The CD14 and CD15 phenotypic markers are, respectively, used to differentiate between MDSCs derived from monocytes or neutrophils (4). MDSCs regulate the homeostasis of inflammatory processes (5) and accumulate during unresolved inflammation (6). It is currently unknown whether MDSCs are immature myeloid precursors whose differentiation is blocked during emergency myelopoiesis, or whether they are the product of monocyte and neutrophil reprogramming following TLR signaling and cytokine stimulation (7). The induction of MDSCs is thought to require a combination of long-lasting antigen presentation and strong signals such as the growth factors GM-CSF and G-CSF and other cytokines including IFN-γ, IL-1β, IL-4, IL-6, IL-13, and TNF-α (8)(9)(10)(11). The best-known transcription factor regulating MDSC expansion and activity is the signal transducer and activator of transcription 3 (STAT3). STAT3 promotes MDSC survival and blocks their differentiation into mature myeloid cells (12, 13). MDSCs use a variety of immunosuppressive mechanisms in which the metabolism of the conditionally essential amino acid L-arginine (L-arg) plays a central role. L-arginine can be metabolized by arginase (ARG1 and ARG2), whose expression is controlled by STAT3 (14), and by nitric-oxide synthase 2 (NOS2/iNOS). Both ARG and NOS compete for L-arginine and generate either urea, or citrulline and nitric oxide (NO), respectively (15). In turn, the depletion of extracellular L-arginine and urea production affect the function of the CD3 TCR zeta chain (16).
Nitric oxide is one of the most versatile components of the immune system, and numerous immune cells produce and respond to NO (17). NO increases MDSC recruitment to inflammatory sites, inhibits cell proliferation by nitrosylation of receptors, promotes T cell death, and, in the presence of IL-1β, IL-6, IL-23, and TGF-β, favors the development of IL-17-producing CD4+ T helper cells (Th17) and T regulatory cells (Tregs) (18, 19). In addition, MDSCs mediate immunosuppression through reactive oxygen species (ROS) and other mediators such as IL-4 receptor α (IL-4Rα), programmed death-ligand 1 (PD-L1), interleukin-10 (IL-10), tumor growth factor-β (TGF-β), and phosphorylated STAT3 (14, 20). While the role of MDSCs in the modulation of T cell responses has been extensively studied, their role in B cell suppression remains poorly understood. Studies have shown MDSCs to both directly regulate B lymphopoiesis (21) and indirectly modulate B cells by generating B regulatory cells (Bregs) (22). Less is known of the role that vaccination plays in inducing MDSCs, or of the effect these cells have on protection. Two recent studies in macaques have shown that MDSCs are induced by influenza and HIV vaccines. Indeed, an mRNA vaccine encoding influenza hemagglutinin administered to macaques induced both suppressive M-MDSCs (HLA-DR− CD14+ cells) and non-suppressive myeloid cells in blood and at the injection site (31). Moreover, a peptide-prime/modified vaccinia Ankara (MVA) boost vaccine regimen induced MDSC-like cells (CD33+ CD11b+ CD14+ DRlow cells) whose levels were associated with set-point viral load, suggesting a negative role for M-MDSCs in protection against high viral replication (26). We previously demonstrated that innate monocyte memory mediated by classical monocytes (HLA-DR+ CD14+ CD16− cells) is central to the protection elicited by a DNA-SIV + ALVAC-SIV + gp120 alum vaccine administered in macaques (32).
While the levels of vaccine-induced classical monocytes and NLRP3 inflammasome activation correlated with reduced risk of SIVmac251 acquisition (protective), CD16+ monocytes and STAT3 were correlates of increased risk of SIV acquisition (harmful). Given that STAT3 and IL-1β both promote MDSC accumulation, we studied the kinetics and function of this immunosuppressive subset and its role in protection in macaques vaccinated with the DNA-prime + ALVAC + gp120 boost strategy. Due to the considerable diversity of phenotypic markers used to define human MDSCs (33), we extended the characterization of these cells to include HLA-DR− CD14+ monocytes in addition to the canonical CD33+ CD11b+ HLA-DR− CD14+ cell subset. Indeed, circulating monocytes expressing the monocytic CD14+ marker but lacking expression of the MHC class II cell surface receptor HLA-DR have also been identified as major mediators of tumor-induced immunosuppression (13, 34). Our results demonstrate that the DNA-SIV + ALVAC-SIV + gp120 alum regimen increases the levels of M-MDSC-like cells (HLA-DR− CD14+ cells) that are associated with an increased risk of SIVmac251 acquisition. The frequency of MDSCs and their transcriptome were associated with a reduction of interferon-stimulated genes (ISGs) and T and B cell pathways. Moreover, we found that an increase in arginase activity was inversely associated with protective classical monocytes and NLRP3. Arginase activity was instead positively associated with harmful CD16+ monocytes and, in turn, with a decrease in gag-specific IFN-γ+ and TNF-α+ CD8+ T cell responses and an increased risk of SIVmac251 acquisition. These results unravel complex mechanisms of vaccine-induced protective immunity through the crosstalk between activating and suppressive myeloid cells.

Animal Study and Challenge

The study was conducted as previously described (32).
All animals used in this study were colony-bred rhesus macaques (Macaca mulatta) provided by Covance Research Products. Monkeys were housed and handled in accordance with the standards of the Association for Assessment and Accreditation of Laboratory Animal Care International, and the care and use of the animals complied with all relevant institutional (U.S. National Institutes of Health) guidelines. The protocol (AUP 491) was approved by the Advanced BioScience Laboratories Institutional Animal Care and Use Committee. Twelve juvenile macaques were immunized intramuscularly twice with DNA-SIV at weeks 0 and 4 (Figure 1A) as previously described (35). Each immunization contained a total of 6 mg of DNA in 1.5 ml PBS. DNA-primed animals were given the following DNA constructs: 206S SIV p57gagmac239 (1 mg); 209S MCP3-p39gagmac239 (1 mg); 221S SIVmacM766 gp160 (2 mg); 103S LAMP-Polmac239 (2 mg). At weeks 12 and 24, all macaques were boosted with intramuscular inoculations of 10^8 p.f.u. of ALVAC recombinants (vCP2432), expressing SIVmac251 gag-pro and gp120TM (Sanofi Pasteur), and with 200 µg each of SIVmac251-M766 and SIVsmE660-CG7V gp120-gD proteins adjuvanted in alum (Alhydrogel, InvivoGen), as previously described (36). The proteins were administered intramuscularly in the thigh opposite the ALVAC injection site. In addition to the 12 vaccinated animals, 6 concurrent control animals were treated with the alum adjuvant at weeks 12 and 24. Four weeks after the last immunization (week 28), the 12 immunized macaques and 6 control animals were challenged intrarectally with 10 repeated low doses of pathogenic SIVmac251 (120 TCID50, 50% tissue culture infective dose) once a week. Thirty-five non-contemporaneous controls, challenged with the same virus stock in the same facility and following the same procedures, were added to the 6 concurrent controls as previously described (32).
The time of acquisition was identified as the number of exposures to SIVmac251 prior to the detection of SIV-RNA in plasma.

Measurement of SIV Viral DNA in Rectal Tissue

SIVmac251 DNA was quantified in mucosal tissues collected 2-3 weeks after viral infection. Genomic DNAs were isolated from tissues, and the absolute quantitation of pro-viral DNA load was assessed by a real-time qPCR assay with a sensitivity of 10 copies per 10^6 cells, as previously described (37).

Kynurenine and Tryptophan Plasma Levels

Tryptophan and kynurenine plasma concentrations were measured using the Tryptophan ELISA (Rocky Mountain Diagnostics, Colorado Springs, CO, USA, Catalog #BA E-2700) and Kynurenine ELISA (Rocky Mountain Diagnostics, Colorado Springs, CO, USA, Catalog #BA E-2200) commercial kits. For tryptophan measurement, 20 µl of plasma were precipitated, the recovered supernatants were derivatized, and the product was used to perform the ELISA according to the manufacturer's instructions. For the kynurenine assay, 10 µl of plasma were acylated and used to perform the ELISA according to the manufacturer's instructions. The data are presented as the ratio between kynurenine and tryptophan (Kyn/Trp) levels.

Arginase Activity

Arginase activity was analyzed in plasma using the Arginase Activity Assay Kit (MAK112, Sigma-Aldrich, St. Louis, MO) following the manufacturer's instructions. Briefly, samples were thawed on ice and, in order to deplete the urea, 50 µl of plasma were loaded in an Amicon Ultra 10K centrifugal filter (UFC501096, EMD Millipore), diluted with pure water to 500 µl, and centrifuged at 13,000 × g for 30 min at 4 °C. Following centrifugation, the eluted solution was discarded. Filtered samples were then diluted with pure water to 500 µl and centrifuged at 13,000 × g for 30 min at 4 °C. At the end of centrifugation, the remaining volume of each sample was measured, and ultra-pure water was added to reach a final volume of 40 µl.
Each sample was loaded into 2 wells of a 96-well plate (20 µl/well), representing the sample well and the sample blank well, and 20 µl/well of ultra-pure water were added to each well. Together with the samples, the plate was loaded with urea standard and water as positive and negative controls, respectively. Samples were loaded in singlicate, whereas controls were loaded in duplicate. Ten microliters of 5X substrate buffer, composed of Arginine Buffer and Mn Solution, were added to all wells except the sample blank wells, and the plate was incubated for 120 min at 37 °C.

Reactive Oxygen Species (ROS) and Reactive Nitrogen Species (RNS) Analysis

Total free radical content was analyzed in plasma and mucosal cell supernatants using the OxiSelect in vitro ROS/RNS Assay Kit (Cell Biolabs, Inc., San Diego, CA, USA, Catalog #STA-347) following the manufacturer's instructions. Briefly, cryopreserved samples were thawed on ice, and insoluble particles were removed by centrifugation at 10,000 × g for 5 min. Following this, 50 µl of standards, plasma diluted 1:5 with PBS, or undiluted mucosal cell supernatants were loaded singly into a 96-well plate suitable for fluorescence measurement. Fifty microliters of Catalyst were then added to each well and incubated for 5 min at room temperature, followed by 100 µl of dichlorodihydrofluorescein (DCFH) solution. The plates were incubated at room temperature for 30 min in the dark, and the fluorescence was read using a plate reader at 480 nm excitation/530 nm emission (VictorX4, Perkin Elmer, Inc., Waltham, MA, USA). The ROS/RNS content of each sample was determined by interpolation of unknown samples against a standard curve generated with hydrogen peroxide. For plasma samples, the standard curve was generated by diluting the standards with PBS. For mucosal cell supernatants, the standard curve was generated by diluting the standards with R10 media.
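The interpolation against the hydrogen peroxide standard curve amounts to fitting a line to the standards and inverting it for the unknowns. A minimal sketch with made-up standard values (the real curve comes from the kit standards):

```python
# Sketch of standard-curve interpolation: fit a line to fluorescence
# readings of H2O2 standards, then invert it to estimate the ROS/RNS
# content of unknown wells. The standard values below are invented and
# perfectly linear for the sake of the example.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a standard curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

def interpolate(fluorescence, slope, intercept):
    """Invert the curve: the concentration giving this fluorescence."""
    return (fluorescence - intercept) / slope

conc = [0, 1, 5, 10, 20]            # hypothetical H2O2 standards (uM)
rfu = [50, 150, 550, 1050, 2050]    # fluorescence (a.u.), y = 100x + 50
m, b = fit_line(conc, rfu)
print(round(interpolate(1550, m, b), 1))  # -> 15.0
```

In practice separate curves are fit for plasma (PBS diluent) and mucosal supernatants (R10 diluent), since the matrix shifts the baseline.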
Gene Expression Array Analysis

Twelve macaques vaccinated with DNA prime and ALVAC/gp120 alum boost were included in a gene expression profiling study. PreAnalytiX tubes (#762165) were used to collect 2.5 ml of whole blood from these animals at 24 h and 2 weeks after the 1st boost or 1 week after the 2nd boost. PAXgene tubes were gently rocked for 2 h and then stored at −80 °C. Total RNA was extracted using the Agencourt RNAdvance Blood Kit (Beckman Coulter #A35604). The isolated total RNA was checked for quantity and quality using a NanoDrop 2000c (Thermo Fisher Scientific) and an automated electrophoresis system (Experion, Bio-Rad). Samples with an RQI classification ≥7.0 were selected to proceed downstream to amplification. Samples were normalized at 50 ng of input and amplified using Illumina TotalPrep RNA amplification kits (Ambion) according to the manufacturer's protocol. Microarray analysis was conducted using biotinylated cRNA hybridized to Human HT-12 version 4 BeadChips (Illumina). The arrays were scanned using iSCAN (Illumina) and quantified using GenomeStudio (Illumina). Analysis of the GenomeStudio output data was conducted using R/Bioconductor software packages. Bead arrays were read, and missing values (>0.01%) were imputed using the nearest-neighbor method as implemented in the R package impute. Quantile normalization and log2 transformation for variance stabilization were then applied to the raw intensities. For each gene, a linear regression model with the number of SIV challenges to infection as the independent variable and gene expression as the dependent variable was fit using the R package LIMMA. A moderated t-test was used to test whether the coefficient of regression was statistically different from 0. The Benjamini-Hochberg method was used to correct the P-values for multiple testing (adjusted P-values). Genes with an adjusted P-value below 5% were defined as differentially expressed genes.
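The Benjamini-Hochberg adjustment applied to the per-gene P-values can be illustrated in a few lines. A plain-Python sketch of the procedure (the actual analysis used the R/LIMMA pipeline described above):

```python
# Plain-Python sketch of the Benjamini-Hochberg false-discovery-rate
# adjustment: sort P-values, scale each by n/rank, and enforce
# monotonicity from the largest P-value downward.

def benjamini_hochberg(pvals):
    """Return BH-adjusted P-values in the original input order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    prev = 1.0
    # Walk from the largest P-value down, keeping the running minimum
    # so adjusted values never decrease with increasing rank.
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * n / rank)
        adjusted[i] = prev
    return adjusted

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.20]))
```

With the 5% threshold used above, a gene is called differentially expressed when its adjusted value falls below 0.05.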
GSEA was used to evaluate the gene sets (pathways) associated with the number of SIV challenges to infection and with the frequency of HLA-DR− CD14+ cells measured at week 6. In GSEA, the most variable probes across samples were used to remove redundant probes annotated to the same gene. Genes were pre-ranked by the LIMMA t statistic, and GSEA was used to assess the enrichment of gene sets from the Molecular Signatures Database (version 5.1) and transcriptomic markers of MDSCs (38). The GSEA Java desktop program was downloaded from the Broad Institute (http://www.broadinstitute.org/gsea/index.jsp) and used with GSEA Pre-Ranked module parameters (number of permutations: 1,000; enrichment statistic: weighted; seed for permutation: 111; 15 ≤ gene set size ≤ 2,000). Sample-level enrichment analysis was used to investigate the enrichment of pathways in the different samples. Briefly, the expression of all the genes in a specific pathway was averaged across samples and compared to the average expression of 1,000 randomly generated gene sets of the same size. The resulting Z score was then used to reflect the overall perturbation of a pathway in a sample.

Network Analysis

GeneMANIA version 3.5.1 was used to identify relations (co-expression, co-localization, genetic interactions, and physical interactions) between MDSC transcriptomic markers. To that end, the human orthologs and homologs of the macaque genes included in the classifier were obtained from the NCBI Gene and HomoloGene portals. The human homologs were then imported into GeneMANIA, and a network was generated with default parameters (equal weight of networks), except that no (0) inferred nodes were used to consolidate the network.

Statistical Analysis

The Mann-Whitney-Wilcoxon test was used to compare continuous factors between two groups. Correlation analysis was performed using the Spearman rank correlation method with exact permutation P-values.
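The sample-level enrichment analysis described above reduces to a Z score of the pathway mean against randomly drawn gene sets of the same size. A minimal sketch on simulated data (gene names, values, and the toy expression matrix are invented for illustration):

```python
# Sketch of a sample-level enrichment Z score: compare a pathway's mean
# expression in one sample to the means of randomly drawn gene sets of
# the same size, as described in the text (which used 1,000 random sets).
import random
import statistics

def slea_z(expr, pathway_genes, n_random=1000, seed=111):
    """Z score of the pathway mean vs. random same-size gene sets.
    expr: dict mapping gene -> expression value for one sample."""
    rng = random.Random(seed)
    genes = list(expr)
    k = len(pathway_genes)
    observed = statistics.mean(expr[g] for g in pathway_genes)
    null = [statistics.mean(expr[g] for g in rng.sample(genes, k))
            for _ in range(n_random)]
    return (observed - statistics.mean(null)) / statistics.stdev(null)

# Toy sample: 200 background genes with small values and a 10-gene
# "pathway" set to a clearly higher expression level.
expr = {f"g{i}": (i % 7) * 0.1 for i in range(200)}
pathway = [f"p{i}" for i in range(10)]
expr.update({g: 5.0 for g in pathway})
z = slea_z(expr, pathway)
print(z > 2)
```

A strongly positive Z score marks a pathway as perturbed upward in that sample relative to chance.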
Multiple-comparison analyses were performed to include all the time points analyzed, using the Benjamini-Hochberg correction, or Tukey's multiple-comparison test when no association between the frequency of these cells was found at different timepoints.

RESULTS

The DNA-SIV Prime Induces HLA-DR− CD14+ Cells That Correlate With an Increased Risk of SIVmac251 Acquisition

CD14+ cells with low or absent HLA-DR expression have been linked to suppressive monocytic function (34), and they have recently been characterized as myeloid-derived suppressor cells in rhesus macaques (31). The DNA-prime ALVAC + gp120 alum boost strategy demonstrated a significant 52% vaccine efficacy in protecting macaques against SIVmac251 (32). Here we assessed the kinetics of monocytic MDSCs and their role in this protection. Blood was collected pre-vaccination, 2 weeks after the prime (2xDNA, week 6), and after each immunization with ALVAC + gp120 alum (boosts at weeks 13 and 25; Figure 1A). Circulating monocytic MDSCs were identified as live HLA-DR− CD14+ cells that were negative for the CD3 and CD20 molecules (lineage). Although conflicting reports have arisen on the validity of including CD33 as a marker for macaque MDSCs (31), we also took into consideration the CD33+ CD11b+ HLA-DR− CD14+ cell population (referred to as M-MDSCs). The gating strategy used to identify M-MDSCs and HLA-DR− CD14+ cells in the blood of a non-vaccinated animal is shown in Figure 1B. Both identified subsets were highly positive for the CCR2 marker, in line with phenotypic markers used to define MDSCs in humans (Figure 1C). We could not detect significant changes in the levels of circulating CD33+ CD11b+ HLA-DR− CD14+ cells during the course of immunization, possibly due to the high variability observed in this subset in addition to the relatively small number of animals in this group (Supplementary Figure 1A).
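The Spearman rank correlation with exact permutation P-values used throughout these analyses can be sketched in pure Python; for the small group sizes here, all orderings can be enumerated (the example data are invented):

```python
# Sketch of a Spearman rank correlation with an exact two-sided
# permutation P-value: rank both variables, take the Pearson correlation
# of the ranks, and count permutations with at least as extreme a value.
import itertools

def rank(v):
    """1-based ranks, with ties assigned their average rank."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman_exact(x, y):
    """Spearman rho and exact two-sided permutation P-value.
    Enumerates all n! orderings, so only feasible for small n."""
    rx, ry = rank(x), rank(y)
    rho = pearson(rx, ry)
    hits = total = 0
    for perm in itertools.permutations(ry):
        total += 1
        if abs(pearson(rx, list(perm))) >= abs(rho) - 1e-12:
            hits += 1
    return rho, hits / total

rho, p = spearman_exact([1, 2, 3, 4, 5], [10, 20, 30, 40, 50])
```

For a perfectly monotone pair of length 5, only the identity and the full reversal of the 120 orderings reach |rho| = 1, giving P = 2/120.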
Interestingly, the frequency of HLA-DR − CD14 + cells in blood was significantly increased by the DNA-prime (baseline vs. week 6: P = 0.0123, one-way ANOVA, Tukey's multiple comparisons), while no differences were detected between the frequencies pre-vaccination and after the ALVAC + gp120 boosts (Figure 1D). Of note, there was no association between the frequency of these cells at different timepoints. Strikingly, we observed a significant association between the frequency of HLA-DR − CD14 + cells after the DNA prime (week 6) and the number of challenges to infection (P = 0.0006, R = −0.816, Spearman test; Figure 1E). Significance was retained when the P value was adjusted for the 4 time points analyzed (Benjamini-Hochberg test, P = 0.0160). Total blood was collected for microarray analysis before and at 24 h, 1, or 2 weeks after the first boost (week 12 + 24 h, and weeks 13 and 14), and at 24 h and 1 and 2 weeks after the second boost (week 24 + 24 h, and weeks 24 and 25) with the ALVAC-SIV + gp120-alum (Figure 1A). Transcriptomic signatures of vaccine-induced immune responses were identified as changes in gene expression after the vaccination compared to the pre-vaccination timepoint. To determine whether our vaccine induced MDSC-associated genes, gene set enrichment analysis (GSEA) was used to compare vaccine-induced genes to an MDSC-associated gene set previously identified by Heim et al. (38) (Supplementary Figure 1B). Transcriptomic markers of MDSCs were significantly induced at 24 h after each boost with ALVAC + gp120, as shown in Supplementary Table 1. Vaccine-induced MDSC markers included PTGS2, the gene coding for the enzyme cyclooxygenase 2 (COX2). Of note, the vaccine-induced transcriptomic markers of MDSCs measured at 2 weeks following the 1st boost (week 14) were positively associated with the frequency of HLA-DR − CD14 + cells (P = 0.008, R = 0.806; adjusted P = 0.19; Figure 1F).
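The Benjamini-Hochberg adjustment applied across the four time points can be sketched generically (an illustrative implementation, not the authors' code):

```python
def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values (FDR), preserving input order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        val = min(prev, pvals[i] * n / rank)
        adjusted[i] = val
        prev = val
    return adjusted
```

For instance, four raw p-values measured at four time points are each multiplied by n/rank and then made monotone from the largest rank down, which is why a raw P can survive adjustment when few comparisons are involved.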
In addition to the enrichment of MDSC transcriptomic markers among genes associated with SIV mac251 acquisition, the average expression of MDSC transcriptomic markers measured after each boost was significantly negatively correlated with the number of challenges (week 14: P < 0.001, R = −0.68; week 25: P = 0.025, R = −0.37 by the Spearman test, and P = 0.049 when the Benjamini-Hochberg correction is applied; Figure 1G). CD33 and cyclooxygenase-2 (COX-2, encoded by PTGS2) were among the MDSC genes associated with an increased risk of SIV mac251 acquisition after the 1st boost (week 14), as shown in the network analysis (Figure 1H) and in Supplementary Table 2. Together with PGE 2 , the expression of COX-2 may represent a critical step for redirecting dendritic cell development toward functionally stable MDSCs (39). Indeed, PGE 2 together with the MDSC-inducing factors IL-1β and IFNγ induces high levels of COX-2 in differentiating MDSCs and stabilizes their suppressive functions (39). The association between transcriptomic markers of MDSCs induced by vaccination and the number of SIV challenges was then assessed (Supplementary Table 2). All together, these results suggest that the DNA-SIV-induced HLA-DR − CD14 + cell population may be enriched with M-MDSCs. Thus, we will refer to this population as M-MDSC-like cells. In line with this observation, the frequency of M-MDSC-like cells was also positively correlated with activation of the STAT3 signaling pathway at 24 h after the first boost (week 12 + 24 h).

Plasma Arginase Level Is Associated With an Increased Risk of SIV mac251 Acquisition

Arginine metabolism plays a central role in the regulation of immune cell function (40). MDSCs expressing arginase and an increase in arginase activity have been described in trauma, cancer, and certain infections (41). We measured arginase activity in the plasma of non-vaccinated macaques and of macaques vaccinated after the first ALVAC + gp120-alum boost (week 13).
Arginase activity levels were increased in some animals, though the overall increase was not significant (Figure 2A). However, changes in arginase activity levels after the first boost (week 13 levels minus pre-vaccination levels) were negatively associated with the frequency of classical monocytes and with the NLRP3 inflammasome pathway, two previously identified correlates of HIV vaccine protection (32) (arginase vs. classical monocytes: P = 0.0012, R = −0.838; arginase vs. NLRP3 expression: P = 0.0027; R = −0.802; Figures 2B,C). On the contrary, vaccine-induced arginase activity was positively associated with the frequency of CD16 + monocytes, a previously identified correlate of increased risk of SIV mac251 acquisition (32) (arginase vs. CD16 + monocytes: P = 0.0012, R = 0.838; Figure 2D). Accordingly, arginase activity levels were also associated with an increased risk of SIV mac251 acquisition (Spearman test: P = 0.019; R = −0.67; Figure 2E). In MDSCs, L-arginine is metabolized by two enzymes: a cytoplasmic arginase I (ARG1), and a mitochondrial arginase II (ARG2) that is widely expressed and associated with control of NO production (41). While no probe matched the arginase genes for rhesus macaques on the microarray platform, probes annotated to human ARG2 (huARG2) showed a negative association with acquisition (Figure 2F) in vaccinated animals at week 14 (P = 0.078, R = 0.658, Benjamini-Hochberg correction) and an association with arginase levels at week 13 (P = 0.014; R = 0.68; Figure 2G), but these did not withstand correction for multiple comparisons when all 4 time points analyzed were included (P = 0.693).

Nitric Oxide-Related Genes Correlate With Plasma Arginase Level and ARG2 Expression

The regulation of arginine availability is a mechanism that can potentially lead to the control of NO production (42). Indeed, through arginine depletion, MDSCs may control NO production and regulate other arginine-dependent biological processes.
We attempted to measure NO and intracellular iNOS expression levels by ELISA and FACS analysis, respectively, but we were unable to find antibodies that cross-react with rhesus macaques. Hence, we performed pathway enrichment analysis on total blood collected at 24 h and 1 or 2 weeks after the two ALVAC boosts (weeks 12, 14, 24, and 25; Figure 3A). Genes implicated in the synthesis and signaling pathways of nitric oxide were associated with SIV mac251 acquisition (Supplementary Table 3; GSEA: nominal P-value ≤0.05) at 4 time points: at 24 h and 1 or 2 weeks after the 3rd and 4th immunization (Figure 3B); however, significance was not retained when all 4 time points were considered. NO-related genes associated with SIV acquisition include NOS1AP (the adaptor protein of the NO synthase), NOSTRIN and AGTR2 (coding for inhibitors of endothelial NO synthase), and AKT1 (coding for a kinase regulated by NO). Interestingly, NO-related genes induced by vaccination at week 14 (2 weeks after the 1st ALVAC + gp120 alum boost) were positively associated with the changes in plasma arginase levels following the same immunization (week 13) (P < 0.001, R = 0.85). NO biosynthesis may be at least partially regulated by Arg2 (43), and the NO pathway was also significantly associated with huARG2 expression (Figure 3D). Altogether, these results suggest that vaccine-induced changes in NO-related genes and plasma arginase levels affected protection against SIV acquisition.

HLA-DR − CD14 + Cells Are Associated With Decreased Expression of Interferon-Stimulated Genes and T Cell Pathways

Because MDSCs have been implicated in the suppression of interferon-stimulated genes (ISGs) and adaptive immune responses, we first looked for possible associations between their levels and specific T cell responses (Supplementary Table 3).
Transcriptomic analysis revealed a negative correlation between the level of M-MDSC-like cells measured at the prime and the log2 fold change of ISGs (Figure 4A). Following the second boost with ALVAC + gp120 alum, the expression of ISGs correlated with a decreased risk of SIV mac251 acquisition (P = 0.0136, R = 0.69; Figure 4A) and was negatively associated with the frequency of HLA-DR − CD14 + cells at the prime (P = 0.0394, R = −0.673, sample-level enrichment analysis (SLEA) method; Figure 4B). These genes included the kinases JAK1 and JAK2, the transcription factor STAT1, and the suppressor of cytokine signaling SOCS1 (shown in the network analysis in Figure 1H), which inhibits receptor signaling by directly inhibiting both JAK kinases and cytokine receptors (44,45). T cell pathways induced by vaccination were also found to be associated with protection, defined as an increased number of challenges to infection (24 h after the 1st boost: P = 0.0007, R = 0.836; 2nd boost: P = 0.0143, R = 0.683; Figure 4C). At the same time, T cell pathways measured at 24 h after the 2nd boost were negatively associated with the frequency of HLA-DR − CD14 + cells at the prime (P = 0.0005, R = −0.915; Figure 4D). Of note, the NO pathway had a significant negative correlation with the same T cell activation pathways at 24 h after the 1st and 2nd boosts (Figure 4E; P = 0.00412, R = −0.78).

HLA-DR − CD14 + Cells Are Associated With Decreased Expression of B Cell Pathways

We then asked whether an association could be found with B cell pathways (Supplementary Table 3; Figures 5A,B). These results suggest a harmful long-term effect of the prime, mediated by monocytic myeloid suppressive cells, that decreases vaccine-induced protection. The NO pathway also had a significant negative correlation with the B cell activation pathway at 24 h after the 1st and 2nd boosts (Figure 5C; P = 0.022, R = −0.66).
Arginase and ROS Levels Correlate With Reduced SIV-Specific CD8 + T Cell Responses

Recent research showed that MDSCs could inhibit HIV-specific CD8 + T cell responses in macaques vaccinated with an MVA-based HIV vaccine strategy (24). Priming with DNA-SIV resulted in low, but detectable, Envelope- and Gag-specific CD8 + T cells producing IFN-γ, IL-2, and TNF-α, measured in blood at week 6 by intracellular staining (Figures 6A,B). We did not find a direct correlation between the frequency of these cytokine-producing T cells and the levels of MDSCs or M-MDSC-like cells at any timepoint during vaccination (Supplementary Table 3). However, IFN-γ + and TNF-α + CD8 + T cell responses to Gag associated negatively with plasma arginase activity at the same time point (week 6; IFN-γ: P = 0.025, R = −0.79; TNF-α: P = 0.011, R = −0.87; Figures 6C,D). Moreover, the levels of reactive oxygen species (ROS) and reactive nitrogen species (RNS) did not associate with the levels of the HLA-DR − CD14 + cell population (data not shown); however, they associated with an increased frequency of the CD33 + CD11b + HLA-DR − CD14 + cell subset (P = 0.037, R = 0.67; Figure 6E). In turn, the latter subset associated with reduced levels of TNF-α + CD8 + T cell responses to Gag at the end of the immunization regimen (week 27; P = 0.034, R = −0.82; Figure 6F). MDSCs can activate T regulatory cells that dampen T cell responses via catabolism of the essential amino acid tryptophan (Tryp) and accumulation of the kynurenine (Kyn) metabolite. The Kyn/Tryp ratio measured in the plasma of macaques vaccinated with the DNA and ALVAC + gp120 alum regimens had no association with suppressive myeloid cells, nor with SIV-specific T cell responses or viral outcome (Supplementary Table 4). Hence, these results point to the catabolism of L-arginine as an important mechanism of immunosuppression involved in the low level of protection afforded by this vaccine strategy, as both arginase and NO target this essential amino acid.
DISCUSSION

In recent years, new myeloid-derived suppressor cell subsets have been identified and characterized in inflammatory conditions and tumors (12). Accumulating evidence indicates an important role for MDSCs in controlling immune responses to pathogens (46). The expansion and activation of MDSCs during viral infection have been described as both detrimental and beneficial to the host. Through their immune suppressive function, MDSCs may, in fact, hamper host immune responses but conversely also limit inflammation and collateral tissue damage following an infection (46). In the case of HIV, MDSC-mediated suppression of immune activation could reduce target cells for the virus (24, 47-49). Most of the studies aimed at delineating the relative contribution of MDSCs to HIV pathogenesis have described them as harmful, as MDSCs expand during untreated chronic infection and their levels are associated with disease progression (1, 50-53). While less is known about the role of MDSCs in vaccines, non-responsiveness to immunization has also been linked to MDSC expansion. Indeed, in a peptide-prime/modified vaccinia Ankara (MVA) boost vaccine regimen, the frequency of M-MDSC-like cells was positively associated with set-point viral load, suggesting a negative role in protection from high viral replication (26). Previously, we identified different monocytic myeloid subsets as correlates of increased and decreased risk of acquisition in the blood of macaques vaccinated with the DNA-SIV + ALVAC-SIV + gp120 alum regimen (32). Further, classical monocytes (HLA-DR + CD14 + CD16 − cells) were associated with a decreased risk of SIV acquisition (32). The engagement of the myeloid compartment and the generation of a memory innate response following ALVAC immunization was most likely driven by the activation of the NLRP3 inflammasome and the release of IL-1β. CD16 + monocytes and STAT3 activation correlated with increased SIV mac251 acquisition (32).
We postulated that immunosuppression by MDSCs may be playing a role in the limited vaccine efficacy (VE = 52%) afforded by the DNA-SIV + ALVAC-SIV + gp120 alum vaccine. In fact, the recombinant ALVAC vaccine vector is a known inducer of GM-CSF and CCL2 (54), and the common receptor CCR2 is expressed on virtually all classical monocytes and MDSCs. Vaccination induced myelopoiesis, and high levels of CCL2 were also detected after the DNA prime (32). Additionally, the DNA prime, the recombinant ALVAC vector, and the alum adjuvant are all known inflammasome activators, which in turn contributes to MDSC activation (9,55,56). We observed that the HLA-DR − CD14 + cell population expanded after the DNA-prime. While the antibody panel was designed to detect M-MDSCs, the CD15 antibody clone we used showed limited cross-reactivity. Consequently, we cannot discount the possibility that some of the gated cells in the HLA-DR − CD14 + population are in fact neutrophils (31,57). Unlike the study conducted by Lin et al., we did detect CD33 + cells within the HLA-DR − CD14 + cell population in macaque PBMCs, in alignment with the findings of Sui et al. (24,31). However, this population's frequency did not change during vaccination, nor did it associate with MDSC-related genes or STAT3. Altogether, our data strongly suggest that HLA-DR − CD14 + cells may be enriched in M-MDSCs, as we found their frequency to associate positively with transcriptomic markers of MDSCs (38). Vaccine-induced HLA-DR − CD14 + cells, MDSC gene expression, and levels of STAT3 pathway activation (32) were all correlates of increased risk of SIV acquisition, suggesting that MDSCs harm vaccine effectiveness. Of the four MDSC-mediated immunosuppressive mechanisms we studied, we identified arginase catabolism and NO biosynthesis as the ones primarily associated with diminished protection of the DNA-SIV + ALVAC-SIV + gp120-alum vaccine.
Vaccination with this regimen induced changes in the levels of arginase activity in the plasma, and animals with increased levels proved more susceptible to infection. In addition, a heightened level of Arg2 expression was also associated with decreased vaccine efficacy. The physiological function of Arginase 2 in humans is still poorly understood, but studies have suggested a role in regulating cellular arginine concentrations by controlling substrate availability for the biosynthesis of NO, proline, and polyamines from the arginine precursor (43). In fact, Arg2 expression levels were associated with NO-related genes, encoding NO synthesis and signaling components, that were themselves associated with increased virus acquisition. We could not directly link arginase activity or NO pathway activation to HLA-DR − CD14 + cells, nor can we exclude the possibility that other cell types, including low-density neutrophils, may have contributed to these immunosuppressive responses. However, the expansion of the HLA-DR − CD14 + population was directly associated with the reduction of ISGs and T and B cell pathways following the ALVAC + gp120 alum boosts. Our data point to a complex interplay between the CD14 + and CD16 + monocyte subsets and MDSCs, via arginase activity and inflammasome activation. Arginase activity was inversely associated with the frequency of classical monocytes and with inflammasome activation, both correlates of decreased risk of SIV acquisition, and positively associated with the frequency of CD16 + monocytes. Together, these findings support the existence of a complex crosstalk between immune-activating and suppressive monocytic innate cells, in which inflammasome activation and arginase catabolism of L-arginine are central components. We have previously shown that classical monocytes were associated with protective Th2 cell responses (32).
The levels of HLA-DR − CD14 + cells, arginase levels, and NO pathways all associated with decreased adaptive T and B immune responses, including SIV Gag-specific CD8 + T cells. Our findings indicate a negative role for MDSCs in protection; however, given the contradictory effects of immune suppressive cells in other viral infections (52,58), it is tempting to speculate that MDSCs may also have contributed to protection from virus acquisition, for example by decreasing inflammation and thus reducing vulnerable HIV targets, such as activated CD4 + T cells. Results from HIV vaccine trials in humans and macaques suggest that inducing stronger adaptive immune responses may not be advantageous, as too much inflammation may increase HIV targets and thereby exacerbate infections (48, 59-62). Indeed, we found that the DNA-primed strategy induced lower T cell responses and pro-inflammatory cytokine levels than both an Adenovirus-based vector (Ad26)-primed vaccine strategy and the MF59-adjuvanted vaccine, though the DNA-primed strategy achieved superior protection (32,49). In the current study, we did not observe any associations between specific CD4 + T cells and MDSCs, but this could perhaps be due to the time points chosen to collect blood samples. Given the strong immunosuppressive capacity of MDSCs on CD4 + T helper cells, and the decrease in specific CD8 + T cell responses also observed in our study, it is nevertheless possible that MDSCs might have affected vaccine-induced Th1-cell responses. We have previously identified Th1-cell responses to be harmful in ALVAC-vaccinated macaques (48,49,63), and limited induction of MDSC or MDSC-like cells may thus be partially beneficial in controlling inflammation and HIV target cells, particularly at mucosal sites.
Collectively, the data presented here and those published in ref. (32) demonstrate that MDSCs and CD16 + monocytes have the opposite effect on the efficacy of the DNA + ALVAC + gp120 HIV vaccine candidate from that of innate classical CD14 + monocytes (Figure 7), underscoring the fundamental role of myeloid cells in shaping protective immune responses. A better understanding of the role of MDSCs in vaccine-mediated protection will be instrumental to improve the efficacy of HIV vaccine candidates, as well as vaccines against other human pathogens.

CONTRIBUTION TO THE FIELD STATEMENT

A preventive vaccine for HIV is urgently needed. A vaccine using the Canarypox virus vector ALVAC was tested in a clinical trial in Thailand (the Thai trial) and, for the first time, resulted in significant protection from HIV acquisition. The level of protection afforded by this vaccine was limited, and this strategy must be improved. In the current study, we furthered our understanding of how this partially protective HIV vaccine candidate harnesses innate myeloid-derived cells and of their role in vaccine efficacy. We show that immunosuppressive cells called MDSCs may interfere with the proper induction of T and B cell signals and specific CD8 + T cell responses, which are in turn needed to clear HIV infection. We also analyzed the immune suppressive mechanisms of MDSCs that are central to their harmful role. Altogether, our results underline the complexity of the immune system and suggest ways to strengthen the effectiveness of current HIV candidate vaccines.

DATA AVAILABILITY

Microarray data can be obtained at the National Center for Biotechnology Information Gene Expression Omnibus (http://www.ncbi.nlm.nih.gov/geo) under accession number GEO: GSE108011.

ETHICS STATEMENT

The study was conducted as previously described (32). All animals used in this study were colony-bred rhesus macaques (Macaca mulatta) provided by Covance Research Products.
Monkeys were housed and handled in accordance with the standards of the Association for Assessment and Accreditation of Laboratory Animal Care International, and the care and use of the animals complied with all relevant institutional (U.S. National Institutes of Health) guidelines. The protocol (AUP 491) was approved by the Advanced BioScience Laboratories Institutional Animal Care and Use Committee.

AUTHOR CONTRIBUTIONS

GF designed the study and wrote the paper with MV, who also performed data analyses and prepared the figures. SF and R-PS analyzed the gene expression data, performed the correlates of risk analyses, prepared the figures, and helped write the manuscript. DB performed the flow cytometry for monocytes in blood and some correlative analyses. KF, MR, and RK performed the intracellular cytokine analysis. IS and MB performed the ELISA and Luminex assays. JB and YS provided suggestions for the identification of MDSCs by FACS. All the authors performed critical review of the manuscript.

FUNDING

This work was mostly supported with federal funds from the intramural program of the National Cancer Institute, National Institutes of Health, including Contract No. HHSN261200800001E. Contributions were made by the extramural NIAID program (HHSN27201100016C), the Henry M. Jackson Foundation, the US Department of Defense, and the Collaboration for AIDS Vaccine Discovery (CAVD) grants OPP1032325, OPP1032817, and OPP1147555 from the Bill and Melinda Gates Foundation.
Using of Essential Oils and Plant Extracts against Pseudomonas savastanoi pv. glycinea and Curtobacterium flaccumfaciens pv. flaccumfaciens on Soybean

The bacteria Pseudomonas savastanoi pv. glycinea (Coerper, 1919; Gardan et al., 1992) (Psg) and Curtobacterium flaccumfaciens pv. flaccumfaciens (Hedges 1922) (Cff) are harmful pathogens of soybean (Glycine max). Presently, there are several strategies to control these bacteria, and the usage of environmentally friendly approaches is encouraged. In this work, purified essential oils (EOs) from 19 plant species and total aqueous and ethanolic plant extracts (PEs) from 19 plant species were tested in vitro for antimicrobial activity against Psg and Cff (by the agar diffusion and broth microdilution methods). The tested EOs and PEs produced significant bacterial growth inhibition, with technologically acceptable MIC and MBC values. Non-phytotoxic concentrations were determined for Chinese cinnamon and oregano essential oils and leather bergenia ethanolic extract, which had previously shown the lowest MBC values. Testing of these substances with artificial infection of soybean plants showed that the essential oils of Chinese cinnamon and oregano have the maximum efficiency against Psg and Cff. Treatment with these essential oils of leaves and seeds previously infected with the phytopathogens showed that the biological effectiveness of leaf treatments was 80.6-77.5% and 86.9-54.6%, respectively, for Psg and Cff. GC-MS and GC-FID analyses showed that the major compounds were 5-Methyl-3-methylenedihydro-2(3H)-furanone (20.32%) in leather bergenia ethanolic extract, cinnamaldehyde (84.25%) in Chinese cinnamon essential oil, and carvacrol (62.32%) in oregano essential oil.

Introduction

Soybean (Glycine max Willd) is the main leguminous crop worldwide. The crop is a source of many useful substances [1], and in 2020, 353.5 million tons were harvested globally [2].
Significant factors reducing crop yields are weeds, pests, and diseases [3][4][5]. Among crop diseases of bacterial etiology, bacterial blight is considered the most destructive, reducing yields by up to 40% [6]. The Gram-negative bacterium Pseudomonas savastanoi pv. glycinea (Coerper, 1919; Gardan et al., 1992) (syn. Pseudomonas syringae pv. glycinea (Coerper, 1919; Young et al., 1978)) (hereafter Psg) is the causative agent of soybean blight [7]. The disease has been detected in 41 countries covering all climatic zones of soybean production (https://gd.eppo.int/taxon/PSDMGL, accessed on 27 July 2022). Psg affects all aerial parts of the soybean, but the specific symptoms are usually observed on the middle and upper leaves and on the pods. Within 5-15 days after infection, necrotic oily spots surrounded by a chlorotic halo appear on the leaves; the spots grow and merge, forming necrotic zones [8]. The pathogen is mainly spread through infected seeds [9] or, more rarely, through crop residues. The disease reduces yield, soybean oil content, and the germination of infected seeds [10]. Another harmful soybean disease of bacterial etiology is bacterial tan spot and wilt, caused by the Gram-positive bacterium Curtobacterium flaccumfaciens pv. flaccumfaciens (Cff) (Hedges 1922). This bacterium affects the vascular system of the plant, causing spots on the leaves, blight, wilting, and death of seedlings and adult plants of leguminous crops [11]. Infected plants grow slowly, their leaves fall off, shoots die off, and the main stem wilts and breaks. Though dry beans (Phaseolus vulgaris L.) are considered the main host plant for Cff, the pathogen can cause outbreaks of disease on soybean as well [12]. The pathogen's harmfulness lies in reduced yield [13] and seed quality [14].
Cff has been listed by the European and Mediterranean Plant Protection Organization (EPPO) on the Category A2 List of Quarantine Objects (https://www.eppo.int, accessed on 1 August 2022) (PM1/002 (28) (PM 7/102 (1)). Infected seeds are the main source of infection [15]. Currently, the technology for protecting soybean from bacterial diseases is complex and includes several methods, the main one being prevention. In particular, seed certification is the most common method to prevent infected seeds from entering the field [9,11,15]. Other control methods include strict crop rotation, the use of resistant cultivars, and the treatment of seeds and plants with chemical and biological agents. Examples of the use of resistant cultivars are known [10,16,17]; however, the pathogens quickly adapt to them due to the evolution of pathogen virulence and the high diversity of natural populations in general. A radical method of protection is the use of chemical antibacterial substances (in particular, copper compounds and agricultural antibiotics). Unfortunately, their permanent use leads to the development of resistance in bacteria, so it is restricted in many countries (including Russia), while copper preparations are not effective enough [18], accumulate in plants and soil, and cause environmental problems [19]. There are attempts to use biological agents (antagonistic bacteria [20], PGPR [21], and bacteriophages [22,23]) to control the pathogens, but the effectiveness of bioagents depends on the conditions of their use. On the other hand, the decrease in the number of fungicide (including bactericide) active substances allowed in crop production, concern for the environment, and the development of organic crop production are leading to the development of alternative, environmentally friendly systems to combat crop diseases [24].
The use of natural compounds such as essential oils and plant extracts to protect plants against diseases is promising [25,26]. EOs (essential oils) are secondary metabolites derived from various plant parts. In particular, they have been reported to control plant diseases of fungal [27], oomycete [28], and bacterial [29] etiology. For example, the mechanism of action of EO components such as thymol (a component of thyme EO) against bacteria is mainly associated with structural and functional changes in the cytoplasmic membrane [30], which lead to damage to the outer and inner membranes; thymol can also interact with membrane proteins and intracellular targets, affect membrane permeability, and lead to the release of K+ ions and ATP [31]. Plant extracts (PEs), like EOs, are composed of secondary metabolites of plant cells but are more complex in composition. All of them are biodegradable and do not cause serious harm to the environment. Therefore, EOs and PEs can serve as natural alternatives to pesticides for phytopathogen control [32]. The mechanism of action of PEs mainly consists of effects on the bacterial cell membrane, changing the internal pH and hyperpolarizing the membrane [33]. There are several reports on the antibacterial activity of EOs and PEs in vitro against Psg [6], Cff [34,35], and both bacteria simultaneously [36,37]. A single in vivo study of the control of Psg soybean seed infection has been reported [38], while no experiments on Cff infection in soybean have been performed to date. The purpose of this study is to screen the in vitro activity of EOs and PEs against soybean bacterial pathogens and to evaluate the effectiveness of these substances against artificial infection on plants.

Antibacterial In Vitro Activity

The primary antibacterial activity of EOs from 19 plant species and extracts (water and ethanol) from 19 plant species was tested against 3 strains of Pseudomonas savastanoi pv.
glycinea and 3 strains of Curtobacterium flaccumfaciens pv. flaccumfaciens by the disc diffusion method.

2.1.1. Antibacterial In Vitro Activity by Disc Diffusion Method

Essential Oils. Pathogen susceptibility to essential oils was highly variable and depended on the type of pathogen and the source plant (Supplementary Table S1). Zones of bacterial growth inhibition varied from 1.3 mm (peppermint oil against Psg) to 9.7 mm (Chinese cinnamon EO (CCEO) against Psg). EOs of Chinese cinnamon and clove showed the largest inhibition zones against Psg (9.7 and 9.3 mm, respectively). Oregano and thyme EOs showed the largest inhibition zones against Cff (5.7 and 8.3 mm, respectively). Only common rue and tansy EOs showed no activity against any strain of the pathogens. In total, 15 essential oils (78.9%) showed antibacterial activity against Psg and 9 (47.4%) against Cff. Only 7 EOs (36.8%) were active against both bacteria. Plant Extracts. Susceptibility of bacteria to plant extracts also varied and depended on the type of pathogen and plant (Supplementary Table S1). Zones of inhibition of bacterial growth varied from 1.3 mm (galega extract against Psg) to 6.3 mm (leather bergenia ethanolic extract (LBEE) against Cff). Against both Psg and Cff, LBEE showed the largest inhibition zones (5.3 and 6.3 mm, respectively). Extracts of amur cork tree, leather bergenia, cayenne pepper, galega, greater celandine, black mulberry, bridewort, sweet flag, lemon balm, and elderberry showed activity against at least one bacterial species. Six extracts (15.8% of the 38 extracts in total) showed antibacterial activity on disks against Psg and 8 extracts (21.1%) against Cff. Ethanol extracts were more active than aqueous extracts: 8 ethanol extracts (42.1%) and 3 aqueous extracts (15.8%) had an antibacterial effect. Curiously, the inhibition zones produced by the standard antibiotic gentamicin varied among strains of Psg and Cff.
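The activity tallies above (counts and percentages of EOs active against each pathogen and against both) can be reproduced from zone data with a short helper. The zone values below are hypothetical and cover only five of the nineteen oils, purely for illustration:

```python
# Hypothetical inhibition-zone data (mm); 0 means no activity.
zones = {
    "Chinese cinnamon": {"Psg": 9.7, "Cff": 4.0},
    "clove":            {"Psg": 9.3, "Cff": 0.0},
    "oregano":          {"Psg": 3.0, "Cff": 5.7},
    "thyme":            {"Psg": 2.0, "Cff": 8.3},
    "tansy":            {"Psg": 0.0, "Cff": 0.0},
}

def activity_summary(zones):
    """Percentage of oils active against each pathogen, plus the overlap."""
    total = len(zones)
    psg = [eo for eo, z in zones.items() if z["Psg"] > 0]
    cff = [eo for eo, z in zones.items() if z["Cff"] > 0]
    both = sorted(set(psg) & set(cff))
    def pct(xs):
        return round(100 * len(xs) / total, 1)
    return {"Psg": pct(psg), "Cff": pct(cff), "both": both}
```

With the full 19-oil dataset, the same tally yields the 78.9%/47.4%/36.8% figures reported above.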
For example, among the Psg strains, G2 was the least susceptible to gentamicin (inhibition zone 21.7 mm in diameter), and among the Cff strains it was F-125-1, with an inhibition zone diameter of 22.7 mm. At the same time, thiram produced a larger inhibition zone against the Cff strains than against the Psg strains (6.3 ± 0.5 mm versus 4.3 ± 0.5 mm), while gentamicin showed the opposite pattern (Psg 22.7 ± 0.5 mm versus Cff 20.7 ± 0.5 mm). Subsequently, all EOs and PEs that showed an effect on at least one bacterial strain were used to determine the MIC and MBC values, with G2 for Psg and F-125-1 for Cff as the target strains.

Antibacterial In Vitro Activity by Determination of MIC and MBC Values

The results of the analysis of bacterial growth, measured by counting the titer after incubation in broth medium containing various concentrations of EO/PE, are presented in Figure 1 and Table 1. Preliminary experiments showed that Tween 20 and DMSO in the broth medium, at the concentrations present in the tested EOs and PEs, did not affect bacterial growth; only DMSO at concentrations above 50,000 ppm had a slight negative effect. The antibacterial activity of the tested substances is summarized in Table 1, which shows the minimum inhibitory concentration causing growth inhibition (MIC) and the minimum bactericidal concentration (MBC). Figure 1.
Effect of different concentrations of essential oils and plant extracts on the growth of Pseudomonas savastanoi pv. glycinea strain G2 (A) and Curtobacterium flaccumfaciens pv. flaccumfaciens strain F-125-1 (B), measured by counting colonies on agar medium after cultivation in liquid medium. Concentrations are expressed in ppm. EO = essential oil, ETH = ethanolic extract, W = water extract. Values in panels represent the mean of two independent trials, and error bars represent the standard deviation. Values within columns marked by different letters (a-g) differ significantly (Duncan's test, p = 0.05). A concentration of 0 indicates bacterial growth in the liquid medium without EOs, PEs, standard antibiotic, or thiram. The graphs show only variants with MBC < 1600 ppm for essential oils and < 10,000 ppm for plant extracts. Essential Oils. Most of the tested essential oils caused significant inhibition of bacterial growth. The most active EOs, with the lowest MIC values, were Chinese cinnamon (200 ppm) and thyme (800 ppm) for Psg, and oregano (200 ppm) for Cff. It is worth noting that although the MIC values of the most efficient EO and thiram did not differ for Psg (200 ppm), they were lower for Cff (thiram, 400 ppm; oregano, 200 ppm). The MBC values for these substances showed a similar pattern: the lowest were for Chinese cinnamon (280 ppm) and thyme (1440 ppm) for Psg, and oregano (360 ppm) for Cff. Plant Extracts. The lowest MIC values were for LBEE (1000 ppm) and lemon balm (2500 ppm) for Psg, and for Cff, LBEE (2500 ppm) and cayenne pepper (water, 5000 ppm). The most active PEs according to MBC values were the same substances: for Psg, the MBC was 4000 ppm for LBEE and 5000 ppm for lemon balm; for Cff, 5000 ppm for LBEE and 9000 ppm for cayenne pepper (water). The standard antibiotic gentamicin showed the lowest MIC and MBC values for both bacteria compared to the other treatments (MBC = 80 ppm for Psg and 100 ppm for Cff).
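For orientation, the relative potencies implied by the MBC values quoted above can be computed directly. A minimal sketch (Python); the values (in ppm) are taken from this section, and the labels are shorthand for the most active substance per pathogen:

```python
# Fold differences between the MBC values quoted above (ppm).
mbc = {
    "Psg": {"best_EO": 280, "LBEE": 4000, "gentamicin": 80},   # best EO = CCEO
    "Cff": {"best_EO": 360, "LBEE": 5000, "gentamicin": 100},  # best EO = OEO
}

def fold_difference(higher: float, lower: float) -> float:
    """How many times `higher` exceeds `lower`."""
    return higher / lower

for pathogen, v in mbc.items():
    vs_eo = fold_difference(v["LBEE"], v["best_EO"])
    vs_ab = fold_difference(v["LBEE"], v["gentamicin"])
    print(f"{pathogen}: LBEE vs best EO ~{vs_eo:.1f}x, vs gentamicin ~{vs_ab:.0f}x")
```

With these numbers, the best PE is roughly 14 times less potent than the best EO and about 50 times less potent than gentamicin, consistent with the fold ranges given in the text.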
Thus, although PEs showed antibacterial activity against the studied bacteria, the effective concentrations were much higher than those of EOs, the standard antibiotic, and thiram (~3-15, 20-50, and 5-10 times, respectively).

Phytotoxicity

To determine the optimal concentrations of EOs and PEs for the treatment of soybean plants, 2 EOs (Chinese cinnamon for Psg and oregano for Cff) and 1 PE (LBEE for both bacteria), which showed the lowest MBC values in Section 2.1, were selected for phytotoxicity tests. A preliminary study showed that the surfactants Tween 20 and DMSO used to dissolve EOs and PEs were phytotoxic only at elevated concentrations: Tween 20 did not affect seed germination and caused blight during leaf treatment only at concentrations above 10% in the working solution, while DMSO reduced seed germination at a concentration of 50% and caused blight at 20%. Phytotoxicity on Seeds. The effect of graded EO and PE concentrations on seed germination and soybean seedling root length is presented in Figure 2A,B. Phytotoxicity was evaluated by comparing the mean germination and root length at each concentration with the water-treated control (Supplementary Figure S1). For both EOs, the threshold of phytotoxic concentrations on seeds was above 0.5%. Although a slight decrease in germination and root length was observed for some substances at this concentration, the differences were not statistically significant relative to the water-treated control. For LBEE, the situation was slightly different: phytotoxic concentrations started above 13%. For germination, a statistically significant decrease occurred only at a working-solution concentration of 20% (Figure 2A), while the effect of the same extract on seedling root length showed a phytotoxicity threshold starting at 15% (Figure 2A).
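The threshold logic above (lowest concentration at which germination falls meaningfully below the water control) can be sketched as follows. The germination values are illustrative placeholders, not the study's raw data, and a fixed margin stands in for the significance testing actually used:

```python
# Hypothetical mean germination (%) per working-solution concentration (%).
# Illustrative numbers only; the study's raw data are shown in Figure 2A.
control_germination = 92.0
germination = {0.25: 91.5, 0.5: 90.8, 1.0: 84.0, 2.0: 71.0, 3.0: 55.0, 5.0: 30.0}

def phytotoxic_threshold(means, control, margin=5.0):
    """Lowest concentration whose mean germination falls more than `margin`
    percentage points below the water control (a crude stand-in for the
    statistical comparison used in the study)."""
    for conc in sorted(means):
        if control - means[conc] > margin:
            return conc
    return None  # no phytotoxic concentration within the tested range

print(phytotoxic_threshold(germination, control_germination))  # 1.0
```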
Phytotoxicity on Leaves. Phytotoxicity on soybean leaves was tested by spraying solutions with various concentrations of EOs and PEs. In all cases, phytotoxicity increased dose-dependently with the concentration of the active substance in the solution. For both EOs, the safe threshold for phytotoxicity, as for seeds, was 0.5% (Figure 2C). Only for CCEO did single plants show a slight loss of turgor in some leaves, which disappeared within 5-6 days. For LBEE, the maximum working-solution concentration at which no symptoms of phytotoxicity were visible was likewise 13% (Figure 2C). At these threshold concentrations, EO and PE treatments did not differ statistically significantly from the water treatment.

Control of Psg and Cff Seed and Leaf Infections with EOs and PE

The relevance and repeatability of the artificial Psg and Cff infection models on soybean plants were described in detail in our previous studies, and the experimental conditions were identical to those described earlier in Refs. [22,23]. Efficacy of EOs and PE against Psg and Cff Leaf Infection. Soybean leaves infected with Psg and Cff were treated with EOs and PE in triplicate. The spread of the disease on the leaves was measured using the Leaf Doctor program 12 days after the treatment of previously infected plants (Supplementary Figure S5). In the Psg experiment, disease progression was reduced by 60-80% with the tested treatments compared to the water-treated control (Figure 3A). Interestingly, the highest efficiency was observed with the CCEO treatment (80.6%), while the LBEE treatment was inferior (60.5%) to both the CCEO treatment and the standard fungicide Kocide (69.05% efficiency). In the control variant of the Cff experiment, the average leaf area with disease symptoms, although lower than for Psg, was still high (9.5% versus 20.15%, respectively).
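The efficiency percentages reported here follow the standard Abbott-type formula, (control severity − treated severity) / control severity × 100. A minimal sketch; the control severities come from the text, but the treated severities below are back-calculated illustrations, not reported raw values:

```python
def biological_efficacy(control_severity: float, treated_severity: float) -> float:
    """Percent reduction in disease severity relative to the
    water-treated control: (C - T) / C * 100."""
    return (control_severity - treated_severity) / control_severity * 100.0

# Control severities from the text: 20.15 % (Psg) and 9.5 % (Cff) leaf area.
# Treated severities here are illustrative back-calculations only.
print(round(biological_efficacy(20.15, 3.91), 1))  # ~80.6 (CCEO vs Psg)
print(round(biological_efficacy(9.5, 2.14), 1))    # ~77.5 (OEO vs Cff)
```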
The development of the disease on the treated variants was reduced by 47.0-77.5% compared to the water-treated control (Figure 3B). The highest efficiency was observed with the OEO treatment (77.5%), while LBEE and Kocide showed approximately the same efficiency (48.8 and 47.0%, respectively).

Identification of Chemicals Comprising EOs and PEs

The extract yield from leather bergenia was 4.22%, and the yield of OEO was 1.69%, of the air-dry plant mass. GC-MS and GC-FID analysis of CCEO, OEO, and LBEE identified 58 compounds (Table 2). Twenty-two compounds were identified in LBEE, 18 in CCEO, and 24 in OEO, representing 90.63%, 99.68%, and 99.49% of the identified composition, respectively, for each EO/PE. In LBEE, the most abundant compounds included acetic acid (27.85%). Efficacy of EOs and PE against Psg and Cff Seed Infection. Treatment of soybean seeds previously artificially infected with Psg with the experimental variants significantly reduced the frequency of seedling infection and the rate of disease development. The control treatment (water) showed rapid disease development in the plants (Figure 3C). Due to the daily overhead watering of the plants, a secondary infection was observed with a severity similar to a disease outbreak in the field. The biological effectiveness of the CCEO treatment was 77.4% (disease incidence) or 86.9% (disease severity) compared with the control, while the LBEE treatment reduced the development and prevalence of the disease by more than 2 times but was inferior to the CCEO treatment. Treatment with thiram greatly reduced both development (88.7% efficacy) and prevalence (92.8% efficacy) of the disease in the trial. In the control variant with Cff seed infection, symptoms of wilt and yellowing of soybean leaves were observed, with an average AUPDC of 633 points (Figure 3D). In general, the effectiveness of the experimental treatments was lower than in the experiment with Psg.
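The AUPDC (area under the disease progress curve) statistic used here is conventionally computed with the trapezoidal rule over repeated severity assessments. A sketch with illustrative scores, not the study's scoring data:

```python
def aupdc(days, severities):
    """Area under the disease progress curve (trapezoidal rule)."""
    area = 0.0
    for i in range(1, len(days)):
        area += (severities[i - 1] + severities[i]) / 2.0 * (days[i] - days[i - 1])
    return area

# Illustrative weekly assessments (% plants with wilt symptoms).
days = [7, 14, 21, 28, 35]
control = [2, 10, 25, 45, 60]
treated = [1, 4, 10, 20, 28]
c, t = aupdc(days, control), aupdc(days, treated)
print(c, t, round((c - t) / c * 100, 1))  # efficacy as % AUPDC reduction
```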
Thus, the biological efficiency of the OEO treatment was 54.6% compared with the control, while the LBEE treatment reduced the AUPDC by only 25.9%. Treatment of seeds with thiram also did not show a high biological effect, displaying an efficiency of 46.3%.

Discussion

Bacterial diseases of soybean are a problem all over the world: among them, Psg has been causing harm for a long time, while Cff has only recently begun to be registered as a significant pathogen of this crop [12,39]. At the same time, it is known that both bacteria can form a single pathocomplex and simultaneously infect soybean in the field [40]. A particular danger of both bacteria for soybean cultivation is their main route of transmission, seeds, through which the pathogens are carried and spread to new locations [9,15]. Currently, there are several strategies for the control of bacterial plant diseases. By analogy with human bacterial diseases, antibiotics could be used, but this is prohibited by the legislation of many countries, including Russia and the EU. Copper-containing fungicides, often used to control phytopathogenic bacteria, are increasingly excluded from (or restricted in) plant protection systems due to legal prohibitions and environmental concerns. Therefore, in areas where the diseases occur, control methods should include the prevention and control of infected seeds using eco-friendly pest control systems [41]. Compared to chemical bactericides and antibiotics, EOs and PEs have many advantages: they are environmentally friendly, have low toxicity to mammals, and are biodegradable in the field when released into the soil [42].
In addition, they appear to be a potential alternative to synthetic substances, in particular because pathogenic microorganisms are increasingly developing resistance even to multisite biocides [19]. In recent years, many studies describing the strong antibacterial activity of EOs and PEs have been reported. Although the bulk of this research concerns human pathogens, these substances have also been used to combat pathogens that cause food spoilage [43] or fungi and bacteria that infect agricultural plants [44,45]. Much attention is paid to the use of EOs and PEs as seed disinfectants against bacterial and fungal diseases [46,47]. Against bacteria, EOs are known to have an anti-quorum-sensing effect in addition to their direct contact biocidal action: they block cell-to-cell communication, which plays an important role in biofilm development and affects resistance and virulence [48,49]. Moreover, many countries use commercial pesticides based on EOs and PEs. For example, Koppert BioSystems (Veilingweg, The Netherlands) supplies the fungicide Nopas (a.i. thyme and peppermint EOs) to protect tomatoes from root rot; in the USA, the fungicides Vertigo™, with a.i. cinnamaldehyde [50], and Qwel [51] are used to protect crops against a wide range of pathogens. In this context, this study aimed to screen a range of EOs and PEs for activity against soybean bacterial pathogens and to fill some of the gaps in knowledge about the control of these diseases with botanical pesticides. Most of the compounds tested showed dose-dependent in vitro antibacterial activity against both Psg and Cff. These data confirm previous results obtained for Psg and Cff. In particular, Ref. [38] reported the in vivo use of thyme EO for the control of Psg seed infection and showed that, with this treatment, the number of bacteria on seeds decreased by 6% and germination increased by 21%.
The results for Cff are also consistent with those of Flores et al. [52], in which EOs of oregano, thyme, and cinnamon were tested against Clavibacter michiganensis subsp. michiganensis and oregano EO showed the highest degree of bacterial inhibition. This may be because the genera Curtobacterium and Clavibacter are closely related and were formerly classified in the genus Corynebacterium [53]. Ref. [6] reports the antibacterial activity of aqueous extracts of neem (Azadirachta indica) and ginger (Zingiber officinale), and Ref. [37] reports the action of carvacrol against Psg strains in vitro. Against Cff, EOs of cumin (Carum carvi) [36], moshkoorak (Oliveria decumbens), and spartan oregano (Origanum minutiflorum) [34] exerted strong antimicrobial activity in vitro. As far as extracts are concerned, the ethanolic extract of St. John's wort (Hypericum perforatum) displays antibacterial activity against Psg in vitro. Given that EOs and PEs are composed of many different secondary metabolites, it is of interest to characterize the individual substances with antibacterial activity. In particular, the antimicrobial efficacy of CCEO is attributed to major constituents such as cinnamaldehyde and eugenol [54]. In [55], it is reported that the antibacterial activity of OEO is due to the two main phenols, carvacrol and thymol. The activity of extracts is also due to different groups of phenolic compounds [56,57]. In this study, the most abundant compound in CCEO was cinnamaldehyde, which corresponds to the literature data [54]. In OEO, the most abundant compounds were carvacrol and thymol, which also corresponds to the literature data; however, cymene (19.85%) was present in a smaller proportion than carvacrol but a larger one than thymol, which is of interest given that this compound also has an antibacterial effect [58].
The analysis showed that the predominant compound in LBEE is 5-methyl-3-methylenedihydro-2(3H)-furanone, the biological effect of which has not yet been published. Other major components of the extract are eugenol and acetic acid, which display antibacterial activity against a wide range of bacteria [59,60]. While all EOs and PEs were tested at the initial screening stage, only the 3 substances that showed the lowest MBC values were used at the plant application stage: CCEO for Psg, OEO for Cff, and LBEE for both bacteria. Potential phytotoxic effects on soybean were assessed for seeds using a germination test after dip treatment and for leaves by spraying with a hand sprayer. Considering that the large number of diverse methods used to determine the phytotoxic effects of EOs and PEs on plants [61,62] rather complicates the overall picture, it was decided to determine the phytotoxic concentrations of each substance planned for in vivo use on soybean. Moreover, there are precedents for the use of EOs as contact herbicides for weed control, in which a decrease in crop yield was observed [63,64]. In the current study, all tested substances were phytotoxic to soybean seeds and leaves at certain concentrations. Phytotoxicity analysis of the EOs showed that the highest concentration at which no phytotoxic effect was observed was 0.5%; for LBEE, this figure was 13%. It is known from the literature that the analyzed EOs and PEs have phytotoxic effects: in particular, OEO against germinating seeds of radish (Raphanus sativus) [65], wheat (Triticum aestivum) [66], and tomato (Solanum lycopersicum) [67]. Ref. [63] reported a phytotoxic effect of CCEO in the treatment of apple leaves. Moreover, CCEO has elicitor activity in plants against phytopathogens [68,69]. From the component composition of essential oils, it is known that monoterpenes, in particular 1,8-cineole and carvone, have the greatest phytotoxic effect [70].
Arbutin from Bergenia crassifolia [71] and the methanolic extract of Bergenia ciliata [72] are known to have plant growth-inhibiting properties. In addition, plants of Bergenia species or their specific components can be used as natural insecticides [73]. Tests on artificial Psg and Cff infections showed that the analyzed substances can reduce the development of the disease in both seed and leaf treatments of soybean. In general, the effectiveness of the treatments was higher against Psg than against Cff. Possibly, this is related to the fact that Cff can penetrate the conducting system of the seedling [74] and become inaccessible to contact bactericides, while Psg does not have this ability. The high efficiency of seed treatment with EOs is possibly due both to the treatment method (soaking), in which the antibacterial agent penetrates the seed much better than with the traditional semi-dry treatment method, and to the ability of essential oils to evaporate into a gaseous form, whose penetrating ability is higher than that of the liquid form [75]. Undoubtedly, the use of these substances at industrial scale to control soybean bacterial diseases requires several additional studies. In particular, it is necessary to choose a formulation that gives stable emulsions/solutions of EOs and PEs when treating plants [76,77]. Encapsulation is also promising, in particular of EOs in nanoformulations, which make it possible to reduce the consumption rate of the active substance and increase efficiency due to more uniform contact of the substance with the pathogen/plant [78,79]. Evaluation of the individual components of EOs and PEs as antibacterial substances to combat bacterial plant diseases is also of great interest. For example, cinnamaldehyde, the main active ingredient of Chinese cinnamon EO, is used as a common commercial pesticide [50].
Thus, the results of this study confirmed the antimicrobial activity of EOs and PEs and showed promising prospects for their use in the treatment of soybean seeds and leaves for protection against bacterial diseases. Because this is an initial report, more research is needed to improve treatment efficiency by optimizing delivery technology, plant application, and formulation for commercial use under field conditions.

Bacterial Strains

The work used strains of Pseudomonas savastanoi pv. glycinea CFBP 2214 and Curtobacterium flaccumfaciens pv. flaccumfaciens CFBP 3418 from the CFBP collection (Beaucouzé, France), as well as Russian strains isolated from diseased soybean plants in 2019-2021 and described by us in previous publications (Psg: G2 and G17; Cff: F-125-1 and F-30-1) [22,80]. These strains were pathogenic to soybean cv. Kasatka in artificial infection tests. The Psg strains reacted positively in a PCR assay for the cfl gene [8] and had cts gene fragment sequences [81] most similar to the corresponding sequences in the genomes of Psg strains in GenBank. The Cff strains were identified by PCR using genus-specific [82] and species-specific [83] primers.

Plant Material

The plant samples for the isolation of antibacterial substances were collected during June-August 2021 on the territory of the botanical garden of First Moscow State Medical University (Moscow, Russia) (all except a few species), the field experimental station of the Russian State Agrarian University (Moscow, Russia) (garlic, cv. Novosibirskiy), and local markets (key lime, mandarin, paprika (country of origin: India)). Chinese cinnamon EO (CCEO) was kindly provided by the SoyuzSnab company (Krasnogorsk, Russia). The selection of plant species and the EOs and/or extracts used was based on previous reports of antibacterial activity (Table 3). For two plants (sweet flag and garlic), both the essential oil and extracts were used separately.
A complete list of the plants and the parts from which EOs and PEs were isolated is presented in Table 3. Biological species were identified jointly with specialists from the botanical garden following the morphological characters and keys that serve as the main basis for the taxonomy of the respective plant families [84]. Table 3. Plant material used for the preparation of essential oils and extracts.

Extraction of Essential Oils

After harvesting, the plants were dried away from direct sunlight under natural ventilation for 2 weeks, after which they were cut with scissors into small pieces 5-6 mm in size and subjected to hydrodistillation according to [108] with minor changes. For this, 100 g of each plant was soaked in 2 L flasks with 1500 mL of water and hydrodistilled for 3 h in a Clevenger apparatus, and the collected distillates were dried over anhydrous Na2SO4. EOs were stored in sealed tubes at 4 °C until analysis.

Extraction of Plant Extracts

Plant samples were collected and prepared in the same way as in Section 4.2.1. Extraction was carried out in a Soxhlet apparatus according to [109]. For this, 50 g of each plant was crushed to a powder in a laboratory mill, 300 mL of water or 96% ethanol was added, and extraction was carried out for 12 h. The resulting solutions were filtered through Whatman No. 1 paper, evaporated, and concentrated to dryness using an RE100-Pro rotary evaporator (DLab, Beijing, China) at 50 °C. The resulting extracts were dissolved in a 4% aqueous solution of DMSO (dimethyl sulfoxide) to a final concentration of 50% (by a.i.) according to [110] with changes and stored in sealed test tubes at 4 °C until analysis.

Determination of Antibacterial Activity by Disc Diffusion Method

Substances were screened for antibacterial activity against all 6 strains by the disc diffusion method, which is usually used as a preliminary check and for the selection of effective substances [111], with some changes.
It was performed using 48 h cultures grown at 28 °C on King B agar medium. The suspension was adjusted to 10^5 CFU/mL with sterile saline. Then, 100 µL of the suspension was dispensed onto plates containing King B agar and spread with a sterile loop. Discs of Whatman filter paper (No. 1) with a diameter of 6 mm were cut with a punch, and the disc blanks were sterilized in a hot-air oven at 160 °C for one hour. Essential oils were dissolved in a 2.5% aqueous solution of Tween 20 to a concentration of 5% on a vortex until a stable emulsion formed. Under aseptic conditions, sterile discs were impregnated with 10 µL of the emulsion (0.5 µL of a.i./disc) of the appropriate essential oil and placed on the agar medium. Plant extracts pre-dissolved in DMSO were tested similarly to the essential oils, except that 10 µL of a 50% aqueous/ethanolic extract dissolved in DMSO was placed on the discs. Discs containing 2.5% aqueous Tween 20 and discs containing 10 µL of 4% aqueous DMSO were used as negative controls. Discs containing 0.5 mg (in a.i.) of the antibiotic gentamicin (DalKhimPharm, Khabarovsk, Russia) or the reference pesticide thiram (TMTD fungicide, WSC (400 g/L a.i.), Avgust LLC, Moscow, Russia) were used as positive controls. All dishes were sealed with laboratory film to avoid possible evaporation of the test samples. The dishes were left for 30 min at room temperature to allow diffusion of the oil and then incubated at 28 °C for 48 h. After the incubation period, the zones of inhibition were measured with a caliper. The diameters of the bacterial growth inhibition zones were recorded excluding the disc diameter (6 mm). Studies were performed in triplicate (3 plates, each with one disc of the given EO, PE, or standard antibiotic/thiram), and the mean bacterial growth inhibition zone was calculated.

Determination of MIC and MBC

The activity of EOs and PEs against Psg and Cff was assessed according to the CLSI 2015 broth microdilution method [112] with modifications.
Starter cultures were prepared by suspending bacterial cells in 5 mL of King B (without agar) and incubating at 28 °C for 24 h at 150 rpm in an ES-20 shaker (BioSan, Riga, Latvia). The bacterial suspensions were diluted to an absorbance value corresponding to a concentration of 10^5 CFU/mL. Sterile 1.5 mL Eppendorf-type tubes were filled with liquid King B medium, emulsions or solutions of EO/PE/standard antibiotic/thiram to a predetermined concentration, and 50 µL of a suspension of the target bacterial strains (Psg G2 and Cff F-125-1); the total volume of the reaction mixture was 1000 µL. The concentration series for EOs and the standard antibiotic/thiram was 50, 100, 200, 400, 800, 1200, 1600, and 3200 ppm; for PEs, it was 500, 1000, 2500, 5000, 10,000, 50,000, and 100,000 ppm (in a.i.). After preparation, the reaction mixtures were incubated at 28 °C for 48 h at 350 rpm in a ThermoMixer F2.0 (Eppendorf, Hamburg, Germany). After 48 h of incubation, 100 µL of the reaction mixture was taken from each tube, a series of tenfold dilutions in sterile water was prepared, plated on King B medium in Petri dishes, and spread over the entire surface of the medium with an L-shaped spatula. The dishes were incubated in a thermostat at 28 °C for 48 h, after which the bacterial concentration in each original tube was calculated. The experiment was repeated twice, with 3 dishes for each dilution. The minimum inhibitory concentration (MIC) was defined as the lowest concentration of a test compound that caused 90% growth inhibition compared to the control, determined by calculating the inhibition of bacterial growth. The minimum bactericidal concentration (MBC) was defined as the lowest concentration that caused the death of 99.9% of the bacteria.
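The MIC/MBC definitions above reduce to a simple scan over the concentration series. A sketch, assuming hypothetical plate-count results (surviving CFU/mL after 48 h) rather than the study's measurements:

```python
def mic_mbc(counts, control_cfu, inoculum_cfu):
    """MIC: lowest concentration with >= 90 % growth inhibition versus the
    untreated control; MBC: lowest concentration killing >= 99.9 % of the
    initial inoculum. `counts` maps concentration (ppm) -> surviving CFU/mL."""
    mic = mbc = None
    for conc in sorted(counts):
        if mic is None and counts[conc] <= control_cfu * 0.10:
            mic = conc
        if mbc is None and counts[conc] <= inoculum_cfu * 0.001:
            mbc = conc
    return mic, mbc

# Hypothetical survivor counts over the EO concentration series (ppm).
counts = {50: 9e8, 100: 5e8, 200: 4e7, 400: 1e5, 800: 50, 1600: 0}
print(mic_mbc(counts, control_cfu=1e9, inoculum_cfu=1e5))  # (200, 800)
```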
For a more accurate result, the same experiment was then repeated with the analyte concentration stepped down from the expected MBC toward the MIC at regular intervals (5 concentration points). For further work, the 2 EOs and 1 PE showing the lowest MBC values were used: CCEO for Psg, OEO for Cff, and LBEE for both bacteria.

Phytotoxicity on Soybean Seeds and Plants

The phytotoxicity of the EOs and LBEE on soybean seeds was assessed by a germination test using the standard "over paper" method described in the International Rules for Seed Testing [113]. Soybean (cv. Kasatka) seeds were treated by immersion in an aqueous solution of the EOs or PE at various concentrations for 10 min and then dried completely on sterile filter paper at room temperature in a sterile laminar box. EOs were dissolved in a 2.5% aqueous solution of Tween 20 to a concentration of 5% on a vortex until a stable emulsion formed, and the PE was pre-dissolved in DMSO. EO solutions were diluted with sterile water to concentrations of 0.25, 0.5, 1, 2, 3, and 5%, and LBEE to 2.5, 5, 10, 13, 15, 20, 40, and 50%. Seeds soaked in water were used as a negative control. The seeds were then kept under conditions of constant humidity and incubated at 25 °C. Germination was assessed after 8 days; a seed was considered germinated if it produced a sprout with a well-developed root. The average percentage of germinated seeds was determined over all repetitions. After germination was scored, the cotyledons were separated and the root length was measured with a caliper. The experiment included 3 repetitions of 50 seeds for each variant. To test the phytotoxicity of the substances on vegetative plants, soybeans were grown to stage R1 in a turf-perlite mixture (Vieltorf, Velikiye Luki, Russia) in plastic trays for plant cultivation (volume 1 L, AgrofloraPack, Vologda, Russia).
Plants were kept in a greenhouse at 28/22 °C (14 h day/10 h night) under natural light and watered as needed. After preparation and homogenization, the solutions were applied to the plants with a sprayer (droplet size ~300 µm) at a working-solution consumption rate of ~5 mL/plant (until all leaves were completely wetted). After treatment, the plants were kept in the greenhouse under the same conditions for 7 days and then evaluated using the phytotoxicity scale [114], where 0 = no symptoms; 1 = very slight discoloration; 2 = more severe, but not lasting; 3 = moderate and more lasting; 4 = medium and lasting; 5 = moderately heavy; 6 = heavy; 7 = very heavy; 8 = nearly destroyed; 9 = destroyed; 10 = completely destroyed. Each treatment had three repetitions with 2 plants each.

Gas Chromatography with Flame Ionisation Detector and Mass Spectrometry of Selected EOs and PE

Analysis of the PE and EOs was carried out on an Agilent 8890 GC System gas chromatograph with two independent channels and DB-1MS capillary quartz columns (60 m long, 0.250 mm in diameter, stationary-phase film thickness 0.25 µm), using an Agilent 5977B mass spectrometric detector (MSD) and a flame ionization detector (FID), both manufactured by Agilent Technologies, Inc. (Santa Clara, CA, USA), according to the method of [115] with modifications. For the analysis of EOs, 1% solutions in methanol were prepared, and the solution was injected in amounts of 0.2 µL on the MSD channel and 0.5 µL on the FID channel. For PE analysis, a sample of the extract was placed in a vial and heated at 120 °C for 5 min; then, 2500 µL of the equilibrium gas-vapor mixture was taken from the vial with a gas syringe and injected through the channel connected to the MSD. The temperature program was as follows: injector temperature 250 °C; initial isotherm 35 °C for 2 min; heating at 5 °C/min to 140 °C, then at 10 °C/min to 250 °C; final isotherm 250 °C for 5 min.
Helium was used as the carrier gas at 1.3 mL/min on the MSD channel and 1.0 mL/min on the FID channel. Air (400 mL/min) and hydrogen (30 mL/min) were used as auxiliary gases for the FID. The MSD source temperature was 230 °C, the quadrupole temperature 150 °C, and the scanning mode total ion current; the FID temperature was 260 °C. The spectra were identified using the NIST spectral library (National Institute of Standards and Technology, Gaithersburg, MD, USA). The analysis was repeated in quadruplicate, and the results are presented as the mean peak area ± standard deviation.

Control of Psg and Cff Artificial Infection Using EOs and PE

The experiments on the use of the analyzed substances against artificial Psg and Cff infections of soybean seeds and leaves were carried out during June-August 2022 in an experimental greenhouse. In all variants, the tests were carried out on soybean cv. Kasatka (harvest year 2021, weight of 1000 seeds = 122.8 g).

Control of Psg on Seeds

The artificial Psg infection on seeds was created according to the method of [22]. Briefly, a 72 h culture of Psg CFBP 2214 was resuspended in sterile 10 mM MgCl2 to ~10^4 CFU/mL. Soybean seeds were sterilized in 75% ethanol, washed with an aqueous 50% solution of commercial bleach (sodium hypochlorite)/0.002% Tween 20 (v/v) for 8-10 min and then with distilled H2O until the chlorine was removed, and left in a humid chamber for 2 h to swell. The swollen seeds were pierced with a sterile toothpick, transferred to a flask with the bacterial suspension, vacuum-treated at -10^5 Pa for 10 min, and dried to remove excess liquid. The seeds were then treated by immersion for 10 min in (1) sterile water, (2) CCEO at a concentration of 0.5%, or (3) LBEE at a concentration of 13%, after which they were dried on paper towels to remove excess moisture. Thiram was used as the reference seed treater.
Treatment with the standard seed treater was carried out at a product consumption rate of 7 L/t and a working-solution consumption of 8 L/t, according to the pesticide registration data of the Russian Federation. To do this, 25 g of seeds and 200 µL of the treatment solution were placed in portions in a 50 mL tube (Eppendorf type) and mixed thoroughly on a microcentrifuge vortex for 2 min until the solution was completely absorbed by the seeds. The treated seeds were sown in a peat-perlite mixture (Veltorf, Velikie Luki, Russia) in 40-cell plastic transplant trays (cell volume 0.12 L; AgrofloraPack, Vologda, Russia). Plants were kept in a greenhouse at 28/22 °C (14 h day/10 h night) in natural sunlight and watered as needed. The treatments in each experiment were arranged in a completely randomized design. There were five replicates per treatment with 40 seeds each (1 tray per replicate). Control of Psg on Leaves Artificial Psg infection was created on vegetative plants according to the method of [22] by infiltration of a bacterial suspension into soybean leaves using an 1113 AirControl airbrush (JAS, Ninbo, China). Briefly, a bacterial suspension was prepared as for seed infection, with the addition of the surfactant Silwet Gold (Chemtura, Philadelphia, PA, USA) to a concentration of 0.01% (w/w). Trifoliate leaves were infected at stage V2 by treatment with an average dose of 5 mL of a bacterial suspension at a concentration of 10⁹ CFU/mL per trifoliate leaf. Plants were grown in plastic pots with a volume of 0.5 L, as in Section 4.6.1. Two days before and 24 h after inoculation, relative humidity was maintained at ~95% at a constant temperature of 27 °C.
The treatment of vegetative plants with the studied substances was carried out on 35-day-old soybean plants 2 h after bacterial inoculation, at a working-solution consumption rate of ~5 mL per plant (until all leaves were completely wetted), using a manual spray gun (droplet size of ~300 µm). The design of the experiment included the use of (1) sterile water, (2) CCEO at a concentration of 0.5%, (3) LBEE at a concentration of 13%, and (4) the foliar reference pesticide with bactericidal action Kocide 2000, WDG (copper hydroxide 350 g/kg; Corteva Agriscience, Indianapolis, IN, USA). The reference foliar pesticide was used at a working-solution concentration of 0.6% of the preparation (according to the manufacturer's recommendation). The disease rate was recorded as the percentage of plants that showed leaf symptoms. The assessment of disease development, in terms of infection of adult plants, was carried out on the 12th day after infection using the LeafDoctor application (https://www.quantitative-plant.org/software/leaf-doctor, accessed on 21 July 2022), installed on an iPhone SE 2. All leaves from all plants were individually photographed and analyzed by moving the threshold slider until only symptomatic tissues were converted to blue, and the percentage of diseased tissue was calculated as recommended by the developer [116]. In the seed treatment experiment, similar calculations were made, but after the plants reached stage V3 (35 days after sowing). Control of Cff on Seeds For the artificial infection of seeds, we used the method of damaging the hilum with a sterile seed needle described in [117], with modifications. To do this, each seed was damaged by piercing the hilum with a sterile needle and soaked in a bacterial suspension; the mixture was then placed under vacuum, and the seeds were dried on paper towels.
Soybean seeds were treated by immersion for 10 min in (1) sterile water, (2) OEO at a concentration of 0.5%, or (3) LBEE at a concentration of 13%, and dried on paper napkins to remove excess moisture. Thiram was used as the reference seed treater, as in Section 4.6.1. Further actions with the plants and the growing conditions were as in Section 4.6.1. At 15, 18, 21, 24, 27, and 31 days after sowing, the severity of bacterial wilt disease of each plant was assessed with disease scores ranging from 0 to 5, where 0 = no symptoms of wilt; 1 = wilting on one of the primary leaves; 2 = wilting on both primary leaves, but not on the first trifoliate; 3 = withering of the first trifoliate leaf; 4 = death of the seedling after the development of primary leaves; and 5 = no germination, or complete wilting and loss of turgor (in adult plants), using a soybean scale we adapted in a previous study [118]. The AUDPC (area under the disease progress curve) was calculated according to the method of [23] using the above scale in MS Excel 2007. Control of Cff on Leaves The Cff infection of vegetative soybean plants and the method for calculating plant disease were similar to those of Section 4.6.2. The design of the experiment included the use of (1) sterile water, (2) OEO at a concentration of 0.5%, (3) LBEE at a concentration of 13%, and (4) the foliar reference pesticide with bactericidal action Kocide 2000, WDG. The calculation of the incidence rate, the recurrence, and the plant growing conditions were similar to those of Section 4.6.2. Statistical Analysis For the EO and PE disc inhibition zone experiment, the means were analyzed by one-way analysis of variance (ANOVA) followed by Tukey's post hoc multiple comparison test using the Statistica 12.0 software package (StatSoft, TIBCO, Palo Alto, CA, USA). Results were expressed as the mean (M) ± standard deviation (SD). p values < 0.05 were considered significant.
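The area-under-the-curve calculation referenced above is, in its standard trapezoidal form, a sum of mean adjacent scores weighted by the interval between assessment dates. A minimal sketch, using hypothetical wilt scores at the assessment dates from the text (the paper itself performed this in MS Excel):

```python
# Illustrative AUDPC (area under the disease progress curve) via the
# standard trapezoidal rule. Disease scores below are hypothetical.

def audpc(days, scores):
    """days: assessment times (e.g., days after sowing), ascending;
    scores: disease severity recorded at each time (same length)."""
    if len(days) != len(scores):
        raise ValueError("days and scores must have the same length")
    area = 0.0
    for i in range(len(days) - 1):
        # Trapezoid between consecutive assessments.
        area += (scores[i] + scores[i + 1]) / 2.0 * (days[i + 1] - days[i])
    return area

# Hypothetical 0-5 wilt scores at the assessment dates used in the study.
days = [15, 18, 21, 24, 27, 31]
scores = [0, 1, 2, 3, 3, 4]
print(audpc(days, scores))  # 36.5
```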
For the rest of the experiments, including the determination of MIC and MBC, phytotoxicity, and control of artificial infections on soybean, data analysis was carried out by analysis of variance using Statistica 12.0 (StatSoft, TIBCO, Palo Alto, CA, USA), comparing mean values by Duncan's test. Percentage data were arcsine-transformed before processing. Graphs were created using GraphPad Prism 9.2.0 (GraphPad Software Inc., San Diego, CA, USA). Conclusions The results of this study showed that, of 19 essential oils and 19 plant extracts, high antibacterial activity was displayed by the essential oils of Chinese cinnamon (against Psg) and oregano (against Cff) and by the ethanol extract of leather bergenia (against both bacteria). Moreover, the experiments with plants under artificial infection with the two bacterial diseases showed that these substances, at non-phytotoxic concentrations, can reduce the harmful effect of Psg and Cff when used to treat both infected seeds and leaves. These results are intriguing, as they suggest that EOs and PEs could potentially be used as alternatives to traditional chemical pesticides and antibiotics in the control of soybean bacterial blight, tan spot, and wilt. However, before any promising application of the studied EOs and PEs as natural (botanical) pesticides to control phytopathogenic bacteria, it is necessary to evaluate potential side effects on non-target organisms, select an effective formulation, and conduct field studies in commercial crop production. Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/plants11212989/s1, Figure S1. Phytotoxicity on seedlings by soaking seeds in an aqueous solution (A) and treating soybean leaves (B) with various concentrations of Chinese cinnamon EO. Shown are 2 typical seeds (8 DAT) and 1 trifoliate leaf (7 DAT) from each sample before counting; Figure S2.
Chromatographic profile of the ethanolic extract of leather bergenia using a mass spectrometric detector; Figure S3. Chromatographic profile of Chinese cinnamon EO using MSD (A) and FID (B); Figure S4. Chromatographic profile of oregano EO using MSD (A) and FID (B); Figure S5. Symptoms of Psg and Cff on soybean leaves 12 days after infection of the leaves with an airbrush. Table S1. The activity of essential oils, plant extracts, antibiotics, and the reference pesticide against Pseudomonas savastanoi pv. glycinea and Curtobacterium flaccumfaciens pv. flaccumfaciens strains, measured as growth inhibition zones in the agar-well diffusion method. Conflicts of Interest: The authors declare no conflict of interest.
Data-Driven Blood Glucose Pattern Classification and Anomalies Detection: Machine-Learning Applications in Type 1 Diabetes Background Diabetes mellitus is a chronic metabolic disorder that results in abnormal blood glucose (BG) regulation. The BG level is preferably maintained close to normality through self-management practices, which involve actively tracking BG levels and taking proper actions, including adjusting diet and insulin medication. BG anomalies can be defined as any undesirable reading due to either a precisely known reason (normal cause variation) or a reason unknown to the patient (special cause variation). Recently, machine-learning applications have been widely introduced within diabetes research in general and BG anomaly detection in particular. However, despite their expanding and increasing popularity, there is a lack of up-to-date reviews that materialize the current trends in modeling options and strategies for BG anomaly classification and detection in people with diabetes. Objective This review aimed to identify, assess, and analyze the state-of-the-art machine-learning strategies and their hybrid systems focusing on BG anomaly classification and detection, including glycemic variability (GV), hyperglycemia, and hypoglycemia, in type 1 diabetes, within the context of personalized decision support systems and BG alarm event applications, which are important constituents of optimal diabetes self-management. Methods A rigorous literature search was conducted between September 1 and October 1, 2017, and between October 15 and November 5, 2018, through various Web-based databases. Peer-reviewed journals and articles were considered. Information from the selected literature was extracted based on predefined categories, which were based on previous research and further elaborated through brainstorming. Results The initial results were vetted using the title, abstract, and keywords and retrieved 496 papers.
After a thorough assessment and screening, 47 articles remained, which were critically analyzed. The interrater agreement was measured using a Cohen kappa test, and disagreements were resolved through discussion. State-of-the-art classes of machine learning have been developed and tested for the task and have achieved promising performance, including artificial neural networks, support vector machines, decision trees, genetic algorithms, Gaussian process regression, Bayesian neural networks, deep belief networks, and others. Conclusions Despite the complexity of BG dynamics, there have been many attempts to capture hypoglycemia and hyperglycemia incidences and the extent of an individual's GV using different approaches. Recently, the advancement of diabetes technologies and the continuous accumulation of self-collected health data have paved the way for the popularity of machine learning in these tasks. According to the review, most of the identified studies used a theoretical threshold, which suffers from inter- and intrapatient variation. Therefore, future studies should consider the differences among patients and also track their change over time. Moreover, studies should place more emphasis on the types of inputs used and their associated time lag. Generally, we foresee that these developments might encourage researchers to further develop and test these systems on a large-scale basis.
Characteristics of selected reviewed studies (subjects; inputs; acquisition; preprocessing; method; evaluation metrics):
- Reconstruction of CGM data using spline interpolation and rough feature elimination with the fast SEPCOR algorithm; support vector machine; sensitivity and specificity.
- [13] 21 subjects (real data); BG, meal, rate of decrease from a peak, and absolute BG level at the decision point; diagnostic (professional) CGM devices; no preprocessing; decision trees; accuracy, sensitivity, and specificity.
- [14] 1 subject (real, male); glucose level right before meals (G1), glucose level after more than 5 hours (G2), time interval (T), average fasting glucose level (AG1), rate of decrease in [Glu], and ratio of current level to average; support vector machine with radial basis function (RBF), exponential radial basis function (ERBF), and polynomial kernels, hybridized with particle swarm optimization.
- [38] 15 subjects (real, children); BG, heart rate (HR), corrected QT interval (QTc), change in heart rate (ΔHR), and change in QTc (ΔQTc); Compumedics system, with BG levels acquired using Yellow Spring Instruments; normalization; hybrid particle swarm optimization based normalized radial basis function neural network (NRBFNN) with hybrid particle swarm optimization with wavelet mutation (HPSOWM); sensitivity and specificity.
- [39,40] 15 subjects (real, children); BG, HR, and QTc; same acquisition; normalization; variable translation wavelet neural network (VTWNN) with HPSOWM; sensitivity and specificity.
- [41,42] 15 subjects (real, children); BG, HR, and QTc; same acquisition; normalization; evolvable block-based neural network (BBNN) with HPSOWM; sensitivity, specificity, ROC curve, and geometric mean value.
- [43] 15 subjects (real, children); BG, HR, and QTc; same acquisition; adaptive neural fuzzy inference system (ANFIS) with HPSOWM; sensitivity and specificity.
- [44] 15 subjects; compared support vector regression (SVR) and Gaussian process (GP) models.
Reported performance and notes for selected studies:
- [8] The fuzzy neural network estimator algorithm (FNNE) predicted the onset of hypoglycemia episodes with a mean error of 0.071 (p < 0.03); the FNNE was developed as a parallel combination of a fuzzy inference mechanism (FIM) and a multilayered neural network architecture.
- [9,10] Support vector regression (SVR): with an event-based sensitivity of 100%, the algorithm produced only one false hypoglycemia detection; the sample-based sensitivity and specificity were 78% and 96%, respectively; an Android-based system was developed to detect hypoglycemia incidence using CGM and other information.
- [16,17,18] Genetic algorithm based multiple regression with fuzzy inference: sensitivity of 75% and specificity of over 50%; the genetic algorithm was used to optimize the regression and fuzzy rules, and various-order multiple regression fuzzy inference systems and linear multiple regression with various numbers of inputs were compared.
- Bayesian neural network: sensitivity of 83.46% and specificity of 63.88%; investigated the applicability of a Bayesian neural network to detect hypoglycemia from real-time physiological parameters, as well as predicting hypoglycemia incidence during intravenous (IV) insulin infusion for ICU patients.
- [53] Feed-forward multilayer neural network: sensitivity of 70.59%, specificity of 65.38%, and geometric mean of 67.94%; the ANN model was compared with linear discriminant analysis (LDA) and k-nearest neighbors (KNN) for hyperglycemia detection.
- [52,54] Hidden Markov model (HMM): simulation results show that the proposed model is capable of detecting anomalies (i.e., no false positives) from CGM readings based on historical data, in the presence of reasonable changes in the patient's daily routine; investigated the applicability of the HMM to anomaly detection from changes in the patient's daily lifestyle.
- [55] Naïve Bayes classifier: matched the physicians' classifications 85% of the time that the physicians were internally consistent and in agreement with each other; investigated characterizing blood glucose variability using new metrics with CGM data.
- [56] SVR models: when applied to 262 different CGM plots as a screen for excessive GV, accuracy was 90.1%, sensitivity 97.0%, and specificity 74.1%; investigated automatic glycemic variability detection, comparing naïve Bayes (NB), multilayer perceptron (MP), and support vector machine (SVM) models using CGM data.
- [58] Artificial neural network: average accuracy of 90%, average sensitivity of 72.23%, and average specificity of 92%; an artificial neural network integrated with a physiological model was developed for both blood glucose prediction and hypoglycemia classification, and the results were compared with existing models.
- [59] Bayesian regularized neural network: sensitivity of 73% and specificity of 60%; a feed-forward neural network trained with a Bayesian regularization algorithm was investigated and tested.
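The threshold-based anomaly labeling discussed in the review can be sketched as follows; the 70 and 180 mg/dL cut-offs are commonly used clinical thresholds assumed here for illustration, not values taken from any specific reviewed study:

```python
import statistics

# Illustrative sketch: label CGM readings with fixed thresholds
# (70 mg/dL hypoglycemia, 180 mg/dL hyperglycemia -- assumed common
# cut-offs) and compute the coefficient of variation (CV), a simple
# glycemic-variability (GV) metric.

def label_reading(bg_mg_dl, hypo=70, hyper=180):
    if bg_mg_dl < hypo:
        return "hypoglycemia"
    if bg_mg_dl > hyper:
        return "hyperglycemia"
    return "in range"

def coefficient_of_variation(readings):
    """CV (%) = 100 * sample stdev / mean of the CGM trace."""
    return 100.0 * statistics.stdev(readings) / statistics.mean(readings)

cgm = [65, 90, 110, 150, 190, 120]  # hypothetical CGM trace, mg/dL
print([label_reading(bg) for bg in cgm])
print(round(coefficient_of_variation(cgm), 1))
```

A fixed threshold like this is exactly the "theoretical threshold" the review criticizes: it ignores inter- and intrapatient variation, which is why per-patient, time-varying cut-offs are suggested as future work.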
Supplying Personal Protective Equipment to Intensive Care Units during the COVID-19 Outbreak in Colombia. A Simheuristic Approach Based on the Location-Routing Problem: The coronavirus disease 2019, known as COVID-19, has generated an imminent necessity for personal protective equipment (PPE), which became essential for all populations, and even more so for health centers, clinics, hospitals, and intensive care units (ICUs). Considering this fact, one of the main issues for city governments is the distribution of PPE to ICUs to ensure the protection of medical personnel and, therefore, the sustainability of the health system. Aware of this challenge, in this paper we propose a simheuristic approach for supplying personal protective equipment to intensive care units, based on the location-routing problem (LRP). The objective is to provide decision makers with a decision support tool that considers uncertain demands, distribution cost, and reliability in the solutions. To validate our approach, a case study in Bogotá, Colombia was analyzed. Computational results show the efficiency of using alternative safety stock policies to face demand uncertainty, in terms of both expected stochastic costs and reliabilities. Introduction COVID-19 has generated many challenges for governments and all economic activities. For the health sector and the logistics industries, the challenge is undeniable, considering the variation of demands (i.e., people infected), needs for supplies, and hospital capacities, among others. Therefore, the efficiency of all logistics and supply chain management activities, especially during pandemics and risk events, has a crucial role to play [1]. Considering the growing rate of confirmed cases of COVID-19, in certain countries the occupancy of intensive care units (ICUs) has increased.
In Colombia, according to the 21 June 2021 report, the city with the highest number of confirmed cases was Bogotá, representing 29% of the country's confirmed cases and 97.43% of ICU occupancy [2]. In addition, Bogotá is Colombia's biggest city, with a surface of 685 mi² and a population of around 11.2 million inhabitants [3]. The health system is composed of both public and private institutions, of which 53 have ICUs allocated to serve COVID-19 patients. Currently, the number of habilitated ICU beds is 2261, while the number of COVID-19 confirmed cases is up to 1.29 million [2]. Due to the high exposure of health care workers at ICUs, personal protective equipment (PPE) such as masks, face shields, and gloves is essential for preventing the spread of COVID-19 [4]. Considering this fact, the Bogotá local government is concerned with the distribution of PPE to ICUs. Therefore, in this paper, we propose an approach for the location of potential facilities to distribute PPE to ICUs and the subsequent route planning. This problem can be represented by the location-routing problem (LRP), which is an NP-hard problem [5]. The decision-making of the LRP considers two types of problems, i.e., the facility location problem (a strategic decision) and the vehicle routing problem (a tactical/operational decision). The main contributions of this work are the following: (i) We address an emerging real-life problem of supplying PPE to ICUs. (ii) We consider a city (i.e., Bogotá, Colombia) in a country among those with the most infected people and deaths due to the pandemic. (iii) Real data on facilities and ICUs are considered. (iv) Demand uncertainty due to the daily variation of COVID-19 patients was estimated using historical data on ICU occupancy in Bogotá. (v) A simheuristic approach is proposed to facilitate the reliability analysis during the assessment of alternative high-quality solutions, integrating an iterated local search with a Monte Carlo simulation.
(vi) Different safety stock policies were evaluated for dealing with uncertain demand. (vii) An assessment of solutions considering distribution cost and reliability is provided. The remaining sections of this paper are organized as follows: Section 2 gives the literature review; the problem is specified in Section 3; Section 4 presents the simheuristic approach; in Section 5, computational experiments are conducted; finally, Section 6 presents the concluding remarks, conclusions, and future research. Literature Review The LRP integrates the following decision-making problems: the number of facilities and their location, the allocation of customers to the opened facilities, and the corresponding vehicle routing to serve the customers [7,18]. As stated by Nagy and Salhi [19], the LRP is an NP-hard problem. Considering its practical impact on industries, the LRP has become relevant; thus, different variations and applications are found in the literature. Broadly, variations of the problem consider characteristics of depots, vehicles, or the consideration of uncertainty [20]. Readers are referred to Drexl and Schneider [21], Prodhon and Prins [6], and Nagy and Salhi [19] as key surveys of the LRP. Generally, contributions on LRPs consider deterministic parameters [20]. However, in real case applications, uncertain parameters are an issue regarding data availability. In the literature, uncertainty in the LRP is commonly associated with demands, travel times, and time windows, among others, and is modeled as single or multiple uncertain parameters. In terms of the LRP with uncertain demands, the type of uncertainty is mainly considered as fuzzy or stochastic. Regarding fuzzy uncertainty, Ghaffari-Nasab et al. [22] tackled the LRP with fuzzy demands. The authors proposed a fuzzy chance-constrained model and a hybrid simulated annealing with stochastic simulation. Nadizadeh and Nasab [23] studied the dynamic capacitated location-routing problem with fuzzy demands.
To solve the problem, fuzzy chance-constrained programming is designed with a hybrid heuristic algorithm that contemplates stochastic simulation and local search. Mehrjerdi and Nadizadeh [24] studied the capacitated LRP with fuzzy demands. The authors proposed a fuzzy chance-constrained programming model with a greedy clustering method that includes stochastic simulation. Another study of the LRP with uncertain demand is presented in Fazayeli et al. [25]. The authors considered a multimodal transportation network with time windows and fuzzy demands and developed a genetic algorithm. Nadizadeh and Kafash [26] addressed the fuzzy capacitated LRP with demand uncertainty in pickup and delivery. To model the problem, a fuzzy chance-constrained programming model and a greedy clustering method were developed. In the same way, Zhang et al. [27] tackled the LRP with fuzzy demands. A fuzzy chance-constrained programming approach and a hybrid PSO algorithm, including stochastic simulation and local search based on variable neighborhood search (VNS), were introduced. Concerning stochastic uncertainty, Albareda-Sambola et al. [28] coped with the stochastic location-routing problem. The authors modeled uncertainty as a vector of independent random variables following the Bernoulli distribution. Then, a two-phase heuristic was developed with an iterative local search procedure. Zhang et al. [29] addressed the electric vehicle battery swap station location-routing problem with stochastic demands. A hybrid VNS algorithm was proposed and integrated with binary PSO. Additionally, Rabbani et al. [30] tackled the stochastic multi-period industrial hazardous waste location-routing problem with uncertain demands. The authors formulated a multi-objective stochastic mixed-integer nonlinear programming model, a non-dominated sorting genetic algorithm-II, and a Monte Carlo simulation. Quintero et al. [16] investigated the capacitated LRP with stochastic demands.
They proposed four versions of a VNS metaheuristic hybridized with Monte Carlo simulations. Tordecilla et al. [20] studied the flexible-size LRP considering both stochastic and fuzzy approaches to model uncertain demands. The authors proposed a simheuristic combining an iterated local search (ILS) metaheuristic with a Monte Carlo simulation for the stochastic version. Recently, an LRP with stochastic demand is found in Martínez-Reyes et al. [13]. The authors developed a preliminary version of a simheuristic in which an iterated local search (ILS) algorithm was enhanced through a Monte Carlo simulation to face demand uncertainty in supplying the intensive care units with personal protective equipment. Tirkolaee et al. [31] formulated multi-trip location-routing for medical waste management in the COVID-19 pandemic. Similarly, Valizadeh et al. [32] studied waste collection management during the pandemic. To solve the problem the authors proposed a Benders decomposition method and generated stochastic scenarios of the outbreak for evaluating decision-making. Moreover, they introduced a cooperative game theory method for solving the problem. Pasha et al. [33] proposed the "Factory-in-a-box" concept which has applications to the delivery of products with urgent demands, such as PPE. In addition, the authors proposed a mixed-integer linear programming model and four metaheuristics to solve the associated routing problem. Chen et al. [34] proposed a hybrid metaheuristic to solve the contactless joint distribution of food for closed gated communities. Other approaches for dealing with uncertainty are through robust optimization, for example, Kahfi et al. [35] presented a mathematical modeling approach to tackle the location-arc routing problem with time windows and uncertain demand in a bank case study. 
Even though the LRP with uncertain demand is broadly studied, few works consider real applications, and just one considers the LRP for providing personal protective equipment to intensive care units during the COVID-19 pandemic [13]. Thus, this work is relevant in that it can have a real impact on society. Problem Definition In this section, we introduce a model for supplying PPE to ICUs in Bogotá. The idea is to find a set of locations for warehouses to provide PPE to the different ICUs belonging to the health system in Colombia's capital. Clearly, many sources of uncertainty can appear in this situation (travel times, arc disruptions, among others). However, as we are facing the COVID-19 pandemic, we decided to focus on demand uncertainty, because in the current situation it is critical to guarantee that the amount of PPE delivered to the different ICUs will satisfy the expected demands. Taking the above into consideration, the problem is formally defined as a location-routing problem with stochastic demands (LRP-SD), considering the behavior of patients with COVID-19 at ICUs. The LRP-SD is defined on a directed graph G = (V, A). V denotes the set of nodes, comprising m possible depot locations (W is the subset of potential locations and S is a subset of nodes) and n ICUs (I is the subset of ICUs), while A is the set of arcs a = (i, j), each with a cost C_a. δ⁻(S) and δ⁺(S) denote the sets of arcs entering and leaving S, respectively, and L(S) the set of arcs ending in S. Each depot is associated with a fixed capacity Q_w and an opening cost O_w. The ICUs have a stochastic demand D_i > 0 whose variation is defined by a probability distribution. To deal with the uncertain demand, a safety stock %SS is considered. A fleet K of homogeneous vehicles with capacity h is available for supplying the PPE to the ICUs. A variable cost related to fuel consumption is considered per vehicle, based on the distance traversed in a single route.
The following binary decision variables are used: Y_w represents the opening of depot w; f_ak represents whether vehicle k traverses arc a or not; and, finally, X_iw represents whether ICU i is assigned to depot w or not. A solution of the LRP-SD is a set of open depot locations with allocated ICUs and vehicle routes for supplying the PPE to the ICUs from the assigned depot. The LRP-SD aims at minimizing the total expected cost while ensuring the reliability of the solution. The total expected cost includes: (i) the opening facility cost, (ii) the cost of visiting all ICUs, (iii) the cost of vehicles, and (iv) the corrective cost of a solution when the demand surpasses the vehicle capacity due to the stochastic nature of the ICUs' demand. Additionally, the reliability accounts for route failures that occur because of the demand uncertainty. As part of the constraints, the demand D_i must be attended by a vehicle. The total demand of the ICUs must be respected. Each route starts and ends at an opened depot. Depots and routes must respect the depot and vehicle capacities, respectively. The proposed model for the LRP-SD is based on previous works by Martínez-Reyes et al. [13], Prins et al. [36], and Quintero et al. [16], and is formulated as follows (Sustainability 2021, 13, 7822): Equation (1) is the objective function, consisting of the minimization of the opening, routing, and failure costs. Equation (2) computes the failure costs. Constraints (3) ensure that each arc is traversed once. Constraints (4) guarantee that the expected demands served by each route respect the reduced capacity of each vehicle (i.e., the capacity once the safety stock policy is applied). Constraints (5) ensure the continuity of each route and, combined with Constraints (6), force the vehicle to return to its departure warehouse. Inequalities (7) avoid sub-tours. Constraints (8) ensure that ICUs are assigned to a facility only if there are routes starting at that facility.
Constraints (9) respect the depot capacity. Finally, expressions (10) define our decision variables. The LRP-SD is illustrated in Figure 1. The problem considers potential distribution center locations (circles) and the ICUs (squares). Figure 1 shows an initial solution setting (top-left), the selection of the depots to be opened (top-right), the allocation of ICUs to the opened depots (bottom-left), and the routing from the opened depot to its allocated ICUs (bottom-right), while satisfying the set of constraints. It is well known that location decisions have a huge impact on routing plans. Thus, the problem of supplying PPE to the ICUs must be addressed through the LRP with stochastic demands. However, daily, the situation could derive into a multi-depot vehicle routing problem with stochastic demands. Therefore, our algorithm is flexible to handle both situations with minor adjustments. Solving Approach To deal with the LRP-SD, we have developed a hybrid method belonging to the so-called simheuristics paradigm.
It consists of an ILS algorithm [37] combined with a Monte Carlo simulation (MCS). The optimization part of the procedure is carried out by the ILS framework while the simulation is used to assess the quality of the provided solutions under the stochastic setting of the problem. ILS is a well-known and powerful local search-based approach to cope with deterministic problems. Thus, we need to use a protection strategy (safety stock policy) to face demand uncertainty and, therefore, to obtain better results in the stochastic scenario. Our simheuristic algorithm comprises a multi-start procedure to obtain a set of initial solutions. Next, these solutions are passed through an MCS engine to estimate their quality in stochastic settings. Then, the top-ranked promising solutions are improved within an iterated local search framework. Finally, we carry out two MCS processes to refine the estimations on the quality of the obtained solutions in the stochastic scenario setting. Once the complete algorithm is finished, we report the top 10 obtained solutions (see Algorithm 1). In the following, we will give a detailed explanation of each component of our solving approach. The multi-start procedure is divided into three stages: (i) Opening of depots-depots to be opened are randomly selected until there is enough available capacity to serve the total expected demands. (ii) ICUs' allocation to open depots-a non-allocated ICU is randomly chosen, and it is allocated to its nearest open depot with available capacity to serve the demand of the selected ICU; this process is executed until all ICUs have been allocated. In the case that a subset of ICUs could not be assigned because of capacity constraints, a closed depot is randomly selected, set as open, and the non-allocated ICUs are assigned to it. 
(iii) Route planning: to create the routes from each open depot, the first ICU to be visited is randomly selected, while the next ICUs are added using the nearest-neighbor heuristic until the vehicle capacity is reached; then the vehicle is sent back to the depot and new routes are added following the same logic until all ICUs are visited. The aforementioned stages are executed for a certain number of iterations while keeping the top solutions found so far. An outline of the multi-start procedure can be seen in Algorithm 2. Once the multi-start procedure is executed, two simulation stages are carried out. The first one is a short simulation to obtain a first approximation of the expected stochastic costs and reliabilities of each solution. Next, a more intensive simulation (i.e., with more simulation runs) is performed to refine the previous estimations for the top 10 solutions according to their estimated stochastic costs. It is important to note that stochastic costs are computed as the cost required to serve a given ICU when its demand cannot be fully served, i.e., the cost of a round-trip from the corresponding ICU to the depot to fully reload the vehicle and going back to serve the ICU. Every time that a route cannot serve all its customers, the number of route failures is increased by one. Then, the estimated reliability of each route can be computed as one minus the quotient between the route failures and the total number of simulation runs, as shown in Equation (11). Accordingly, the reliability of a given solution s conformed by R routes is computed as the product of the reliabilities of its routes. After the second simulation process, the top 10 solutions are improved using an ILS framework (see Algorithm 3), in which we first apply a local search on each solution, then apply a perturbation to the solution, and again apply a local search. To do so, two different perturbation operators and four different local search operators were implemented.
The perturbation operators are: (i) Switching of open and closed depots: one depot is randomly selected among the opened ones and is interchanged with a randomly selected closed depot with equal or higher capacity to satisfy the demand of the corresponding ICUs; the ICUs previously allocated to the now-closed depot are reallocated to the recently opened depot, and routes are planned using the routing heuristic explained within the multi-start procedure (see Figure 2, top). (ii) Customer reallocation among different depots: a given percentage of nodes, ranging from 20 to 50%, is randomly selected and exchanged among the opened depots (breaking the ICUs' allocation), and the routes are then created with the aforementioned routing heuristic (see Figure 2, bottom). Regarding the four local search operators, they are: (i) exchange of two-node chains among routes from the same depot, (ii) exchange of two-node chains among routes from different depots, (iii) exchange of two non-consecutive nodes among routes from the same depot, and (iv) exchange of two non-consecutive nodes among routes from different depots. A graphical representation of the four local search operators is presented in Figure 3. Promising solutions obtained so far are then passed through a short simulation process, after which they are sorted by their expected stochastic costs. Next, the top 10 stochastic solutions go through an intensive (long) simulation process to refine the estimates of both expected stochastic costs and reliabilities. It is worth mentioning that safety stocks (%SS) are used when planning routes to reduce the possibility of not serving some ICUs, due to demand uncertainty, when performing the routing tasks. However, beyond a certain (too conservative) value of safety stock, expected costs could increase due to excessive deterministic (fixed) costs.
The idea, then, is to find the most convenient safety stock policy, i.e., the value that provides the best trade-off between expected costs and reliability.
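As a rough illustration of how the simulation stage can estimate reliability under a safety stock policy, consider the sketch below. It is written in Python rather than the paper's VBA, the helper names are ours, and it assumes a simple capacity model: routes are planned against a capacity reduced by %SS, a run fails when sampled demand exceeds the actual capacity (Equation (11)), and solution reliability is taken as the product of its route reliabilities.

```python
import random

def reduced_capacity(Q, ss_pct):
    # Routes are planned against the vehicle capacity reduced by the
    # safety-stock percentage (the reduced capacity of Constraints (4)).
    return Q * (1.0 - ss_pct / 100.0)

def route_reliability(demand_sampler, Q, runs=5000, seed=1):
    """Equation (11): reliability = 1 - route_failures / runs.

    demand_sampler(rng) returns one sampled demand per ICU on the route;
    a run fails when their sum exceeds the actual vehicle capacity Q.
    """
    rng = random.Random(seed)
    failures = sum(1 for _ in range(runs) if sum(demand_sampler(rng)) > Q)
    return 1.0 - failures / runs

def solution_reliability(route_rels):
    # Solution reliability as the product of its route reliabilities
    # (an assumption of this sketch).
    rel = 1.0
    for r in route_rels:
        rel *= r
    return rel
```

Planning routes against reduced_capacity(Q, 9) while simulating failures against the full Q reproduces the trade-off discussed above: a higher %SS raises reliability but also the fixed vehicle costs.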
Computational Settings

The experiments were performed on a personal Windows PC with an Intel® Core™ i7 (6th generation) processor and 8 GB of RAM. The deterministic LRP was modeled and solved using the GAMS modeling language, with CPLEX 12.8.0.0 as the solver and a time limit of 8 h (28,800 s). To do so, we have adapted the LRP-SD formulation proposed by Quintero et al. [16] to represent the deterministic version by eliminating the failure costs, assuming deterministic demands, and using %SS = 0. The proposed simheuristic for the LRP-SD was coded in Visual Basic for Applications (VBA) in MS Excel 2013. Spreadsheet-based solutions are considered due to their interface familiarity, ease of use, flexibility, accessibility, and low cost, which may generate important savings for enterprises, especially in non-developed countries [13,38]. The set of instances was generated considering the locations of ICUs and possible locations of DCs in Bogotá, Colombia. Locations were retrieved from Google Maps with their corresponding latitude and longitude coordinates. The distance for each arc (i, j) was retrieved using the Google Distances API. The expected value of the demands (ED) corresponds to the PPE kits, i.e., mask, gloves, and impermeable coveralls, required by each ICU, assuming that each patient is served by a team consisting of one physician, one nurse, and one therapist. The team visits each patient once per hour, so 24 visits are required during a complete day [13]. For the LRP-SD, the uncertain demand related to the PPE is modeled with a probability distribution according to the August 31, 2020 report of confirmed COVID-19 cases in Bogotá and the occupation of the total ICUs [2]. Distribution fitting is done using IBM SPSS Statistics version 26 to select the statistical distribution that best fits the demand. As a result, a Weibull distribution with parameters a = 13.8 and b = 1.4 was obtained, where a is the scale and b is the shape.
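Demand draws from the fitted distribution can be generated directly with Python's standard library (shown here instead of the paper's VBA for brevity; the helper name is ours, and the parameters are the fitted scale a = 13.8 and shape b = 1.4):

```python
import random

def sample_icu_demands(n_icus, scale=13.8, shape=1.4, seed=0):
    """Draw one stochastic PPE-kit demand per ICU from the fitted
    Weibull distribution (scale a = 13.8, shape b = 1.4)."""
    rng = random.Random(seed)
    # random.weibullvariate(alpha, beta) takes scale first, then shape.
    return [rng.weibullvariate(scale, shape) for _ in range(n_icus)]
```

The sample mean of such draws is close to a·Γ(1 + 1/b) ≈ 12.6 PPE kits per ICU per day.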
Thus, variations of the demands D_i are generated via inverse-transform sampling of the fitted Weibull distribution:

D_i = a(−ln(1 − u_i))^(1/b), with u_i ~ Uniform(0, 1).

The capacities of the DCs guarantee total demand satisfaction. The O_i for each DC corresponds to the real monthly rent cost in Bogotá in USD (USD 4.7/m² per month). The fleet capacity Q = 2218 kg and the fuel consumption (11.4 km/gallon) correspond to the real load information of the Chevrolet NHR [39]. Each PPE kit weighs 1 kg, and the fuel cost is USD 2.1/gallon. All instances are available at https://cutt.ly/obASjVa (accessed on 2 June 2021). The file names are defined as MQS-BOG#, where # identifies the number of the instance.

Results and Analysis

For the set of instances, two scenarios are evaluated, i.e., the deterministic version of the LRP and the LRP-SD. The detailed results for the deterministic case are provided in Table 1. The table shows, for each instance, the number of possible depots, the number of ICUs, the GAMS results, the best deterministic solution reported by our algorithm (OBDS), and the gap between the OBDS and the GAMS results. We have compared the results provided by GAMS against our proposed method. It is worth mentioning that none of the solutions provided by GAMS was proven to be optimal within the defined time limit (i.e., 28,800 s). As can be seen, our approach outperforms GAMS for all instances, with percentage gaps ranging from −24.38% to −0.02%. On average, our gap represents a reduction of 9.86% compared with the GAMS results. In addition, it is important to mention that our algorithm requires short computational times (around 46 s on average per instance in the worst-case scenario). This is a key factor of our method, since the time available for the associated decision-making is scarce (2-3 h in real life). Results for the stochastic case are presented in Tables 2 and 3. The behavior of the expected stochastic costs and reliabilities for each instance when using six different safety stock policies (0, 3, 6, 9, 12, and 15%) is analyzed.
For each safety stock policy, we report the best stochastic solution (OBS), the average of our top 10 stochastic solutions (OTTAS), the expected reliability of the OBS (Avg. Reliability), and the gap between the OTTAS and the OBS. According to these gaps, our algorithm provides consistent results independently of the instance size, i.e., the OTTAS is quite close to the OBS for each instance. Moreover, a graphical example based on a representative instance is presented in Figure 4 to analyze the behavior of the expected stochastic costs and reliabilities under the different safety stock policies. As expected, when no protection is considered (i.e., 0% safety stock), the associated reliability is the lowest among all policies. As the protective policy is increased, the corresponding expected reliability tends to increase as well. Regarding the stochastic costs, they grow for the lowest values of the protective policies (below 9%) because the increase in fixed vehicle costs does not compensate the reduction in corrective costs (those associated with the round-trip from the ICU to the corresponding depot to reload the vehicle, serve the ICU, and resume the originally planned route). Beyond the 9% policy, the costs due to route failures become so small that the total stochastic cost tends to fall as the percentage of safety stock increases. Moreover, we have compared the best deterministic solution for a given instance against the top two solutions obtained in the stochastic scenario. As can be seen in Figure 5, both stochastic solutions, i.e., the ones with protective (safety stock) policies, outperform the deterministic one in the stochastic setting in terms of expected stochastic costs. There is also a slightly lower variability in the results obtained during the 5000 simulation runs.
Results show the robustness of our method for supplying PPE to ICUs, which face uncertain and variable demands, during the COVID-19 outbreak. Moreover, this method can be adapted to other cities or zones dealing with the same problem, or even to other types of applications handling the LRP-SD.
To do so, historical cases of infected people need to be analyzed to define the demand probability distribution, as well as the number of ICUs to be attended, the number of available facilities, the fleet size, the vehicle capacity, the safety stock policies, the distances between the arcs connecting the nodes (i.e., facilities and ICUs), the facility opening cost, the cost of visiting all ICUs, the cost of the vehicles, and the corrective cost of a solution when the demand exceeds the vehicle capacity due to the uncertain nature of the ICUs' demand.

Conclusions

This article has studied the imminent necessity of supplying personal protective equipment to intensive care units during the COVID-19 outbreak in Bogotá, Colombia, modeled as a location-routing problem with stochastic demands. As the number of infected people, and consequently the number of ICU patients, may vary from day to day, we gathered real data from ICUs in the city to estimate the associated stochastic distribution and, consequently, the PPE needs of each ICU. To cope with this complex problem, we proposed a simheuristic approach that considers uncertain demands, distribution costs, and the reliability of the solutions. Our simheuristic algorithm combines a Monte Carlo simulation with an iterated local search. The proposed method was coded in VBA and tested using eighteen different instances generated with data retrieved from Google Maps to characterize the geographical distribution of both warehouses and ICUs in Colombia's biggest city. In terms of results, different safety stock policies are considered as protection against demand uncertainty. Our results were compared with the ones obtained by GAMS for the deterministic version of the problem, showing promising results. In the stochastic setting of the problem, our method provides an estimation of the expected stochastic costs and reliabilities when using different safety stock policies for each instance.
As expected, when no protection is considered, there are many route failures due to demand uncertainty, yielding higher costs and lower reliability. On the other hand, once the value of the safety stock policy reaches the ideal value, costs tend to decrease while reliability increases. These results on a realistic application can be used to ensure the sustainability of the city's health system in terms of guaranteeing the supply of a critical product to protect the front-line physicians, nurses, and therapists who are struggling with the current pandemic. Regarding future directions of this work, there is an opportunity to evaluate sustainability criteria for emergency decision-making. In addition, since our model considered the Weibull distribution for representing the demand's behavior, other distributions could be adopted to evaluate the robustness of the proposed method. Considering that uncertainty may affect decision-making in humanitarian crises, other methods (e.g., robust optimization) and sources of uncertainty (e.g., uncertain travel times, uncertain time windows, imperative pickup and delivery loads) could be implemented. Finally, equity and deprivation costs are also criteria to be considered for attending the total demand in a crisis context.
Cognitive Congestion Control for Data Portals with Variable Link Capacity

Introduction

A data portal provides information from diverse sources in a unified way. It enables instant, reliable, and secure exchange of information over the web; in particular, a data portal focuses on providing centralized, robust access to specific data and supported manipulations. The concept of a portal is to offer a single web page that aggregates content from various servers. There are different types of data portals, for instance, academic portals, including those for scientific data; commercial portals; and enterprise portals. A data portal can be considered as an application-based network that consists of databases, different servers, web-based application software, communication links, and computing clusters. With regard to a data portal, congestion can happen when a link or node carries so much data that the portal's quality of service degrades. As an early effort to control network congestion, Jacobson's algorithm [1] was embedded into the Transmission Control Protocol (TCP) [2]. Although this protocol controls end-to-end congestion conveniently, it also deteriorates network performance due to unstable throughput, increased queuing delay, and restricted fairness. Furthermore, longer delays lead to weak link utilization, significant packet losses, and poor adaptation to changing link loads. Conventional congestion control methods often cannot achieve both fairness and appropriate bandwidth utilization due to packet loss. To deal with this problem, various TCP parameters have been utilized to estimate the available link capacity and the Round-Trip Time (RTT) in order to predict congestion [3-7].
When the delay-bandwidth product grows, TCP-based networks exhibit oscillatory behavior under some congestion-control algorithms. Reference [8] explains that when the delay or capacity increases, Random Early Marking (REM) [9], Random Early Discard (RED) [10], the proportional-integral controller [11], and the virtual queue [12] show oscillatory behavior. Whereas the bandwidth-delay product of a flow over high-bandwidth links may amount to many packets, TCP can waste many RTTs ramping up to full utilization following a congestion burst. The main obstacle in TCP is its reliance on scarce events that provide poor-resolution information. To improve adaptation to network conditions, achieve high utilization, attain stable throughput, and decrease standing queues in the network, several approaches have been proposed in the literature [13-20]. Explicit Congestion Control (XCC), one of the well-known congestion control approaches, is able to inform sources about the network status and control the bit rate in the network. XCC uses a header to carry the throughput information and Round-Trip Time (RTT) of the flow to which the packet belongs. While the throughput is used to adjust the bandwidth distribution, the RTT enables sources to control the speed of adaptation to network conditions. In XCC, routers play an important role in informing sources about the network status and in helping sources control their bit rates through accurate feedback. In fact, to determine the feedback for the sources, a router should calculate the current spare bandwidth of its outgoing links and compute the link capacity.
Some congestion control methods need explicit and precise feedback. As congestion is not a binary variable, congestion signaling should convey the degree of congestion. By means of precise congestion signaling, it is possible to determine when the network tells the sender about the congestion state and how to react to it. In fact, senders can decrease their sending windows quickly when the bottleneck is extremely congested. However, these methods, which are based on a control loop with feedback delay, become unstable for long feedback delays. To deal with this effect, the system should slow down as the feedback delay increases. In other words, when the delay increases, the sources should change their transmission rates more slowly [8,21-23]. Regarding another crucial issue related to network congestion, the robustness of the method should be independent of unknown and quickly changing parameters (e.g., the number of flows). Also, for methods such as XCC, convenient bandwidth sharing is difficult when the information inquiries and link capacities are variable. In other words, the unpredictability of the network creates a problem for XCC. This study focuses on a cognitive method to control congestion; it performs well even when the link capacity and information inquiries are unknown or variable.

Cognitive Concept

A cognitive system is a complex system that has the ability for emergent behavior [24]. It processes data over the course of time by performing the following steps: 1) perceive defined situations; 2) learn from defined situations and adapt to their statistical variations; 3) build a predictive model of prescribed properties; and 4) control the situations, performing all of these procedures in real time for the purpose of executing prescribed tasks.
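The four-step cognitive cycle described above can be skeletonized as follows. The callables are placeholders for the components discussed later in the text (situation sensing, model adaptation, prediction, and control); this is an organizational sketch, not an implementation of any of them.

```python
def cognitive_cycle(perceive, learn, predict, control, periods=10):
    """One pass per sampling period through the four cognitive steps.

    perceive() samples the current situation; learn() adapts the model
    to its statistical variations; predict() produces the predictive
    model's output; control() acts on the situation in real time.
    """
    model = None
    history = []
    for _ in range(periods):
        situation = perceive()            # step 1: perceive the situation
        model = learn(situation, model)   # step 2: learn and adapt
        prediction = predict(model)       # step 3: predictive model
        control(prediction)               # step 4: control the situation
        history.append(prediction)
    return history
```

In the congestion-control setting below, perceive() fills the input matrix, learn() fits the Bayesian network, predict() infers the unobserved parameters, and control() tunes the available bandwidth.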
To optimally adapt the network parameters and to provide efficient communication services using a cognitive approach, learning the relationships between the network parameters is crucial. In the learning phase, it is possible to utilize the Bayesian Network (BN) model. A BN is a probabilistic graphical model that represents conditional independence relations between random variables by means of a Directed Acyclic Graph (DAG) [25]. The DAG is constructed with a set of vertices and directed edges, each edge connecting one vertex to another, such that there is no way to start at vertex i and follow a sequence of edges that eventually loops back to i [26-28]. The BN model can be used to represent the dependence relationships among the network parameters and to adjust the cognitive parameters to improve the network's efficiency. It is utilized here to deal with congestion, one of the challenging tasks in TCP, since there is no efficient mechanism to determine when congestion occurs in the network.

Variable Link Capacity

As mentioned earlier, to efficiently control network congestion while preserving stable throughput and low queuing delay, the critical network parameters can be defined and adjusted based on pre-defined criteria and the statistical variations of the network. The available bandwidth F_available, which is distributed among the different flows during a certain time period T, is defined as follows:

F_available = γ₁ T (C − x(t)) − γ₂ Q(t),   (1)

where the coefficients γ₁ and γ₂ are constant, x(t) is the bandwidth utilized during the last period T, C is the estimated capacity of the data transmission link, and Q(t) is the minimum queue length observed during the last T seconds. The parameter T can be written as T = d₀ + Q(t)/C_real, in which d₀ is the system base delay, or the delay excluding queuing delay, and C_real is the actual capacity of the data transmission link.
The capacity C is a function of various factors, such as the data rate of every link, the number of active links, failed transmissions, the number of collisions, and handshake procedures. The estimation error of the link capacity is defined as ε = C − C_real. The given error should be compensated up to a certain limit. To define this limit, the parameter C in (1) is replaced by C_real + ε. When the capacity of the data transmission link is fully utilized, i.e., x(t) = C_real, it is expected that the available bandwidth is zero or close to zero despite the error. Therefore, the limit of the estimation error is defined as follows:

ε ≤ γ₂ Q(t) / (γ₁ T).   (2)

The value of F_available depends on the available bandwidth and the standing queue in the router. In fact, if the link capacity changes, F_available can be adjusted accordingly. The proposed method computes F_available with no knowledge of the exact channel capacity, and it can adjust F_available according to bandwidth variations.

Adjustment of Available Bandwidth

Typically, a router controls each of its output queues; therefore, the available bandwidth is computed for each of them. With the proposed method, in order to compute the available bandwidth, the router does not need to be configured with a certain medium capacity. In addition, the proposed method can adapt to bandwidth conditions that change over time. First, the effect of the queue speed on the available bandwidth F_available is considered. The queue speed can be defined as the difference between the capacity of the transmission channel and the bandwidth utilized during the time period T, so Equation (3) is written as follows:

F_available = γ₁ T (ΔQ/T) − α Q(t),   (3)

where ΔQ/T = C − x(t) is the queue speed. Due to queue variations, the queue-length term of F_available should be adjustable, so the parameter α is defined. It is possible to conveniently tune F_available using the parameter α during extreme queue variations. The parameter α is adjusted by the cognitive algorithm. In fact, the parameter α controls the target queue length at which the network stabilizes.
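A minimal sketch of the feedback computation, assuming a feedback of the form F_available = γ₁·T·(C − x(t)) − α·Q(t), consistent with the definitions in the text. The coefficient values below are illustrative placeholders, not values from this paper.

```python
def available_bandwidth(C, x, Q, T, gamma1=0.4, alpha=0.226):
    """Available bandwidth to distribute over the next period T:
    a spare-bandwidth term minus a queue-drain term scaled by alpha
    (alpha being the knob later tuned by the cognitive algorithm)."""
    queue_speed = C - x          # capacity minus utilized bandwidth
    return gamma1 * T * queue_speed - alpha * Q
```

When the link is fully utilized (x = C) and the queue is empty, the feedback is zero; a standing queue drives it negative so that sources slow down, while spare capacity drives it positive.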
Cognitive Congestion Control

The schematic of the cognitive congestion control is illustrated in Figure 1. Each network parameter is periodically sampled and collected in the input matrix. The cognitive process is decomposed into four steps: 1) observation, 2) learning, 3) decision, and 4) action. During the observation step, the required information is collected from the network. Then, the cognitive algorithm learns the relations between the parameters and their conditional independences, as well as the effect of the controllable parameters on the observable parameters. During the decision step, the values to be assigned to the controllable parameters are calculated to meet the pre-defined requirements. In other words, the values of the network parameters of interest are predicted based on the observations. This prediction is done by inference, using the Bayesian network. In the action step, the controllable parameters are tuned, and the appropriate actions are taken in the network.

Observation

During the observation step, seven network parameters are examined. These parameters are: 1) the Round-Trip Time (RTT), that is, the time for a signal to be sent plus the time for an acknowledgment of that signal to be received; 2) the queue length; 3) the queue speed; 4) the throughput, that is, the total amount of data successfully delivered over a link; 5) the contention window size; 6) the congestion window size, that is, the total amount of unacknowledged data; and 7) the congestion window status. The congestion window status is considered 0 if the congestion window size at time t becomes 25% smaller than the congestion window size at time t − 1; otherwise, the status is 1. A status equal to zero is of interest, as it indicates that congestion is decreasing.
Here, the observed network parameters are considered as random variables (x₁, ..., x₇). It is assumed that the given variables have unknown dependence relations. Independent samples of every variable are gathered into the input matrix (of size n × 7). The construction of the input matrix is performed during the observation step.

Learning

The learning step is a key step in the cognitive algorithm. During this step, the BN is built to provide a structure representing the conditional independence relations between the parameters of interest in a DAG. To form the BN and demonstrate the relations in a DAG, learning from the qualitative relations between the variables and their conditional independences is considered. A node in the DAG represents a random variable, while an arrow that joins two nodes represents a direct probabilistic relation between the two corresponding variables. For x_i, if there is a direct arrow from node j to node i, node j is a parent of node i (π_i denotes the set of parents of node i). A complete DAG, with all nodes connected to each other directly, can represent all possible probabilistic relations among the nodes. During the learning phase, based on the input matrix (Im), the dependencies among the variables, represented as nodes in a DAG, are exploited. To build the DAG representing the probabilistic relations between the variables, the selection of DAGs and the selection of parameters are utilized.

Selection of DAGs

For the selection of DAGs, the scoring approach and the constraint approach can be utilized [29,30]. In the constraint approach, a set of conditional independence statements is defined by a priori knowledge. Then, the given set of statements is utilized to build the DAG, following the rules of d-separation [29].
The scoring approach generally is utilized when a set of conditional independence statements is not available [31,32]. The scoring approach is capable of inferring a sub-optimal DAG from a sufficiently large data set (i.e., Im). The scoring approach consists of two phases: 1) searching, to select the DAGs to be scored within the set of all possible DAGs, and 2) scoring each DAG according to how accurately it describes the probabilistic relations between the variables based on the Im.

1) Searching process

The searching process to select the DAGs (i.e., the first phase of the scoring approach) is required because it is not computationally efficient to score all possible DAGs, since the scoring procedure generally takes a great deal of time. For instance, the number of possible DAGs over a set of m variables, among which the DAG with the highest score must be found, grows according to the following recursion [33]:

f(m) = Σ_{i=1}^{m} (−1)^(i+1) C(m, i) 2^(i(m−i)) f(m − i), with f(0) = 1,

where C(m, i) is the binomial coefficient. Most searching processes in scoring approaches are based on heuristics that find local maxima fairly well. However, the heuristics generally do not guarantee that global maxima are obtained [30]. There are two classical searching procedures in the literature [34]: 1) Hill Climbing and 2) Markov Chain Monte Carlo. Hill Climbing is an iterative algorithm by which an arbitrary solution is initially defined for a problem. Then, the hill climbing algorithm searches for a better solution by incrementally changing a single element of the solution. If the change generates a better solution, another incremental change is made to the new solution; this is repeated until no further improvement can be reached [34,35]. Markov Chain Monte Carlo (MCMC) is a category of algorithms for sampling from probability distributions based on constructing a Markov chain that has the distribution of interest as its equilibrium distribution. After a specific procedure, the state of the chain is utilized as a sample of the distribution of interest [36,37]. The searching process results in a set of candidate DAGs.
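The super-exponential growth in the number of candidate DAGs can be checked numerically. The sketch below implements Robinson's recursion for counting labelled DAGs, which is our reading of the formula referenced above.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def num_dags(m):
    """Number of labelled DAGs on m nodes (Robinson's recursion),
    which makes exhaustive scoring intractable beyond a few variables."""
    if m == 0:
        return 1
    return sum((-1) ** (i + 1) * comb(m, i) * 2 ** (i * (m - i)) * num_dags(m - i)
               for i in range(1, m + 1))
```

For the seven observed parameters of this paper, num_dags(7) already exceeds 1.1 × 10⁹, hence the need for heuristic searching.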
2) Scoring

The Bayesian information criterion, which is based on the maximum likelihood criterion, is selected for scoring. The Bayesian information criterion is expressed as follows [32]:

BIC(A : Im) = log P(Im | θ̂_A, A) − (d_A / 2) log n

where Im is the dataset (i.e., the input matrix), A is the DAG to be scored, θ̂_A is the maximum likelihood estimate of the parameters of A, d_A is the number of free parameters of A, and n is the number of observations for every variable in the dataset. When all random variables are multinomial, the Bayesian information criterion is formulated as follows [30-33,38]:

BIC(A : Im) = Σ_i Σ_j Σ_k N_ijk log(N_ijk / N_ij) − (log n / 2) Σ_i C_i (O_i − 1)     (6)

where O_i is the number of possible outcomes of every variable x_i; C_i is the number of different combinations of outcomes for the parents of x_i; N_ijk is the number of cases in the input matrix in which the variable x_i took its kth value (k = 1, 2, ..., O_i) with its parents instantiated in their jth configuration (j = 1, 2, ..., C_i); and N_ij is the total number of observations of variable x_i in the input matrix (Im) with parent configuration j (i.e., N_ij = Σ_k N_ijk).

Therefore, based on Equation (6), the scoring approach is computationally tractable. More details about the Bayesian information criterion are presented in [38]. The DAG with the highest score can now be selected.

Selection of Parameters

During the selection of parameters, the best set of the controllable parameters is chosen and estimated, based on the observed parameters and their independence relations.
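Under the multinomial formulation, Equation (6) can be computed directly from the counts N_ijk and N_ij. The code below is a hedged illustration of this computation; the data layout (a list of dicts) and the `parents` mapping are assumptions, not from the paper.

```python
import math
from collections import Counter

def bic_score(data, parents):
    """Multinomial BIC of a DAG: sum_ijk N_ijk*log(N_ijk/N_ij)
    minus the penalty (log n / 2) * sum_i C_i * (O_i - 1).

    data    -- list of observations, each a dict {variable: value}
    parents -- dict {variable: tuple of parent variables}; every
               parent must also appear as a key of `parents`.
    """
    n = len(data)
    card = {v: len({row[v] for row in data}) for v in parents}  # O_i per variable
    score = 0.0
    for var, pars in parents.items():
        n_ijk = Counter((tuple(row[p] for p in pars), row[var]) for row in data)
        n_ij = Counter(tuple(row[p] for p in pars) for row in data)
        for (j, _k), cnt in n_ijk.items():
            score += cnt * math.log(cnt / n_ij[j])   # log-likelihood term
        c_i = 1
        for p in pars:                               # C_i: parent configurations
            c_i *= card[p]
        score -= (math.log(n) / 2) * c_i * (card[var] - 1)   # complexity penalty
    return score
```

On a toy dataset where b deterministically follows a, the DAG containing the edge a → b scores higher than the empty DAG, so a search such as hill climbing would select it.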
Based on the Bayesian network definition, every variable is directly determined by its parents; thus, the estimation of the parameters for every variable x_i is performed according to the set of its parents in the DAG selected during structure learning. The Maximum Likelihood Estimation (MLE) technique is used to build a predictive model and to estimate the appropriate set of parameters describing the conditional dependencies among the variables. The MLE technique is expressed as follows:

θ̂_ijk = argmax_θ P(Im | θ, A)     (7)

where θ_ijk = P(x_i = k | π_i = j), the probability that the parents of node i are in the configuration of type j and the variable x_i takes its kth value, i.e., the probability of x_i given the evidence j (the parents of node i in the configuration of type j). Therefore, Equation (7) can be re-written as follows [30]:

θ̂_ijk = N_ijk / N_ij     (8)

Decision

The Bayesian network is completed after the selection of DAGs and parameters in the learning step. The completed BN provides the probabilistic relations among the parameters from the selected DAG. In this step, the future values of the queue length and queue speed, that is, the unobserved parameters, are predicted based on the selected observed parameters. The estimated value of an unobserved parameter x_i is defined as the expectation of the given parameter, using the probability function represented in Equation (8). Therefore, the expected value x̂_i^t of x_i at time t is calculated as follows:

x̂_i^t = E[x_i^t | evidence] = Σ_k k · P(x_i^t = k | evidence)     (9)

where x_i^t is the actual value of the unobserved parameter x_i at time t, and the evidence is the set of selected observed parameters.

Action

To calculate α in Equation (3), the predicted values of the unobserved parameters (i.e., queue length and queue speed) are considered. In fact, the fluctuations of the predicted values of the queue length and queue speed are utilized to set the parameter α; the parameter α then adjusts the available bandwidth, F_available.
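A minimal sketch of Equations (8) and (9): each conditional probability table is estimated by relative frequencies of the counts, and an unobserved numeric parameter is predicted as its conditional expectation given the observed evidence. The data layout and function names are illustrative assumptions.

```python
from collections import Counter

def mle_cpt(data, var, parents):
    """Estimate P(x_i = k | parents = j) as N_ijk / N_ij (Equation (8)).
    Returns {parent configuration j: {value k: probability}}."""
    n_ij = Counter(tuple(row[p] for p in parents) for row in data)
    n_ijk = Counter((tuple(row[p] for p in parents), row[var]) for row in data)
    cpt = {}
    for (j, k), cnt in n_ijk.items():
        cpt.setdefault(j, {})[k] = cnt / n_ij[j]
    return cpt

def expected_value(cpt, evidence):
    """Predict an unobserved numeric variable as its conditional
    expectation given the evidence (Equation (9))."""
    return sum(k * p for k, p in cpt[evidence].items())
```

In the paper's setting, the evidence would be the selected observed parameters, and `expected_value` would yield the predicted queue length or queue speed at time t.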
As mentioned earlier, α represents the target queue length at which the network stabilizes. When no queue has been constructed (underutilization), α determines how much bandwidth is distributed in every control interval. During full utilization or overutilization, α controls how much queuing delay is introduced. During periods of underutilization, the bandwidth is maximally distributed; if a link is saturated, the queuing delay is significantly decreased. Generally, α is high during underutilization and low during full utilization.

Result

The base scenario used in the simulation includes a dumbbell network topology, in which a number of nodes are connected to a single router. The router is connected to another router over a serial link, and a group of nodes is connected to that router, creating the dumbbell topology.

The network traffic consists of flows between the client and server nodes in both directions. It is assumed that the flows traversing the network from server nodes to client nodes are downloads, while flows in the opposite direction are uploads.

The simulations were performed using the ns-3 network simulator [39]. During the simulation, the parameters of interest were sampled for each flow at certain sampling periods (i.e., every 30 ms and 60 ms). The queue length was set to 50 packets when bottlenecks occurred. The size of the data packets was 1200 bytes, and the size of the acknowledgment packets was 50 bytes.

Variable Capacity

In this part of the procedure, the response of the proposed method to unexpected variations of link capacity was examined. During this simulation, the data rate changed. At first, the simulation was performed with a data rate of 56 Mbps. The variable capacity was then simulated by changing the data rate, as shown in Figure 2. The data rate changed every 20 seconds: 56 Mbps at t = 0, 21 Mbps at t = 20, 5 Mbps at t = 40, 21 Mbps at t = 60, and 56 Mbps at t = 80.
Due to the sudden bandwidth reduction, there are queue spikes in Figure 2. When the queue length significantly increases, the parameter α increases. Thus, based on Equation (3), the difference between the queue length and α does not significantly change. Therefore, these spikes were compensated by the method, and the available bandwidth was conveniently utilized.

Dynamics of Parameter α

To demonstrate the responsiveness of the proposed method to arriving and departing flows, a 40-second simulation was performed, with the RTT set to 60 ms. The average queue length, as well as the parameter α over time, is illustrated in Figure 3. It was observed that the proposed method responded conveniently to the queue fluctuation. In fact, the dynamics of parameter α were assessed by observing the queue fluctuation. When the queue shrinks, that is a sign of underutilization, and α is increased. While α increases, more bandwidth is distributed among servers to quickly restore full utilization.

To match the variation of the queue, parameter α was decreased while the queue length increased. Generally, there was low latency caused by queue buildup. To prevent high queue spikes, the maximum value of α should be less than the maximum value of the queue length (i.e., the channel capacity). Parameter α can thus tune the variation of bandwidth as it affects the queue.

Different Data Rates

In this part of the procedure, the response of the method was assessed while different data rates were used in the network. It is considered that one part of the network has a data rate of 10 Mbps while the rest of the network has a data rate of 56 Mbps. In other words, new flows enter the network with a data rate of 10 Mbps while other flows with a data rate of 56 Mbps leave the network, or vice versa. This causes oscillatory behavior of the bandwidth. The proposed method provides a stable queue under the given situation (Figure 4).
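The inverse coupling described above (a shrinking queue raises α to distribute more bandwidth; a growing queue lowers α to limit delay, with α capped below the queue capacity) can be sketched as a simple update rule. This is purely illustrative: the gain, the mid-point target and all names are assumptions, not the paper's Equation (3).

```python
def update_alpha(queue_len, queue_capacity, alpha, alpha_min=1.0, gain=0.1):
    """Illustrative inverse coupling between queue occupancy and alpha:
    a queue below the (assumed) target raises alpha so more bandwidth is
    distributed; a queue above it lowers alpha to limit queuing delay.
    Alpha is clamped below the queue capacity, as the text requires."""
    alpha_max = queue_capacity - 1           # keep alpha below channel capacity
    target = queue_capacity / 2              # assumed mid-point operating target
    alpha += gain * (target - queue_len)     # queue below target -> raise alpha
    return max(alpha_min, min(alpha_max, alpha))
```

Running this rule over a queue trace produces the seesaw pattern of Figure 3: α rises while the queue drains and falls while it builds.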
Efficiency

Now, the efficiency of the method is evaluated in terms of network utilization. It is demonstrated that an increase in the bandwidth-delay product of the network negatively affects the efficiency of TCP; however, it has a trivial influence on the efficiency of the proposed method.

To simulate a traffic pattern, two kinds of flows are considered: 1) flows with exponentially distributed duration, with a certain minimum value (1 s) and mean value (10 s); and 2) flows that are active throughout the simulation.

Each wired path between the end-system and the router was configured with a specific latency; the latencies of the wired paths were between 20 ms and 120 ms. The growth of the bandwidth-delay product of the network was simulated by increasing the path delay.

The result of the simulation is shown in Figure 5. As shown in the figure, how the efficiency scales changes as the link capacity increases. TCP was not able to scale with the bandwidth-delay product of the network because of its fixed dynamics. Given the traffic pattern and the number of flows, TCP was not able to fully utilize the network resources beyond a specific bandwidth threshold. Overall, the proposed method was able to maintain convenient utilization at all times.

Accuracy of the Learning Process

To predict the status of congestion in the future (i.e., at t + k) at time t, the current values of all parameters of interest were considered. It is thus possible to predict when congestion will happen and to act before it affects the network. To analyze the accuracy of the learning process for predicting congestion at time t, the value of Status(t + k), that is, the presence or absence of congestion at time t + k, with k ≥ 1, was considered.
The performance of the learning process is assessed as a function of the size (i.e., the number of samples) of the training set utilized to learn the relations between the desired parameters. The parameters are stored during training, and the stored values become the input for the inference phase.

In Figures 6 and 7, the predicted value of the congestion status at time t + k is compared with the actual value. When congestion is present, the variable Status is zero; otherwise it is one. The resulting measure can be read as the frequency of errors in the prediction process. The two cases are assessed separately in Figures 6 and 7.

In Figure 6, under different training set sizes for RTT and queue length, the average prediction error differs. As shown in the figure, if more information about RTT and queue length is available, the average prediction error decreases. Therefore, the number of collected samples from each parameter of interest should be more than 300.

In Figure 7, the average prediction error changes based on the training set size for different values of the sampling period. If enough data (i.e., input samples) are available, the learning phase is conveniently performed, and the prediction is more accurate.

Conclusions

In this paper, a cognitive method is proposed to improve bandwidth sharing and deal with congestion in a data portal. For example, when the data portal hosts climate change data, congestion control is especially important because scientific climate data is voluminous; there is high traffic to and from the data portal by the scientific community, research groups, and general readers. In fact, this study was performed to improve congestion control in such data portals as the climate change portal.
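The trend reported in Figure 6, prediction error falling as the training set grows, can be reproduced with a toy experiment on synthetic queue data. Everything here (the congestion threshold of 30 packets, the threshold-fitting predictor, the uniform sample generator) is an assumption for illustration; only the 0/1 Status coding follows the text.

```python
import random

def make_sample(rng):
    """Synthetic (queue_length, status) pair; Status is 0 when congestion
    is present (queue above an assumed threshold of 30) and 1 otherwise,
    matching the coding used in the text."""
    q = rng.randint(0, 50)
    return q, 0 if q > 30 else 1

def fit_threshold(train):
    """Learn the queue-length cutoff minimizing training error."""
    best_t, best_err = 0, float("inf")
    for t in range(51):
        err = sum(abs(s - (0 if q > t else 1)) for q, s in train)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def avg_error(n_train, seed=0):
    """Average prediction error on a fixed held-out set, as a function
    of the training set size."""
    rng = random.Random(seed)
    train = [make_sample(rng) for _ in range(n_train)]
    held_out = random.Random(12345)                  # fixed test stream
    test = [make_sample(held_out) for _ in range(500)]
    t = fit_threshold(train)
    return sum(abs(s - (0 if q > t else 1)) for q, s in test) / len(test)
```

With a few hundred training samples the learned cutoff essentially matches the generator's, mirroring the figure's observation that more than 300 samples per parameter of interest are desirable.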
Here, the data portal is considered as an application-based network. The proposed method was able to adjust the available bandwidth in the network when the link capacity and information inquiries were unknown or variable. In fact, it was possible to conveniently adjust the available bandwidth, using the cognitive method, during extreme queue variations.

The variation of link capacity has an influence on the queue. In fact, α dynamically changes over time and helps the queue behave more smoothly, while guaranteeing that its set point is based on pre-defined operating conditions.

The learning phase is a key step in the cognitive method. During this step, the information collected in the observation phase is used by the Bayesian network model to build a probabilistic structure to predict variations of the queue length.

The efficiency of the proposed method was tested with a network simulator. Based on the results, the available bandwidth during extreme queue variations can be conveniently adjusted by the proposed method. Unlike TCP, whose efficiency is negatively affected by the growth of the bandwidth-delay product of the network, the proposed method is able to maintain convenient utilization at all times.

Figure 2. The fluctuation of average queue length and parameter α throughout time during variable capacity.
Figure 4. Fluctuation of average queue length and parameter α throughout time, while different data rates are used in the network.
Figure 5. Efficiency versus capacity. The growth of the bandwidth-delay product of the network leads to a decrease in TCP utilization; in contrast, the proposed method is able to maintain utilization.
Protocol for Systematic Review on Inequity in Child Health Care Service Utilization in Low and Middle-income Countries

Background: One challenge to achieving the Millennium Development Goals was inequitable access to quality health services. In order to achieve the Sustainable Development Goals, interventions need to reach underserved populations; it appears that the maternal, newborn and child health goals (MDG 4 and 5) will not be universally achieved. There was early recognition that it could be possible to achieve the health goals without decreasing health inequity, because most of the gains might go to the better-off rather than to the very poor.

Methodology/Design: The current protocol adopts a strategy informed by the guidelines of The Cochrane Handbook for Systematic Reviews. Our systematic review will identify studies in English (provided inclusion of an English abstract) from 2010 onwards until 2020, by searching MEDLINE (PubMed interface), EMBASE (OVID interface), Cochrane Central (OVID interface) and the gray literature. Study selection criteria include research setting, study design, reported outcomes and determinants of interest. Our primary outcome is inequity in utilization of child health care services, and the determinants of concern are: 1) socioeconomic status (for example, income, education); 2) geographic determinants (for example, distance to a health center, rural versus urban residence); and 3) demographic determinants (for example, age, ethnicity, religion, and marital status). Screening, data abstraction, and scientific quality assessment will be conducted independently by two reviewers using standardized forms. Where feasible, study results will be combined through meta-analyses to obtain a pooled measure of association between utilization of child health care services and key determinants. Results will be stratified by income levels (World Bank classification), geographical residence and demographic determinants.
Discussion: Our review will inform policy-making with the aim of decreasing inequities in utilization of child health care services. This research will provide evidence on unmet needs for child health care services in LMICs, knowledge gaps and recommendations to health policy planners. Our research will help promote universal coverage of quality child health care services as an integral part of the continuum of maternal and child health care. This protocol will be registered with the Prospero database.

Background

In 2012 it was estimated that 6.6 million children under the age of five died worldwide [1], eighty-two per cent of them in Sub-Saharan Africa and Southern Asia. Thirty-three per cent of these deaths were due to pneumonia (17%), diarrhoea (9%) and malaria (7%) [2]. Progress in both intervention and impact indicators of child health has been inconsistent over several decades, with several lines of evidence showing both between- and within-country disparities in several of these child health indicators [1,2]. These disparities are especially significant in Low and Middle-Income Countries (LMICs), with 99% of global child and neonatal deaths occurring in these countries [3].
Evidence also shows that poor health outcomes at national levels are highly reflective of disparities at individual household levels; such a reflection was demonstrated in the work of Chao and colleagues [4,5], exploring relationships between household wealth quintiles and Under-Five Mortality Rates (U5MR).

Access to important health care services, as well as the overall health outcomes of women and children, in LMICs often depends on factors such as the gender of a child, place of residence, level of education, income and other socio-economic parameters [1,5]. These factors, collectively referred to as 'dimensions of inequality', usually have an interwoven effect. A low income level, for example, may directly prevent a woman from utilizing health services due to the associated medical, non-medical and opportunity costs [3,5]; equally, a low level of educational attainment may imply limited economic opportunities, and hence an impaired ability to earn sufficient income. Low socioeconomic status may also result in women and children living in areas with limited health infrastructure [6].

To effectively assess coverage and progress in child health, changes in inequity must be monitored over time [7]; several indexes and frameworks have been developed for this purpose [7]. These frameworks often comprise important child health services and outcome indicators. Examples include the composite coverage index [8,9], the co-coverage index, and the Countdown to 2015 indicator framework [10]. Despite the substantial body of literature that has explored equity dimensions for the various child health indicators [11,12], there remains a significant gap in the literature, which concerns the paucity of systematic reviews and meta-analyses synthesizing evidence from the various existing cross-sectional studies. Furthermore, the lack of household survey data for certain interventions implies there are no equity studies for those interventions.
For example, no study has investigated inequality in the availability of ARV prophylaxis for pregnant women with HIV/AIDS to prevent HIV transmission to the newborn; this may be explained by the fact that the information is not available in any database that allows an equity breakdown for this indicator [2]. Likewise, there is a paucity of studies on the effect of inequality at the individual or household level: existing studies only examined inequity in child health services at national levels, for example studies examining variation in maternal and child mortality based on national Gross National Income (GNI) or Gross Domestic Product (GDP) levels [12], or based on the Human Development Index (HDI) [2,12]; no study has examined child health care service utilization based on individual household wealth index. This lack of household-level data disaggregation is also likely due to the fact that child mortality is currently estimated through modelling, at national levels only, and equity breakdowns are not available [2]. The objective of our study is to fill the gap concerning the lack of systematic review and meta-analysis, and to provide a pooled evidence base on the association between a selection of child health indicators and three important equity stratifiers, with a focus on studies that utilized nationally representative data.

Objectives and research question

Our objectives are to: 1. Systematically identify and assess studies and reports on the utilization of child health services. 2. Synthesize evidence on the determinants of child health services utilization and inequity in the use of child health services. 3. Provide evidence to policy planners in order to address unmet need for child health services utilization. This systematic review is guided by the following research question: Is child health service utilization associated with 1. socioeconomic, 2. geographic and 3.
demographic determinants?

Methodology/design

The current protocol outlines a strategy informed by the guidelines of the Cochrane Collaboration (the Cochrane Handbook for Systematic Reviews) [13]. The systematic review will follow the four-phase flow diagram (Fig. 1) put forth by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [14].

Information sources and literature search

Literature search strategies will be implemented by the search team of Addis Ababa University. Filters for bibliographic research will include publication date (from 2010 onwards until 2020) and publication in the English language. We will use specific Medical Subject Headings (MeSH) terms and text words to identify studies by searching MEDLINE (PubMed interface). We will hand-search relevant abstracts in the Cochrane Child Group and the Cochrane Pregnancy, Childbirth and Public Health Group. As per the Peer Review of Electronic Search Strategies (PRESS) recommendation, we will apply the 'explode' option to the entry terms in the EMBASE search. The exact search strategies for EMBASE, MEDLINE and Cochrane Central can be found. We will also search the gray literature, namely the following sources: WHO, World Bank and UNICEF.

We utilized a framework of child health care indicators from the 2010-2015 Global Strategy for Women's and Children's Health [12], comprising two mortality indicators, eight coverage indicators and one child nutrition indicator, as follows: U5MR, stunting in children under five, exclusive breastfeeding, three doses of combined diphtheria-tetanus-pertussis immunization coverage (DPT3), and care seeking for suspected pneumonia, fever and diarrhea [13]. We categorized the included indicators into two broad groups: negative indicators and positive indicators.
The negative indicators were prevalent among the poor; these include U5MR and stunting in children under five. The positive indicators are mostly pro-rich; these include exclusive breastfeeding, three doses of combined diphtheria-tetanus-pertussis immunization coverage (DPT3), and care seeking for suspected pneumonia and diarrhea [13]. The review protocol will be registered in the PROSPERO international prospective register of systematic reviews.

Ethics

This study analyzed secondary data and was exempted from ethics review by the Griffith University Office for Research.

Search strategy

The online databases PubMed, Embase and Scopus will be searched from 2010 until June 2020 for relevant studies; we will also search the publications of the World Bank, WHO, and UNICEF. The following key words will be used for the search: Gaps OR Disparity AND Child Mortality AND Developing Countries; Inequity OR Inequality AND Child Mortality AND Developing Countries; Gaps OR Disparity AND Under-five Mortality AND Developing Countries; Inequality OR Inequity AND Under-five Mortality AND Developing Countries; Gaps OR Disparity AND Stunting in children AND Developing Countries; Inequality OR Inequity AND Stunting in children AND Developing Countries; Gaps OR Disparity AND Exclusive breastfeeding AND Developing Countries; Inequality OR Inequity AND Exclusive breastfeeding AND Developing Countries; Gaps OR Disparity AND DPT3 immunization AND Developing Countries; Inequality OR Inequity AND DPT3 immunization AND Developing Countries; Gaps OR Disparity AND Care seeking for children with suspected pneumonia AND Developing Countries; Inequality OR Inequity AND Care seeking for children with suspected pneumonia AND Developing Countries. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines will be followed in reporting the search results.

Participants and settings

We will retrieve studies implemented in LMICs, as defined by the World Bank Group's classification [15].
We will extract data from studies that meet the following inclusion criteria: 1) publication in the peer-reviewed literature in the last 10 years (2010-2020); 2) the study must be based on at least one nationally representative data source; 3) the study must have a clearly defined primary outcome, encompassing one or more of the eleven pre-defined core child health indicators; 4) the study must have investigated the effect of at least one of three 'dimensions of inequity', which include income level, level of education and place of residence; 5) data from low- or middle-income countries only must have been utilized; and 6) included studies must have utilized a quantitative analytical method.

Exclusion criteria

We will exclude studies based on the following criteria: 1) studies that did not use nationally representative data; 2) studies that were not peer-reviewed; 3) studies that did not assess at least one of the three dimensions of inequity: place of residence, income level and level of education; 4) studies that did not include at least one of the child health outcome measures; 5) studies that used data from high-income countries; and 6) studies that were purely descriptive.

Design

Our systematic review will include all observational studies, including cohort, case-control and cross-sectional studies.

Determinants

Determinants of concern are: 1) socioeconomic status, assessed by income, expenditure, household characteristics and/or assets, occupational or contractual status and education (highest level of education completed, years of schooling, literacy); 2) geographic determinants (Euclidian distance in km to a health center, travel time, location: rural versus urban residence); and 3) demographic determinants (ethnicity, marital status, age).

Results

We will consider quantitative results of the association between potential determinants and the utilization of child health care services.
Published results must include an association measure, a frequency ratio/difference, or a statistical test comparing utilization of child health care services across two or more groups. If these results are not explicit, we must be able to estimate them from the information provided in the paper. We will consider relative comparisons to a reference group, for example the relative concentration index (RCI) or relative index of inequity (RII), along with absolute differences in child health care services utilization, such as the absolute concentration index (ACI) or slope index of inequity (SII). Such reported disparities will be useful in making comparisons over time or across geographical areas, populations or indicators, in light of the Centers for Disease Control and Prevention (CDC)'s guidelines [16,17]. Studies strictly reporting qualitative results on access to child health care utilization are excluded. Within the same publication, results for the most recent year will be appraised if information exists for consecutive years. In the case of secondary analyses from nationally representative surveys such as the Demographic and Health Surveys (DHS) for consecutive years in the same country, we will only consider the most recent.

Study selection procedure

Screening

A team of researchers (HF) will identify articles by first analyzing titles and abstracts for relevance and compliance with the selection criteria, based on research setting, study design, reported outcomes and determinants of interest. Relevant articles will be classified as: 1) included; 2) excluded; or 3) uncertain. After exclusion of records not relevant to the systematic review, the full texts of selected abstracts (records categorized as included or uncertain) will be retrieved systematically for further eligibility analysis.

Eligibility

Full-text screening will be conducted independently by the reviewers (HF) using a standardized form with explicit inclusion and exclusion criteria.
Discrepancies will be resolved by discussion between the two reviewers, and persisting disagreement will be resolved by discussion with two experienced researchers.

Data collection process

Reviewers will use an explicit data collection form to abstract data items, including but not limited to: study characteristics (country, setting, year of publication, study design, sample size); participants' characteristics (mean age ± SD, health literacy, women's decision-making power); outcomes (child health care service utilization); and results of the association between child health care services and potential determinants. In cases where numerous publications report data originating from the same study, the latest outcomes of interest will be assessed. Missing data on key characteristics will be dealt with by contacting the study authors and through complementary research. Reviewers will systematically use a standardized data abstraction form. To increase the reliability of data abstraction by the reviewers, a pilot test of the standardized form will be performed on a random sample, and the tool will be refined as necessary.

Methodological quality assessment

We will assess the scientific quality of the selected studies to ensure the internal validity of reported results and avoid analyzing false (confounded or biased) associations or type I statistical errors. We will use standardized quality assessment tools for specific types of designs to determine the methodological quality and risk of bias of the included studies. To assess the quality of cohort, case-control and cross-sectional studies, we will use the Effective Public Health Practice Project (EPHPP) Quality Assessment Tool for Quantitative Studies, adapted to extend the criteria for selection bias assessment [18]. Special attention will be paid to precise study objectives, explicit identification of the population studied, and clear definitions of outcomes, independent factors, potential confounders and effect modifiers.
According to the methodological characteristics appraised, we will classify the studies' scientific quality as either 1) strong, 2) moderate or 3) weak.

Search results

Evidence tables will be generated to descriptively summarize the included studies and results: 1) authors, 2) study design, 3) objectives, 4) setting, 5) population, 6) outcomes assessed, 7) determinants/predictors, 8) results and 9) scientific quality. Evidence tables will be stratified by countries' income level (World Bank classification) to account for the different contextual characteristics of low- versus middle-income countries.

Data synthesis

Where feasible, data will be combined to obtain a pooled measure of association evaluating child health care services inequities, through meta-analyses conducted using The Cochrane Group's Review Manager software (RevMan 5.1) [19]. Data will be analyzed along subsets defined by the countries' income level and grouped by determinants of child health care services utilization (socioeconomic, geographic, demographic). Due consideration will be given to heterogeneity (I2 statistic) and the corresponding analysis (fixed versus random-effects models; meta-regression, if necessary). Depending on the number of studies, we will further stratify observational studies according to design (cohort, case-control, cross-sectional) and/or association measure (odds ratio, risk ratio, incidence rate ratio, hazard ratio, and prevalence ratio), exploring potential heterogeneity. Particular attention will be paid to assessing results in light of study settings to ensure proper contextualization of evidence and relevance for policy planning purposes in LMICs. Results will be reported according to the PRISMA Statement, with a focus on health equity (PRISMA-Equity 2012 Extension) [20].
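The heterogeneity-aware pooling described above (a random-effects model with the I2 statistic) can be illustrated outside RevMan with a standard DerSimonian-Laird computation. This is a sketch of the general technique, not the protocol's actual analysis; the inputs are per-study effect sizes (e.g. log odds ratios) with their variances.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling of study effect sizes with the
    DerSimonian-Laird between-study variance estimator, plus
    Higgins' I2 heterogeneity statistic (in per cent)."""
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, tau2, i2
```

When the studies agree, tau2 and I2 collapse to zero and the result matches the fixed-effect estimate; large I2 values would prompt the random-effects model and meta-regression mentioned above.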
Discussion

This systematic review will provide: 1) knowledge on existing inequities and unmet needs for child health care services in LMICs; 2) pragmatic recommendations to health policy planners for improving access to, and utilization of, quality child health care services in LMICs; and 3) an overview of knowledge gaps and future research needs. Results of the systematic review will be published in a peer-reviewed international journal and presented at conferences and symposia in relevant fields (for example, global health, health policy and planning, health systems, healthcare equity). Further knowledge dissemination will involve communicating results to the governments of LMICs and to organizations active in promoting access to maternal and child health services (for example, WHO, Family Care International). The relevance of systematic reviews for informing health systems policymaking is increasingly recognized. Tugwell et al. (2010) underlined that a focus on health equity in systematic reviews improves their relevance for public policy making [18]. Welch et al. (2012) stressed that systematic reviews are a valuable source of scientific evidence on inequities in health outcomes, resource allocation and use [20]. Our review will hence supply evidence to health policy planners with the objective of decreasing inequalities in maternal and child health indicators and promoting universal coverage of essential obstetric care services. Knowledge thus created may help promote equitable access to child health care services as a fundamental element of the continuum of care essential to reducing child mortality and morbidity.
Aristolochia vallisicola (Aristolochiaceae), a new species from Peninsular Malaysia

Abstract A new species in the genus Aristolochia (Aristolochiaceae), Aristolochia vallisicola T.L.Yao, from Peninsular Malaysia is described and illustrated. Among all Peninsular Malaysian Aristolochia, it is the only species with a pinnately veined lamina and a disc-like perianth limb. A distribution map is provided and its conservation status is assessed as Least Concern.

Introduction

Aristolochia, the largest genus in the family, consists of about 400 species. It is widely distributed throughout the tropics and subtropics, but also occurs in warm temperate regions. Hou (1984) recognised 28 species in Malesia, 5 of which occur in Peninsular Malaysia; none of them is endemic. The new species presented here was first collected by a Forest Guard, Kalong (KEP), in 1929 (FMS 24048) from Ulu Kelau, Pahang. The specimen consists of two detached leaves and a detached inflorescence mounted on one sheet. Its vernacular name, Akar telinga berok (the pig-tailed macaque's ear climber in Malay), indicates that it is a climber. After a lag of 70 years, Kiew collected a flowering specimen (RK 4879) in the Awana waterfall area, Genting Highlands, Pahang. The specimen is complemented by good field notes and was identified as Aristolochia sp. Recently, I was asked to identify a leaf (Kiew s.n., barcode KEP196081) of a butterfly larva food plant collected in the Genting Tea Estate, Pahang. This instigated me to visit the estate, which revealed that the plant is conspecific with the two specimens mentioned above. According to H.S. Barlow and S.K.L. Hok (pers. comm.), larvae of the butterfly species Parides (Atrophaneura) sycorax egertoni (Distant), a member of the family Papilionidae commonly known as the White Head Batwing (Malay name: Kepala Putih), feed on the leaves of this species.
Their observations in the Genting Tea Estate revealed that its larvae defoliate young plants and then girdle the stem base just before they metamorphose into pupae. The plant manages to re-sprout later.

Taxonomy

Aristolochia vallisicola T.L.Yao, sp. nov. urn:lsid:ipni.org:names:77120982-1 http://species-id.net/wiki/Aristolochia_vallisicola Figures 1-3

Note. This species differs from all other Peninsular Malaysian Aristolochia L. species in its lamina with pinnate lateral veins, its inflorescence with a long peduncle, its disc-shaped perianth limb, annulated hairy perianth mouth and 3-lobed gynostemium. This species is similar to Aristolochia coadunata Backer in the lanceolate or oblanceolate lamina with pinnate lateral veins but differs in its larger disc-shaped perianth limb, 58-65 mm diam. versus 15-30 mm diam. in A. coadunata, and its longer peduncle, 15.5-17 cm long versus up to 2 cm long in A. coadunata. This species is also similar to Aristolochia versicolor S.M.Huang in the lanceolate or oblanceolate lamina with pinnate lateral veins but differs in its longer petiole, 2.5-7 cm long versus 1-2 cm long, broader leaves, at least 7.5 cm wide versus up to 6.5 cm wide, and longer peduncle, 15.5-17 cm long versus 2-3(-10) cm long in A. versicolor. A summary and a comparison of other characters are presented in Table 1.

Ecology. This species occurs in highland valleys of lower montane forest at about 1000 m altitude, often by rocky streamsides. Specimens with flowers were collected in September and November.

Etymology. The species name vallisicola denotes its habitat preference for valleys.

Discussion and conclusion

Aristolochia vallisicola, with its disc-shaped perianth of 3 lobes that are valvate in bud, annulated perianth mouth and gynostemium of 3 segments each consisting of 2 stamens, belongs to Isotrema (Huber 1993). Isotrema consists of ca 50 species distributed in temperate and tropical East Asia and in North and Central America.
The new species presented here is its first record in Peninsular Malaysia. The position of the Isotrema clade within Aristolochia s.l. is confirmed by phylogenetic studies (Kelly and González 2003; Ohi-Toma et al. 2006). Old World Aristolochia species with a disc-shaped perianth limb are common in northern India (Hooker 1886; Karthikeyan et al. 2010) and southern China (Huang et al. 2003), while 1-lipped or 3-lobed perianth limbs are prevalent in Malesian Aristolochia species (Hou 1984). Aristolochia vallisicola is the only species with a disc-like perianth limb in Peninsular Malaysia. Apparently, it is a link between the Asian continental element and the Sumatran-Javanese Aristolochia coadunata. Species of Aristolochia, a genus of high climbers and woody lianas in Malesian forests, are not easily sighted and are very often represented by meagre herbarium specimens. Furthermore, the plants are rarely found in flower. In the past 15 years, 8 new species of Aristolochia were described from Thailand (González and Poncy 1999; Hansen and Phuphatanaphong 1999; Phuphathanaphong 2006). This indicates that the species diversity of Aristolochia in the Old World, especially in South East Asia, is still underestimated. I predict more novelties will be discovered when more specimens from this region are available for taxonomic study.
Psychosis-like experiences and cognition in young adults: an observational and Mendelian randomisation study

Background: Psychosis-like experiences (PLEs) are common and associated with mental health problems and poorer cognitive function. There is limited longitudinal research examining associations between cognition and PLEs in early adulthood. Aims: We investigated the association of PLEs with different domains of cognitive function, using cross-sectional and longitudinal observational, and Mendelian randomisation (MR) analyses. Method: Participants from the Avon Longitudinal Study of Parents and Children (ALSPAC) completed tasks of working memory at ages 18 and 24, and tasks of response inhibition and facial emotion recognition at age 24. Semi-structured interviews at ages 18 and 24 established the presence of PLEs (none vs. suspected/definite). Cross-sectional and prospective regression analyses tested associations between PLEs and cognition (N=3,087, imputed sample). MR examined causal pathways between schizophrenia liability and cognition. Results: The fully adjusted models indicated that PLEs were associated with poorer working memory performance (cross-sectional analyses: b=-0.18, 95% CI -0.27 to -0.08, p<0.001; prospective analyses: b=-0.18, 95% CI -0.31 to -0.06, p<0.01). A similar pattern of results was found for PLEs and response inhibition (cross-sectional analyses: b=7.29, 95% CI 0.96 to 13.62, p=0.02; prospective analyses: b=10.29, 95% CI 1.78 to 18.97, p=0.02). We did not find evidence to suggest an association between PLEs and facial emotion recognition. MR analyses were underpowered and did not support observational results. Conclusions: In young adults, PLEs are associated with poorer concurrent and future working memory and response inhibition. Better-powered, genetically informed studies are needed to determine whether these associations are causal.

This is also a time of vulnerability for development of PLEs 2 and psychotic disorders 14.
To understand the relationship between cognition and PLEs it is therefore not sufficient to rely on data from childhood and adolescence, as many functions continue to develop in young adulthood. In the current study we used data from ALSPAC to investigate the relationship between PLEs and three cognitive domains in early adulthood: working memory, response inhibition, and emotion recognition. We aimed to: (1) examine the cross-sectional association between PLEs and cognition at age 24; (2) use longitudinal data at ages 18 and 24 to test whether PLEs simply co-occur with poorer cognitive function or are preceded by it; and (3) use Mendelian randomisation to explore causal effects between genetic liability for schizophrenia and cognitive impairment.

Method

Participants

ALSPAC (www.alspac.bris.ac.uk) is a birth cohort initially comprising 14,541 pregnancies in the former County of Avon, currently the Bristol area of the UK, with estimated delivery dates between April 1991 and December 1992. The parents have completed regular postal questionnaires on aspects of the children's development since birth, and from mid-childhood (age 7.5 and above) children attended in-clinic assessments where they took part in a range of face-to-face interviews and tests. ALSPAC includes a wide range of phenotypic and environmental measures, genetic information and linkage to health and administrative records. Detailed information about ALSPAC is available online (www.bris.ac.uk/alspac) and in cohort profiles [15][16][17]. A detailed overview of the study population that completed cognitive assessments at age 24, including attrition at the different measurement occasions, is presented by Mahedy and colleagues 18. Please note that the study website contains details of all the data that are available through a fully searchable data dictionary and variable search tool: http://www.bristol.ac.uk/alspac/researchers/our-data/.
Ethics statement

Ethics approval for the study was obtained from the ALSPAC Ethics and Law Committee and the Local Research Ethics Committees. Written informed consent was obtained for the use of data collected via questionnaires and clinics from parents and participants following recommendations of the ALSPAC Ethics and Law Committee at the time. Consent for … (This preprint, which was not certified by peer review, was posted on medRxiv on May 10, 2021, and is made available under a CC-BY 4.0 International license; the copyright holder is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.)

Measures

Supplementary Figure S1 shows a timeline of the data used in analyses. Study data gathered from age 22 years onwards were collected using REDCap data capture tools 19.

Psychosis-like Experiences (PLEs)

PLEs were assessed via the psychosis-like symptoms interview (PLIKSi), a semi-structured, interviewer-rated screening assessment. The interview lasts approximately 20 minutes and examines unusual experiences (unusual sensations, derealization, depersonalization, self-unfamiliarity, dysmorphophobia, partial object perception, and other perceptual abnormalities) and the three main domains of positive psychotic symptoms: hallucinations (auditory, visual, olfactory and tactile), delusions (being spied on, persecution, thoughts being read, reference, control, grandiose ability and other unspecified delusions) and bizarre symptoms (thought broadcasting, insertion and withdrawal). Interviews were completed by trained psychology graduates who rated psychotic experiences as absent, suspected or definite. For each symptom, the interviewer read out a stem question from the interview schedule, and the participant responded with 'yes', 'no' or 'maybe'. Where the participant responded 'yes' or 'maybe', the reply was examined further with additional probes.
Attribution questions identified whether these experiences occurred only while falling asleep or waking, during high fever, or under the influence of alcohol or other substances, in which case the experience was rated as not present. Symptoms were rated as definite where a clear example was provided. For the purposes of this study, and in line with earlier research 12, a binary variable was used for broad psychosis-like symptoms (none, versus suspected or definitely present). PLIKSi data were available from ages 18 and 24. Symptom frequency in the last 6 months was examined at age 18, and symptom frequency in the last year was examined at age 24.

Cognitive assessments

Three computer-based cognitive assessments were delivered via E-Prime version 2 (Psychology Software Tools, pstnet.com).

Working memory: The N-back task 20 is a widely used measure of working memory, requiring on-line monitoring, updating, and manipulation of remembered information 21. We analysed data from the 2-back variant, administered at ages 18 and 24. Participants monitored a series of numbers (0-9), and indicated with a '1' keystroke when the presented number was the same as the one presented 2 trials previously, and a '2' keystroke when it was different. Stimuli were presented for 500 ms, followed by a 3,000 ms response window. Participants completed a practice block consisting of 12 trials containing two targets and response feedback, followed by an experimental block containing 48 trials with eight targets and no response feedback.
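Performance on a 2-back block of this kind is conventionally scored with the signal-detection index d-prime: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal scoring sketch follows; the log-linear correction for rates of exactly 0 or 1 is our assumption, since the text does not state how extreme rates were handled:

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each count) keeps the z-transform
    finite when a rate would otherwise be exactly 0 or 1 (assumed here).
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# One 2-back block as described above: 8 targets among 48 trials
score = d_prime(hits=7, misses=1, false_alarms=4, correct_rejections=36)
```

Chance-level responding yields d' = 0, which is why the exclusion of negative d-prime values described in the text amounts to excluding below-chance performance.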
The primary outcome measure was d-prime, a sensitivity index contrasting hits (correct detection of an n-back match) with false alarms (responses when there was no match). Participants who responded to fewer than 50% of trials or with a negative d-prime were excluded. Higher d-prime scores index better working memory performance.

Response inhibition: A stop signal task 22 assessed the ability to prevent a prepotent motor response. Participants were instructed to respond rapidly to visual stimuli (an X or O) with the corresponding keys on a keyboard (X/O), unless they heard a tone (i.e., the 'stop signal') after the stimulus presentation. Targets were displayed for 1000 ms, with a variable inter-stimulus interval of 500-100 ms. The stop signal delay (SSD) was drawn from one of four adaptive staircases at 100 ms, 200 ms, 400 ms or 500 ms. On successful inhibition the staircase was adjusted by -25 ms, and on failed inhibition by +25 ms (min SSD 25 ms, max 800 ms). Participants completed 32 practice trials, in which incorrect responses and slow responses were given the feedback "wrong" and "too slow", respectively. This was followed by the experimental trials.

Emotion recognition: A six-alternative forced-choice emotion recognition task (ERT) measured the ability to identify emotions in facial expressions 23. Participants were presented with a series of face stimuli (male and female), each displaying one of six emotions: anger, disgust, fear, happiness, sadness, or surprise. They were instructed to click on the descriptor that best described the emotion. Images were displayed one at a time for 200 ms, followed by a 250 ms backwards mask of visual noise to prevent processing of afterimages. Emotion intensity varied across 8 levels within each emotion, from the prototypical emotion to an almost neutral face. Each individual stimulus was presented twice, giving a total of 96 trials, with each emotion presented 16 times. Outcome measures included ERT hits (the total number of facial emotions accurately identified) and the number of correctly identified emotions for each discrete emotion category. Higher ERT scores indicate better performance.

Confounding variables

Background characteristics controlled for in analyses included participant sex (male, female), ethnicity (non-white, white) and highest parental occupational level (grouped into four categories). Confounders relating to cognitive dysfunction included a history of head injury and a prior index of global cognitive function as measured by IQ. Head injury was defined as a cracked skull or unconsciousness at any time up to age 16 (present vs. absent), with data collected via questionnaires from parents at ages 4, 5, 6, 8 and 11 years and via self-report questionnaires at age 16. IQ was estimated from the vocabulary and matrix reasoning subtests of the Wechsler Abbreviated Scale of Intelligence 25 at age 15. Mental health confounders included above-threshold symptomatology for depression or anxiety on the Clinical Interview Schedule-Revised 26 at both age 18 and age 24. Depression (mild, moderate or severe) and anxiety disorder (generalized anxiety disorder, social phobia, agoraphobia, and/or panic disorder) were coded as present or absent.

Study 1: Multivariable linear regression was used to examine the cross-sectional associations between PLEs and cognition (assessed at age 24). Summary variables from the cognitive assessments (d-prime, SSRT, and ERT hits) were examined within one model.
Individual ERT emotions (ERT hits for Anger, Disgust, Fear, Happy, Sad and Surprise) were entered into a second model to examine whether PLEs were associated with poorer recognition of specific emotions. A Wald test was used to test for differential effects across outcomes. Using complete case analysis without taking missing data into account can result in biased estimates 28, so we imputed missing data by multivariate imputation by chained equations, using the 'ice' package in Stata. Analyses were conducted on individuals with complete information on exposures, outcomes and confounders (n=2,134 for summary variables, n=2,300 for individual emotion ERT). Analysis of imputed data was based on 3,087 participants who provided data for all cognitive outcomes at age 24. A number of auxiliary variables known to be related to missingness were included in the imputation models, and 100 data sets were imputed. An overview of predictors of attrition in ALSPAC participants completing cognitive testing at age 24 has been reported previously 18.

Study 2: Multivariable linear regression was used to examine the prospective associations between PLEs and cognition. The association between PLEs assessed at age 18 and summary cognitive outcomes (d-prime, SSRT, and ERT hits) assessed at age 24 was examined in one model, and the association of PLEs at age 18 with individual ERT emotions (ERT hits for Anger, Disgust, Fear, Happy, Sad and Surprise) at age 24 was examined in a second model, again using a Wald test to examine differential effects of exposure across outcomes.
1,788 individuals provided complete information on exposures, outcomes, and confounders for all summary cognitive outcomes, and 1,918 for all individual ERT emotions. Logistic regression was also used to examine the prospective association between working memory (d-prime) at age 18 and PLEs at age 24, for which 2,134 individuals provided complete information on exposures, outcomes, and confounders. As in Study 1, analysis of imputed data was based on 3,087 participants who provided data for all cognitive outcomes at age 24.

Model reporting: For both Study 1 and Study 2, we present unadjusted results, followed by results adjusted for: (i) sociodemographic variables (sex, ethnicity, parental occupation, maternal education, housing tenure, maternal age at birth and maternal smoking in pregnancy); (ii) additionally, confounders relating to cognitive dysfunction (history of head injury and IQ at age 15); and (iii) additionally, co-occurring depression and anxiety at age 24 in cross-sectional models, or depression and anxiety symptomatology at age 18 in prospective models. Models using multiply imputed data are presented as the main findings. Non-imputed (complete case) results are reported in the Online Supplement. Results from observational analyses are reported as unstandardized beta coefficients (b) or odds ratios (OR), as appropriate, both with 95% confidence intervals (95% CI).

Mendelian randomization

Study 3: Causal pathways between psychotic experiences and cognition were examined with Mendelian randomisation (MR) analyses. A genome-wide association study (GWAS) of PLEs across three adolescent population samples identified no genome-wide significant single nucleotide polymorphisms (see Supplementary Figure S2).

Observational analyses

For Studies 1 and 2, we primarily report the findings from the multiply imputed datasets, due to the greater potential for biased estimates in complete case analyses 28. Complete case models for both studies are reported in the Supplementary Material.

Study 1: There was strong evidence for a cross-sectional association between PLEs and poorer working memory at age 24, and this was robust to all levels of adjustment (Table 1; see Table S2 for complete case analyses). There was also evidence for an association between PLEs and poorer response inhibition (Table 1). Evidence for this association was weaker in the complete case analysis; however, effect estimates were consistent (i.e., in the same direction) across all models. We did not find clear evidence to suggest a cross-sectional association between emotion recognition performance and PLEs in any analyses (Tables 1 and 2, Supplementary Tables S2 and S3).

Study 2: There was evidence for a prospective association between PLEs at age 18 and poorer response inhibition at age 24, robust to all levels of adjustment (Table 1). Slightly weaker evidence for this association was also found in the complete case analysis (Supplementary Table S2).
We found no clear evidence to suggest a prospective association between global or specific emotion recognition and PLEs (Tables 1 and 2), with comparable results in complete cases (Supplementary Tables S2 and S3).

Mendelian randomization analyses

Study 3: Results from two-sample MR analyses were ambiguous (Table 3), most likely because they were underpowered. We found a consistent (negative) direction of effect estimates across all four methods for emotion recognition, but the confidence intervals for these estimates were wide. We found a mixed direction of effect estimates across MR methods for working memory and response inhibition, with similar imprecision. For one-sample MR analyses, polygenic risk scores for schizophrenia did not predict PLEs within our sample (estimate = 23.22, 95% CI -56.13 to 102.98, p = .567). We therefore did not progress these analyses beyond the validation stage.

Discussion

We examined cross-sectional and prospective relationships between experiences of psychosis and specific cognitive domains (working memory, response inhibition and emotion recognition) in a population sample. We found evidence of both cross-sectional and prospective associations between PLEs and poorer cognitive test performance, particularly in the domains of working memory and response inhibition, indicating that PLEs at age 18 predict poorer later cognitive function in these domains over a period of 6 years. However, MR analyses did not provide evidence to allow us to conclude that these associations reflect causal pathways.

Comparison with previous studies

Our findings extend previous research showing cross-sectional associations between PLEs and impairments in specific domains of cognitive function, including working memory 10. By using a longitudinal approach, we were able to evaluate associations over a key developmental stage of cognitive maturation 13 and of vulnerability for development of PLEs 2 and psychotic disorders 14.
Previous research has examined longitudinal associations between general cognitive function, as measured by IQ, and PLEs early in childhood and adolescence, showing that lower IQ is associated with a higher rate of later PLEs 12. Here we find, rather, that PLEs predict poorer performance in specific domains of cognitive function when assessed later in life. This may indicate bi-directional causal pathways between PLEs and cognitive function. The longitudinal nature of ALSPAC data allows us to adjust for key confounders, including socioeconomic and gestational factors, head injury and earlier global cognitive function. PLEs are also associated with affective and anxiety disorders 1, which are themselves linked to cognitive impairment 37,38. Therefore, adjusting for depression and anxiety is important to identify cognitive impairments specifically associated with PLEs, rather than simply those linked to co-occurring mental health problems. The robustness of our results to these adjustments improves confidence that the associations we report are not the result of concurrent or historical confounders. MR analyses did not provide evidence to allow us to conclude that the associations we observed reflect causal pathways. Previous research has shown negative genetic associations between general cognition and schizophrenia using a polygenic risk score (PRS) approach 39, and that high IQ attenuates the likelihood of schizophrenia in research using an MR approach 40. Conducting MR analyses to identify causal pathways from specific cognitive domains, such as response inhibition and working memory, may identify targets for intervention.
Unfortunately, our MR analyses were most likely underpowered, since GWAS of individual cognitive domains did not yield any genome-wide significant results. Moreover, we found that the PRS for schizophrenia did not predict PLEs at age 24, limiting our ability to complete a one-sample MR analysis. This result is in keeping with prior research in ALSPAC showing that genetic liability for schizophrenia is associated primarily with negative symptoms rather than with PLEs 41,42. Therefore, MR analyses of causal pathways between specific cognitive functions and PLEs remain an important area for future research, should cognitive GWAS of suitable sample sizes become available.

Limitations

There are a number of limitations to this study. First, it is possible that the causal pathway could be in the other direction to that tested by our MR analysis; that is, liability for cognitive impairment may increase the risk of PLEs. Due to the lack of robust instruments for our measures of cognition, MR was only completed with schizophrenia as the exposure and cognition as the outcome. Second, ALSPAC suffers from attrition, with lower participation amongst socially disadvantaged and less educated participants 43. Polygenic risk scores for schizophrenia are associated with drop-out, which can lead to underestimation of the risk of schizophrenia and related psychiatric and behavioural phenotypes 44. However, potential bias arising from missing data was dealt with using multiple imputation, using a large amount of additional information to make the assumption of missing at random as plausible as possible.
Findings from complete case and multiply imputed models were comparable for each of the three outcomes. Third, we could not complete bidirectional prospective analyses for all cognitive measures, since equivalent data for response inhibition and emotion recognition were not available at age 18. Furthermore, the sample at age 24 has not yet passed through the entire risk period for PLEs 2,14, indicating that further longitudinal research, with repetition of cognitive and PLE assessments, is required to shed light on the developmental trajectories of cognition and PLEs and their association in later adulthood.

Implications and conclusions

Young people who experience PLEs are likely to also experience other mental health and functional impairments. Here we show that these impairments in young adults with PLEs extend to concurrent and future poorer function in the cognitive domains of working memory and response inhibition.
Isotopic and microbotanical insights into Iron Age agricultural reliance in the Central African rainforest The emergence of agriculture in Central Africa has previously been associated with the migration of Bantu-speaking populations during an anthropogenic or climate-driven ‘opening’ of the rainforest. However, such models are based on assumptions of environmental requirements of key crops (e.g. Pennisetum glaucum) and direct insights into human dietary reliance remain absent. Here, we utilise stable isotope analysis (δ13C, δ15N, δ18O) of human and animal remains and charred food remains, as well as plant microparticles from dental calculus, to assess the importance of incoming crops in the Congo Basin. Our data, spanning the early Iron Age to recent history, reveals variation in the adoption of cereals, with a persistent focus on forest and freshwater resources in some areas. These data provide new dietary evidence and document the longevity of mosaic subsistence strategies in the region.
Stable carbon, oxygen and nitrogen isotope analysis was applied to archaeological human and animal tooth enamel and bone collagen from four archaeological sites in the Democratic Republic of the Congo. This was done to investigate dietary consumption in the Congo Basin during the Iron Age and assess human reliance on incoming domestic cereals. Bone collagen samples were selected from humans previously excavated from the archaeological sites of Longa, Imbonga, Bolondo and Matangai Turu Northwest in the Congo Basin. δ13C and δ15N measurements of bone collagen were made to investigate dietary consumption, largely protein sources such as meat, from the sites of study. To provide a baseline, collagen was also analysed from a range of wild and domestic fauna from the site of Bolondo, including crocodile, antelope, goat and dog. In addition, tooth enamel was sampled for δ18O and δ13C analysis from humans and fauna to explore the consumption of wild plant sources (C3) compared to incoming domestic crops (C4) such as pearl millet. Stable carbon and nitrogen isotope analyses were also performed on a number of charred food remains from Bolondo to further investigate the processing of different foodstuffs. For a single individual from Matangai Turu Northwest, dental calculus (calcified dental plaque) was removed and analysed using microscopy to identify plant phytoliths and starch granules entrapped in the calculus. Samples were selected for stable carbon and oxygen isotope analysis of tooth enamel and stable carbon and nitrogen isotope analysis of bone collagen from the available fossil material at the archaeological sites of Longa, Imbonga, Bolondo and Matangai Turu Northwest in the Congo Basin.
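The δ13C, δ15N and δ18O values referred to throughout follow standard stable-isotope delta notation. As an illustrative sketch, assuming the conventional definition (the formula and the reference ratios below are commonly cited values for the VPDB and AIR standards, not values taken from this paper):

```python
# Standard stable-isotope delta notation (per mil):
#   delta = (R_sample / R_standard - 1) * 1000
# where R is the heavy/light isotope ratio (e.g. 13C/12C).
# Reference ratios are commonly cited values (an assumption,
# not taken from the paper): VPDB for carbon, AIR for nitrogen.

R_VPDB_13C = 0.0111802  # 13C/12C of the VPDB standard
R_AIR_15N = 0.0036765   # 15N/14N of atmospheric N2

def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """Convert an isotope ratio to delta notation in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Example: a C4 crop such as pearl millet typically has a d13C value
# around -12 per mil, i.e. a 13C/12C ratio slightly below the standard:
r = R_VPDB_13C * (1 - 12.0 / 1000.0)
print(round(delta_per_mil(r, R_VPDB_13C), 1))  # -12.0
```

A ratio equal to the standard gives δ = 0 ‰; ratios below it give negative values, which is why C3 and C4 plant consumers separate along the δ13C axis.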
All teeth or tooth fragments were cleaned using air-abrasion to remove any external material. Enamel powder for bulk analysis was obtained using gentle abrasion with a diamond-tipped drill along the full length of the buccal surface in order to ensure a representative measurement for the entire period of enamel formation. All bone fragments were cleaned with air-abrasion to remove any soil. Bone samples were cleaned by abrasion using a sandblaster. Samples were demineralised in 0.5 M HCl for 1-7 days and rinsed three times with H2O. The residue was gelatinised in pH 3 HCl at 70°C for 48 hours and the solution Ezee-filtered. Samples were lyophilised in a freeze dryer for 48 hrs. Enamel was pretreated to remove organic or secondary carbonate contaminants. Samples were washed in 1.5% sodium hypochlorite for 60 minutes, followed by three rinses in purified H2O and centrifuging, before 0.1 M acetic acid was added for 10 minutes, followed by another three rinses in purified H2O. Following reaction with 100% phosphoric acid, gases evolved from the samples were analysed for stable carbon and oxygen isotopic composition using a Thermo Gas Bench 2 connected to a Thermo Delta V isotope ratio mass spectrometer (Supplementary Tables 7-9). Precision (u(Rw)) was determined to be ± 0.06 ‰, accuracy or systematic error (u(bias)) was ± 0.11 ‰ and the total analytical uncertainty in δ13C values was estimated to be ± 0.13 ‰ using the equation presented in the Supplementary material (Supplementary Tables 10-13). The nitrogen contents of the samples were calculated based on the area under the N2 peak relative to the weight of the sample, calibrated using IAEA-N2. Stable nitrogen isotope values were calibrated to the AIR scale using IAEA-N-1 (δ15N 0.4 ± 0.2 ‰) and IAEA-N-2 (δ15N 20.3 ± 0.2 ‰).
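The total analytical uncertainties quoted for these measurements are consistent with combining precision and systematic error in quadrature. A minimal sketch of that assumed combination (the actual equation is in the paper's Supplementary material, which is not reproduced here):

```python
import math

def total_uncertainty(u_rw: float, u_bias: float) -> float:
    """Combine within-run precision u(Rw) and systematic error u(bias)
    in quadrature (assumed form of the Supplementary-material equation)."""
    return math.sqrt(u_rw**2 + u_bias**2)

# Carbon: u(Rw) = 0.06 per mil, u(bias) = 0.11 per mil
print(round(total_uncertainty(0.06, 0.11), 2))  # 0.13, as reported for d13C
# Nitrogen: u(Rw) = 0.18 per mil, u(bias) = 0.59 per mil
print(round(total_uncertainty(0.18, 0.59), 2))  # 0.62, matching the reported
# total of about 0.61 per mil for d15N to within rounding
```

Both results agree with the reported totals (± 0.13 ‰ and ± 0.61 ‰) to within a hundredth of a per mil, supporting the quadrature assumption.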
Measurement uncertainty in δ15N values was monitored using three in-house standards: LEU (DL-leucine, δ15N 6.5 ± 0.4 ‰), GLU (DL-glutamic acid monohydrate, δ15N -1.9 ± 0.1 ‰) and MIL (millet flour from a single panicle from a plot in Senegal, δ15N 3.1 ± 0.6 ‰). u(Rw) was determined to be ± 0.18 ‰, u(bias) was ± 0.59 ‰ and the total analytical uncertainty in δ15N values was estimated to be ± 0.61 ‰. Analysis was performed by Amy Styring. Dental calculus was removed from three mandibular molars: M1-M3. Images of the mineralised plaque prior to removal, as well as those from contaminant starch granules and phytoliths, are published elsewhere. Microbotanical materials released from the calcified
v3-fos-license
2022-04-30T06:24:45.238Z
2022-04-29T00:00:00.000
248431030
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://figshare.com/articles/journal_contribution/Diagnostic_accuracy_of_endoscopic_ultrasound_and_intraductal_ultrasonography_for_assessment_of_ampullary_tumors_a_meta-analysis/19683928/1/files/34961398.pdf", "pdf_hash": "62b42606aabe7de7a1b2f4b2fc21be87dabf6b0f", "pdf_src": "TaylorAndFrancis", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1520", "s2fieldsofstudy": [ "Medicine" ], "sha1": "90f63bb4bde5ffeb4273e9d2102ee5d47993ad56", "year": 2022 }
pes2o/s2orc
Diagnostic accuracy of endoscopic ultrasound and intraductal ultrasonography for assessment of ampullary tumors: a meta-analysis Abstract Background Accurate preoperative assessment of ampullary tumors (ATs) is critical for determining the appropriate treatment. The reported diagnostic accuracy of endoscopic ultrasound (EUS) and intraductal ultrasonography (IDUS) for detecting tumor depth (T-staging) and regional lymph node status (N-staging) varies across studies. Method An electronic search of the MEDLINE and Embase databases was conducted to identify studies that assessed the diagnostic accuracy of EUS and IDUS for ATs. Sensitivities and specificities of eligible studies were summarized using either a fixed-effects or a random-effects model. Results Twenty-one studies were included in the final analysis. The pooled sensitivity and specificity of EUS were 0.89 and 0.87 for T1, 0.76 and 0.91 for T2, 0.81 and 0.94 for T3 and 0.72 and 0.98 for T4, respectively. For IDUS, estimates from five studies were 0.90 and 0.88 for T1, 0.73 and 0.91 for T2 and 0.79 and 0.97 for T3, respectively. For N-staging, 16 studies using EUS were included, with a sensitivity and specificity of 0.61 and 0.77, respectively. Moreover, estimates of IDUS for N-staging were 0.61 and 0.92, respectively. Conclusion Our results imply that EUS and IDUS have good diagnostic accuracy for T-staging of ATs. However, the accuracy of EUS or IDUS is less satisfactory for N-staging. More well-designed prospective studies are warranted to confirm our findings. Introduction Ampullary tumors (ATs) originate from the ampulla of Vater itself, distal to the bifurcation of the distal common bile duct and the pancreatic duct [1]. ATs have been increasingly diagnosed over the last decades, due to the wide use of endoscopic and radiological modalities for unrelated or other indications [2]. The removal of ATs is recommended in most cases because of their malignant potential, especially when symptoms are present [1].
However, radical surgery carries high mortality rates ranging from 0 to 13% and high morbidity rates ranging from 25 to 63% [3]. In contrast, endoscopic ampullectomy appears to be a valid alternative to surgery for ATs. Despite the high rate of radical resections and low recurrence rate, the incidence of adverse events such as pancreatitis and hemorrhage should not be neglected [4]. A meta-analysis has shown that the overall rate of adverse events was up to 24.9% [5]. Therefore, an accurate preoperative assessment of ATs is crucial for triage of patients to endoscopic or surgical treatment. The recently published European Society of Gastrointestinal Endoscopy (ESGE) guideline recommended endoscopic ultrasound (EUS) and intraductal ultrasonography (IDUS) for locoregional staging of ATs, with low quality of evidence [6]. The main advantage of EUS is that the transducer can be placed close to the lesion without interference from fat, bowel gas or bone. IDUS provides real-time, high-quality cross-sectional images, and previous studies indicated that IDUS offers diagnostic yields that are equivalent to or slightly greater than those of EUS [7,8]. Various studies have evaluated the diagnostic accuracy of EUS or IDUS in endosonographic evaluation of T- and N-staging of ATs. However, results from these studies vary considerably. To our knowledge, a meta-analysis published in 2014 investigated the accuracy of EUS alone in ATs and found that EUS had a moderate strength of agreement with histopathology in determining T- and N-staging [9]. Since almost a decade has passed, new studies concerning the diagnostic accuracy of EUS or IDUS have been published, adding new information to the body of evidence. Thus, we conducted a systematic review and meta-analysis to update the current evidence, following the methodology of the Diagnostic Test Accuracy Working Group [10].
Two investigators (Ye XH and Wang L) independently performed a computerized search of the MEDLINE (from 1 January 1966 to 31 December 2021) and Embase (from 1 January 1974 to 31 December 2021) databases to identify potentially relevant articles. The search was carried out using the following keywords: endosonography ('endoscopic ultrasound', 'EUS', 'intraductal ultrasonography' and 'IDUS'), ampullary ('ampulla' and 'papilla') and tumor ('malignancy', 'neoplasm', 'cancer' and 'adenoma'). Manual searches of the bibliographies from these potential articles were also performed to identify additional studies. Study selection Two investigators (Ye XH and Wang L) independently reviewed potentially relevant articles for eligibility and inclusion. Studies were included if they met the following inclusion criteria: (1) retrospective or prospective design published in manuscript form; (2) studies involving 10 or more patients using EUS or IDUS to evaluate ATs; (3) an appropriate reference standard was reported (endoscopic or surgical pathology); (4) reported absolute numbers of true-positive, false-negative, true-negative and false-positive observations for ATs, or sufficient data to construct a 2 × 2 contingency table and (5) ATs were evaluated according to the Tumor Node Metastasis (TNM) classification [11]. Case reports, editorials, review articles or clinical guidelines were excluded. Any disagreements were resolved by consensus. For T-staging, T1 refers to a lesion limited to the ampulla of Vater or sphincter of Oddi; T2 refers to invasion of the duodenal muscularis propria/duodenal wall; T3 refers to invasion of the pancreas and T4 refers to invasion of the peripancreatic soft tissue or adjacent organs or structures other than the pancreas. For N-staging, regional lymph nodes were defined as N1 if there were malignant regional lymph nodes on surgical pathology and N0 if no malignant regional lymph nodes were detected.
Data extraction and quality assessment A custom-made standardized form was used for data extraction. For each eligible study, the following data were extracted: surname of first author, publication year, region of the study population, study design, sample size (the number of patients with ATs), details of the endosonographic type (EUS or IDUS, radial or linear) and reference standard (surgical or endoscopic pathology). The methodological qualities of the studies were assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool [12]. This tool consists of four key categories: patient selection, index test, reference standard and flow and timing. Each category was assessed in terms of risk of bias, and the first three categories were also considered in terms of applicability. Data synthesis and statistical analysis The 2 × 2 tables (numbers of true-positive, false-negative, true-negative and false-positive) were constructed based on the data of the included studies. The pooled sensitivity and specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR) and diagnostic odds ratio (DOR) were calculated [13]. The summary estimates of sensitivities and specificities, along with their corresponding 95% confidence intervals (CI) and prediction region, were presented with a summary receiver-operating characteristic curve. Moreover, the area under the curve (AUC) was calculated [14]. Most clinical tests have an AUC value between 0.5 and 1.0, with a better diagnostic performance correlating with an AUC closer to 1.0 [15]. Heterogeneity was assessed using the Q-statistic and quantified using I 2 . For the Q test, p < .10 was considered to imply statistical heterogeneity. I 2 is the proportion of total variation contributed by between-study variation. Deeks' test was used to evaluate publication bias [16].
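The per-study quantities described above can be sketched directly from a 2 × 2 table, together with I² from the Q-statistic. A minimal illustration; the counts and Q value below are hypothetical, not data from any included study:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, likelihood ratios and DOR from a single
    2 x 2 table (no continuity correction for zero cells)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    plr = sens / (1 - spec)   # positive likelihood ratio
    nlr = (1 - sens) / spec   # negative likelihood ratio
    dor = plr / nlr           # diagnostic odds ratio
    return {"sens": sens, "spec": spec, "plr": plr, "nlr": nlr, "dor": dor}

def i_squared(q: float, df: int) -> float:
    """Higgins' I^2 (%): share of total variation attributable to
    between-study heterogeneity, floored at zero."""
    return max(0.0, (q - df) / q) * 100.0

m = diagnostic_metrics(tp=45, fp=5, fn=5, tn=45)  # hypothetical counts
print(round(m["sens"], 2), round(m["spec"], 2), round(m["dor"], 1))
print(round(i_squared(q=20.0, df=10), 1))  # 50.0
```

With these counts, sensitivity and specificity are both 0.9 and the DOR is 81, illustrating how a strongly discriminating test yields a DOR far above 1.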
All statistical analyses were carried out using Meta-DiSc (version 1.4; Unit of Clinical Biostatistics, Ramón y Cajal Hospital, Madrid, Spain) and STATA software (version 12.0; College Station, Texas, USA). A p value less than .05 was considered statistically significant. Baseline characteristics of the included studies are summarized in Table 1. The eligible studies were published between 1988 and 2019 and were performed in 12 regions. The sample size in each study ranged from 12 to 120 and included a total of 736 cases. Based on the QUADAS-2 tool, study bias and applicability outcomes were assessed, and the results are shown in Figure 2. Thirteen of the 21 studies were judged as high or unclear risk in one or more of the four key categories. With regard to IDUS, five studies were included for evaluation of T-staging [8,22,26,28,33]. The summary results of sensitivity and specificity were 0.90 (n = 5, 95% CI 0.82–0.95) and 0.88 (95% CI 0.78–0.95) for T1, 0.73 (n = 5, 95% CI 0.54–0.88) and 0.91 (95% CI 0.85–0.95) for T2 and 0.79 and 0.97 for T3, respectively. Figure 5 shows the sensitivity and specificity of IDUS in diagnosing various T stages. The AUC curves of IDUS for T-staging are shown in Figure 6. A test of heterogeneity for all the pooled estimates of IDUS had a p value > .10 except for T2 in specificity. PLR, NLR and DOR of IDUS for various T stages are shown in Table 3. Furthermore, two studies included T3 and T4 stage tumors in the same group and therefore additional analyses for T3-4 were performed [28,33]. The summary results of sensitivity and specificity were 0.80 (n = 2, 95% CI 0.44–0.97) and 0.91 (95% CI 0.82–0.96) for EUS and 0.80 (n = 2, 95% CI 0.44–0.97) and 0.93 (95% CI 0.84–0.97) for IDUS, respectively. The heterogeneity was significant for EUS and IDUS in specificity (p < .05). The AUC, PLR, NLR and DOR values of EUS and IDUS to diagnose T3-4 stage of ATs are shown in Tables 2 and 3.
Assessment of heterogeneity Heterogeneity was mainly found in sensitivity (T1) and specificity (T1, T2, T3 and N-staging) of the summarized EUS results. Subgroup analyses were performed based on publication year (before 2000 vs. after 2000), area (eastern countries vs. western countries), EUS technique (radial EUS only vs. radial or linear EUS) and study design (retrospective vs. prospective). For T1, the heterogeneity in sensitivity was attributed to EUS technique and study design. The heterogeneity in specificity (T1, T2, T3 and N-staging) was eliminated or reduced to varying degrees when subgrouping based on publication year, area, EUS technique and study design, suggesting that these factors might be significant contributors (Supplementary Table 1). Publication bias Publication bias was assessed using Deeks' funnel plot. If present, publication bias results in a higher proportion of smaller studies with bigger effect sizes compared to larger ones. In the funnel plot, the vertical axis represents the inverse of the square root of the effective sample size, while the horizontal axis represents the DOR. With the exception of EUS for T4 (p < .01), all the other Deeks' funnel plots were symmetrical with respect to the regression line, and asymmetry tests revealed no evidence of publication bias. The funnel plots investigating the effect of publication bias for EUS and IDUS estimating T- and N-staging of ATs are shown in Figure 9. Discussion In this study, we conducted a robust systematic review and an appropriately performed meta-analysis. Our main findings were that both EUS and IDUS had acceptable sensitivities (0.72–0.89 for EUS; 0.73–0.90 for IDUS) and specificities (0.87–0.98 for EUS; 0.88–0.97 for IDUS) in diagnosing T-staging of ATs, whereas the accuracy of EUS or IDUS is less satisfactory for N-staging.
The AUC values of EUS (0.89–0.95) and IDUS (0.87–0.95) for T-staging were very close to 1, indicating that both EUS and IDUS are excellent T-staging tests for ATs. With regard to N-staging, the summarized results suggest that either EUS (sensitivity, 0.61; specificity, 0.77; AUC, 0.74) or IDUS (sensitivity, 0.61; specificity, 0.92; AUC, 0.87) is suboptimal. In consideration of intraductal extension as a predictor for incomplete endoscopic resection and recurrence [34,37], these data were also collected. According to our synthesized results, EUS has been shown to be useful in evaluating intraductal extension of ATs (sensitivity, 0.79; specificity, 0.88; AUC, 0.92). To the best of our knowledge, this is the first meta-analysis that quantitatively summarizes all the available evidence of both EUS and IDUS in the locoregional staging of ATs. A previous systematic review was published in 2014, which included 14 studies with respect to the diagnostic accuracy of EUS alone on ATs [9]. In our study, we included 21 studies and strengthened the body of evidence. Several included studies reported results for T-staging as 'T3-4', which precluded data extraction for T-staging. However, the prior systematic review considered 'T3-4' as T3 and T4 separately [28,33]. (Figure 9. Funnel plots assessing bias for T- and N-staging of EUS and IDUS for ATs. EUS: endoscopic ultrasound; IDUS: intraductal ultrasonography; ATs: ampullary tumors.) The main strength of our meta-analysis is that we performed additional analyses of studies that reported T-staging as 'T3-4' and found that the performance of EUS and IDUS to diagnose T3-4 tumors was comparable with other results, hence improving the accuracy of the estimates. The incidence of ATs has increased in clinical practice due to the development of routine screening endoscopic procedures and imaging modalities [2]. Consequently, accurate locoregional assessment is of great importance for selecting the optimal treatment modality.
The ESGE guideline published in 2021 recommended EUS and IDUS for locoregional staging of ATs; however, the quality of evidence was low [6]. In addition, the ESGE guideline also allows other imaging modalities, such as abdominal magnetic resonance cholangiopancreatography (MRCP), for staging of ATs. Generally, various imaging technologies including CT scan, MRCP and transabdominal ultrasound are traditionally used in combination with EUS for preoperative staging of ATs in clinical practice. Indeed, several studies suggest that EUS provides significantly higher performance specifically for T-staging compared with CT and transabdominal ultrasound, and comparable or slightly higher accuracy compared with MRCP, without statistical significance [17,18,24,25,38]. Lymph node metastasis is a well-established prognostic predictor for ATs [36]. Although the accuracy is not as reliable as that of T-staging, EUS and IDUS can still help clinicians stratify the risk of patients with lymph node metastatic disease and represent a clue in selecting patients for optimal treatments. In addition, studies have demonstrated that the performance of EUS was not statistically different as compared to MRCP and CT [22,30,31,33]. Recently, several reports have described EUS-guided fine-needle aspiration (FNA) for ATs, and it might be another diagnostic option, with a sensitivity of 82.4% and a specificity of 100% [39,40]. IDUS uses a higher-frequency ultrasound probe (20–30 MHz) and thus produces higher-resolution images than EUS [41]. IDUS provides superior differentiation between the sphincter of Oddi and the duodenal wall because less tissue is compressed during scanning. Previous studies demonstrate that IDUS has diagnostic yields slightly higher than or comparable to those of EUS [7,8] and is helpful in selecting appropriate patients for indication of endoscopic ampullectomy [28].
Moreover, as the guideline recommended, routine use of IDUS should be weighed against training, costs and the risk of pancreatitis [6]. However, the number of included cases with ATs undergoing IDUS was limited, and larger series with longer follow-up are imperative for exploring its clinical significance. Our data carry clinical implications. As shown in Tables 2 and 3, DOR, PLR and NLR were calculated. DOR refers to the odds of having a positive test in patients with a true histological stage of the disease when compared with patients who do not have the disease. For instance, for T1 the DOR was 28.31, meaning the odds of a positive test are 28.31 times higher in patients whose true histological stage is T1. This enables physicians to determine a treatment strategy with confidence. PLR is defined as a measure of how well the test correctly identifies a disease, whereas NLR is a measure of how well the test correctly excludes a disease [42]. Comparatively speaking, likelihood ratios, including PLR and NLR, are supposed to be more clinically practical. The results indicate that both EUS and IDUS perform well in excluding as well as diagnosing the correct T stage of ATs. Some limitations of our study merit consideration. First, most of our studies were retrospective in design and may therefore overestimate the diagnostic precision. Second, some of the studies did not specifically differentiate benign ampullary adenomas from ampullary cancers. Third, the definition of lymph node metastasis varied across studies, thereby leading to selection bias. Fourth, for IDUS results, there was only a small number of studies and cases included to draw a robust conclusion. Therefore, the diagnostic performance of IDUS for ATs might be less reliable. The resolution of this issue will require more data and additional studies. Last, it is well known that intraobserver variability exists in EUS and IDUS interpretation and thus may affect the accuracy of our analyses.
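The clinical use of likelihood ratios discussed above can be made concrete with the standard odds form of Bayes' theorem (post-test odds = pre-test odds × LR). A sketch deriving PLR and NLR from the pooled EUS N-staging estimates reported in this paper (sensitivity 0.61, specificity 0.77); the 40% pre-test probability is a hypothetical figure for illustration only:

```python
def post_test_probability(pre_prob: float, lr: float) -> float:
    """Bayes via odds: post-test odds = pre-test odds * likelihood ratio."""
    pre_odds = pre_prob / (1 - pre_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Pooled EUS N-staging estimates from the text: sens 0.61, spec 0.77
plr = 0.61 / (1 - 0.77)   # positive likelihood ratio, ~2.65
nlr = (1 - 0.61) / 0.77   # negative likelihood ratio, ~0.51

# Hypothetical patient with a 40% pre-test probability of nodal disease
print(round(post_test_probability(0.40, plr), 2))  # 0.64 after positive EUS
print(round(post_test_probability(0.40, nlr), 2))  # 0.25 after negative EUS
```

The modest shift from 40% to roughly 64% (positive) or 25% (negative) illustrates numerically why the text describes N-staging performance as suboptimal.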
In summary, EUS and IDUS are highly accurate techniques for T-staging and assessment of intraductal extension in ATs. However, EUS and IDUS are less satisfactory for N-staging due to their modest sensitivities and specificities. The results of these tests must be interpreted with caution in specific clinical contexts. More prospective, well-designed studies are needed. Disclosure statement No potential conflict of interest was reported by the author(s). Funding This work was supported by the Natural Science Foundation of Zhejiang Province (LQ19H030003) and the Key Project of Jinhua Science and Technology Bureau (2018A32022).
v3-fos-license
2021-05-11T00:06:34.987Z
2021-01-01T00:00:00.000
234204864
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://doi.fil.bg.ac.rs/pdf/journals/esptoday/2021-1/esptoday-2021-9-1-5.pdf", "pdf_hash": "db124cb1a64132a63eb6af4227d1f984981e06bb", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1521", "s2fieldsofstudy": [ "Medicine", "Education", "Linguistics" ], "sha1": "a63e8999a732f7c35357f2ee14ef41d4714a639f", "year": 2021 }
pes2o/s2orc
OET vs IELTS: FINDING THE MOST APPROPRIATE WAY TO TEST LANGUAGE SKILLS FOR MEDICINE The question of whether someone is ‘proficient’ in a language or not can be difficult to measure. The problems that surround language testing are well researched, and the suitability of tests such as the International English Language Testing System (IELTS) has been studied extensively. How well studying for tests such as IELTS equips learners with the language needed for the world of work is, however, less researched. This paper focusses on the suitability of two tests, the Occupational English Test (OET) and IELTS, for the evaluation of language competency in people who wish to access employment and register in a medical profession, where the need to communicate effectively is essential for professionals such as nurses and doctors to be safe at work. Rather than looking at the tests themselves to ascertain their suitability, this paper explores the views of the test-taker and investigates their experiences of preparing for the two tests and their opinions of the test content. The findings show that candidates overwhelmingly prefer the OET, rating it more achievable, more relevant and more motivational than the IELTS. INTRODUCTION Language proficiency can be difficult to measure. In language assessment research, questions that relate to the kind of English we are testing, what is meant by standard English and, in an increasingly globalised world, questions that relate to the suitability of approaches to testing are areas of debate (Bachman & Purpura, 2008; Hall, 2014; Pennycook, 2007; Pilcher & Richards, 2017).
The validity of specific language tests such as the International English Language Testing System (IELTS) has also been investigated (Bayliss & Ingram, 2006; Dooey & Oliver, 2002), as has the question of whether the English learnt when studying for an exam will equip that student with the language needed to succeed academically (Pilcher & Richards, 2017), or whether this English will enable a student to cope in the outside world beyond the language classroom (Badwan, 2017). Language tests are increasingly used by employers and professional bodies who wish to determine the language proficiency of an overseas professional. In healthcare, communicating effectively is paramount to patient safety. Hull (2015) emphasises the possible risks for patients should there be a breakdown in communication and highlights "the potential to positively or negatively affect patient outcomes" (2015: 158). Hull distinguishes between 'medical language' and 'standard language' and states that "neither command nor fluency of a standard language guarantees success in specific contexts such as medicine and healthcare" (2015: 159). Bachman and Purpura (2008: 460) agree: "those who achieve the grade required by their profession are assumed to be prepared to function successfully in their relevant professional context". The barrier between the test-taker and their goal is often language based; however, even when the grade required has been reached, the 'assumed' outcome mentioned above has not always been achieved. The medical professional's communicative task is challenging and includes balancing the use of medical terminology (or jargon) with language known by the patient, along with carefully selecting empathetic language, which is essential for a task such as breaking bad news. Hull (2015) called for a more contextualised test which prepares a learner more appropriately with the language they need for the workplace.
This paper considers whether the OET is better able than IELTS to do this for healthcare workers. BACKGROUND Currently the General Medical Council (GMC) and the Nursing and Midwifery Council (NMC) accept two tests for doctors and nurses to evidence their language proficiency. (Footnote 1: Evidence of language proficiency forms only part of the requirements needed for registration purposes with the NMC and the GMC. For example, the GMC also ask qualified doctors to demonstrate their "knowledge and skills necessary to practise medicine in the UK" by passing further exams, Professional Linguistic Assessment Board (PLAB) 1 and PLAB 2, before they can gain registration.) ANDREA CARR Vol. 9(1)(2021): 89-106 while a nurse may be asked to talk to a carer about a patient's aftercare following an operation. For registration with the GMC, doctors are required to achieve a grade 'B' in each of the four skills in one sitting. Each sub-test is marked out of 500 and scores between 350 and 440 equate to a 'B'. For nurses, the NMC require 'B's in all skills except in writing, where they accept a C+. Furthermore, the NMC will accept the required scores being achieved over two separate sittings of the exam. According to OET (2019), a grade 'B' is equivalent to a band 7.0-7.5 in IELTS. At the time of writing, there are other healthcare professional bodies which do not yet recognise the OET, such as the General Dental Council (GDC) and the General Pharmaceutical Council (GPC), both of whom continue to accept only IELTS. While extensive research has been carried out to establish the suitability of IELTS, its suitability for professional registration purposes has been significantly less researched. A study by Merrifield (2009) looked at the rationale behind the decision of regulatory boards to use IELTS for registration purposes. One of the key aims was an evaluation of "the appropriateness of the test for the purpose of association membership or registration" (Merrifield, 2009: 6).
However, only 14 of the 24 associations Merrifield approached agreed to participate, with a possible explanation being that "they were reluctant to be interviewed, even in an informal manner, on a subject about which they had limited knowledge" (2009: 8). This may demonstrate that professional bodies feel that their knowledge of English language testing is 'limited'. In her study, Merrifield (2009: 5) comments on the increase of those using IELTS and states that "the growing trend for IELTS to be adopted by users outside of academia […] may constitute a risk for the test owners if the assessment system cannot be validated for the purposes for which it is being used". She states what is needed as the 'growing trend' continues:

[The] development of a body of knowledge of what is happening in the professional world is an important aspect of risk management for the managing partners. The IELTS partners need to understand the non-expert's perception of how good the "fit" is, the means by which entry level band scores are established, how often minimum levels are revisited and what support might be needed from the IELTS administration. (Merrifield, 2009: 5)

OET is relatively new to the UK and Europe, so literature around the suitability of the test is scarce. Soon after the GMC's announcement, Ceri Butler was interviewed for the British Medical Journal (BMJ) and stated that "my gut instinct is that having a test like the OET, which is based on a clinical setting, is better as a measure of a doctor's ability. The OET gives us the opportunity to approach language in a way that is appropriate for a healthcare setting" (Rimmer, 2018: 298). She also commented on how IELTS is used for testing how someone speaks or writes at an academic level and pointed to the question of the 'appropriateness' of using IELTS to test doctors, unless they are wishing to do a PhD or a Masters (Rimmer, 2018: 298).
HIGH STAKES TESTS

Research undertaken by Saville (2009), which demonstrates the impact language testing can have on both society and individuals, has been considered in this study. High stakes tests, along with high band requirements to access a qualification, or even requalification with a professional body, can indeed have a high impact: not only on the nurse who is desperate to return to her profession, but also on a society where there is a great shortage of such skills, yet where the need for effective communication is pivotal for patient safety. One could argue that the 'impact' on the individual is significantly increased in migrant workers and in refugees and asylum seekers. For this research, data has been collected from members of the organisation where I work. The organisation is called Reache North West.2 At this organisation, refugee and asylum-seeking doctors and nurses are prepared with the skills they need to requalify in the UK. Before transitioning to OET in April 2018, members of this organisation studied IELTS. By considering the experiences and opinions of those who prepared for IELTS and/or OET within this organisation, a comparative analysis of the two high stakes tests was undertaken.

AIMS

The aim of this study was to evaluate the content of two English language tests: the International English Language Testing System (IELTS) and the Occupational English Test (OET). Through the investigation of student opinions and their experiences of preparing for the two tests, this study also aims to ascertain the suitability of each test by evaluating how effectively each prepares the students for their career.

METHODS

This study used quantitative data collected from 50 doctors and nurses, over two periods of time. The first period was in April 2018, and the second was in January 2019. Fifty participants took part: 47 doctors and 3 nurses. All were aged between 28 and 55.
Twenty-one participants had already obtained the required IELTS exam result for professional registration. The remaining 29 participants had not yet passed their English exam and were studying towards OET. Data was also collected qualitatively through interviews with six of the participants at the latter stage of the data collection process. Ethical approval was sought and granted at both stages and before any data was collected. Participants were assured full anonymity and confidentiality throughout the process, and the careful saving, handling and deleting of any data was also explained and carried out. When the GMC announced in February 2018 that they would accept OET as an alternative test to IELTS, many members of Reache North West, myself included, were shocked and somewhat apprehensive. The news had not been anticipated, and many members had already invested a considerable amount of time, money and energy into IELTS. Along with preparing the pilot group to take the OET test, it was important to also establish how the members felt about transitioning from IELTS to OET before any decisions to do so could be made.

2 Reache North West defines its organisation as being "an education centre that supports refugee health professionals to gain the necessary skills and qualifications to re-enter their profession and become safe and effective practitioners in the UK National Health Service. We are funded by NHS Health Education England and operate as part of Salford Royal NHS Foundation Trust. We run English language classes and clinical teaching programmes with a particular emphasis on practice in the context of British culture and the NHS" (2019).
At this stage, the participants were therefore asked to complete a short questionnaire (Appendix A) and were also asked to write a reflective piece on their experience of the most recent IELTS exam they had taken. I chose to begin at this point, and in this way, in order to get a sense of how the members of Reache felt about their studies for IELTS thus far, their approach, and how effective they felt it had been, before they moved over to study for a different exam. I also wanted to discover what they thought about the language they had learnt so far in terms of its suitability. This was not asked explicitly, though I wondered if it would be mentioned in their comments. In January 2019, the second stage of the data collection process began, and an online questionnaire was distributed to the entire Reache North West membership. Of the 50 participants, the average length of time individuals had been members of Reache North West was 2 years. Twenty-one had passed their English exam and 29 had not yet passed it. Of the former group, 18 had passed OET and 3 had passed IELTS. It should be mentioned that the 3 who passed IELTS had never studied towards OET, and of the group that had passed OET, 7 had never studied for IELTS, meaning that 40 of the 50 participants had experience of studying towards and/or taking both tests. The questionnaire consisted of 29 questions and was semi-structured, containing a combination of open and closed questions. Some of the Likert-scale ranked questions were followed by an open-ended question where participants could explain their response. This was done in the hope that a more in-depth insight into the experiences of the learners, and how they felt about both tests, could be captured. The questions were framed positively to avoid bias. The objectives of this study remained a point of reference for each of the 17 questions written for the interviews.
While some of the questions were semi-structured, all were indirect, as the aim was to gain the participants' opinions throughout.

ANALYSIS TOOLS

For the first stage, the comments made by individuals regarding the transition from one test to another, and which skills they felt they would need to improve the most in moving from IELTS to OET, were examined. The reflective pieces were analysed to gauge how the learners felt about studying for IELTS and the progress they thought they had made. Comments made by the healthcare professionals when asked about the appropriateness of the OET test in contrast to IELTS were also considered. For the second stage, the results from the online questionnaire were also analysed, and from the data collected at the two stages, themes began to emerge. I then analysed the transcribed interviews and looked for any continuation of the themes identified in the questionnaires. As well as identifying a continuation of themes, additional emerging themes were also recognised. Eventually, five themes were established and a colour was assigned to each theme. I then reviewed the questionnaire responses with the themes in mind, and colour-coded words or phrases thematically.

First stage findings

The findings at this stage in the research were that students held views that were generally negative. Many participants expressed a significant amount of frustration and felt that they had not progressed satisfactorily considering the amount of time they had spent studying towards their IELTS exams. Only two of the thirty participants who wrote the reflective pieces said that they were 'happy' with the progress they had made in their studies. Others used words such as 'depressed', 'devastated', 'struggling' and 'stressed' to express how they felt about their most recent IELTS test and/or their results. Dr AH said 'I feel sometimes that this is an impossible test and I cannot ever achieve the marks that I need to pass the exam'.
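The colour-coding step described in the analysis above can be illustrated with a minimal keyword tagger: each theme (standing in for a colour) has a keyword list, and a response is tagged with every theme whose keywords it contains. The themes and keyword lists below are invented examples based on words reported in the findings, not the author's actual five-theme coding scheme.

```python
# Minimal sketch of thematic colour-coding. Themes and keywords are
# illustrative only, loosely drawn from words quoted in the findings.

THEMES = {
    "emotion":   ["depressed", "devastated", "struggling", "stressed"],
    "relevance": ["relevant", "irrelevant", "related", "career"],
    "time":      ["time", "years", "attempts"],
}

def code_response(text: str) -> list:
    """Return the themes whose keywords occur in the response."""
    lower = text.lower()
    return [theme for theme, words in THEMES.items()
            if any(w in lower for w in words)]

print(code_response("I feel stressed; the topics are not related to my career"))
# -> ['emotion', 'relevance']
```

A real qualitative analysis would of course code by meaning rather than by surface keyword match, but the sketch shows the basic mapping from response text to themes.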
A number of participants wrote about their many attempts to pass the IELTS exam; three mentioned that they had made at least five attempts.3 Some said they obtained the required mark in three of the four areas tested, but failed in one. Then, in another attempt, the individual passed in the area they had failed in previously, but failed in a different area. For example, Dr F said 'in the first exam, I scored 7 overall, with a 7.5 in writing and a 6.5 in reading. However, as I took the next two exams, I improved my reading and passed, but I got 6.5 in writing both times.' She also wrote 'scoring 6.5 in writing twice, in subsequent exams, shattered me and I lost my confidence.' When asked (in question 6) for their initial thoughts on the news of the GMC accepting OET, the comments reflected a more optimistic outlook. Dr W said 'I am very optimistic. I think being more relevant to medical topics will make it easier to pass OET because we are familiar with these topics'. The word 'optimistic' featured quite frequently in the answers to this question. Another member, Dr H, stated 'I am very optimistic about taking the OET exam as a proof of my English ability as I believe the language that I'll be tested on is more relevant to what I was and will be practicing and easier in my opinion' (Carr, 2018).

Second stage findings

When asked whether participants thought that IELTS had been beneficial in the improvement of their English language skills, a total of 68% either agreed or strongly agreed, and only around 10% disagreed or strongly disagreed. Reading seemed to be the skill that many considered to have developed the most, and when asked why, 11 of the explanations included words or phrases such as 'reading quickly', 'skimming and scanning' and 'speed reading'. Those who mentioned writing indicated that 'grammatical structures', 'sentence structure' and 'structuring an essay' were aspects they had 'specifically improved'.
'Learning new vocabulary' was also given as a reason. When asked whether participants thought that OET had been beneficial in the improvement of their English language skills, a total of 75% either agreed or strongly agreed, and only around 6% disagreed or strongly disagreed. In the responses to the question which asked what specifically had improved in their choice of skill, many commented that their writing of a referral letter had improved. One said that this was 'very important for doctors'. 'Communication' was one word which recurred frequently in the responses, and one participant said 'I learnt how to avoid jargons, also the style of language that [is] used with patients'. Another, who indicated that they were being more suitably prepared by studying towards this exam, said 'I learnt to speak naturally, not like a robot'. When asked which part of studying OET had been most beneficial, many answered 'all' or 'most of them'. One person wrote: 'I'm pretty sure almost all the topics I studied in OET were relevant to my profession.' The word 'relevant' appeared numerous times, and one person said 'everything, something related to my career and I need it in everyday practise.' When asked how interested they were in the topics studied for OET, 96% chose either strongly agree or agree. Some of the reasons given were that the topics were 'familiar', or that they were 'interested' in them. One respondent said the topics 'put me on the right track for my career and help me improve in both sides, English and practically for medicine.' Interestingly, some of the responses indicated that the level of engagement was increased as they enjoyed or understood the topics. One participant wrote: 'because the topics are familiar, I understand very well' and another said 'because when I studied English with my favourite topics, I found my English burst up'.
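The agreement percentages reported in this section (e.g. the 68% for IELTS and 75% for OET) can be derived from raw Likert responses in the standard way; the helper name and sample responses below are invented for illustration, as the raw response data is not given in the paper.

```python
# Sketch: percentage of respondents choosing 'agree' or 'strongly agree'
# on a five-point Likert item. The sample responses are invented.

AGREEING = {"agree", "strongly agree"}

def pct_agreeing(responses) -> float:
    """Share of agreeing responses, as a percentage to one decimal place."""
    return round(100 * sum(r in AGREEING for r in responses) / len(responses), 1)

sample = ["agree", "strongly agree", "neutral", "disagree", "agree",
          "strongly agree", "agree", "neutral"]
print(pct_agreeing(sample))  # 62.5
```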
When asked if the topics in OET related to their future career, one person stated:

Studying for OET helped me to get my confidence back to study and prepare for my future career. Studying for OET played a major role in improving my speaking and writing skills. Because we are learning how to explain medical language in a language that can be understood by the patients.

As well as the more positive tone detected in the responses, the relevance of the content also seemed to be of significant importance to the participants. One person reflected on the relevance of studying for OET: 'that is what I need and what I will use in the future. Otherwise, in IELTS, I have to know everything, spades, dinosaurs, oceans.' The percentage of those studying for OET who considered the topics relevant to their future career was significantly high, with 83% in strong agreement. This was in contrast to the percentage of those who considered the topics studied for the IELTS exam to be relevant: only 14% either agreed or strongly agreed with this. When asked why they had responded to this question in such a way, the most common words or phrases used in the responses were 'irrelevant' (topics) or 'not related to' or 'close to' (medical field/my career/my field/future career). Words such as 'boring' and 'uninterested' also appeared. One person said: 'I'm not interested in reading talking about insects or animals', and another stated 'it was like having a branch from each tree in the jungle. History, environmental issues and building a dam.' The issue of time was also mentioned in relation to the IELTS preparation: 'it takes us away from our profession and let us forget most of our career information because of lack of enough time to keep updated in medical knowledge'. Themes of time, relevancy and engagement were emerging, but it was clear that for further explanations, qualitative data would need to be collected.
The overall rating given to the experience of studying towards the two tests was significantly different. For OET, 44 of the participants rated it good or excellent, whereas for IELTS, just 3 rated it as excellent.

Interviews

Of the six participants invited for interview, all were doctors, four of whom were male and two female. The ages ranged between 30 and 45 years, and apart from one (Dr T), they had all studied at Reache North West for an average of two years. All of those interviewed had experienced preparing for and taking both tests, apart from Dr A, who had never sat an IELTS exam because he 'saw friends struggling to get results so [he] never attempted it'. The remaining doctors had studied IELTS for an average of one and a half years. Dr T had, however, been attempting to pass IELTS for four years. Whilst most of the interviewees had studied IELTS for a similar length of time, the time each had spent studying for OET differed. The average overall IELTS band score of the participants at the point of transitioning to OET was between 6 and 7. However, Dr H's and Dr T's (who both passed in April 2018) overall band scores were 7.5/8, and both reported that writing had been the issue, as they could only reach 6.5 in that skill each time they sat the exam. Dr T said 'I would get 8, 8.5, even 9 in listening and/or reading, but every time I would get 6.5 or even 6 in writing. It was very frustrating.' When asked about their experience of studying IELTS, the general response was fairly negative. Dr S stated 'it was all just a bit vague, I mean, you didn't know what to study. It's a general language exam [...] it was just a long tunnel, never-ending. And my colleagues - no one was passing'. Dr M stated that 'the content wasn't very appealing. We were doing topics which weren't related to my career so I wasn't motivated at all. I wasn't really engaged. You just think it's going to take a really long time.'
In contrast, when asked about their experience of studying towards OET, the comments were quite different. Dr M, for example, said 'OET felt like this is what I need. I was more engaged, so I did more homework and studied more, wrote a load of referral letters and I remembered the language from medical school so could still use the language I had learnt [there].' In terms of the development of the participants' English skills, when asked which skill they thought had improved the most, each of the four skills was mentioned. Furthermore, when asked whether studying for IELTS or studying for OET had helped the most to improve this skill, the answers were mixed. However, Dr F gave an interesting response. She said that OET had helped her to develop her strongest skills (writing and reading) the most, because she had been able to 'focus more on the language rather than the topic.' This comment prompted consideration of the idea of having the space to focus on the language when the topic is familiar. This had also been mentioned in the questionnaires. When asked what advice they would give to a healthcare professional who is trying to decide whether to study for and sit IELTS or OET for registration purposes, the responses were unanimous. All six of the interviewees expressed the view that they would encourage the person to work towards OET. Dr A said: 'OET is achievable and feasible and will help a healthcare professional in his future career and with his communication skills.' Dr M gave the following response:

OET is the best exam for healthcare professionals in general, because it has a direct relationship with your profession and this is not just about passing the test, it's about learning the language that you will need in the world or you will need to deal with patients, so it's very helpful for healthcare professionals to study OET.
The final question asked for their opinions on the suitability of each test for testing the English skills of a healthcare professional. Dr F stated:

OET is a strong test, it's not easy, but it's more useful. IELTS is useful for general English. Both are good, but with IELTS, I would struggle preparing for PLAB 1 and 2 and also working in the UK as I've been exposed to medical topics and the healthcare system here as I learnt some of that whilst preparing for the test.

The idea that IELTS is 'unachievable' has been a constant theme throughout both stages of the research. First it was mentioned by participants of the pilot study, and then this idea was reflected on again during the interviews in the second stage. Dr H said 'in the 3 exams I took I passed the modules except writing, I always got 6.5.' Table 1 shows the figures which have been taken from the database at Reache North West, and whilst they are not statistically representative, they can act as a guide as to how the members of Reache North West have fared in their English exams over the last five years. […] members of Reache North West as they witnessed significant numbers of their colleagues pass through the part of their journey which had previously been considered almost unachievable. Through considering the comments and written responses of the Reache members who participated in this study, it would seem that, generally speaking, the members felt that passing the IELTS exam with an overall band of 7.5 (with a minimum of 7 in any skill) was an extremely daunting prospect. According to the feedback of many of the participants, it seems (overall) that despite the amount of time spent trying to study for and pass the exam, the variety of resources used to study, the mode of study and the long lengths of time spent studying, the exam was very difficult to pass.
The suitability of the style of writing being tested was also questioned by many, as was the appropriateness of learning language to deal with an extensive range of non-medically related topics. The participants interviewed expressed a preference for being tested on language which was 'career-related' and which they could 'do' something with. Dr A said he had 'learnt how to speak like a robot for IELTS, but for OET, it felt like a real conversation, with a patient or a person, so it's the real English that we need in our career and in this country.' Dr A said, when talking about preparation for OET: 'I felt more motivated, it seemed like something feasible, achievable, and related to my future career. I studied and worked harder as I knew all the vocabulary and information I would learn in that period would be used in my future career.' Time, and how it was considered 'wasted', was also an emerging theme, particularly in stage one of the data collection process. Many students had spent a long time attempting to pass IELTS. I would like to conclude this section by offering some insight into my own experience of teaching both OET and IELTS. The resources and materials used to deliver English for OET are, of course, more medical and therefore relate to the learner's career. However, what this has also allowed is a greater opportunity for me and my colleagues to expose the learners to topics that are culturally different yet extremely important for healthcare professionals to be aware of before taking up work in countries such as the UK. The cultural perspective, which is intrinsic in a globalised world (Jiang, 2013), is given more time and space, as the language needed for this test can be found, studied and practised within topics which lend themselves to the exploration of culturally relevant, but sometimes very culturally different, themes.
Topics such as Safeguarding, Palliative Care and Discharge Planning are all processes which can differ significantly in a clinical setting in different countries. Noun phrases such as 'care home' or 'home carer' often do not translate, as in the home countries of many of my learners these places and roles do not exist. Challenging or controversial topics such as euthanasia and abortion can be stressful and problematic for the learner and often require a great deal of sensitivity in the approach. However, it would be fair to say that exposure to such topics is vital when international healthcare professionals enter the world of work, as they will undoubtedly work with others and treat patients who may have very different attitudes or views to their own. Most of my students are Arabic learners from Muslim countries, so topics such as alcohol and drug abuse can be problematic in that their existing knowledge is often very limited. For example, defining what constitutes a 'heavy' or a 'social' drinker could be very different from one person to the next, particularly if one person is from a country where alcohol is prohibited and is of a religion that forbids its consumption. The narrowed scope of topics in OET can be useful in that it gives access to a greater exploration of topics which are vital for healthcare professionals to be aware of if they are going to practise within a UK context. In contrast, for IELTS, the focus of the lesson would need to be on topics which would be expected or anticipated to appear in the exam. Topics such as global warming and space exploration would often leave no room for topics related to the healthcare professional's career.

CONCLUSION

This research has highlighted some of the key issues surrounding the use of a language test such as IELTS for purposes for which it was not designed.
The issues around language tests which assess a person's competence in a 'standard' form of a language have been questioned, as has the suitability of using one test for so many different functions. Beyond university, there is the English needed for the globalised world as well as the English needed for the world of work. 'Messy communication' is how out-of-class communication has been described (Badwan, 2017), but there is also the 'highly evolved, career specific, technical and culture-bound' language (Hull, 2015) which is needed by healthcare professionals at work. Having a test such as OET, which deals specifically with the language needed in the occupation of a group of people, can perhaps form the backdrop for us to see just how unsuitable a test such as IELTS is for some learners, and it may also provide a model for other specific purposes English tests. There was a need for formal evidence that OET is a more suitable test for the healthcare industry than IELTS, and this study shows how the OET can enhance language proficiency in candidates in a more effective way. The GMC and NMC have acknowledged this, as they now accept OET as an alternative to IELTS for registration purposes. It is hoped that this research can provide some evidence for other healthcare regulatory bodies, and for other countries employing English-speaking healthcare professionals, to consider accepting the OET as evidence of language competency. OET is a test which is relatively new to the UK. The data for this study was collected from participants who had had a maximum of 11 months of experience of the exam. It may therefore be beneficial for further research at a later stage, when the experience of individuals who started to study towards the exam with a lower
Clinical and hematobiochemical response in canine monocytic ehrlichiosis seropositive dogs of Punjab

Aim: As the seroprevalence and distribution of ehrlichiosis in relation to clinico-hematobiochemical response remain largely unexplored in India, and especially in Punjab state, this study was designed to determine the prevalence of vector (tick)-borne tropical canine pancytopenia caused by Ehrlichia canis through the enzyme-labelled ImmunoComb® (IC) assay in dogs from in and around Ludhiana, Punjab. Prevalence was correlated with various clinico-hematobiochemical parameters.

Materials and Methods: The seroprevalence study was carried out using the IC® test kit (Biogal, Galed Labs). The study was conducted in 84 dogs presented to the Small Animal Clinics, Teaching Veterinary Clinical Complex, Guru Angad Dev Veterinary and Animal Sciences University, Ludhiana, Punjab.

Results: Out of 84 dogs suspected of ehrlichiosis, 12 (14.28%) were positive for the morulae of E. canis on peripheral thin blood smear examination, and 73 (86.90%) were found positive for E. canis antibodies with the IC® canine Ehrlichia antibody test kit. Among the different age groups, the 1-3 year age group showed the highest prevalence (41.09%), followed by the 3-6 year age group (32.87%); infection levels were lower in dogs under 1 year of age (13.69%) and in dogs over 6 years of age (12.32%). The highest prevalence was seen in the Labrador retriever. This study indicates that season plays a very important role in the prevalence of ehrlichiosis. The most common findings observed were anemia, leukocytosis, neutropenia, lymphopenia, thrombocytopenia and eosinophilia, followed by hyperbilirubinemia, increased levels of aspartate aminotransferase, alanine aminotransferase and alkaline phosphatase, hypoalbuminemia, hyperglobulinemia, a decreased albumin-to-globulin ratio, and increases in blood urea nitrogen and creatinine.
Conclusions: Serological techniques like IC® are more useful for detecting chronic and subclinical infections and are ideally suited to epidemiological investigations.

Introduction

Canine ehrlichiosis, a tick-borne disease transmitted by Rhipicephalus sanguineus (the brown dog tick), is caused by Ehrlichia canis, a small, Gram-negative, pleomorphic, obligate intracellular coccus that infects blood cells of dogs; it is among the vector-borne diseases affecting dogs [1]. Clinical signs vary across the acute, subclinical and chronic phases. However, the disease is mainly characterized by high fever (104-105°F), anorexia, weakness, epistaxis, lymphadenopathy, and edema of dependent parts [2]. Diagnosis is mainly based on routine blood smear examination. However, more sensitive and specific molecular and serological diagnostic techniques can be used for confirmation of cases negative by microscopy. The gold standard test for detection of canine monocytic ehrlichiosis (CME) is the indirect immunofluorescence antibody (IFA) test. However, this test has to be performed in selected laboratories and requires extensive equipment and trained personnel. The enzyme-linked immunosorbent assay (ELISA), by contrast, is a semi-quantitative test in which small quantities of antigen are used to detect the specific antibodies. In particular, commercially available dot-ELISA kits are used to detect E. canis immunoglobulin-G (IgG) antibodies [3]. Among these, the ImmunoComb (IC)® (Biogal, Israel) dot-ELISA has been efficient in detecting anti-E. canis antibodies in sera from naturally infected dogs presenting symptoms [4]. Not much work has been done on seroprevalence in relation to hematobiochemical changes in ehrlichiosis in Punjab, India.
Therefore, this study was conducted to investigate the serology of E. canis infection in relation to clinico-hematobiochemical changes.

Study area

This study was conducted at the Small Animal Clinics, Teaching Veterinary Clinical Complex, Guru Angad Dev Veterinary and Animal Sciences University, Ludhiana, Punjab, India. After complete clinical examination, 84 dogs with signs of ehrlichiosis and reduced platelet counts were screened by both blood smear examination and the IC® dot-ELISA kit, and samples were subjected to hematobiochemical studies.

Hematobiochemical parameters

The collected blood samples were subjected to complete hematology (hemoglobin [Hb], total leukocyte count [TLC], differential leukocyte count, and total platelet count) on an ADVIA® 2120 (Hematology System, Siemens Healthcare Diagnostics Inc., USA), and serum samples were used for biochemical analysis (total bilirubin, aspartate aminotransferase [AST], alanine aminotransferase [ALT], alkaline phosphatase [ALKP], total protein, albumin, blood urea nitrogen [BUN], and creatinine) on an automatic biochemical analyser (Johnson & Johnson Diagnostic Kits, Mumbai, India). Results obtained from blood smear examination, hematobiochemical studies, and serological studies were compared and analyzed to reach a definitive diagnosis.

Serological detection of IgG anti-E. canis antibodies by IC® canine Ehrlichia antibody test kit (Biogal, Galed Labs)

Serum samples obtained from the day 0 blood samples of dogs suspected of ehrlichiosis were used for this study. Day 0 animals were naturally infected via vector transmission and presented to the clinic with signs of ehrlichiosis. Serum samples from these animals were subjected to the IC® canine Ehrlichia antibody test (Biogal Galed Lab., Israel) on the same day at room temperature (20-25°C), and tests were performed according to the manufacturer's instructions. The sensitivity of the test is 100% and its specificity is 94.1%. The test does not cross-react with antibodies against other blood parasites.
The intensity of the color reaction relative to the positive reference spot was used as a guide to the level of antibodies in each sample: color reactions as intense as the reference spot were considered positive for antibodies against E. canis, whereas a colorless or faint gray reaction indicates either a negative result or undetectable levels of antibodies. Antibody titers for the different "S" levels (IC® scores) were assigned as per the manufacturer's protocol: S1 and S2 (1:20-1:40), S3 and S4 (1:80-1:160), and S5 and S6 (1:320-1:1280) [5]. To assess treatment efficacy, blood samples collected on days 15 and 21 post-treatment were subjected to nested polymerase chain reaction, and hematobiochemical improvement was assessed after the 2nd and 3rd weeks post-treatment.

Statistical analysis

The prevalence of the disease was determined with regard to month, season, age, breed and sex of the affected animals, and possible hematobiochemical alterations and associations between the evaluated variables and a positive reaction to the agent were determined. Statistically significant differences in hematobiochemical parameters between the positive groups and the control group were analyzed by one-way analysis of variance at the 5% level of significance using SPSS software (Tukey multiple comparison test).

Parasitological prevalence

Examination of Leishman-stained peripheral thin blood smears revealed 14.28% (12/84) positivity for morulae of E. canis. E. canis was observed as intracytoplasmic inclusion bodies of varying sizes and shapes in monocytes. The majority of morulae were homogeneous, dense inclusions, detected mostly in monocytes. The most commonly encountered form was the large spherical morula of size 5.4 μm (Figure-1).

Seroprevalence

Among the 84 suspected dogs, 73 (86.90%) were seropositive for E. canis antibodies. A high positive reaction to E.
canis was seen in 53.57% (45/84) of cases, a medium positive reaction in 22.61% (19/84), and a low positive reaction in 10.71% (9/84); a negative reaction was seen in 13.09% (11/84) of cases. Reactions were characterized by the intensity of the dot developed on the comb, which was cross-matched with the CombScale, and the titer was graded according to the matching "S" level; the dot-ELISA is read with the naked eye. Results are shown in Table-1 and were interpreted according to the standard data provided in the instruction manual supplied with the ELISA kit (Table-2). All 12 blood smear-positive cases were also seropositive.

Age-wise and sex-wise prevalence

Among the age groups, dogs aged 1-3 years showed the highest prevalence (41.09%), followed by the 3-6 years group (32.87%); infection levels were lower in dogs <1 year of age (13.69%) and >6 years of age (12.32%). Higher prevalence was recorded in males (71.23%) than in females (28.76%).

Breed-wise distribution

The highest prevalence was seen in the Labrador retriever. One case with clear signs of ehrlichiosis, positive by both microscopy and serology, showed a high positive titer to E. canis antibodies (1:320-1:1280) (Figure-2). In the study area and within the study period, Labrador retriever and German Shepherd dog (GSD) breeds were presented to the clinics more often than other breeds, so little significance can be attached to the breed-wise distribution of infected dogs. The detailed distribution and numbers of dog breeds seropositive for E. canis are shown in Table-3. Five Labrador retrievers and four GSDs aged between 2 and 5 years, and two Pomeranians of 3 years of age, were negative by serology.

Season-wise distribution

This study indicates that season plays a very important role in the prevalence of ehrlichiosis, with a significant relation between season and prevalence.
Most cases were seen in the rainy season (50.68%), followed by summer (27.39%) and autumn (12.32%), with the fewest in spring (9.58%). No cases were reported in winter, indicating a decrease in prevalence with decreasing ambient temperature (Table-4).

Vital body parameters

The mean±standard deviation rectal temperature of seropositive dogs (104.13±1.52°F) differed significantly from that of the control group (92.01±0.71°F), whereas no significant differences in heart rate or respiration rate were noted between the infected and control groups (Table-6).

Blood smear examination and clinical findings

Our microscopic findings agree with those of Eljadar [6], who reported that 7.9% (75/951) of cases were positive for ehrlichiosis by blood smear examination. Milanjeet [7] found 2.34% of cases positive for E. canis morulae in the same region of Punjab, and Dhankar et al. [8] found 11.35% of dogs positive for ehrlichiosis in the Haryana and Delhi states. Our clinical findings in dogs with canine monocytic ehrlichiosis agree with those of Das and Konar [9] and Sacchini et al. [10]. Shipov et al. [11] reported that about 37.5% of positive cases had rectal temperatures above 107.25°F.

Age- and sex-wise distribution

In this study, dogs in the 1-3 years age group showed the highest prevalence. Harrus et al. [12] observed the disease in all age groups. Harikrishnan et al. [13] reported dogs aged from 15 days to 15 years affected with ehrlichiosis, indicating that dogs of all ages are susceptible. Abiramy et al. [14] observed that the maximum number of cases of canine ehrlichiosis (36%) occurred in dogs 5-10 years of age, with most cases in female dogs. Costa et al. [15] observed that male dogs more than 5 years of age had higher rates of anti-E. canis antibodies.
Breed- and season-wise distribution

In this study, disease prevalence was highest in the Labrador retriever breed compared to others (Table-1). Chandrasekar et al. [16] and Bhadesiya and Modi [17] likewise found the Labrador breed to be the most susceptible. In our study, the disease was most prevalent in the rainy season and summer, followed by autumn, and least prevalent in spring. The probable reason behind this trend is the seasonal activity of the brown dog tick, R. sanguineus, which, according to Soulsby [18], is most abundant during the hot and humid period of the year. Similarly, Eljadar [6], from Ludhiana, Punjab, recorded the maximum prevalence of the disease during the summer season (56%), followed by the rainy season (37%).

Serological examination

Our results largely agree with earlier work by Eljadar [6] in the same region of Punjab, who found 93.33% (70/75) of cases positive by serology. Harikrishnan et al. [13] detected E. canis antibodies in sera from 21 of 56 dogs (37.5%) by ELISA and 23 dogs (41.1%) by dot-ELISA, and stated that ELISA is a valuable tool for diagnosing the subclinical and chronic forms of canine ehrlichiosis. Akhtardanesh et al. [19] found an overall seroprevalence of ehrlichiosis of 14.63%, determined as 13.8% and 10.6% using the IFA test and rapid immunochromatography, respectively. de Castro et al. [20] stated that 30 days after inoculation all infected dogs showed positive titers for E. canis when all samples were tested for a specific IgG response with a dot-blot ELISA kit (IC®, Biogal). Sasanelli et al. [21] reported a case with an antibody titer of 1:160. Castro [22] and Oria [23] used the IC test to determine IgG antibodies specific for the organism. Variable prevalence of ehrlichiosis has been reported from various parts of India: Kumar et al. [24] reported an overall positivity for E. canis of 6% (29/485) in canines from Chennai city.
Chipde et al. [25] reported a 42.85% prevalence of canine ehrlichiosis in Nagpur city. Ybanez et al. [26] found 438 of 913 cases serologically positive for E. canis using the IC® (Biogal) test kit; positive dogs showed varied clinical signs that may be influenced by the thrombocytopenic and anemic states of the affected animals.

Hematobiochemical findings of ehrlichiosis

Thrombocytopenia, anemia, hypoalbuminemia, increased ALKP, and a decreased albumin-to-globulin ratio were the most common findings in diagnosing canine ehrlichiosis. This study found a 100% prevalence of thrombocytopenia in E. canis-seropositive dogs. A similar study by Bhadesiya and Modi [17] showed that the mean values of Hb, PCV, TEC, TLC, and total platelet count were significantly decreased in dogs positive by the IC® test kit. Sasanelli et al. [21] showed increased levels of ALT, AST, ALKP, BUN, creatinine and total bilirubin. Asgarali et al. [27] stated that thrombocytopenia is a common finding in dogs with ehrlichiosis. Akhtardanesh et al. [19] found that 16.66% of seropositive cases displayed hyperglobulinemia, thrombocytopenia, leukopenia, anemia, and a high ALKP level. Kuehn and Gaunt [28] reported a low albumin-to-globulin ratio as a serum biochemical abnormality in natural infection with E. canis. Mylonakis et al. [29] observed hypoalbuminemia and increased ALT activity in dogs with ehrlichiosis. In summary, the IC® canine Ehrlichia antibody test kit can be used both for prevalence studies and as a pen-side diagnostic tool for CME, alongside the routinely used conventional methods, and the above-mentioned hematobiochemical alterations should be included in the differential diagnosis when observed during routine laboratory evaluations.
Conclusions

Since the prevalence and distribution of ehrlichiosis in India remain largely unexplored, serological techniques like IC® are more useful for detecting chronic and subclinical infections and are ideally suited to epidemiological investigations. The IC® canine Ehrlichia antibody test kit can be used as a pen-side test in diagnosing canine monocytic ehrlichiosis.

Authors' Contributions

MRK: Conducted the research work and prepared the manuscript. PSD: Designed the research work and procured the IC® antibody test kit. LDS: Conducted the microscopic examination of the blood smears and helped in preparation of the manuscript. BKB: Provided research materials to carry out the research work. SKU: Provided useful technical inputs and helped in collection of samples. All authors have read and approved the final manuscript.
Firework-related injuries treated at emergency departments in the United States during the COVID-19 pandemic in 2020 compared to 2018–2019

Background

Despite a national decrease in emergency department visits in the United States during the first 10 months of the pandemic, preliminary Consumer Product Safety Commission data indicate increased firework-related injuries. We hypothesized an increase in firework-related injuries during 2020 compared to prior years, related to a corresponding increase in consumer firework sales.

Methods

The National Electronic Injury Surveillance System (NEISS) was queried from 2018 to 2020 for cases with product code 1313 (firework injury) and narratives containing "fireworks". Population-based national estimates were calculated using US Census data, then compared across the three years of study inclusion. Patient demographic and available injury information was also tracked and compared across the three years. Firework sales data obtained from the American Pyrotechnics Association were examined for the same time period to identify trends in consumption.

Results

There were 935 firework-related injuries reported to the NEISS from 2018 to 2020, 47% of which occurred during 2020. National estimates for monthly injuries per million were 1.6 times greater in 2020 compared to 2019 (p < 0.0001), with no difference between 2018 and 2019 (p = 0.38). The same results were found when the month of July was excluded. Firework consumption in 2020 was 1.5 times greater than in 2019 or 2018, with a 55% increase in consumer fireworks and a 22% decrease in professional fireworks sales.

Conclusions

Firework-related injuries saw a substantial increase in 2020 compared to the two years prior, corroborated by a proportional increase in consumer firework sales.
Increased incidence of firework-related injuries was detected even with the exclusion of the month of July, suggesting that the COVID-19 pandemic may have impacted firework epidemiology more broadly than US Independence Day celebrations.

Background

Firework displays remain an integral part of the American cultural experience, punctuating national and local holiday celebrations, sporting events, fairs, and festivals. Both commercial (Walger et al. 2020) and consumer-based firework sales peak in January and July, during New Year's and Independence Day celebrations, with a corresponding rise in firework-related injuries during these months (Canner et al. 2014). Despite public education, improved firework safety, and downward-trending annual rates of firework injuries, the distribution of injuries by age and sex has remained largely unchanged since the 1980s (Billock et al. 2017). Children account for more than 50% of all firework injuries in the US (D'Ippolito et al. 2010), with males being three times more likely to be injured than females (Billock et al. 2017). The COVID-19 pandemic has had a substantial impact on healthcare utilization in the United States, particularly regarding emergency department (ED) visits. The Centers for Disease Control and Prevention (CDC) reported a 42% decrease in ED visits immediately following declaration of a national emergency for COVID-19 in mid-March, with levels consistently lower than parallel pre-pandemic months until 2021 (Adjemian et al. 2021). Despite this, the Consumer Product Safety Commission (CPSC) preliminarily reported significant increases in emergency department visits related to a number of products including skateboards, scooters, all-terrain vehicles, and fireworks (Schroeder 2021). Additionally, press reports from the American Pyrotechnics Association (APA) suggested an "all time high" for consumer firework purchases during the summer of 2020 (Association 2020).
With the pandemic came a series of mandates and prohibitions pertaining to large in-person gatherings and social distancing recommendations. The social changes imparted on society by the pandemic likely influenced the proportion of consumer firework sales during 2020 and therefore may have also influenced the epidemiology of firework injuries during the same time period. We sought to determine rates of firework-related injuries during 2020 compared to years prior, evaluated in conjunction with firework sales data, to direct injury prevention strategies in light of an ongoing global pandemic.

Methods

The National Electronic Injury Surveillance System (NEISS) was used to analyze firework-related injuries. The NEISS database collects information pertaining to consumer product-related injuries from approximately 100 emergency departments across the country under the United States Consumer Product Safety Commission (CPSC). The CPSC then uses these data to generate national estimates for product-related injuries. NEISS data consist of demographic, injury-related, and narrative descriptions for each case (Commission 2020). The NEISS was queried from 2018 to 2020 for all encounters with product codes for firework-related injuries (product code 1313). Additionally, narratives were individually evaluated to confirm injuries related to fireworks. Population-based estimates and 95% confidence intervals of ED visits for firework-related injuries were calculated using US census data and NEISS calculated sample weights. Published estimates from the US Census Bureau were used to calculate population-based estimates for 2020, using the "high" estimate to prevent over-estimation of injuries for that year. Available patient demographics and injury-related information were acquired for each year.
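The weighting step described above can be sketched as follows; the case weights and population figure here are invented for illustration (actual NEISS statistical weights are assigned by hospital stratum, and the study used Census Bureau population estimates):

```python
# Sketch: a NEISS-style national estimate is the sum of the statistical
# weights of the sampled cases matching the query; dividing by the
# population gives the per-million rate reported in this study.
case_weights = [78.4, 102.9, 78.4, 15.3, 102.9]  # hypothetical, one per sampled ED case

national_estimate = sum(case_weights)            # estimated national case count

population = 331_000_000                          # assumed census figure
per_million = national_estimate / population * 1_000_000
```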
The variable specifying body part injury was split into the four most commonly injured individual body parts (hand, eye, finger, face), while all remaining body parts were grouped as 'other' (lower extremity, trunk, shoulder, foot). Variables such as geographic location of injury and disposition were categorized according to the NEISS data set. National estimates for monthly firework-related injuries per million were compared between 2018, 2019, and 2020 in a time-series analysis using Poisson regression with each month as a cluster to account for within-month correlation of data (Fig. 1). An offset using the log-transformed population estimate was included in the model to derive incidence rates. Comparisons were performed with and without inclusion of the month of July to investigate effects of the pandemic beyond the month with the greatest incidence of injuries. Additionally, firework sales data were acquired from the American Pyrotechnics Association (APA) for the same time period (2018-2020) to correlate with ED admission data. Firework sales are expressed as consumption in pounds (lbs.) of fireworks and divided into professional (display fireworks) and consumer usage. Statistical analysis was completed using SAS 9.4 (Cary, NC) software. Statistical significance was set at p < 0.05, two-sided. Given that this is a cross-sectional study of a large national dataset, it was not feasible to involve patients or the public in study design. This study was considered exempt from review by the institutional review board of the Yale School of Medicine.

Patient demographics and injury data

Between 2018 and 2020, there were 935 cases of firework-related injuries reported within the NEISS, of which 440 (47%) were during 2020. The majority of recorded cases were in patients < age 30 (57-61% of all cases each year). The 'hand' was the most frequently injured body part (22-25% of all cases each year).
Patient disposition after presentation to the ED was similar in each year, with a slight increase in 2020 in the number of patients transferred and those who left without being seen. While the geographic location where injuries took place remained broadly similar, there was a slight increase in injuries at home and in the street, with a slight decrease in "other public property". All available patient and injury information are listed in Table 1. Notably, children and young adults remained the largest age groups by percentage through the years studied (Table 1).

Time-series analysis of monthly firework-related injuries

National estimates for firework-related injuries per million are plotted over time in Fig. 1, demonstrating a small annual surge in January as well as a larger spike during the month of July, with lesser increases in the fringe months of June and August. On time-series analysis, national estimates for monthly injuries per million were 1.6 times greater in 2020 compared to 2019 (3.9 vs 2.5, respectively, p < 0.0001), with no difference between 2018 and 2019 (2.3 vs 2.5, p = 0.38). When the month of July is excluded from each year, estimated monthly injuries per million were still 1.9 times greater in 2020 compared to 2019 (1.5 vs 0.8, p < 0.0001), again with no difference demonstrated between 2018 and 2019 (0.9 vs 0.8, p = 0.49). National estimates for firework-related injuries used to generate Fig. 1 and monthly raw case counts for each year are presented in Table 2.

Firework sales data

Between 2018 and 2020, the APA reported 955 million lbs. of fireworks consumed in the US, 42% (401 million lbs.) of which occurred in 2020 (Fig. 2). This is roughly 1.5 times the amount consumed in either 2019 or 2018.

Discussion

While the COVID-19 pandemic and subsequent quarantines drove down broad categories of ED presentation (with the largest declines observed in the pediatric population), certain types of recreation-driven injuries presented with greater frequency than in pre-pandemic years (Adjemian et al. 2021).
This study demonstrates a concerning increase in firework-related injuries in 2020 compared to years prior. An increase in firework-related injuries was observed not only during the "hotspot" month of July, but throughout the year, suggesting that the pandemic impacted the epidemiology of firework-related injuries beyond traditional celebratory clusters. Indeed, the months of June through November saw a 2-6-fold increase in national injury estimates over the same months during the 2 preceding years. This increase in injuries was paralleled by a surge in consumer firework consumption as reported by the APA, with a corresponding drop in professional firework displays in 2020. The startling increase in firework-related injuries highlights the need for constant injury surveillance and, most importantly, for innovation in injury prevention program development and messaging, based on timely assessment of trends in injury epidemiology.

Fig. 1 Trends in monthly firework-related emergency department visits in the United States

Work by the CPSC in terms of surveillance and dissemination of information, and educational messaging efforts by organizations like the American Academy of Pediatrics, should be expanded upon for greater impact. A national lockdown, reduced public celebrations, and increased time at home may explain the increase in consumer firework sales, which led to a corresponding rise in injuries. Traditional firework displays produced for large communities were mostly cancelled during 2020 due to social distancing recommendations and federal mandates (Courtemanche et al. 2020; Creswell 2020), yet consumer firework consumption soared. Prior research has correlated increased firework sales with injuries (Morrissey et al. 2021). Furthermore, injuries are more common with consumer fireworks operated by novices compared to display fireworks operated by professionals (Canner et al. 2014; Sandvall et al. 2017).
Review of the 2020 CPSC annual Fireworks Report revealed that patients aged 0-24 accounted for 49% of all injuries during fiscal year 2020. Fingers, hand, head, face, and eyes are the most commonly injured body parts, and injuries to them can lead to life-long disability (Allison Marier and Lee 2021). As with all cross-sectional reviews of a national data repository, possible limitations of this study include coding error, misclassification, or underestimation of injury incidence. Furthermore, US public health policy has varied widely by state and by pandemic month, resulting in disparate governance over public gatherings and celebrations. Over half of the reported data for geographic location are missing in this database; such data might permit a more nuanced understanding of firework usage. Additionally, information regarding case severity is limited, which could affect the interpretation of the increased incidence of firework-related injury. This is a large cross-sectional investigation of firework-related injuries presenting to EDs in the US demonstrating a dramatic increase in injuries in 2020 compared to years prior. This occurred in conjunction with a corresponding rise in firework sales the same year and warrants further investigation into injury severity and strategies for injury prevention.
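To make the headline comparison concrete, the incidence-rate-ratio logic can be reproduced with a back-of-the-envelope calculation. The monthly counts below are invented for illustration; the closed form shown is what a Poisson regression with a single year indicator and a log-population offset reduces to (the published model additionally handles within-month clustering and yields p-values):

```python
# Hypothetical monthly injury counts, Jan..Dec; real NEISS national
# estimates are survey-weighted, so these numbers are illustrative only.
counts_2019 = [30, 8, 6, 5, 7, 25, 180, 20, 6, 5, 4, 10]
counts_2020 = [35, 10, 9, 9, 14, 60, 280, 45, 15, 12, 10, 18]
pop_millions = 330.0  # assumed (constant) population denominator

# Monthly injuries per million, the scale reported in the study
rate_2019 = sum(counts_2019) / (12 * pop_millions)
rate_2020 = sum(counts_2020) / (12 * pop_millions)

# With one binary year indicator and a log-population offset, the
# Poisson-regression incidence rate ratio reduces to this ratio of rates:
irr = rate_2020 / rate_2019

# Sensitivity check excluding July (index 6), mirroring the paper's analysis
july = 6
irr_no_july = (sum(c for i, c in enumerate(counts_2020) if i != july)
               / sum(c for i, c in enumerate(counts_2019) if i != july))
```

In the full analysis the regression framework also provides confidence intervals, which a raw ratio of counts does not.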
Empowering employees with chronic diseases; development of an intervention aimed at job retention and design of a randomised controlled trial

Background

Persons with a chronic disease are less often employed than healthy persons. If employed, many of them experience problems at work. Therefore, we developed a training programme aimed at job retention. The objective of this paper is to describe this intervention and to present the design of a study to evaluate its effectiveness.

Development and description of intervention

A systematic review, a needs assessment and discussions with Dutch experts led to a pilot group training, tested in a pilot study. The evaluation resulted in the development of a seven-session group training combined with three individual counselling sessions. The training is based on an empowerment perspective that aims to help individuals enhance knowledge, skills and self-awareness. These advances are deemed necessary for problem solving in three stages: exploration and clarification of work-related problems, communication at the workplace, and development and implementation of solutions. Seven themes are discussed and practised in the group sessions: 1) Consequences of a chronic disease in the workplace, 2) Insight into feelings and thoughts about having a chronic disease, 3) Communication in daily work situations, 4) Facilities for disabled employees and work disability legislation, 5) How to stand up for oneself, 6) A plan to solve problems, 7) Follow-up.

Methods

Participants are recruited via occupational health services, patient organisations, employers, and a yearly national conference on chronic diseases. They are eligible when they have a chronic physical medical condition, have a paid job, and experience problems at work. Workers on long-term, 100% sick leave that is expected to continue during the training are excluded.
After filling in the baseline questionnaire, the participants are randomised to either the control or the intervention group. The control group will receive no care or care as usual. Post-test mail questionnaires will be sent after 4, 8, 12 and 24 months. Primary outcome measures are job retention, self-efficacy, fatigue and work pleasure. Secondary outcome measures are work-related problems, sick leave, quality of life, acquired work accommodations, burnout, and several quality-of-work measures. A process evaluation will be conducted, and satisfaction with the training, its components and the training methods will be assessed.

Discussion

Many employees with a chronic condition experience problems in performing tasks and in managing social relations at work. We developed an innovative intervention that addresses practical as well as psychosocial problems. The results of the study will be relevant for employees, employers, occupational health professionals and human resource management (HRM) professionals.

Trial registration

ISRCTN77240155

Background

Persons with longstanding health problems or handicaps have paid jobs less often than healthy persons. The employment rate in several countries in Europe is approximately one third lower for these individuals [1][2][3]. These figures differ substantially across chronic diseases. The majority of rheumatoid arthritis patients in the USA and the Netherlands are employed (59% and 56%), although the prevalence of premature work cessation rises steadily with disease duration [4,5]. For inflammatory bowel disease the figures are roughly the same: about 60% [6], or even more [7,8], are employed; for the USA the figures are somewhat higher, for Europe somewhat lower. Patients with chronic obstructive pulmonary disease (COPD) also do rather well: 52% of Dutch patients between the ages of 45 and 60 are employed [9].
More dramatic are the figures for dialysis patients or people with Parkinson's disease, where less than one third of the patients of working age report being employed [10][11][12]. For multiple sclerosis patients, comparable figures are available: only 20 -40% are employed [13]. If employed, many persons with chronic diseases experience problems at work. Lerner et al. [14] studied a large sample in the USA and concluded that, depending on the chronic disease, between 22% and 49% of the employees experienced difficulties in meeting physical work demands, and that between 27% and 58% had difficulty meeting psychosocial work requirements. Compared to healthy workers, chronically ill workers have higher scores on scales measuring fatigue and emotional exhaustion, which are correlated with perceived work stress [15,16]. Research focussing on the patients' perspectives provides insight into possible sources of stress and fatigue, and offers suggestions for remedies. Patients with diabetes, rheumatoid arthritis or hearing loss stated that important factors that helped them to continue working were the ability to cope with the illness, support from management and colleagues, and adequate work conditions [17]. A focus group study among employees with inflammatory arthritis reveals that they faced difficulties managing interpersonal and emotional difficulties at work, in addition to managing fatigue and other symptoms, and that they had trouble managing working conditions [18]. Asked what they expected in the way of work-related support, employees with multiple sclerosis mentioned support with managing work performance and support with managing social and personal expectations [19]. These findings suggest that vocational rehabilitation efforts should pay attention to psychosocial as well as practical bottlenecks at the workplace. 
For the past several decades, social policy in many countries has been focussed on helping individuals with a chronic disease or handicap enter or re-enter the labour market, whereas less attention has been paid to efforts aimed at helping employees to stay at work. Finding a new job is more difficult than trying to keep one, as one has the extra task of convincing a new employer of one's capabilities. This might be a reason to focus attention on structural vocational rehabilitation efforts aimed at job retention. A systematic review shows that there is some evidence for the effectiveness of interventions of this kind; however, the number and methodological quality of the studies are not sufficient to tell which one will be most successful [20]. Based on this review and discussions with experts, we developed a training for employees with chronic diseases that supports them in solving practical and psychosocial problems. The aim is to prevent the unnecessary loss of their job. The objective of this article is twofold. First, the development, set-up and contents of the intervention will be described. Second, we will specify the design of the study to evaluate its effectiveness.

Target group and purpose

This intervention is meant for employees with a chronic physical (i.e. not a predominant psychiatric) disease, who experience work-related problems and fear job loss or loss of work pleasure. We decided to include a wide variety of chronic diseases: musculoskeletal diseases like arthrosis and rheumatoid arthritis, neurological diseases like multiple sclerosis and Parkinson's disease, endocrinological diseases like diabetes, as well as heart failure, pulmonary conditions, inflammatory bowel disease, chronic fatigue syndrome, visual impairment, and any other chronic disease or handicap that results primarily in physical limitations. Work-related problems are broadly defined: they may be practical, social, mental or a combination of the three.
The aim of the intervention is twofold: job retention as well as maintenance or increase of work pleasure.

Program development

We began by carrying out a systematic review of vocational rehabilitation interventions aimed at job retention for employees with chronic diseases [20]. Effectiveness studies, though often of low methodological quality, gave evidence of positive effects. This inspired us to develop an intervention of the same kind. Four patient organisations were contacted to ask whether they thought that there was a need for this kind of intervention, and three employees with chronic diseases who had experienced serious work-related problems were interviewed by telephone in order to assess their needs. A first draft of a program was developed, based on international examples. In addition, elements of the program were derived from two current Dutch vocational rehabilitation programs aimed at job retention for employees on long-lasting sick leave: one tailored to workers with burnout [21], the other to workers with severe depression [22,23]. The pilot version of the training was tested in a group of eight employees. On the basis of the trainers' experiences, the researchers' observations, a pre- and post-test evaluation and a telephone interview of the participants, the pilot version was adapted. In the process of adaptation, decisions were reached about the optimum length of the training period. Elements of the pilot training were prioritised, which resulted in the elimination of several elements. The most important post-pilot changes included a new final meeting two months after the sixth meeting, more time reserved for role-playing, and two individual consultations added to the first intake consultation.
An earlier 'Quality of work' model, based on the ICF disability model [24], was not helpful in clarifying work-related problems, because many problems experienced at work originated in 'the environment', a concept that is present but not well elaborated in the ICF. Therefore, this model was replaced by a new version that emphasises the positive or negative influence of work tasks, social relationships at the workplace and working conditions on well-being at work. After the decision-making process on the outlines was finished, the essential elements, procedures and objectives of each component of the group sessions, as well as of the individual counselling sessions, were discussed and described in detail in the trainers' manual. Together with the trainers' manual, a textbook for the participants was written. This textbook gives an overview of the content of every group session, homework to be completed for the next session and an appendix that offers theoretical background and exercises. Experts from two patients' associations commented on the training and the textbook.

Rationale of the training

The training is based on a number of notions:

Empowerment

Participants are invited to participate in a program 'to provide knowledge, skills and a heightened self-awareness regarding values and needs, so that patients can define and achieve their own goals', corresponding to the definition of empowerment by Feste and Anderson [25]. Such a program requires an active attitude, in which participants define what is problematic at work and subsequently try to get a hold on their situation. Counselling can be a component of such an empowerment program.

The importance of personal and environmental factors

Work-related problems and work disability can be understood as the result of the specific combination of disease, person and workplace. A serious medical condition can be decisive, causing so many problems that continuing work is impossible.
On the other hand, whether an employee with a chronic disease becomes work disabled often depends on factors other than the severity of his disease or bodily impairments. The actual disability may depend on personal and environmental factors that can hinder or promote work capacity and functioning. This point of departure corresponds well with the WHO's International Classification of Functioning, Disability and Health [24,26]. However, the ICF model is not elaborate enough to serve as a model to clarify work-related problems. These must be understood in a broader context in which work tasks, social relationships at the workplace, working conditions and terms of employment are understood as significant for well-being at work.

Communication is important and can be difficult

Working together and discussing tasks and responsibilities requires communication skills. However, having a chronic disease may hamper communication and have a negative impact on social relationships with supervisors and colleagues. Employees need to explain to the supervisor or colleagues what their disease implies and to elucidate its consequences for work performance. At the same time, feelings of sadness, shame or anger about their disease may prevent speaking out [27]. Not speaking out, or non-assertive behaviour, is an impediment to the solution of work-related problems.

Perceived self-efficacy is a prerequisite to resolving work-related problems

According to social learning theory, active coping behaviour aimed at solving problems will improve when perceived self-efficacy increases [28,29]. Expectations of personal efficacy will be enhanced by performance accomplishments, vicarious experience and verbal persuasion. The above-mentioned principles resulted in the development of a stepwise intervention for employees with a chronic disease: a) exploring and clarifying work-related problems, b) communication at work, and c) thinking out and realising solutions.
It is organised mainly as a group intervention, since group meetings are a suitable method for enhancing perceived self-efficacy.

Set-up of the training

The training is a group training consisting of seven three-hour sessions, one every two weeks. The last session takes place two months after the sixth session. The group comprises eight participants and one trainer. The trainer is experienced in working with groups, has psycho-therapeutic knowledge of the principles of rational emotive therapy as well as knowledge of occupational psychology and a basic understanding of chronic diseases and their consequences. Participants are requested to read material from the textbook before each session, and to do homework that is discussed at the start of the following session. The exchange of experiences forms an important part of the training. Guest speakers are invited to three sessions. An actor is invited twice to assist with role-playing. An occupational physician and an employment expert are invited to discuss matters concerning work accommodations, sickness absence, disability pensions and other practical topics. In conjunction with the group sessions, three individual consultations are offered: one at the beginning, one halfway through the training, and one after the sixth session. These consultations offer the trainer the possibility of giving feedback, and participants the possibility of discussing anything they want in private, or of pursuing questions in greater depth.

Contents

Every session focuses on one theme, each of which will be discussed briefly below.

What bothers you: consequences of a chronic disease in the workplace

The participants get to know each other well in this session; group dynamics and the feeling that one can exchange experiences and practice exercises safely are essential for the success of the training.
Attention is paid to possible consequences of chronic diseases in terms of difficulties in performing tasks, in carrying on, and in the risk of sickness absence or work disability. The 'Quality of work' model is used to explore work-related problems (figure 1). This model contains groups of factors that are known for their influence on the quality of work. It is based on theoretical ideas about work demands and work capacity [30], research on employees with chronic diseases, and recent views developed in occupational psychology on work factors that yield or absorb energy [31]. It is explained that, for some factors, not only 'too high' or 'too much', but also 'too low' may be problematic. For instance, a high mental burden can be as problematic as monotonous work without any mental challenge. Two participants are asked to explore the negative and positive factors of their work in the group. They do so with the help of a large laminated poster of the 'Quality of work' model, in which plus signs or minus signs show aspects of their work that they experience as positive or negative. The others are asked to fill in the model for the next sessions. The input of all participants will be discussed extensively in the group in at least one session.

Insight into yourself: feelings and thoughts about having a chronic disease

Persons with a chronic disease find that talking about one's disease, or consulting with a supervisor about work accommodations, requires good communication skills. However, negative thoughts or feelings about the disease can be an obstacle. Feelings of sadness or shame and thoughts of worthlessness can lead to non-assertive behaviour. Feelings of anger may induce aggressive verbal behaviour. The purpose of this meeting is to explore feelings and thoughts. The intention is not to replace them, but to understand how these feelings and thoughts might affect coping behaviour and might lead to ineffective communication.
Homework for this session is to formulate predominant thoughts around work and illness. A second task is to request a consultation with the supervisor, to discuss how he or she appreciates the participant's job performance. This is regarded as a preparatory consultation; a following consultation will be about concrete problems and solutions.

Communication: practicing in daily work situations

Employees with a chronic disease do not always stand up for themselves. The actor in this session shows the difference between non-assertive, assertive and aggressive verbal behaviour. This is followed by a role-playing exercise: the participants explain their chronic disease to 'a new colleague', and talk about what consequences it has for daily functioning at the workplace, why this colleague should know about it, and how they would like the colleague to deal with it. The other participants give feedback.

Practical matters: the occupational physician, the employment expert, legislation and facilities for disabled employees

The textbook gives an overview of the occupational physician's function, as well as legislation concerning sickness absence and work disability. Furthermore, work accommodations and other facilities for disabled employees or their employers are listed. By way of homework, every participant formulates one question for the occupational physician and one question for the employment expert on matters that are relevant to themselves. The guest speakers have received these questions beforehand and discuss them in the group. Homework for the following session is to consider which work accommodations might be appropriate, and to initiate a second consultation with the supervisor about work-related problems and solutions. If appropriate, a consultation with the occupational physician of the company is recommended.

Figure 1: Model 'Quality of work'.
Communication and standing up for oneself: continuation

Examples and theorising about the short-term and long-term functions of different manifestations of verbal behaviour are given to deepen understanding of assertive, non-assertive, and aggressive behaviour. Subsequently, the participants practice with the actor situations they find difficult at work, for instance negotiations with their supervisor or conversations in which they deal with their colleagues' lack of understanding.

A plan to solve problems

The homework for this session is to develop a plan to tackle one or more of the resulting work-related problems. This plan is developed along SMART lines: Specific, Measurable, Acceptable, Realistic, and Time-specific. The plans are discussed in small groups and adapted if necessary.

Follow-up: what works and what does not?

The last session is meant as a follow-up meeting. Experiences with the implementation of the plan are discussed. By way of conclusion, the participants write a letter to themselves, in which they describe how far they have gotten and what they want to have achieved in half a year's time. This letter is meant to keep them active and will be sent half a year later.

Study design, research question, and follow-up

The study is designed as a randomised controlled trial. Eight training groups, with 64 participants in total, will be compared to about 64 persons in the control group. The follow-up is two years, with one baseline questionnaire and four follow-up questionnaires at 4, 8, 12 and 24 months. The research question is twofold: a) Which work-related problems do employees with a chronic disease experience at the workplace? b) Does participation in the training increase self-efficacy, establish work accommodations, decrease fatigue, enhance work pleasure, improve quality of work, and contribute to job retention? Persons in the control group receive care as usual.
However, the usual care for this group of patients for work-related issues varies from nothing at all to counselling or support by occupational health professionals or medical professionals from outpatient clinics. The Medical Ethics Committee of the Academic Medical Center in Amsterdam informally approved of the study idea, but deemed ethical review unnecessary because they perceived no question of 'medical' research.

Inclusion criteria

Participants are eligible for the study when they have a chronic physical disease, have a paid job, experience problems at work, fear losing their job or job satisfaction, and are willing to undertake actions to solve problems. Workers with predominant psychiatric conditions are excluded; people with a chronic physical disease in combination with depressive feelings are not excluded. Workers on long-term 100% sick leave that is expected to continue during the training are excluded.

Recruitment of participants

Participants are recruited via outpatient clinics, occupational health services, patient organisations, employers, and a yearly national conference on chronic diseases. Presentations are given at outpatient clinics and occupational health care services; specialised nurses, medical specialists and occupational physicians are asked to draw attention to the project by offering potential participants a leaflet. The leaflet is also available digitally. Patients' organizations are asked to publish calls for participation in their magazines, electronic newsletters and websites. A mailing is sent to a large number of employers, who publish calls for participation in house organs or approach potential participants directly. Presentations are given at meetings of patient organizations. Potential participants or medical professionals have the possibility to ask for information by mail or telephone. The training is offered for free eight times in the course of one and a half years.

Organisation of enrolment

Candidates apply by telephone.
They cannot be enrolled by others (e.g. medical professionals). A first check at the moment of registration concerns the objective inclusion criteria: chronic physical disease, paid job, and no long-term full-time sick leave. Candidates receive a written confirmation of their registration, explaining the procedures. Candidates receive the baseline questionnaire and the informed consent form three weeks before the randomisation. After a first and a second reminder, all participants who have returned the questionnaire are randomised.

Randomisation

Since not all questionnaires will be returned, the ideal group size is 18. If four or more persons have the same disease, randomisation is stratified on this disease, in order to prevent a coincidentally large group within the training group that shares the same disease. Randomisation is performed by the researcher in the company of another person, and with the help of a computer program generating random numbers. Since ethical considerations preclude individual consultation before randomisation, persons randomised into the training group receive the invitation for a first individual consultation afterwards. If the trainer or the participant decides that the program does not meet the participant's expectations, a new randomisation procedure starts with the remaining persons in the control group.

Outcome measures

Primary outcome measures are job retention, self-efficacy, fatigue and work pleasure. Not having a paid job, or having more than six months' full-time sick leave in combination with the expectation that return to work is impossible or improbable, is considered job loss. Self-efficacy is measured by a situation-specific instrument, measuring self-efficacy in solving work- and disease-related problems. It was developed according to the principles formulated by Bandura [32]. The fourteen items are measured on bipolar five-point Likert scales.
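The stratified randomisation procedure described above can be sketched in a few lines. This is a hypothetical illustration only: the function and parameter names are our own, and the study's actual randomisation program is not described in detail in the article.

```python
import random

def randomise(candidates, stratify_threshold=4, seed=None):
    """Allocate candidates to training vs. control groups.

    `candidates` is a list of (person, disease) pairs. If
    `stratify_threshold` or more candidates share a disease, allocation
    is stratified on that disease, so the training group cannot end up
    with a coincidentally large single-disease cluster.
    (Hypothetical sketch of the procedure described in the text.)
    """
    rng = random.Random(seed)

    # Group candidates by disease.
    by_disease = {}
    for person, disease in candidates:
        by_disease.setdefault(disease, []).append(person)

    # Diseases with enough members form their own stratum;
    # everyone else is pooled into one residual stratum.
    strata, pooled = [], []
    for members in by_disease.values():
        if len(members) >= stratify_threshold:
            strata.append(members)
        else:
            pooled.extend(members)
    strata.append(pooled)

    # Within each stratum, shuffle and split roughly in half.
    training, control = [], []
    for stratum in strata:
        rng.shuffle(stratum)
        half = len(stratum) // 2
        training.extend(stratum[:half])
        control.extend(stratum[half:])
    return training, control
```

For example, with eight rheumatoid-arthritis candidates and ten candidates with assorted other diseases, the arthritis stratum contributes exactly four persons to each arm, regardless of how the pooled stratum falls.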
Work-related problems are measured with eight items: having problems with specific work tasks, finishing work, arranging the workplace, commuting, communicating with colleagues, communicating with supervisors, accepting the disease, and balancing work and life at home. The three answer categories are counted as 0 (no), 1 (yes, slightly) or 2 (yes, severely) and are added up to an index measure. Sick leave is measured as the number of days on sick leave during the last four months. Our intervention resembles Stulemeijer's intervention in its aim to decrease fatigue. However, whereas Stulemeijer's intervention focused on the disease and its symptoms, our empowerment training for employees focuses on work-related problems and the resulting work stress and fatigue. The chronic disease itself will remain and might even progress, which means that fatigue levels comparable with those of the healthy workforce are not to be expected at follow-up. This is the reason why we have chosen a larger sample, of 64 persons each in the intervention group and the control group.

Statistical analysis

Statistical analyses will be performed according to the intention-to-treat principle. Job retention will be analysed using survival analysis. The other variables will be analysed with repeated measurement analysis and mixed linear models.

Process evaluation

The process of the intervention will be evaluated in three ways. First, we will describe the recruitment of participants and evaluate whether we reached the target group and whether our recruitment methods worked or failed. The researcher keeps a recruitment diary for this purpose. Second, the group sessions and the individual sessions will be evaluated by the trainers.
They fill in a process evaluation form and note the attendance of the participants, whether each subject for that session has been discussed, whether the participants experience emotional or cognitive difficulties with the subject, whether they feel involved, and whether the goal of the specific subject is reached. Third, the participants are asked their opinion in the post-test questionnaires. They are asked to evaluate the whole training, the various themes and procedures, and the textbook. They are also asked to evaluate whether skills they wanted to improve have actually improved, whether they have passed successfully through the three stages (clarification, communication and problem solution), and whether they have attained the goal they had in mind beforehand.

Discussion

Vocational rehabilitation interventions for persons with chronic diseases generally focus on entering or re-entering the labour market. Structured vocational rehabilitation interventions aimed at job retention are rare, notwithstanding demands for evidence-based vocational rehabilitation programmes aimed at preventing work disability for this group of employees [38]. Only a few interventions of this kind could be traced in a systematic review [20]. A reason for this lack of initiatives, or lack of documentation and evaluation, might be that the societal consequences of work-related problems are not felt clearly as long as people are still struggling to retain their jobs. However, when serious problems in work functioning finally result in long-term sickness absence, complete work disability or loss of a paid job, it is difficult to return to work. The intervention we developed originates from an empowerment perspective and aims to help employees restore the balance of work capacity and work demands.
We used a stepwise approach, starting with exploring practical, psychological or social problems, followed by communicating with the supervisor or others at work, and finally developing and implementing solutions. Most studies on interventions aimed at job retention claim effectiveness. However, these claims are seldom underpinned with a study design offering strong evidence. Studies seldom use pretesting, a control group, a sufficient number of participants or a long-term follow-up. Our study design involves a control group and outcome assessments at five points over two years. We also try to include 128 participants randomised over two conditions. An inevitable drawback is that participants are not blinded. The research project may trigger participants' awareness of their problems, which can result in more active coping behaviour than usual among members of the control group. The results of this study will generate knowledge about the nature of work-related problems and will possibly contribute to better vocational rehabilitation services for employees with chronic diseases. It will put issues at the crossroads of chronic disease and work, and of health care and occupational health, on the agenda.
WASAKA Concept Implementation in Islamic Education towards Banjar Society of South Kalimantan in 4.0 Era

This paper aims to analyze the WASAKA concept of Banjar society, Islamic education, and how to implement it in Banjar society in the 4.0 era. WASAKA (Waja Sampai Kaputing) was proclaimed by Prince Antasari, a national hero from South Kalimantan, and has become the character of society in South Kalimantan. Its character values are religiousness, toughness, honesty, intelligence and caring. These values can remain relevant and synergize with Islamic education in Banjar society in the 4.0 era, because character education in the 4.0 era faces many challenges from outside. Nowadays, foreign characters and cultures intrude from outside without any filtering process, so they can influence and degrade the character of our education. Therefore, it is important to understand deeply and comprehensively how innovation-based Islamic education and the WASAKA character can be implemented in character education in the 4.0 era. The WASAKA character values in character education in the 4.0 era are then assessed using a literature study methodology, with relevant literature as the sources. The result is that the WASAKA concept in Islamic education in the Banjar society of South Kalimantan serves as a reinforcement of character education in the 4.0 era. In sum, the implementation of the WASAKA concept in Banjar society in the 4.0 era can be a solution to the character degradation of the 4.0 era.

Introduction

Education is a primary factor in advancing a state, and a state's future can be seen in the extent of its commitment, and that of its society, to providing education (Muarif, 2005). State development is mainly determined by human resources. Developing human resources proficiently can be done through formal, non-formal or informal education (Ahmadi et al., 2009). The importance of education for a state affects the civilization which that state will build.
This civilization refers not only to how great a state's technological and scientific development is, but also goes beyond the concept of education (Putra et al., 2020). Moreover, education is a mindful and deliberate activity which is an adult's responsibility towards children; as a result, the interaction between them brings about the child's maturity, and it happens continually (Ahmadi & Uhbiyati, 2015). The concept of education in the Act of SISDIKNAS No. 20 of 2003 is defined as follows: "Education is a mindful and planned activity in order to create an active learning environment and process for students, and to develop students' potential to have religious spiritual power, self-control, character, intelligence, noble character and the skills which are required for themselves, society, nation and state" (Nuansa, 2012). The definition of education included in the Act of SISDIKNAS No. 20 of 2003 consists of several aspects which are to be developed in education. The most important aspect is religious spiritual power, which indicates that religious education is also important (Kasmar et al., 2019; Kosim, 2020). Religious education and its implementation in schools should place more emphasis on deep religious understanding, so that it builds a strong religious spiritual soul (Ahmadi & Uhbiyati, 2015).

A. Syaifullah and Surawardi, WASAKA Concept Implementation in Islamic Education towards Banjar Society of South Kalimantan in 4.0 Era. Khalifa: Journal of Islamic Education, Volume 4, Number 1, March 2020/1441. P-ISSN: 2541-6588; E-ISSN:

Observing how eager our state is to form a spiritual and noble character in the next generation, Islamic education also has to achieve the aims of the state's national education (Nafisah & Zafi, 2020). The goal of Islamic education is to instill noble character in every person, who is a khalifa on the earth (Wijaya et al., 2020). As a result, Islamic education will rule and guide human behavior so that people become better, devote themselves to Allah, and are also helpful to other people (Rasyid, 2017).
Since the goal of Islamic education is morals and character building, it is also significant for Islamic education to insert a specific character into character education. In addition, Islamic education has a duty to encourage humans to develop their potentials, so that they serve optimally; for instance, they can be khalifa on the earth who have noble character (Sholihah & Maulida, 2020). The end goal of Islamic education is to build character or morals, which is in line with the goal of character education (Murnyetti et al., 2016; Nasution & Harahap, 2020; Jaafar et al., 2020). Presently, character education is put forward by various elements, for instance government, educational institutions and society, because of the multidimensional challenges faced by the government and improper morals (Hidayat & Sukitman, 2020). These affect the life of the nation and state in the form of inappropriate manners and widespread corruption. Despite being educated in science for their intelligence, people may still fail to be helpful to the nation and state (Ridhahani, 2013). Therefore, in the 4.0 era character education is mandatory to nurture the children of the generation, so that they can contribute to the development of science and build their character (Saihu & Marsiti, 2019). The urgency of character education lies in starting it at children's early age (Muslich, 2013). People realize that national education prepares students to pursue higher education rather than to build their character. Thus students may be highly intelligent yet immoral (Muslich, 2013). In the 4.0 era, everything can come in without a filtering or scanning process. In some cases, foreign values can be unacceptable to the nation. Advanced technology can be fruitful not only for adults but also for children (Hendayani, 2019); however, its negative effects intrude easily and are incompatible with the nation's character (Putri, 2018). Furthermore, the 4.0 era shows that moral degradation is happening to our nation, in forms such as corruption, child abuse, tribal disputes, drug abuse and bullying.
Consequently, this weakens the nation's character and causes various challenges (Putri, 2018; Rohimah, 2018). In sum, everything is connected to the internet, and every aspect of life depends on digitalization, for instance ordering food or booking a flight. This phenomenon is called the digitalization era of our life, which requires modern media to be connected to the internet (Rohimah, 2018). Conceiving of the advance of technology, Islamic education should innovate and rectify the nation's character (Ningsih, 2019). The nation is threatened by advanced digitalization technology, which removes the world's boundaries. Islamic education, which is contained in the nation's character, can provide innovation in synergy with local character to strengthen the nation's character (Salsabila, 2019). Therefore, there is a need to combine Islamic education with locally based character education (Hidayat & Haryati, 2019). Current cases in the homeland such as promiscuity, drug trafficking and even online prostitution are undeniably impacts of technological advances. Based on these challenges, character education must defend the nation in facing current developments (Mujiburrahman, 2017; Dradjat, 2000). The goal of Islamic education is to build a kaffah personality and to develop all human potential, physical and spiritual (Baharun, 2017). Islamic education is a well-planned and systematic activity to develop students' potential based on Islamic teaching (Bashori, 2020). Its goal is to achieve harmony of growth in humans comprehensively by exercising the mental faculties, mind, intelligence, feelings, and five senses (Sari, 2020). Moreover, the final destination of Islamic education is building Islamic behavior (virtue) and resignation (faith) to Allah based on Islamic teaching (the Qur'an and the Sunnah) (Safitri & Az-Zafi, 2020). Muhammad Fadhil Al-Jamaly, in Jannah's article, defines Islamic education as developing, reinforcing, and inviting students to live dynamically based on high values and a glorious life.
Thus this process is expected to build a perfect student character (Jannah, 2013).

Character education: Character refers to mental or moral qualities, moral strength, name or reputation. In the Indonesian dictionary, it is defined as the psychological traits, morals or character that distinguish one person from another. Having character is associated with having personality and morals (Hidayatullah, 2010). Character education is an effort to educate students to make wise decisions and implement them in daily life, so that they can contribute positively to their environment (Munawwaroh, 2019). Fakry Gaffar, in Dharma Kesuma, also defines character education as a transformation process of life values which are developed in people's character so as to become unified in them (Kesuma et al., 2011). In addition, Rami, in Gunawan, states that character education has a similar gist and meaning to moral and character education. Its purpose is building children's character so that they become good human beings and citizens (Gunawan, 2012). Hamdani Hamid and Beni Ahmad Saebani describe character education as a system for instilling character values in students, which consists of knowledge, awareness and action to implement those values towards God, themselves, other humans, the environment and the nation. Thereby, students become decent human beings (Hamid & Saebani, 2013). Furthermore, Zainal Aqib and Sujak, in Character Education Guidance and Application, elaborate that character education consists of planned efforts, implemented systematically, to assist students in comprehending the values of human behavior towards God, themselves, other humans, the environment and the nation, which are actualized in mind, behavior, feelings, words and actions based on the norms of religion, law, manners, culture, and customs (Aqib & Sujak, 2011). Zainal Aqib expresses the opinion that character education is the whole dynamic of interpersonal relations, with various dimensions inside and outside of the students (Aqib, 2011).
Ridhahani, in his international seminar paper Promotion and Implementation of Character Education, defines character education as a conscious effort to educate students so that they can make wise decisions and practice them in everyday life, so that they can make a positive contribution to their environment (Ridhahani, 2010).

The era of digitalization: This era is also described as technological progress of broad complexity that fundamentally shifts life and human work (Fonna, 2019). Further, this era came about through the assimilation of technologies. The industrial revolution is identified by rapid technological development that integrates sensor technology, interconnection and data analysis in various industrial sectors (Lase, 2019). It is also marked by the existence of digitalization and automation in every aspect of life. Furthermore, it eliminates the boundaries among countries and continents. People can send information without being present in a certain place (Anwar, 2019). As a result, improper foreign character values can easily enter the personalities of students, the next generation of the nation; WASAKA values can strengthen students' character and prevent improper foreign character values from affecting the students themselves. The meaning of Waja Sampai Kaputing is steel from edge to edge, which refers to an effort carried from the beginning to the end, fighting tooth and nail (Sarbaini et al., 2012; Daud, 1997).

Method

The research method used in this study is qualitative research, which describes the collected data (Sulistiono, 2019).

Findings and Discussion

The research findings explain local character in Islamic education as an innovation to comprehensively solve the challenges of the 4.0 era. First, the writer will describe character education in the 4.0 era, and then the local character which can be inserted into Islamic education as an innovation. The character values that are inserted into Islamic education are religiousness, caring, honesty, toughness and intelligence.
Character Education. Education is a conscious and planned effort to form students so that they are useful to nation and state. This formation is intended to build students' character. Character is not inherited; it is developed thoroughly and continuously, day by day, thought by thought, and act by act (Pratama, 2019). Character can be defined as a person's unique way of applying values in daily life as a helpful citizen (Muslich, 2018). Current developments have carried us into the 4.0 era, the digital era of the industrial revolution. The term 4.0 era refers to the combination of the physical, biological, and digital dimensions into an inseparable whole, such as two people exchanging information rapidly without being physically present in the same place and time. Character education in the 4.0 era, therefore, is character education that teaches national character values in order to face technological development in this era (Putrawangsa & Hasanah, 2018). Islamic Education Innovation Based on WASAKA Character. Islamic education innovation is an effort to enhance Islamic education so that it is compatible with current developments. Generally, such innovation concerns improving teachers, infrastructure, and the Islamic education system (Atnawi, 2017). Whether or not Islamic education can meet the nation's challenges and keep pace with current developments, the existing system is oriented towards text memorization and a textually normative way of thinking; as a consequence, people come to think in a textually normative way, the fundamental view of Islam becomes distorted, and radicalism can be instilled in Muslims (Yunanto, 2018). It would be beneficial to strengthen methodology so that historical and empirical ways of thinking can change and improve people (Ismail, 2017). Without innovation in the Islamic education system, Muslims will be harmed. In this article, Islamic education innovation refers to building students with an Islamic character that is in accordance with local character (Dewi, 2020).
Therefore, students are not only taught to respect each other, be helpful, and be devout, but also to have local character. Students will be open to the world and understand the diversity of social life (Usman, 2019). Character education emphasizes the teaching of particular values and moral qualities, such as honesty, courage, and generosity, so that they are known and understood by students. In addition, character education reaches the cognitive, affective, and psychomotor domains (Mahmud, 2013). WASAKA character education comprises several such values (Sarbaini et al., 2012). The religious value in WASAKA character education targets attitudes and behavior in performing religious duties, as well as being tolerant of adherents of other religions and living in harmony with them (Sarbaini et al., 2012). These religious values should be inserted into character education in the 4.0 era because students neglect their religious duties while playing online games, watching YouTube, and surfing social media platforms (Octaviana et al., 2019). They are likely to access and play these and neglect their religious duties. If these religious values are achieved, there will be religious government officials who will not violate religious teaching through, for instance, corruption, fund manipulation, and overthrowing others. Toughness is an attitude and behavior that shows remarkable effort in overcoming learning obstacles and completing tasks well (Sarbaini et al., 2012). This value is inserted into character education in the 4.0 era because students tend to choose instant ways of grasping internet sources without any filtering process. As a result, students draw on bogus internet sources due to their careless effort in finding online articles or in reading and finding books in the library. They may get high scores yet meaningless knowledge, so the rate of educated unemployment will rise as well. Toughness in Islamic character is related to the moral quality of Syajaah, which is defined as the ability of students to control their emotional potential.
Once instilled and synergized with Syajaah, toughness becomes a safeguard when students struggle to reach their dreams (Maula, 2020), showing their greatness without being obtrusive. Syajaah thus helps students control their emotional potential (Ainiyah, 2013). Honesty (Mustofa, 2010) is an attitude and behavior grounded in becoming a person trusted in words, actions, and work (Sarbaini et al., 2012). Instilling this character value is expected to make students honest in answering exams, doing their duties, and treating their friends. Nowadays children neglect honesty, cheating as long as they can get high scores; as a result of such habits, government officials later engage in corruption, bribery, and abuse of power (Hayati & Kurniawan, 2020). Intelligence is an attitude and behavior that applies and finds information from the environment (Sarbaini et al., 2012), utilizing various sources in logical, critical, and creative ways (Pamungkas, 2012). Nowadays children's intelligence is swamped by information overload, or infobesity, while their critical capacity in seeking information is diminished (Nurhuda & Andrea, 2020). Caring is a mental state that makes people identify with others and become aware that they share similar thoughts and feelings (Pamungkas, 2012). By inculcating this attitude, students will prevent harm in the social environment, culture, and natural environment. Presently, people are apathetic towards their social surroundings, as the smartphone has taken over their time with others, and they constantly prefer playing with their phone to being with friends or family; everyone at the dining table is busy with social media feeds instead of eating and talking to each other (Hasnidar, 2019). Caring is related to Qanaah, which refers to a feeling of contentment with one's current condition after trying one's best. If students are caring, qanaah is also instilled in them, so that they will be caring and content with themselves while improving their surroundings (Saputro et al., 2017).
Conclusion. The WASAKA concept in Islamic education is highly necessary, so that education can transform with the times and confront the loss of national character. Character education in the 4.0 era is urgently needed because improper foreign character values infiltrate without any filtering process and influence the nation's core character. To address and reinforce character, locally based character education, namely WASAKA character, should be planned, in order to examine foreign character values as well as reinforce character education. It also synergizes with Islamic education through the values of religiousness, toughness, honesty, intelligence, and caring, which are in line with Islamic education, to face the challenges of the times. If Islamic education based on local character in the 4.0 era is carried out, students will be useful to nation and state as well as possessing national character.
‘The Girl Who Was Chased by Fire’: Violence and Passion in Contemporary Swedish Fascist Fiction Fascism invites its adherents to be part of something greater than themselves, invoking their longing for honor and glory, passion and heroism. An important avenue for articulating its affective dimension is cultural production. This article investigates the role of violence and passion in contemporary Swedish-language fascist fiction. The protagonist is typically a young white man or woman who wakes up to the realities of the ongoing white genocide through being exposed to violent crime committed by racialized aliens protected by the System. Seeking revenge, the protagonist learns how to be a man or meets her hero, and is introduced to fascist ideology and the art of killing. Fascist literature identifies aggression and ethnic cleansing as altruistic acts of love. With its passionate celebration of violence, fascism hails the productivity of destructivity, and the life-bequeathing aspects of death, which is at the core of fascism’s urge for national rebirth. living dead, seeking in vain to satisfy their hunger in the great emptiness of mesmerizing material abundance.5 Borrowing from Antonio Gramsci, Andersen suggests that we are living in an interregnum between two stable orders: 'The old is dying and the new cannot be born'. The old system of sovereign nation-states, in which responsible political administrations secured the wellbeing of their supposedly homogenous constituencies within their countries' secure territorial borders, is dying, and the global utopia of multiculturalism cannot be born. Fortress Europe is caving in, and entire European regions, cities and city quarters are transforming into alien islands of eerie nonwhiteness: 'Toto, I've got a feeling we're not in Kansas anymore'. The white bourgeoisie is increasingly terrified.
They drug themselves with consumption and seem ready to follow any anti-immigrant populist who promises to return things to normal. But there is no going back. The white world supremacy of the twentieth century is not returning. While white liberals and mainstream white conservatives see this as negative and frightening, white fascists should, Andersen argues, seize the opportunity to define the new order that will arise out of the ashes of the rapidly approaching Ragnarök [doom of the Gods] that will engulf the dying order.6 To define the coming new order, Andersen stresses that 'the New Right' needs the synergies of full spectrum dominance: 'we need to control not only the street, but the Internet, the parliament as well as the academy', which requires 'the ability to divide labor'.7 Hence the 'importance of metapolitics, of creating and maintaining alliances, disseminating our concepts, analyzing and making an accurate assessment of the common sense of the people, and targeting the weak spot of the opponent'.8 In this respect, Gramsci is to Andersen an indispensable teacher, but should, Andersen claims, be combined with Georges Sorel's views on violence, Gustave Le Bon's mass psychology, Michel Foucault's theories of discourse, Aleksandr Dugin's emphasis on ethnos, tradition and being, Curtis Yarvin's (Moldbug's) ideas on the relations between groups and ideas, Peter Sloterdijk's thoughts on thumos (the pursuit of glory), and Carl Gustav Jung's, Julius Evola's, and J.R.R. Tolkien's work on myth and blood. Above all, to establish a genuine deep right and plant the seeds that will bring forth the New Man, fascism needs to base itself on spirit and culture: 'In order to make a definite and lasting mark we need music, fiction, poetry and art that convey our myths and ideals.'9 5 Andersen, Rising From the Ruins, 6-12. 6 Ibid., 9. 7 Ibid., 302. 8 Ibid., 300. 9 Ibid., 301, 113.
Bylund's Sweden is a decaying Mad Max world in which innocent white women, honest white workers and honorable white seniors are fair game to violent nonwhite perverts. The protagonists are a wolf pack of militant white nationalist dissidents. As in The Turner Diaries, the underground network evades the regime's troops that brought all other political dissidents of patriotic persuasion to concentration camps, and then challenges the powers-that-be in a series of violent confrontations. Eventually, the wolf den is located and besieged by government agents. 'We are a small pack but we are wolves, Nordic wolves', says the chief of warriors to his beleaguered wolf clan and raises a horn of mead for the Free North. 'We will not rest before we've won -or lost.' The wolf pack manages to escape and joins the national resistance movement that has taken power in the province of Dalarna from where the national liberation of Sweden will take place (as it often does in Swedish ethno-nationalist imagination).22 Vargarna is explicitly Third Positionist (i.e., adheres to socialist fascism/national bolshevism) and Bylund repeatedly scorns rightwing nationalism as a fallacy that betrays the true interest of the racial nation. Not so in Falk's novel. To Falk, it is not the working-class but the bourgeois middle-class of the homeowners' associations and the steering committees of the prosperous gated communities that will pave the way for national liberation. This novel is also set in the near future, a few years from now. The honorable white Folk has been betrayed by the social democratic government and their masters in the economic and intellectual elites.
The protagonist is not a wolf pack but a lone wolf, Lars, who initiates a one-man race-war after his pregnant wife had been gang-raped and murdered by a band of black barbarians who celebrate when acquitted in court by the politically correct jurors.23 Lars realizes that black on white crime is part of the invading colonizers' tactics to humiliate the 'indigenous white population' and its 'feminized males' who no longer know how to fight.24 Lars was not going to let that happen. With a knife tucked into his boots, two pistols with homemade silencers in holsters over his muscular chest, hand grenades in his belt, an automatic battle rifle in his hands, and a balaclava over his distinctly Nordic face, he looked like a hero, he thought, when catching a glance of himself in the mirror reflection of his car's black glass, before embarking on a nocturnal shooting spree, killing black, Latin American and Muslim people in Hammarkullen, a stigmatized underclass area in Gothenburg. Eventually, Lars comes in contact with The Network, a web of nationalist underground resistance cells in the more affluent suburbs. Lars' contact guides him politically and gradually exposes the Jewish conspiracy behind the current mass-invasion that will replace the land's indigenous white folks with racial aliens. 'Now terror will be answered by terror', explained the Sage of the Network, with implied reference to Hitler.
Falk invites his readers to enjoy the violent clashes between the Aryan warriors and their racial enemies, which culminate when Lars 'slaughters the pig', i.e., kills the treasonous Prime Minister, followed by the nationalist victory, and the heroic funeral of Lars who, much like Earl Turner of The Turner Diaries, fell in the final battle.25 A widely acclaimed fascist writer is journalist Henrik Johansson, whose debut crime fiction Sista steget [Final Step] (2004) ends with a suicide bombing that destroys the headquarters of the Swedish Armed Forces, another detail we recognize from The Turner Diaries. In the Final Step, we meet two lone wolves, the truck driver Kenneth and a single mother. Inspired by vigilantes who kill Roma people, Kenneth takes revenge on politicians and nonwhites when his elderly parents are robbed and killed by racial strangers. He bombs a police passport office that issues Swedish passports to children of Arab immigrants, and a civic center that organizes a Day of Tolerance in response to immigrant crime. In the commotion caused by the bloody bombings, Kenneth sets out on a 'mulatto hunt' at a shopping mall that is described in gruesome detail. White fascist lone female wolves are still relatively rare, both in real life and in fascist literature. But there are exceptions, such as the single mother in the Final Step. She has no name, age, or personal history. Her one function is her motherhood, of which she is deprived when her daughter kills herself after having been raped by nonwhite 'apes'. Representing the despair of all white mothers in multicultural Sweden, and the cleansing violence unleashed when Mother Svea [the Swedish nation] finally awakens, the single mother exacts revenge by stabbing young adult nonwhite males to death with a chef's knife. 'It was so easy', the narrator explains.
She entered tenement houses, knocked at the doors of homes with non-Swedish residents, selected by the names on their apartment doors, went in and killed the sons in front of their horrified families. One chop in the stomach, one in the groin, one in an eye, and then slit the throat. One, two, three, 'fucking ape', four; out of the door, next floor, there you go.26 Johansson was awarded a fascist book prize for a short story, originally published elsewhere, that tells the story of a single father's rightful revenge on three black rapists who defiled his daughter, and the justice system that let them walk. The father visits a psychologist and describes in grisly detail how he buried the rapists alive in coffins, for them to die, slowly and in torment. When the psychologist realizes that this was the case where the court went on his testimony and acquitted the perpetrators, it is too late. He wakes up with an aching head and bloody scalp, realizing that he cannot move more than a few inches in any direction. It is totally dark, and it smells of earth and sawdust. He realizes that he lies in a coffin and screams in horror. Above his grave, the father smiles when he hears the dampened sounds of agony from deep underground, leans back against a house wall, pulls out a list of names, crosses out the psychologist and studies the names of those who remain.29 With Flickan som jagades av elden [The Girl Who Was Chased by Fire], a title obviously referring to the second volume in the Millennium series by Stieg Larsson,30 Sebastian Bjurman of the now defunct radical nationalist projects Pegida and the Swedes' Party suddenly made an impact in the radical nationalist landscape. The crime fiction tells the story of the protagonist Sandra, who is raped by the celebrated anti-fascist immigrant Achmed Mustafa. Everything she believes in is turned on its head. The establishment, the anti-racist community, and all her friends take exception to her.
Seeking revenge, she wants to buy a gun from a criminal immigrant, but is fooled, robbed, beaten, and about to be raped again when she is miraculously saved by a 'real man', a true man, a Swedish man, a white national socialist man, Alexander, a clean-shaven, well-mannered, muscular hero. He helps her contextualize her experience by unmasking the evilness of the anti-white System and guides her to adopt fascist truth. The national socialist hero helps the protagonist take revenge on the rapist and the System before they escape with the help of a police officer sympathetic to their cause. Sandra realizes the necessity of violence. 'Not even murder is necessarily evil or frightening; on the contrary, it may be self-defense' and serves the cause to secure the liberty and dignity of the white Nordic race.31 Fascist Nordic Noir The crime novels briefly discussed above are not the only Swedish-language titles, but examples of contemporary fascist crime fiction written in the style of Nordic Noir.32 As in the pioneering Wallander series by Henning Mankell and the Martin Beck series by Maj Sjöwall and Per Wahlöö, fascist Nordic Noir investigates the shadowy underworld of modern society and critically addresses social issues, but from a fascist political perspective. Stylistically, fascist Nordic Noir keeps the plain language and dark tone of the genre but may feature less complex and multilayered storylines and less melancholic protagonists. As in the wider genre, murder, rape, misogyny, and racism reflect systemically entrenched dynamics, but again interpreted from a white fascist viewpoint. The narrative is set in the decaying world of multiculturalism, political correctness, and meaningless consumerism, with the once great Swedish nation aimlessly drifting towards destruction.
White majoritarian Swedes are positioned as repressed and depressed, bereft of their aboriginal homeland, mocked by the intellectual elite, silenced by the pc media, abused by racial strangers, and betrayed by the political class, in a society contemptuous of everything Swedish, including Swedish history, values, people, and traditions. Elderly white Swedes are robbed and mocked, beautiful young blonde Swedish women are violated and raped, white Swedish men are feminized and ridiculed. Any deviation from what is healthy and natural is elevated at the expense of the hardworking white heterosexual man, to the public acclamation of anti-racist white self-haters and the gay lobby, two factions held in great contempt by the invading racial strangers who benefit from their treason. A common feature of fascist noir is violent rape. Of course, black sexual desecration of innocent white women is a figure of longstanding prominence in the history of racism, in which black male hands on white female skin have recurrently sparked violent reaction and set the stage for the entry of the valiant knight to come to her rescue.33 In fascist crime fiction, the righteous patriots are few but honorable. The stories are typically spun from below, from the perspective of the white working-class or lower middle-class man, or (more rarely) woman, the white self-made small business owner (always a man), or the white high-school student (male or female), and depict his or her awakening, often triggered by the straw that finally broke the camel's back. The protagonist is typically a decent young white man or woman, who initially believes what he or she has been told: that all cultures are equal, races do not exist, and fascism is evil. By being exposed to some shockingly violent crime, rape, assault, or murder committed by racial strangers protected by the corrupt system, the protagonist wakes up to the realities of the ongoing white genocide and realizes the need to take action.
Seeking revenge, the protagonist learns how to be a man, or meets her hero, and is gradually introduced to fascist political convictions. In the first stages, he or she typically rejects what she/he sees or hears and becomes convinced only by embarking on a critical search for truth. He or she then sees the light, and converts to fascism, something the author most likely hopes that his reader will also do. As William Pierce explained when we sat at his home on the outskirts of Hillsboro, West Virginia and discussed his novels The Turner Diaries and Hunter: This is the way to teach people. Write novels, write plays, write film scripts, because a person not only experiences the actions of the protagonist, but if you have the protagonist in decision-making situations, when he has some sort of a conflict that he has to resolve, the reader, or the viewer, undergoes the same thought processes, and then you can carry the audience along, to educate them, to get them to change their minds, to get them to see things the way the protagonist learns to see things.34 In the course of the protagonist's journey, the reader of fascist crime fiction may be introduced to white nationalist classics (e.g., The Passing of the Great Race by Madison Grant; The Decline of the West by Oswald Spengler; For My Legionaries by Corneliu Zelea Codreanu; My Awakening by David Duke; Might is Right by Ragnar Redbeard; and White Power by George Lincoln Rockwell), and revolutionary white racist heroes whom the System desperately seeks to prevent white people from knowing, including Robert Jay (Bob) Matthews (who launched the first post-World War Two white nationalist guerrilla campaign in the Pacific Northwest in the early 1980s), Joseph Paul Franklin (the white racist lone wolf serial killer to whom Pierce dedicated Hunter), and David Lane, who coined the holy Fourteen Words: 'We must secure the existence of our people and a future for white children' to galvanize white resistance against the ongoing
white genocide. The protagonist of fascist noir is not only schooled in political philosophy, but in the way of the warrior. Having initially taken exception to violence, he finally has enough of being a feminized weakling. The warrior instincts that are embedded in the nature of the white race are eventually rekindled, and the protagonist learns to master the art of killing, how to build a bomb, get away with murder, travel unnoticed, and the basics of militant underground tactics. Reflecting ongoing white nationalist debates about whether white unity should be in organization or purpose, if white resistance at this time and stage is best wrought by centralized command or leaderless resistance, if white racial survival may be secured by ousting the current administration or only by bringing on the apocalypse, fascist crime fiction may feature the heroic accomplishments of some white nationalist liberation front or the lone white wolf assassin. In many plots, the author discusses the pros and cons of both ways of waging race war, and how they may be combined.
The Turner Diaries and Hunter may be required reading among contemporary white fascist radicals, and both have been translated into Swedish by Magnus Söderman, a veteran fascist organizer and author who has introduced several Klan, Cosmotheist, Identity Christian, and Odinist thinkers to a Swedish-language audience, and who is currently involved with the Casa Pound-inspired Det fria Sverige [The Free Sweden] project to build fascist social centers as nodes of white fascist resistance and islands of refuge from the ongoing white genocide.35 While fascism is a radical nationalism (Griffin's ultranationalism), it has always transcended national borders, and The Turner Diaries and Hunter have been entangled with the history of Swedish fascism and the personal lives of violent radical nationalists, including white racist serial killer Peter Mangs, who was 'mesmerized' by the former and adopted the tactics of Joseph Paul Franklin, whom he learned about through the latter.36 Of course, The Turner Diaries and Hunter are fiction, and so are the works in the fascist noir genre written by Swedish-speaking authors. As a precaution, possibly to serve the dual objective of avoiding charges of incitement to racial hate crime and boosting the Final Step, the book he was talking about at that moment, Henrik Johansson pointed out that 'it is not a manual I wrote, but entertainment pure and simple' with the intent of contributing to a 'nationalist culture' based on reality and sound morals, in contrast to the deranged popular culture of his time. 'Russia is no longer the land of the Russians. We will share it with you as a new homeland of all white people in the world.'40 Of course, whiteness is not a product of nature but of classification. There are no given criteria by which certain people are classified as white and others not.
Whether a particular people or person will be included or excluded as white varies according to time, place, context and perspective.41 To include Russians as white, and look at Russia as not only a but the white homeland, as did Johansson, would not necessarily be or have been acceptable to fascists of other times or places. William Pierce followed in the footsteps of his political mentor, American Nazi Party founder George Lincoln Rockwell, in expanding the borders of whiteness to include Russians, Poles, Ukrainians, and other East-European or 'Euroasian' peoples (to borrow a term from pioneering post-Second World War fascist reconstructionist Francis Parker Yockey).42 Yet, Pierce and Rockwell were caught up in a Cold War context in which Russians hardly were allies, much less identifiable as of the same people and kind. Johansson's pro-Russian perspective has not been unanimously accepted in the political landscape of Swedish fascism. From a national Swedish perspective, the Russian Bear has not always been seen as an ideal neighbor, and far from all Swedish fascists have recognized an ally in Putin's Russia. When war broke out in the Donbass region of eastern Ukraine in 2014, a section of Swedish fascists (including the above-mentioned Söderman) organized support for the Volunteer Ukrainian Corps and the Azov Battalion, with some thirty Swedes joining Ukrainian paramilitary militias as volunteer fighters. However, the Swedish Resistance Movement (today the Swedish section of the Nordic Resistance Movement, nmr) disagreed, and aligned with Russia and the Russian separatist militias in Ukraine. At least three nmr members received military training by Partizan, an urban warfare training center associated with the Russian Imperial Movement that sent volunteers to Russian paramilitaries in Donbass.
The ambivalent attitude towards Russia is reflected in Perfekt Storm by Arne Weinz, which was published in 2019.43 A war thriller more than a crime fiction, Perfekt Storm is set in 2027, when the Muslim Brotherhood literally invades Sweden with a naval fleet carrying Muslim elite troops. In preparation for the armed assault, Muslims had trickled into Sweden for years, increasing their numbers through high birthrates, and established Jihadist strongholds in Muslim-controlled no-go zones dotted across the country. Assisted by this fifth column of poorly trained but highly motivated jihadi warriors, the Muslim Brotherhood elite force rapidly gains control over most of Sweden and institutes a reign of terror. Sweden's cultural heritage, including its churches, castles, operas, theatres, museums, beer halls, and old city quarters, is bombed to pieces and replaced with mosques, bazaars, and Muslim military academies. Having a beer, a pet dog, or a snus44 is criminalized. Dissidents, leftists, and multiculturalists are publicly executed; traitors could not be trusted. Based in the countryside, pockets of Swedish patriots launch an armed resistance, and are joined by migrant communities with an inherent hatred of Islam seeking to exterminate the Muslim brutes, e.g., Serbs, Croatians, Assyrians, Syrians, and Armenians. Initially disgusted by the thought of ethnic cleansing, a Swedish resistance commander admits that the raging fires that consume thousands of screaming Muslim civilians trapped in multi-story housing complexes in the underclass areas of Southern Stockholm actually look beautiful, as the blazing flames are reflected in the snow and remind him of burning candles on a birthday cake.45 Despite exceptionally brave and intelligent fighters, the alliance of Swedes and honorary Swedes cannot defeat the massive Islamic occupation forces on their own. Fortunately, the no less anti-Muslim Russians come to their rescue.
However, in contrast to Johansson's short story, the Russians do not act on altruistic grounds but with geopolitical motives. They seize the opportunity to transform the Baltic Sea into a Russian inland sea. By the end of the day, the peace agreement suggested by the United Nations is accepted by the warring parties, and Sweden is split into three zones, controlled by Muslims, Russians and Swedes respectively. Bad, but not as catastrophic as initially feared. Definitely, there is hope for national survival and rebirth; especially since the new government of Sweden is thenceforth run as a corporation with a ceo and a Board of Directors responsible to the major shareholders (the productive elite of the people), and not a corruptible government responsible to everyone and no one, as in a democracy based on the fallacy of equality. Fortunately, 'the corporate sector will never be democratized', Weinz assures his readers, as everything is about efficiency, and disloyal elements standing in the way of progress are immediately cleared.46 Law, order, and homogeneity are reestablished in the Kingdom, as Islam and the free press are banned, and leftists, Muslims, and dissidents are neutralized, deported, or executed. Conclusion: the Politics of Passion. Fascist Noir is characterized by a passionate celebration of violence, an element that Klaus Theweleit also found in the Freikorps literature and held central to the attraction of fascism itself. Fascist language, Theweleit found, appears to be unified around two main features.
When fascists write of the everyday, their relations to themselves, their work, or their sensibilities, their language is meaningless, voided, aborted; when their writing is associated with violence in pursuit of world-historic missions, political foes and inferior races, depraved beasts and white sisters, their language is animated, intense, alive.47 This feature reappears in contemporary fascist writing, suggesting a relation between violence, life, and passionate love. To fascist conviction, life is struggle and the absence of struggle is death. In fascist desire, Eros as the libidinal life force stands against Thanatos, the god of (peaceful) death, only in the sense that reduced tension and conflict means fading out and ultimately dying. In Freudian analysis, Thanatos is the death instinct that stands opposed to Eros, the life instinct. In Beyond the Pleasure Principle, Freud sought a way to bring the antithetical polarities between life and death instincts in relation to each other, even tracing the one to the other.48 Everything living ultimately dies from causes within itself, Freud reasons, suggesting that 'the goal of all life is death.'49 Violence and aggression are self-destructive energies redirected outwardly toward external targets, Freud argues, finding in sadism an alloy between eroticism and death. Even 'where it emerges without any sexual purpose, in the blindest fury of destructiveness', Freud later writes, 'we cannot fail to recognize that the satisfaction of the instinct is accompanied by an extraordinarily high degree of narcissistic enjoyment, owing to its presenting the ego with a fulfilment of the latter's old wishes for omnipotence.'50 In Civilization and its Discontents, Freud suggests that the outcome of the 'mutually opposing action' of Eros and Thanatos is of world-dominion proportion.
Written in 1930, when National Socialism began to assert itself as a political force to be reckoned with, Freud's tone in Civilization and its Discontents is pessimistic. Identifying civilization as a 'process in the service of Eros' whose purpose is to realize 'the unity of mankind', Freud found its project jeopardized by its rival, man's natural aggressive instinct. 'The meaning of the evolution of civilization is no longer obscure to us,' Freud states, but '[re]presents the struggle between Eros and Death, between the instinct of life and the instinct of destruction, as it works itself out in the human species. This struggle is what all life essentially consists of, and the evolution of civilization may therefore be simply described as the struggle for life of the human species.'51 To the extent that violence and aggression in fascist literature are passionate, they cannot only be seen as opposed to Eros, but rather as an instinct imbued by eroticism and love. 'Fascist writing itself', Theweleit observes, 'makes quite […]'. Ernst Jünger draws a parallel between war and love, arguing that the lust of blood in its intensity is equivalent only to Eros: 'The lust of blood hangs over the war like a red storm-sail over a black galley: in its boundless momentum it is comparable only to love.'53 Thus, our analysis should not stop at identifying violence and aggression as forces of destruction and death, but should look at destructivity's productivity, and at the life-bequeathing aspects of death, which, I would argue, is at the core of fascism's urge for national rebirth. Only apocalyptic violence will create the conditions for the rise of the New World and the New Man; death is the cradle of life. Marking war and aggression masculine, and untainted violence (as opposed to tainted violence) a male-to-male task between (real) men, never aimed at (real) women (whores and traitors exempted), war is thereby elevated to the status of the principle of reproduction.
Jünger wrote, 'War is not only our father, it is also our son. We have begotten him and he has begotten us.'54 War as the Father and Son makes war akin to God, as God is the Father and Son.55 This makes the children of God children of war; the agents of war agents of God; the palingenetic mythos of fascism a vision of 'a new heaven and a new earth,' and the voice of fascism a promise: 'Behold, I make all things new' (Rev 21:5). Reproduction and rebirth, fascism claims, do not really require women, not in any active role. With notable exceptions (e.g., Sandra in The Girl Who Was Chased by Fire), the emblematic protagonist in fascist fiction is a man or a band of men, a Männerbund. If the male protagonist has a family, a wife, mother, or daughter, they are typically killed already in the introduction. By being killed, an act that does not require her saying much, the main role of the female character is over, except as a motif for revenge or a metaphorical representation of the defiled nation. Her death sets the plot in motion, and the protagonist free to focus on his main mission, the violent destruction of the murderers and what they represent (blackness, Jewishness, Muslimness, or any other otherness and uncleanliness). In Warrior Dreams, his seminal study of violence and manhood in the post-Vietnam popular culture of the early 1990s, James Gibson finds that the warrior hero in American film is a man set apart.56 As in fascist noir, the hero's family, if there is one, his wife or children, are typically killed (Mad Max, Lethal Weapon) or nearly killed (Patriot Games). The hero acts alone, with a partner, or with a tribe of male warriors. If he belongs to an organization, he will not abide by its regulations but creates his own rules of engagement to serve a Higher Justice (Dirty Harry; Death Wish). He always fights for righteous American values but is typically frustrated or betrayed by representatives of the System (Rambo iii; Clear and Present Danger; Black Berets).
Substitute American values with white nationalist values and the System with politically correct multiculturalism, and the similarities to fascist fiction are obvious. Fascist fiction is not necessarily that different from mainstream popular culture, as evidenced by the fact that Braveheart arguably was the most popular movie in White Power United States in the late 1990s.57 In patriarchically conceived nationalist tradition, the nation is conventionally construed as a woman, in Swedish nationalism named Moder [Mother] Svea, and, therefore, as something that belongs to men. In radical nationalist narratives, the body of Moder Svea is penetrated by racialized others, illustrating the failure of her national guardians, the Swedish government, to perform as men, thus rendering their claim to be her protector and provider illegitimate. As she screams in anguish, real men should rush to her rescue, which is what the violence-prone radical nationalist warrior claims he is doing. In fascist self-conception, the militant activist is the white knight in shining armor. In fascist crime fiction, white beautiful blond Swedish women are recurrently raped and violated by racialized others. As observed by Karina Horsti, women are often represented as the embodiment of the nation in nationalist imagination, the 'openness' and softness of her body representing a weakness, a boundary for which violation and infection from the outside are constant threats.58 A metaphorical representation of the violated nation, the recurrence of the figure of the raped white woman in fascist noir provides the reader with ample opportunities to vicariously enjoy the pleasures of being a real man; a fascist man who comes to the rescue of the helpless woman in a blazing explosion of excessive violence, an eruption that destroys her aggressors and releases the salvaged woman's admiration for her hero. 
The white fascist mission to save the endangered white woman, nation, and children by unleashing excessive violence allows him to rise above the mundane world of the plebeian commoner in consumerist society. It sets him apart in the more profound sense of being Holy and makes him a Crusader ready to sacrifice his life in pursuit of a divine mission. That the borders of real life and fiction may be blurred has frequently been illustrated by outbursts of fascist-inspired violence across western societies, including the self-proclaimed Templar Knight Anders Behring Breivik's bombing of the Norwegian government quarters in Oslo, and the massacre at a Labor Youth Party summer camp at Utøya.59 In Breivik's self-image he was a true hero, sacrificing himself to save the Norwegian nation and the Nordic race.60 Drawing on the psychoanalytical work of Peter Sloterdijk, Joakim Andersen points to Thumos as a driving force on par with Eros. Thumos is the pursuit of glory, which, to fascists, is integral to manliness. A culture without Thumos is an unmanly culture. When contemporary white fascists characterize Western society as feminized, they often refer to the supposed lack of Thumos. A man experiencing a violation of his dignity cannot, Andersen emphasizes, be bought; any attempt to bribe him exacerbates his fury. Accordingly, not being able to defend your woman and nation will publicly deprive a man of his manliness, of his Thumos, making him a worthless creature, a feminized weakling. Thumos is thus key to the warrior instinct fascism celebrates.61 The recurrent message inherent in contemporary fascist fiction is that a man who does not employ passionate violence to rid the nation of the dragon of politically correct multiculturalism and to put an end to the ongoing white genocide, a man who abstains from drawing his sword to protect the endangered white woman, is a man not set apart, i.e., an unholy man.
To Benito Mussolini fascism was a 'spiritual view of life'; an idealistic reaction to materialism and scientific positivism, simultaneously anti-capitalist and anti-communist.62 Fascism invites its devotees to transcend the mundane and materialistic to become part of an evolving Faustian nation that will see the dawn of the New Era and the New Man arising out of the ashes of the cleansing flames of apocalyptic violence. Fascist fiction, with its passionate celebration of violence, Eros and Thanatos, Life and Death, should not be underestimated as an avenue to set the heart of fascist desire on fire.
v3-fos-license
2018-08-28T07:29:53.115Z
2018-08-28T00:00:00.000
52097497
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41421-018-0046-x.pdf", "pdf_hash": "89d1813d1ed19d9aaf80f2dacaf15f81bec2540f", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1529", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "5525440ab51e4ff070e6de83a2877d5304a29dd7", "year": 2018 }
pes2o/s2orc
Postsynaptic p47phox regulates long-term depression in the hippocampus

It is well documented that reactive oxygen species (ROS) affect neurodegeneration in the brain. Several studies also implicate ROS in the regulation of synapse function and learning and memory processes, although the precise source of ROS generation within these contexts remains to be further explored. Here we show that postsynaptic superoxide generation through PKCζ-activated NADPH oxidase 2 (NOX2) is critical for long-term depression (LTD) of synaptic transmission in the CA1–Schaffer collateral synapse of the rat hippocampus. Specifically, PKCζ-dependent phosphorylation of the NOX2 regulatory subunit p47phox at serine 316 is required for LTD but is not necessary for long-term potentiation (LTP). Our data suggest that postsynaptic p47phox phosphorylation at serine 316 is a key upstream determinant for LTD and synapse weakening.

Introduction

Synapse weakening is part of a group of physiological processes referred to as synaptic plasticity, which govern changes in synaptic function in response to neuronal activity, and are thought to represent the cellular and molecular mechanisms of learning and memory 1 . On the other hand, aberrant activation of synapse weakening signalling pathways has been reported in several Alzheimer's disease (AD) models [2][3][4] , suggesting that these signalling pathways represent a crucial interplay between physiology and the onset of disease-associated pathophysiology. Mounting evidence suggests that apoptotic signalling cascades, including caspase-3 and glycogen synthase kinase 3β (GSK-3β) activation, are centrally involved in physiological and pathophysiological forms of synapse weakening, manifest through postsynaptic α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR) endocytosis and long-term depression (LTD) of synaptic transmission [4][5][6] . However, how these signals are first initiated is unknown.
Reactive oxygen species (ROS) are not only well-known upstream regulators of neuronal apoptosis 7 and neurodegenerative signals 8 but are also known to play important regulatory roles in aspects of neuronal physiology, including synaptic plasticity and synapse weakening [9][10][11][12][13] . ROS can originate from numerous sources to affect synaptic plasticity, including presynaptic neurons, postsynaptic neurons and microglia 11,13 . However, to date there has been no characterization or elucidation of the precise mechanisms of ROS generation that regulate synaptic plasticity. Here, we examined the production of postsynaptic ROS during LTD, revealing a key role for postsynaptic ROS production via NADPH oxidase 2 (NOX2). Crucially, we find that the activity of postsynaptic protein kinase C zeta (PKCζ) is also required for LTD and identify the phosphorylation of p47phox at serine 316 as a necessary step in this pathway. Therefore, our results uncover a role for a specific postsynaptic ROS production pathway in activity-dependent synapse weakening.

Results

Postsynaptic superoxide is required for LTD in the CA1 of the hippocampus

Superoxide ions are one of the primary forms of ROS and are known to be elevated in neurons following the activation of N-methyl-D-aspartate receptors (NMDARs) [14][15][16] . We therefore hypothesized that intra-neuronal superoxide radicals are upstream regulators of NMDAR-dependent forms of synaptic plasticity. To address this, we analysed the effects of superoxide dismutase (SOD), a class of endogenous enzymes that catalyse superoxide dismutation 17 , on an NMDAR-dependent form of LTD in rat hippocampal acute slices 18 . Accordingly, whilst application of low-frequency electric stimulation (LFS) during whole-cell patch clamp recording readily induced LTD in CA1-Schaffer collateral synapses (52.8 ± 9.0% of baseline, p = 0.002 vs. control input, Fig.
1a), postsynaptic infusion of SOD (300 units/ml) through the patch pipette blocked LTD (89.5 ± 8.7% of baseline, p = 0.115 vs. control input, Fig. 1b). To determine whether the superoxide radicals involved in this form of LTD could originate from an extracellular source, such as microglia 13 , we bath applied SOD. Given that SOD has poor membrane permeability 19 , extracellular bath application of the enzyme will catalyse extracellular superoxide dismutation without affecting intracellularly generated superoxide. Extracellular SOD application had no effect on LTD (57.5 ± 9.9% of baseline, p = 0.004 vs. control input, Fig. 1c), suggesting that postsynaptic intracellular superoxide, specifically, is critical for LTD expression. Since hydrogen peroxide (H 2 O 2 ), another ROS, can be a product of SOD catalysis of superoxide dismutation and is also implicated in synaptic plasticity 20 , we tested whether H 2 O 2 is required for LTD. Postsynaptic infusion of catalase (300 units/ml), an enzyme that catalyses the decomposition of H 2 O 2 to water and oxygen, had no effect on LTD (66.8 ± 4.7% of baseline, p = 0.002 vs. control input, Fig. 1d). We also tested whether H 2 O 2 may be inhibiting NMDAR-LTD, which could provide an alternative explanation for the inhibition of LTD induced by SOD injection. To test this, we postsynaptically injected SOD along with catalase, thereby scavenging the H 2 O 2 product of the SOD reaction. In this experiment, LTD was also blocked (SOD+catalase, 90.1 ± 4.8% of baseline, p = 0.150 vs. baseline, Fig. 1e). In bath application experiments, SOD and catalase co-treatment did not affect LTD (SOD+catalase, 68.3 ± 1.2% of baseline, p < 0.001 vs. baseline, Fig. 1f). Together, these results suggest that postsynaptic intracellular superoxide is required for NMDAR-dependent hippocampal LTD and that H 2 O 2 itself is neither required for nor an inhibitor of LTD.
NOX2 regulates LTD

While several ROS-inducing mechanisms are present in neurons, emerging evidence suggests that superoxide production post-NMDAR activation is catalysed by NADPH oxidase (NOX) 15,16 . NOX is a membrane-bound enzymatic complex responsible for the production of ROS and is present in neurons where it can localize to synapses 21 . Specifically, NOX2, commonly known as the prototypical NOX, has been suggested as the primary regulator of NMDAR-activated superoxide generation 15 . While NOX1-4 are reported to be expressed in the brain, NOX3 constitutively generates superoxide without stimulation 22 and NOX4 constitutively produces H 2 O 2 23 , an ROS that appears not to be involved in LTD. We therefore predicted that postsynaptic NOX1 and 2 are the most likely candidates for superoxide production during LTD. To test a requirement for these NOX isoforms for LTD, we utilized postsynaptic infusion of NOX inhibitors. Postsynaptic infusion of AEBSF (20 µM), a non-selective inhibitor of NOX 24 , blocked LTD expression in CA1 neurons of acute hippocampal slices (90.4 ± 6.0% of baseline, p = 0.146 vs. control input, Fig. 2a), whilst postsynaptic infusion of the specific NOX1 inhibitor ML-171 (3 μM) 25 had no effect on LTD (58.9 ± 8.6% of baseline, p = 0.006 vs. control input, Fig. 2b). Postsynaptic infusion of apocynin (100 µM), a putative but nonselective inhibitor of NOX2 26 , significantly impaired LTD expression (88.7 ± 8.3% of baseline, p = 0.583 vs. control input, Fig. 2c). In contrast, bath perfusion of apocynin after LTD induction failed to reverse the expression of LTD, indicating that NOX2 is not required for LTD maintenance (apocynin perfusion: 68.8 ± 4.0% of baseline vs. control: 78.9 ± 3.4% of baseline, p = 0.078, Supplementary Fig. S1). Together, these data are suggestive of a specific role for the NOX2 isoform of NOX in the regulation of LTD induction.
PKCζ and phosphorylation of p47phox is required for LTD

The function of NOX1/NOX2 is principally regulated through multi-site phosphorylation of the p47phox subunit of the complex by PKC isoforms, including the atypical PKCζ 27,28 . Given the requirement for NOX2 in LTD, we hypothesized that phosphorylation of p47phox would also be required. To test this, we first utilized biolistic transfection of p47phox shRNA in CA1 neurons of organotypic hippocampal slices to knock down the expression of p47phox (Supplementary Fig. S3). Transfection of p47phox shRNA had no significant effect on basal EPSC AMPA compared with untransfected neighbouring cells (untransfected: 145.0 ± 15.1 pA, transfected: 162.1 ± 15.1 pA, p = 0.594, Fig. 3a). Consistent with our hypothesis, p47phox shRNA transfection significantly impaired LTD expression when compared to untransfected cells (p47phox shRNA-transfected cells: 87.2 ± 11.5% of baseline; untransfected cells: 50.6 ± 7.1% of baseline, p = 0.0145, Fig. 3b).

[Fig. 1 legend, continued: c Bath application of SOD has no effect on LTD expression (n = 5). d Postsynaptic infusion of catalase (300 units/ml, 20 min) has no effect on LTD (n = 5). e Postsynaptic infusion of SOD and catalase inhibits LTD (n = 6). f Bath perfusion of SOD and catalase has no effect on LTD (n = 6). Symbols and error bars indicate mean ± SEM.]

Since phosphorylation of the p47phox subunit by the atypical PKCζ leads to superoxide generation 16,27 , we therefore used PKCζ shRNA (Supplementary Fig. S4) to examine whether PKCζ is also required for LTD. We found that transfection of PKCζ shRNA had no significant effect on basal EPSC AMPA (untransfected: 241.0 ± 33.2 pA, transfected 238.9 ± 22.6 pA, p = 0.957, Fig. 3c) but significantly impaired LTD (PKCζ shRNA: 90.6 ± 4.4% of baseline; untransfected cells: 50.4 ± 5.4% of baseline, p = 0.0001, Fig. 3d).
Furthermore, the p47phox shRNA-mediated LTD deficit was rescued by co-expression of human p47phox (untransfected cells: 74.8 ± 8.9% of baseline; transfected cells: 69.2 ± 4.5% of baseline, p = 0.583; Fig. 3e).

Phosphorylation of p47phox at serine 316 is required for LTD

Since both p47phox shRNA and PKCζ shRNA transfection resulted in a loss of LTD, it was of interest to determine the specific molecular mechanism surrounding the regulation of p47phox by PKCζ. We generated shRNA-resistant constructs of rat p47phox with differing combinations of site-specific mutations at four residues phosphorylated by PKCζ 27 . Consistent with our working hypothesis, LTD was blocked in cells co-transfected with p47phox shRNA and an shRNA-resistant mutant form of p47phox with all four residues mutated to alanine, to prevent their phosphorylation (after LFS: 95.1 ± 6.8%, p = 0.725 vs. control input, Fig. 4a). In cells expressing a triple phosphorylation mutant (S/T304/305/316A; residues notated as per rat p47phox), LTD was also impaired (after LFS: 88.0 ± 5.5%, p = 0.087 vs. control input, Fig. 4b). In comparison, LTD was readily inducible in cells transfected with a double S/T304/305A mutant (after LFS: 57.2 ± 4.1%, p = 0.000117 vs. control input, Fig. 4c). Notably, we found that LTD was also impaired in cells expressing the S316A mutant form of p47phox (after LFS: 98 ± 22.8%, p = 0.73 vs. control input, Fig. 4d) but the expression of the S329A mutant form of p47phox had no effect on LTD induction (after LFS: 63.1 ± 7.0%, p = 0.004 vs. control input, Fig. 4e). Finally, we tested whether phosphorylation of p47phox at S316 is sufficient to induce synapse weakening, through paired recordings from neurons transfected with a S316 pseudo-phosphorylated form of p47phox (S316D) and neighbouring untransfected neurons. Our data showed significantly reduced AMPAR-mediated currents in S316D-transfected neurons (untransfected: 132.4 ± 12.9 pA vs. S316D transfected: 94.1 ± 9.4 pA, p = 0.023, Fig.
4f), indicating that phosphorylation at this residue is sufficient to reduce AMPAR-mediated synaptic transmission. This effect appears specific to AMPARs, as NMDAR-mediated currents were unchanged between the two cell types (untransfected: 131.1 ± 11.7 pA vs. S316D transfected: 118.7 ± 13.7 pA, p = 0.495, Fig. 4f). Taken together, these data suggest that phosphorylation at S316 of p47phox, likely by PKCζ, is a key regulator of a signalling cascade that governs LTD induction and synapse weakening.

Discussion

The notion that ROS production is a key element of synaptic plasticity has been well established 11-13, 20, 31 . However, the precise source of ROS and the mechanisms by which it is regulated during synaptic plasticity have not been fully elucidated. Identification of these specific signalling pathways is necessary for a full understanding of their physiological and pathophysiological implications in synaptic function. In the present study, we have shown that postsynaptic NOX2-mediated superoxide production, via PKCζ-mediated phosphorylation of p47phox at the serine 316 residue (pS316 p47phox), is pivotal for LTD expression and weakening of AMPAR-mediated synaptic transmission. Importantly, few studies have directly addressed the source of ROS in the context of synaptic plasticity. Using our selective knockdown approach, in which we have specifically silenced or inhibited postsynaptic ROS production, we have now shown that postsynaptic NOX2 is a necessary hub for ROS production associated with LTD. Several research groups have shown a similar requirement for ROS during LTD or synapse weakening 9,10,12,13 whilst evidence from other groups suggests that ROS can regulate LTP 12,31,32 . Quite how or why ROS can regulate both forms of synaptic plasticity is unclear, but it is possible that different ROS production sources and mechanisms may underpin different forms of synaptic plasticity.
NMDAR activation leads to PKCζ-mediated phosphorylation of the p47phox subunit, which is a critical activator signal for NOX2-mediated ROS production 15,16 . In the present study, we show that low-frequency electrical stimulation of hippocampal slices, which induces an NMDAR-dependent form of LTD, also leads to the activation of PKCζ. This effect was blocked by D-AP5, indicative of synaptic NMDAR-dependent activation of PKCζ that is associated with LTD induction. Importantly, through postsynaptic knockdown of PKCζ expression, we reveal an LTD-specific requirement for PKCζ with no observable contribution to LTP expression, consistent with other synapse weakening signals shown in previous studies 5,6,[33][34][35][36] . Finally, our data show that constitutive phosphorylation of the PKCζ substrate, p47phox, at S316 is sufficient to induce weakening of AMPAR-mediated synaptic transmission, even in the absence of upstream activator signals. This single phosphorylation event, which induces NOX2 activation and ROS production, is therefore both necessary and sufficient for synapse weakening. It is not clear how postsynaptic ROS production is itself involved in the mechanisms of LTD signalling. One possibility is that postsynaptic ROS activates Bax protein to stimulate cytochrome c release from mitochondria 37 . Cytochrome c release induces caspase-3 activation 38 , which in turn can affect synapse weakening signal cascades involving Akt-1 and GSK-3β 4, 39 . This possibility is supported by the observation that Bax is itself required for LTD signalling 33 . A growing list of molecules, including caspases, GSK-3β and tau, are now known to be involved in AMPAR endocytosis and LTD 5,6,33,34 . Collectively, these molecules form what we have termed the synapse weakening pathway, encompassing molecules associated with apoptosis and synapse elimination in both physiological and pathophysiological circumstances 4,36,40 .
It has been postulated that the balance between synapse weakening (caspase-3, GSK-3β and tau) and strengthening pathways (phosphoinositide-3 kinase and Akt-1) is critical to determine the direction of LTP and LTD and the long-term fate of synapses [4][5][6]35 . Indeed, models of neurodegenerative pathologies such as AD exhibit AMPAR endocytosis and facilitated LTD induction, concomitant with the inhibition of LTP in the hippocampus 2,3,39,41,42 . Aberrant activation of synapse weakening signals is therefore believed to be a central underlying molecular mechanism in the pathology and cognitive decline of numerous neurodegenerative diseases 4,40,43,44 . Our results suggest that ROS, via a specific production mechanism, now form part of this critical synapse weakening signalling cascade. Addressing whether aberrant LTD-like and/or AMPAR-mediated synapse weakening can be seen in human forms of neurodegenerative disease remains a key challenge to translating these findings into viable therapeutic targets.

Animals

All procedures involving animals were carried out in accordance with the UK Animals (Scientific Procedures) Act, 1986. Male Wistar rats (Charles River, UK) were used to prepare organotypic (6-8-day-old rats) and acute hippocampal slices (2-4-week-old rats). Older rats were housed four or five per cage and allowed access to water and food ad libitum. The cages were maintained at a constant temperature (23 ± 1°C) and relative humidity (60 ± 10%) under a 12-h light/dark cycle (lights on from 07:30 to 19:30).

Acute hippocampal slices

Rats were killed by cervical dislocation and decapitation. Following this, the brain was rapidly removed and placed into ice-cold artificial cerebrospinal fluid (aCSF; continuously bubbled with 95% O 2 /5% CO 2 ) containing 124 mM NaCl, 3 mM KCl, 26 mM NaHCO 3 , 1.25 mM NaH 2 PO 4 , 2 mM CaCl 2 , 1 mM MgSO 4 , and 10 mM D-glucose.
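As an aside, converting a recipe like the aCSF composition above into weighed-out masses is simple arithmetic: grams = concentration (mol/L) × molecular weight (g/mol) × volume (L). A minimal sketch; the helper function and the worked example are illustrative, not from the paper, and the molecular weight is a standard reference value:

```python
# Hypothetical helper: grams of solute needed for a target molar
# concentration and volume (grams = mol/L * g/mol * L).
def grams_needed(conc_mM: float, mol_weight: float, volume_L: float = 1.0) -> float:
    return (conc_mM / 1000.0) * mol_weight * volume_L

# e.g. 124 mM NaCl (MW 58.44 g/mol) in 1 L of aCSF is about 7.25 g
nacl_grams = grams_needed(124, 58.44)
```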
Hippocampi were extracted and transverse hippocampal slices (400 µm thickness) were cut using a McIlwain tissue chopper. Following manual separation, the slices were then submerged in aCSF for a minimum of 1 h before experiments commenced. Acute slices were placed in a recording chamber and perfused with warmed (28-29°C) and carbogenated aCSF at 2 ml/min. Two independent stimulating electrodes were placed separately in the Schaffer collateral-CA1 input (test pathway) and subiculum-CA1 input (control pathway). For whole-cell patch experiments, 20 μM picrotoxin was included in the aCSF and CA1 neurons were blind-patched using a 4-6 MΩ borosilicate glass pipette containing 130 mM CsMeSO 4 , 8 mM NaCl, 4 mM Mg-ATP, 0.3 mM Na-GTP, 0.5 mM EGTA, 10 mM HEPES, 6 mM QX-314, pH 7.2-7.3 and 280-290 mOsm/kg. For field excitatory postsynaptic potential (fEPSP) recordings, a glass pipette containing 3 M NaCl was placed in the stratum radiatum region of the CA1. LTD experiments during whole-cell patch were carried out as for cultured hippocampal slices (see below), except a 20-min baseline was used. For LTD of fEPSPs, a 30-min stable baseline at 70% of maximum stimulation intensity was followed by LFS, consisting of 900 pulses at 1 Hz, of the test pathway. This was followed by 60 min of post-conditioning recording. In some cases, slices were removed immediately after LFS for western blotting processing. For whole-cell patch recordings, cultured slices were perfused with a warmed (28-29°C) recording solution (119 mM NaCl, 2.5 mM KCl, 26 mM NaHCO 3 , 1 mM NaH 2 PO 4 , 4 mM MgCl 2 , 11 mM D-glucose, 4 mM CaCl 2 , 10 μM 2-chloroadenosine and 20 μM picrotoxin). The recording solution was continuously bubbled with 95% O 2 /5% CO 2 at source. The usual flow rate was 2 ml/min. In most recordings, two independent stimulating electrodes were placed separately in the Schaffer collateral-CA1 input and subiculum-CA1 input. 
Recordings were made from pyramidal neurons in the CA1 region, using glass pipettes containing CsMeSO 4 internal solution (as above) and neurons voltage clamped at −70 mV unless otherwise stated. To induce LTD, a 10-min baseline was followed by 1 Hz stimulation (200 stimuli) with recorded neurons voltage clamped at −40 mV. To induce LTP, a 5-min baseline was followed by a 2 Hz stimulation (200 stimuli) with recorded neurons voltage clamped at 0 mV. For quantification and comparisons between groups/inputs, the peak EPSC amplitude of the test input (relative to baseline) was averaged 15-20 min (cultured slice) or 20-25 min (acute slice) after conditioning was applied. EPSC AMPA was measured as the peak EPSC amplitude at a holding potential of −70 mV. EPSC NMDA was measured as the peak EPSC amplitude 90-100 ms after stimulus, at a holding potential of +40 mV.

Drugs and antibodies

The following drugs were dissolved in internal recording solution (for postsynaptic infusion) or aCSF (for bath perfusion) at concentrations based on previous studies: AEBSF (…).

Expression/shRNA plasmids and single-cell PCR

Constructs for shRNA knockdown of target transcripts were generated using the Block-iT™ pENTR/U6 system, as per the manufacturer's instructions (Life Technologies, UK). The target sequences for rat NOX1 and NOX2 were GCAACTGTTCATACTCTTTCC and GGTCTTACTTTGAAGTGTTCT, respectively. The NOX2 scrambled shRNA sequence was GGTAGTTACTCGTTAGTTTCT. The PKCζ shRNA target sequence was GGCCATGAGCATCTCTGTTGT and for p47phox it was GCTCCTACCCTGCTTTAATGT. p47phox rescue and mutation experiments were performed with co-transfection of the p47phox shRNA construct and a pCMV-SPORT6-p47phox cDNA clone (Source Bioscience, UK). Site-directed mutagenesis to generate S/T304/305A, S/T304/305/316A, S316A and S329A constructs was performed on the pCMV-SPORT6-p47phox construct using QuikChange™ technology, as per the manufacturer's instructions (Agilent Technologies, USA).
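The quantification convention described for the recordings (peak EPSC amplitudes normalized to the pre-conditioning baseline mean, then averaged over a fixed post-conditioning window) can be sketched in a few lines. The function and its inputs are hypothetical illustrations of that convention, not the authors' analysis code:

```python
def ltd_magnitude(times_min, amplitudes_pA, window=(15.0, 20.0)):
    """Percent-of-baseline EPSC amplitude for an LTD experiment.

    times_min: sample times in minutes relative to conditioning
    (negative values = baseline period); amplitudes_pA: peak EPSC
    amplitudes; window: post-conditioning averaging window, e.g.
    (15, 20) for cultured slices or (20, 25) for acute slices.
    """
    baseline = [a for t, a in zip(times_min, amplitudes_pA) if t < 0]
    post = [a for t, a in zip(times_min, amplitudes_pA)
            if window[0] <= t <= window[1]]
    baseline_mean = sum(baseline) / len(baseline)
    return 100.0 * (sum(post) / len(post)) / baseline_mean
```

With this convention a value near 100 indicates no lasting change, while values around 50-60% of baseline correspond to the LTD magnitudes reported in the Results.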
All generated constructs were sequence verified via Sanger sequencing (Source Bioscience, UK).

Statistical analyses

Sample sizes, indicated by n, are given in the figure legends and represent the number of biological replicates. For experiments in acute slices, this reflects the number of individual animals from which the data were obtained. For cultured slices, this reflects the number of individual slices. Sample sizes for electrophysiology experiments were determined through empirical evidence obtained within our laboratory and are consistent with those found in the existing literature. Data are expressed as mean ± standard error of the mean (SEM) and analysed using SigmaPlot software (Systat Software, Chicago, USA). Significance was set at p < 0.05; unpaired t tests were used to determine the statistical significance of effects vs. control inputs or untransfected cells, where appropriate, and paired t tests were used to compare to baseline values, when necessary.
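The descriptive statistics used throughout (mean ± SEM) and the unpaired t statistic follow directly from the textbook formulas. This is a generic sketch of those formulas for illustration, not the SigmaPlot routines the authors used (p-values additionally require the t-distribution, which is omitted here):

```python
import math

def mean_sem(xs):
    """Mean and standard error of the mean: SEM = sample SD / sqrt(n)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)  # sample variance
    return m, math.sqrt(var / n)

def unpaired_t(xs, ys):
    """Student's two-sample t statistic (pooled, equal-variance form)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(pooled * (1 / nx + 1 / ny))
```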
v3-fos-license
2020-04-23T09:07:18.081Z
2020-04-22T00:00:00.000
216076110
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-020-08634-4", "pdf_hash": "889bea4a7d0f9d769edc675447bf64534fbaa2b1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1531", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3d3d6bef17806f2ec8c8a5185a753b27eadb5aea", "year": 2020 }
pes2o/s2orc
Malaria elimination using the 1-3-7 approach: lessons from Sampov Loun, Cambodia

Background: Cambodia has targeted malaria elimination within its territory by 2025 and is developing a model elimination package of strategies and interventions designed to achieve this goal.

Methods: Cambodia adopted a simplified 1-3-7 surveillance model in the Sampov Loun operational health district in western Cambodia beginning in July 2015. The 1-3-7 approach targets reporting of confirmed cases within one day, investigation of specific cases within three days, and targeted control measures to prevent further transmission within seven days. In Sampov Loun, response measures included reactive case detection (testing of co-travelers, household contacts and family members, and surrounding households with suspected malaria cases) and provision of health education and insecticide-treated nets. Day-28 follow-up microscopy was conducted for all confirmed P. falciparum and P. falciparum-mixed-species malaria cases to assess treatment efficacy.

Results: The number of confirmed malaria cases in the district fell from 519 in 2015 to 181 in 2017, and the annual parasite incidence (API) in the district fell from 3.21 per 1000 population to 1.06 per 1000 population. The last locally transmitted case of malaria in Sampov Loun was identified in March 2016. In response to the 408 index cases identified, 1377 contacts were screened, resulting in the identification of 14 positive cases. All positive cases occurred among index case co-travelers.

Conclusion: The experience of the 1-3-7 approach in Sampov Loun indicates that the basic essential malaria elimination package can be feasibly implemented at the operational district level to achieve the goal of malaria elimination in Cambodia, and it has provided essential information that has led to the refinement of this package.
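API figures like those reported above are straightforward to recompute: API = confirmed cases / population × 1000. A minimal sketch; the district population in the example is back-calculated from the reported figures for illustration only, not a number stated in the paper:

```python
def annual_parasite_incidence(confirmed_cases: int, population: int) -> float:
    """Annual parasite incidence: confirmed cases per 1000 population per year."""
    return 1000.0 * confirmed_cases / population

# e.g. 519 confirmed cases in a district of roughly 162,000 people
# gives an API near 3.21 per 1000 population
```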
Background Malaria remains a leading cause of death and disease in many developing countries, with an estimated 219 million cases and 435,000 deaths occurring globally in 2017 [1]. The Southeast Asian Region has made significant progress in reducing its malaria incidence rate, experiencing a 59% decline in new cases from 2010 to 2017 [1]. The reduction in annual parasite incidence (API) is attributable to malaria control program efforts supported by the Global Fund and USAID/PMI, as well as to non-programmatic factors such as deforestation, climate change, and improved infrastructure. As a result, many countries in the region are moving toward malaria elimination. Cambodia has targeted malaria elimination within its territory by 2025 [2,3], and has developed a malaria elimination package that includes strategies and interventions [4] designed to achieve this goal. Driving the push for malaria elimination is the intensification of artemisinin resistance and the development of multiple partner-drug resistance in the western region of Cambodia. A key part of the elimination strategy is the 1-3-7 surveillance and response model, which involves reporting of confirmed malaria cases within one day, investigation of malaria cases confirmed through rapid diagnostic testing (RDT) within three days, and application of targeted control measures to prevent further transmission within seven days (Fig. 1). The 1-3-7 strategy was initially developed and implemented in China in 2012 [5-7] and has since been adapted to the local contexts in several country settings in Southern Africa and Southeast Asia [8-10].
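The 1-3-7 timeline can be made concrete with a short sketch. This is an illustrative check, not the program's actual software; the function name, field names, and dates are hypothetical, and it follows the paper's convention that the 3- and 7-day windows run from the notification date.

```python
from datetime import date

# Illustrative check of the 1-3-7 milestones for one confirmed case.
# Per the model described here: notification within 1 day of confirmation,
# investigation within 3 days of notification, response within 7 days of it.
def check_1_3_7(confirmed, notified, investigated, responded):
    return {
        "notified_within_1_day": (notified - confirmed).days <= 1,
        "investigated_within_3_days": (investigated - notified).days <= 3,
        "responded_within_7_days": (responded - notified).days <= 7,
    }

# Hypothetical case: confirmed and notified March 1, investigated March 3,
# responded March 5 -- all three milestones met.
milestones = check_1_3_7(date(2016, 3, 1), date(2016, 3, 1),
                         date(2016, 3, 3), date(2016, 3, 5))
```

In practice such a check would run against the centralized elimination database described below; here it only illustrates the timing rules.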
With support from the President's Malaria Initiative (PMI), the United States Agency for International Development (USAID) Control and Prevention of Malaria (CAP-Malaria) Project supported national, provincial, and district health authorities in Cambodia to pilot and then scale up a simplified 1-3-7 model in the Sampov Loun operational health district in Western Cambodia beginning in July 2015. The purpose of this article is to detail the experience in Sampov Loun in implementing the 1-3-7 elimination surveillance approach from July 2015 to January 2017 and to discuss challenges and lessons learned for potential future scale-up.

Selection of Sampov Loun

Sampov Loun is an operational district in Battambang Province in Western Cambodia with a population of approximately 160,000 people across three administrative districts and 127 villages. At the time of initial implementation, the health care infrastructure and capacity in Sampov Loun comprised nine health centers, one former district hospital, one referral hospital, 32 private providers, and 168 village malaria workers. Sampov Loun was targeted for malaria elimination as it experienced a significant decline in reported cases from 7.54 per 1000 population in 2012 to 2.87 per 1000 population in 2014, and it was identified as a site of intensifying artemisinin resistance. The overall objective of the program was to develop and implement an elimination model using the 1-3-7 approach within the existing public health system in Sampov Loun and to document the feasibility of the model.

Components of the intervention

The cascade of care begins on Day 1 with suspected malaria cases being identified by village malaria workers or at health facilities. All suspected cases are tested via either RDT or microscopy. Patients with negative results are advised to seek consultation at a public health facility. Patients with positive results are immediately placed on a three-day directly observed therapy (DOT) regimen.
Uncomplicated malaria cases were treated with dihydroartemisinin-piperaquine (DHA-PIP) from July 2015 to January 2016. However, because some malaria cases did not respond to DHA-PIP, the regimen for uncomplicated malaria cases was switched to artesunate-mefloquine (ASMQ) beginning in early 2016. Pregnant women in their first trimester were treated with quinine. Those with treatment failure by the day 28 follow-up were deemed to be drug-resistant and were treated with quinine plus tetracycline. The health worker who made the diagnosis notified the malaria case to the district malaria coordinators using SMS from their mobile phones. Within three days of notification, case investigations were conducted by village malaria workers and health facility staff. Case investigations included interviews with the index case and resulted in case classification (Plasmodium species and case origin). Interviewers collected information on the patient's malaria history, recent travel and co-travelers of the index case, household members, and malaria prevention practices. A co-traveler was defined as a person who had been working, traveling, or staying outside of the home village with an index case in the past 3-4 weeks. Individual case investigation reports were collected and uploaded to a centralized malaria elimination database. Within seven days of notification, targeted response measures were undertaken (although these often happened within three days, in conjunction with case investigation activities). Response measures included reactive case detection (i.e. testing of co-travelers, household contacts and family members, and surrounding households with suspected malaria cases), and provision of health education and long-lasting insecticide-treated nets (LLINs). Day 28 follow-up microscopy was conducted for all confirmed P. falciparum and mixed-species malaria cases to confirm clearance of parasitemia.
Management structure

The elimination program in Sampov Loun relied on a multi-sectoral collaboration between the Cambodia National Malaria Program (CNM), the Provincial Health Department, Operational District, Public Health Facilities, Private Providers and Village Malaria Workers. Provincial and District Special Working Groups for Malaria Elimination, consisting of health and non-health departments, uniformed services (i.e. army and police), private sector partners, and volunteers, were formed to support the implementation of the malaria elimination strategy. The program also relied on cross-border collaborations with neighboring Thailand to conduct patient investigation and follow-up as well as to develop bilingual behavior change communication materials.

Results

Implementation of the 1-3-7 approach

Figure 2 shows the percentage of malaria cases that were successfully notified within 24 hours, investigated within three days, and responded to within seven days. The percentage of cases notified within 24 hours rose from 50% in July 2015 to 100% in January 2017. Over the same time period, the percentage of cases investigated within three days rose from 20% to 100%, and the percentage of cases responded to with targeted response measures rose from 35% to nearly 100%. Data from private providers were not collected from September to December 2017, due to changes in the national policy regarding the role of private sector providers in malaria control activities. Private providers are now instructed to refer all suspected malaria cases to public facilities for malaria diagnosis, treatment, and follow-up. In response to the 408 index cases identified during the period of this pilot, 1377 contacts were screened (900 index household members, 395 co-travelers, and 82 surrounding household members), resulting in the identification of 14 positive cases (nine P. falciparum and five P. vivax).
All positive cases were identified among index case co-travelers; there were no cases identified among index household members or surrounding household members. A total of 2492 individuals received health education and 242 LLINs were distributed. Rates of DOT provision gradually decreased from 86% (171/200) in July to December 2015, to 77% (99/128) in January to June 2016, and to 64% (51/80) in July 2016 to January 2017, because of an increase in loss to follow-up due to the high mobility and cross-border movement of those malaria patients. Figure 3 shows the overall trend in confirmed malaria cases in Sampov Loun from 2012 to 2017. Since implementation of the 1-3-7 elimination framework began in 2015, the annual parasite incidence (API) has fallen from 3.21 per 1000 population to 1.06 per 1000 population. Sampov Loun has also seen a steady decline in the number of confirmed P. falciparum and mixed P. falciparum/other species cases.

Case classification

Beginning in April 2016, all cases of malaria diagnosed in Sampov Loun have been classified as imported, indicating interruption of local transmission. While the district continues to see seasonal spikes in imported cases, these too are on a downward trajectory (Fig. 4). Case investigations have allowed Sampov Loun to track the origins of imported cases (Figs. 5 and 6). From July 2015 to January 2017, 11% of imported cases in the district were from Thailand, while 89% were from elsewhere within Cambodia. Of this 89%, 31% were imported from neighboring provinces, while 69% were imported from other high-transmission areas, mostly in the eastern part of the country.
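The API figures above follow directly from case counts and population. A quick back-of-envelope check, using the district's approximate population of 160,000 from the Methods section (so the results only roughly match the reported values of 3.21 and 1.06):

```python
# API = confirmed cases per 1000 population per year.
def annual_parasite_incidence(confirmed_cases, population):
    return 1000 * confirmed_cases / population

# Case counts from the text; population is approximate (~160,000),
# so these only roughly reproduce the reported APIs.
api_2015 = annual_parasite_incidence(519, 160_000)  # roughly 3.2 per 1000
api_2017 = annual_parasite_incidence(181, 160_000)  # roughly 1.1 per 1000
```

The small discrepancy with the reported APIs presumably reflects the exact denominator populations used in each year.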
Staffing requirements for malaria elimination

The malaria elimination activities in Sampov Loun were largely carried out by existing program staff, including OD Malaria Supervisors (ODMS) for supervision, surveillance, and case finding; laboratory technicians for reading of blood slides; and the Village Malaria Workers (VMWs) for early diagnosis using RDTs, treatment of malaria cases, providing DOT to confirmed malaria cases, and conducting reactive case detection among contacts of malaria index cases. The PMI provided LLINs, RDTs, and other consumables, which were distributed by the CAP-Malaria Project, and covered the costs of capacity building and supervision. The project also provided nominal transportation expenses to the VMWs/health facility staff to conduct household visits if needed. Every time a positive malaria case was notified, the health center staff and VMW visited the village within the first three days to undertake case investigation and plan for reactive case detection (RCD) within the first seven days. RCD efforts targeted high-risk groups such as co-travelers, forest goers, and those staying overnight in forest-fringe areas. The project used the SD Malaria Ag Pf/Pv RDT (Alere) for diagnosis and for screening target populations. This RDT has a sensitivity of 99.7% (98.5-100%) and a specificity of 99.5% (97.2-99.9%). In addition, the CNM team and Provincial Malaria Supervisor conducted supervision of health facilities and VMWs to ensure smooth implementation of the elimination activities. The RCD visits were used to screen households around the index case, co-travelers, and household members to identify additional malaria-positive cases. The overall costs of this model are minimal, and the surveillance model is thus replicable with minimal additional support.
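The quoted sensitivity and specificity also bear on reactive case detection: at very low prevalence, even a highly specific test yields a noticeable share of false positives. A sketch using Bayes' rule, treating the roughly 1% positivity observed in RCD (discussed below) as a crude stand-in for prevalence; this calculation is illustrative and is not from the paper:

```python
# Positive predictive value at a given prevalence, from sensitivity/specificity.
def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# With the RDT's quoted 99.7% sensitivity and 99.5% specificity, at 1%
# prevalence roughly a third of positive results would be false positives.
ppv_at_1pct = positive_predictive_value(0.997, 0.995, 0.01)  # about 0.67
```

This is one reason elimination programs confirm RDT positives (here, via day 28 follow-up microscopy) rather than relying on a single positive result.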
Discussion

Results from the implementation of the 1-3-7 malaria elimination approach in Sampov Loun operational district from July 2015 to January 2017 demonstrate the feasibility of a local malaria elimination strategy, despite the challenges of multi-drug resistance and limited resources. The basic essential package of activities for malaria elimination, consisting of a combination of community-based case management and the 1-3-7 surveillance and response approach, was manageable at the operational district level. The integration of mobile health technology enabled Day 1 case notification via SMS messaging and helped improve real-time surveillance efforts. Practically, case investigation and response activities were often conducted simultaneously, resulting in what local officials called a "1-2-2 model" or "1-3-3 model" rather than a strict 1-3-7 model. The results from Sampov Loun generated valuable insights that can help make the 1-3-7 approach more efficient in the future. For example, in Sampov Loun, reactive case detection (RCD) efforts yielded very few positive results (0.9% positive rate over the period of implementation). Other studies have similarly questioned the efficacy of RCD in low transmission settings [11] or among neighborhood or hotspot contacts [12] and suggested that new approaches designed to optimize RCD are needed. In Cambodia, results suggest that focusing RCD efforts on co-travelers and forest worksites may be more effective than wider contact testing. Similarly, case investigations revealed that peri-domestic transmission was rarely, if ever, occurring in the district. This suggests that strategies that prevent peri-domestic transmission, such as indoor residual spraying (IRS), may not result in additional malaria elimination gains. Finally, the 1-3-7 approach enabled the district to develop highly detailed maps of malaria cases, allowing for the identification of hot spots to be targeted for future activities. Several challenges were identified during implementation.
First, the program experienced declining motivation among health workers to pursue case investigation and contact testing, particularly during weekends and public holidays. Maintaining workforce motivation, as well as collaborations with private providers, is critical to the success of ongoing elimination efforts. In addition, in a context where most cases are imported from outside the district, district-level response activities alone are likely to be ineffective in interrupting transmission. Communication and surveillance linkages with other operational district malaria response teams are necessary to sufficiently address external sources of infection [10,13]. Reducing the malaria burden in neighboring districts also has positive spillover effects, as the risk of re-introduction decreases. Strengthened cross-border collaborations are also needed to ensure adequate coverage of migrant and mobile populations with malaria preventive, diagnostic, and treatment services [14,15].

Conclusion

The experience of the 1-3-7 approach in Sampov Loun indicates that the basic essential malaria elimination package can be feasibly implemented in operational districts with very low-level transmission to achieve the goal of malaria elimination. As a result of the successful implementation in Sampov Loun, Cambodia has scaled up elimination activities in all operational districts of Battambang Province and Pailin Province, and is planning to expand activities to neighboring provinces, while continuing to target malaria elimination countrywide by 2025. The national malaria program is exploring the possibility of integrating 1-3-7, or a variant of it, in its Malaria Elimination Action Framework for the 2020-2025 period.
v3-fos-license
2019-03-24T13:02:40.988Z
2019-03-22T00:00:00.000
85455237
{ "extfieldsofstudy": [ "Medicine", "Computer Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11948-019-00101-7.pdf", "pdf_hash": "020b76317e07c38508e2334a6faa0ccd568df428", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1532", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "sha1": "17c3880ef91c04d04888ed4983ce8680b325a18e", "year": 2019 }
pes2o/s2orc
Driving in the Dark: Designing Autonomous Vehicles for Reducing Light Pollution This paper proposes that autonomous vehicles should be designed to reduce light pollution. In support of this specific proposal, a moral assessment of autonomous vehicles more comprehensive than the dilemmatic life-and-death questions of trolley problem-style situations is presented. The paper therefore consists of two interrelated arguments. The first is that autonomous vehicles are currently still a technology in development, and not one that has acquired its definitive shape, meaning the design of both the vehicles and the surrounding infrastructure is open-ended. Design for values is utilized to articulate a path forward, by which engineering ethics should strive to incorporate values into a technology during its development phase. Second, it is argued that nighttime lighting—a critical supporting infrastructure—should be a prima facie consideration for autonomous vehicles during their development phase. It is shown that a reduction in light pollution, and more boldly a better balance of lighting and darkness, can be achieved via the design of future autonomous vehicles. Two case studies are examined (parking lots and highways) through which autonomous vehicles may be designed for “driving in the dark.” Nighttime lighting issues are thus inserted into a broader ethics of autonomous vehicles, while simultaneously introducing questions of autonomous vehicles into debates about light pollution. Introduction Autonomous vehicles have the potential to revolutionize transportation networks and radically transform urban design strategies. Exactly what this future will look like, though, is still open for debate and scrutiny. Yet while visions of future cities and roadways dominated by "driverless cars" remain nebulous, their impending realization has garnered a growing technical and ethical debate. 
Current technical discourse largely focuses on the potential benefits in terms of safety, easing congestion, and emissions reductions (e.g., Hoogendoorn et al. 2014; Diakaki et al. 2015; Kockelman 2014, 2015). Research is also exploring the tangential effects of driving automation on issues such as vehicle ownership and sharing, land use, energy consumption, air pollution, and public health (e.g., Duarte and Ratti 2018; Milakis et al. 2017). Taking a more critical approach, ethical discourse has largely focused on how vehicles should be programmed to behave in dilemmatic life-and-death scenarios, and what decision-making criteria should be followed (e.g., Bonnefon et al. 2016; Gogoll and Müller 2017; Lin 2016; Santoni de Sio 2017). The issues under debate are then how these vehicles should be programmed to operate in such circumstances, who should decide on this programming, and where the resultant moral and legal responsibility lies. While important considerations, critiques have nevertheless been raised about this pathway for ethical discourse, including the over-reliance on viewing autonomous vehicles as a real-life manifestation of the "trolley problem" (JafariNaimi 2017; Nyholm and Smids 2016), the lack of attention to social justice issues (Epting 2018; Mladenovic and McPherson 2016), and the need for systems-level analyses (Borenstein et al. 2017). These critiques highlight a broader issue with over-emphasizing hypothetical dilemmatic scenarios: they focus on a yet-to-be-realized endpoint, assuming that fully autonomous vehicles have been introduced into the existing physical, behavioural, and institutional landscape. Further, it leads the ethics of autonomous vehicles towards an ethics of collision programming. This risks overlooking the larger landscape of social and environmental challenges, and opportunities, that this new technology may create, and the moral issues at stake therein.
Given the potentially transformative impact of autonomous vehicles on a broad range of moral, social, and environmental values, there is an opportunity, and arguably a duty, to broaden ethical analyses and consider how (and why) to develop this technology. For this task a design for values approach is adopted, which asserts that societal and moral values should be proactively taken into account from the early stages of the design and development process, thus embedding values into the technical system (van den Hoven et al. 2015). (A note on terminology: in this paper "autonomous vehicles" and "autonomous driving systems" are used interchangeably. However, this is meant to be a broader categorization than "driverless cars" or "self-driving cars," which refer to a specific level of (high) automation and a specific use and function of the automated systems. These latter terms thus represent a set of assumptions that this paper is critical towards. An exception among the critiques above is Robert Sparrow and Mark Howard (2017), who reflect on the moral obligation to realize the transition to a fully autonomous transport system, assuming that this will be safer.) Importantly, this approach allows for a questioning of basic presuppositions about both vehicle design and the surrounding infrastructure that this new technology will shape, and be shaped by (e.g., Heinrichs 2016; Milakis et al. 2017). Such an approach necessitates ethical research into a range of issues related to the physical infrastructure, institutions, and socio-technical systems interwoven with transportation networks. This paper examines one specific topic in detail, namely the relationship between autonomous vehicles and streetlights, a critical piece of transportation infrastructure that has yet to receive significant attention.
The adverse effects of artificial nighttime lighting, known as light pollution, have emerged as a pressing environmental issue, costing billions of dollars, using enormous amounts of energy, negatively affecting human health and ecosystems, and hindering experiences of a natural night sky (Stone 2017). To combat these effects, "[t]he challenge faced by 21st century policymakers is to provide outdoor light where and when it is needed while reducing costs, improving visibility, and minimizing any adverse effects on plants, animals, and humans caused through exposure to unnatural levels of light at night" (Kyba et al. 2014, p. 1807). The introduction of autonomous vehicles is a rare and pivotal opportunity to take up this challenge. Questions of light pollution could therefore be part of the landscape of values and goals influencing the development of autonomous vehicles and surrounding infrastructure. This paper offers a novel analysis of the confluence of two technologies with seemingly disparate moral challenges, autonomous vehicles and nighttime lighting, exploring how autonomous driving systems could be designed to reduce light pollution and create darker nights. This paper will proceed as follows. A comprehensive ethics of autonomous vehicles, which utilizes a design for values approach to proactively incorporate ethical concerns into the predicted short-to-medium term development phases, is put forward to contextualize this paper. This is followed by a look into the ethics of nighttime lighting, and in particular the issue of light pollution. The value of darkness (Stone 2018b) is introduced as a moral framework for nighttime lighting, and applied to road lighting. In doing so, a weak and a strong moral claim are articulated. At the least, autonomous vehicles must minimize the negative effects and costs of light pollution. Yet they can also go further, striving to actively promote the valuableness of darkness and help to re-imagine urban nightscapes.
Following this bolder position, two scenarios are sketched in which lighting infrastructure can be adapted for "driving in the dark" (parking lots and highways), as are the design requirements this places on future high-automation vehicles.

Towards a Comprehensive Ethics of Autonomous Vehicles

To examine the ethical issue of how to design autonomous driving systems for reducing light pollution and realizing a better balance between lighting and darkness, the following principles are followed (adapted from Filippo Santoni de Sio 2016):

(a) Focusing on the process towards full automation and the full range of possible varieties of (partial) autonomy rather than only on one hypothetical fully-autonomous ("driverless") scenario;
(b) Going beyond collision programming and towards the design of the entire socio-technical system, including technical infrastructures, social and legal norms, and educational systems;
(c) Broadening the scope of possible ethical issues involved in the design of future systems: not only risks for life and physical integrity, but also justice, privacy, inclusion, environment, etc.; and,
(d) Taking a proactive approach and considering how ethical trade-offs (moral dilemmas) can be solved through design, by relying on a value-sensitive approach.

Before turning to light pollution and presenting possible "driving in the dark" scenarios, two of these principles require further explanation: varieties of automation (a) and a proactive design for values approach (d).

Varieties of Automation

Ethical debates often focus on "driverless" or "self-driving" cars, in other words, fully autonomous vehicles. However, such debates often jump to a hypothetical endpoint of both technical development as well as social and institutional adoption of this new technology.
Thus, these are simplifications that a comprehensive ethical approach, with an attention to the full range of values at stake in the development of technology, as well as maintaining relevance to the real world of technology and policy, cannot afford. Therefore, before engaging in any ethical reflection on autonomous vehicles we should be clear on at least two issues: what different levels of automation are possible, and what reasonable timelines for their adoption would be. According to a standard taxonomy, SAE International standard J3016 (SAE 2016), vehicle autonomy ranges from 0 (no automation) to 5 (full automation), with the autonomous driving system controlling all aspects and modes of driving. A key distinction in the taxonomy is between levels 2 and 3, when the autonomous system takes over an entire "dynamic driving task." However, at level 3 the human driver still has a responsibility to intervene at the request of the system. In levels 4 and 5 ("high automation" and "full automation," respectively) this is no longer the case. Levels 4 and 5 are also called higher-order automation, insofar as "the driver no longer has to monitor the vehicle or system continuously" (Beiker 2016, p. 194). However, a critical difference is that whereas in level 5 the vehicle can drive autonomously under all scenarios (mixed traffic, city centers, highways, parking, high and low speed roads, etc.), at level 4 vehicles can only drive without human supervision in specific scenarios, for instance highways and parking lots. Ethical literature focused on dilemmatic scenarios typically takes level 5 vehicles as a given: "driverless cars" operating in mixed traffic scenarios and interacting with different sorts of road users (e.g., non-autonomous vehicles, cyclists, pedestrians) in various driving scenarios (highways, urban roads, country roads).
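The taxonomy just described can be summarized in a small sketch. The level names below paraphrase the SAE J3016 categories as presented in this section; this is an illustration, not the standard's normative text.

```python
from enum import IntEnum

# Paraphrase of the SAE J3016 levels of driving automation discussed above.
class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3  # system drives, but human must intervene on request
    HIGH_AUTOMATION = 4         # no supervision needed, limited scenarios only
    FULL_AUTOMATION = 5         # no supervision needed, all scenarios

def requires_driver_fallback(level):
    """True for levels where a human must remain available to intervene."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```

The key break the text identifies, between level 3 (driver fallback required) and level 4 (none required, but only in limited scenarios), falls out directly from the ordering.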
However, notwithstanding the recurrent claims in the media that driverless cars "are coming," and although some in the car industry cite 2020 as a target date for fully autonomous vehicles, scientific researchers tend to be more cautious. An expert and pioneering enthusiast in vehicle automation, Steven Shladover (2016), is sceptical that full automation (level 5) will happen any time before 2075; however, he believes that level 4 as defined by SAE (full automation for limited tasks) will likely be possible in the next decade. For example, he believes that autonomous valet parking and autonomous freeway systems (which form the basis of the two case studies included later on) will be realities within 10 years. However, once the technology is available, there are still questions regarding the rate of consumer adoption, as well as necessary policy and institutional changes. According to Sven Beiker (2016), in a scenario of continuous technological (and market) evolution, it would likely still take at least 15-20 years for there to be a significant share of cars in operation with higher-order automation (even though more niche-based innovations like autonomous taxis might take hold more quickly). The Netherlands Institute for Transport Policy Analysis (KiM) predicts a similar trajectory, expecting a full transition to high automation occurring around 2060-2100 (KiM 2015). Based on these predictions, this paper assumes that while full automation (level 5) is not likely to happen on a large scale in the near future, automation under limited conditions (level 4) is likely to be achieved and in use within the next 15-20 years. A comprehensive ethics of autonomous vehicles should therefore investigate the opportunities (and risks) that less-than-fully autonomous vehicles may bring, as well as anticipate issues that might arise during the transition period towards higher-level automation.
Steering the Future: Design for Values

Accepting that the future of autonomous vehicles is open, and that different scenarios for development and adoption can unfold, this future becomes one influenced by the choices of actors within all kinds of technical and social processes, including industry, governance, economics, and politics. Rather than retroactively observing how these choices and processes are eventually realized in specific scenarios, social and environmental values can guide a process of proactively creating scenarios that comply with these goals. From an engineering perspective, including values in the design process may seem counterintuitive, since engineering design has traditionally viewed new products or technologies as value-neutral, developed only on the basis of functional requirements. However, from the perspective of (consumer) product development, and fields such as architecture and fashion design, values are standard elements that co-shape design processes. For example, cars are already designed not only for enabling transportation at a specific speed, but also for expressing personality, style, wealth, masculinity, etc. Likewise, various requirements for a wide range of engineered products and services, such as safety and sustainability, are in fact value-laden concepts deeply embedded into the design process. There are roughly two ways in which social and environmental values can be injected into the design of technologies. The first is to take identified values as constraints on design. Designers should actively explore whether the new product or technology could violate or come into conflict with the values of stakeholders. If so, designers should adjust the design of the product or technology such that these conflicts are avoided. The value-sensitive design method developed by Batya Friedman and colleagues (2006) follows in part this more precautionary, constraint-oriented approach.
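The constraint-oriented approach just described can be sketched as a filter over candidate designs. The candidates, scores, and threshold below are entirely made up for illustration; nothing here comes from the paper.

```python
# Toy candidate designs for nighttime parking-lot lighting, with made-up
# scores in [0, 1] (higher light_pollution = worse for that value).
candidates = [
    {"name": "always_on", "light_pollution": 0.9, "safety": 0.9},
    {"name": "adaptive",  "light_pollution": 0.2, "safety": 0.8},
]

# Constraint-oriented design: a stakeholder value becomes a hard threshold
# that any acceptable design must satisfy. The limit here is hypothetical.
LIGHT_POLLUTION_LIMIT = 0.5

feasible = [c for c in candidates if c["light_pollution"] <= LIGHT_POLLUTION_LIMIT]
```

In this precautionary framing the value only vetoes designs; it does not actively shape them, which is the gap the design-for-values alternative discussed next addresses.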
Alternatively, social and environmental values can be articulated as requirements within the design process, alongside functional requirements. In this way, values are not only constraints against which designs should be checked, but also targets that immediately co-define the product or technology under development. For example, the design of a new bridge in a city can be seen as a project aimed at meeting functional requirements, such as allowing specific traffic flows, as well as at realising values such as expressing the innovative character of the city, or inclusiveness by allowing pedestrians and cyclists to use it. This more integrated approach is developed under the heading of design for values (van den Hoven 2007, 2013; van den Hoven et al. 2015). The later "driving in the dark" scenarios primarily adopt the latter design for values approach. The openness of the development of autonomous driving systems, combined with their potential to fundamentally transform transportation networks, creates a unique and pivotal opportunity to include social and environmental values as design criteria. Importantly, such an approach leads to the question: what values should be incorporated into the design of autonomous vehicles and surrounding infrastructure?

The Function and Morality of Nighttime Lighting

Within this more comprehensive ethics of autonomous vehicles, a careful articulation, and justification, of values worth pursuing becomes an important task. Here, the focus is on one particular set of values (and one particular technological domain) that has yet to receive critical attention, but for which the introduction of high-automation vehicles introduces important possibilities: nighttime lighting.
A recent editorial in Lighting Research & Technology hints at the impending impact of autonomous vehicles by highlighting the existential crisis facing streetlights: It is predicted that by 2040 most vehicles sold will be autonomous. This raises an interesting question. If there is no driver who needs to see the way ahead, is the rationale for providing much road lighting gone? The potential represented by these impending technologies suggests to me that now would be a good time for all those involved in road lighting to ask themselves some fundamental questions. What is the purpose of road lighting? If it is no longer necessary to allow drivers to see where they are going, what is it for? (Boyce 2016, p. 787) While a critique of the timeline proposed by Peter Boyce was put forward above, as well as a clarification of what sorts of autonomous tasks may soon be realized, this call to action nevertheless signals a need to elucidate the values informing the "fundamental questions" driving the future need and function of streetlights. This means extending technical and moral discussions of autonomous vehicles to include the impacts of (transportation-related) nighttime lighting, and vice versa. Light Pollution and the Value of Darkness One lighting-related concern is the adverse effects of artificial illumination at night, known as light pollution. The concept of light pollution was popularized in the 1970s to describe and categorize the negative effects of artificial nighttime lighting, and has since emerged as an important environmental concern for the 21st century (Stone 2017). Terrel Gallaway (2010, p. 72) defines light pollution as "the unintended consequences of poorly designed and injudiciously used artificial lighting." In the USA, approximately 30% of outdoor lighting is considered to be "wasted," estimated to cost upwards of 7 billion US dollars per year. 
Furthermore, eliminating this excess lighting could have the same reduction in CO2 emissions as removing roughly 9.5 million cars from the road. An estimate of the excess and wasted nighttime lighting in the European Union puts the costs at over 5 billion Euros per year (Morgan-Taylor 2014). In addition to the financial costs and energy usage, artificial lighting at night has negative effects on human health, as well as wildlife and ecosystems (e.g., Gaston et al. 2015; Longcore and Rich 2004; Pottharst and Könecke 2013). And, ever-present skyglow, perhaps the most pervasive effect of light pollution around urbanized regions, is increasingly cutting off access to a starry night sky-experiences that arguably carry significant cultural value (e.g., Bogard 2013; Gallaway 2014). It is estimated that over 80% of the world, and more than 99% in Europe and North America, now live in regions with "polluted" night skies (Falchi et al. 2016). Acknowledging that light pollution is an important issue in its own right, and intertwined with larger societal concerns (e.g., sustainability and climate change), it can be argued that there is a moral obligation to work towards eliminating, or at least mitigating, the above adverse effects. Existing efforts to curb light pollution include ordinances at local, national, and even trans-national levels, with goals of emissions reductions, energy (and cost) efficiency, and in some cases dark sky protection. There are also efforts focused on proper technical standards for lighting fixtures, colour temperature, and brightness (e.g., IDA-IES 2011). And, in recent years "dark sky reserve" programs have emerged, aimed at the protection and conservation of unpolluted night skies in wilderness areas or national parks (Meier 2014). Such efforts are important and have led to successes in both curbing light pollution and raising public awareness. 
However, much of the developed world continues to get brighter, and this trend is expected to further increase with the widespread adoption of LED streetlights (Falchi et al. 2016;Kyba et al. 2017). Continued efforts are therefore needed, including proposals for more radical or transformative changes. We must consider longer-range ideas to effectively "design out" many of the causes of light pollution in ways that are, in the formulation of Ibo van de Poel (2016), both morally acceptable and socially accepted; that is, that can reduce negative effects without hindering the desirable and necessary aspects of nighttime illumination. In efforts to seek out more radical or transformative strategies to nighttime lighting, it is useful to also seek out new moral frameworks-to elucidate underlying judgments and re-frame the problem at hand. Shaping concerns about light pollution is an important shift in how we perceive and evaluate darkness at night. Historically seen as evil, chaotic, and dangerous, darkness is increasingly seen as something of positive environmental and cultural value (Bogard 2013;Edensor 2017). To understand how darkness could be viewed as something beneficial for urban nightscapes, the framework of Taylor Stone (2018b) is utilized here. The commonly recognized effects of light pollution are re-framed as nine ways by which, or through which, value is derived from darkness. From these nine values, prima facie obligations are derived as principles to be considered in the design of nightscapes, even if not achievable in every case (Table 1). This provides a comprehensive set of design goals for nighttime lighting that incorporates environmental values, thus going beyond only mitigating "polluting" effects. A focus on the value of darkness can therefore allow for a re-evaluation of all nighttime lighting, ultimately offering more drastic energy and cost savings. 
Importantly, this framework does not rest on a total de-valuing of lighting at night, but rather on an appreciation of natural nighttime conditions and the potential created by an attentive and restrained use of artificial lighting. Designing for Darkness Street lighting and vehicle lights combine to create one of the largest sources of illumination at night, and therefore should be seen as a pressing environmental issue (Lyytimäki et al. 2012). One needs only to view an aerial photo (at the scale of cities, nations, or even continents) to observe the presence of transportation-related lighting, as illuminated grids and lines carving through the landscape. According to the International Energy Agency (2006), globally there are more than 100 million streetlights, using approximately 114 TWh of electricity annually. Parking lots are responsible for an estimated 55 million additional lights in OECD countries alone, consuming an additional 88 TWh of electricity in 2005. Taken together, street and parking lot lighting combine to constitute over 90% of outdoor illumination (International Energy Agency 2006). In the European Union, lighting accounts for 14% of total energy consumption; of that, approximately 14.7% is outdoor stationary lighting, mainly streetlights. Globally, almost one-fifth of all electricity produced is used for lighting, of which approximately 8% is outdoor stationary lighting (De Almeida et al. 2014). Another area of impact is vehicle headlights. It is estimated that each year over 55 billion litres of gasoline or diesel is used to operate vehicle lights, equating to about 3.2% of total vehicle fuel use, and equivalent to the consumption of over 1 million barrels of oil daily (International Energy Agency 2006). Design strategies that address nighttime lighting can take the form of a weak or strong position. 
First, future autonomous vehicles must, at the very least, strive to reduce the adverse effects and costs caused by transportation-related illumination. Given the ties to efficiency and sustainability (cost savings, GHG reductions, etc.), as well as likely health and ecosystem benefits, there is no moral justification for omitting consideration of this design requirement. The degree to which light pollution can be reduced, and if this may compromise other desired goals, are additional questions outside the scope of this paper. For instance, a position often taken is that nighttime lighting increases safety and should therefore be extended rather than be reduced. Acknowledging the importance of such questions, however, should not block the adoption of light pollution concerns as a prima facie requirement in the development of autonomous driving systems. A second, stronger claim, although more bold and visionary, can be derived from a design for values approach: future autonomous vehicles and surrounding infrastructure should actively promote the value of darkness. The transformative potential of higher-automation vehicles offers an opportunity to fundamentally re-consider how (and why) to light our nightscapes. The vehicle-focused lighting strategies of the 20th century can be replaced with alternative approaches, which actively strive to bring some darkness back into urban nightscapes. The effects can be far reaching, ranging from lighting that is more attentive to pedestrian and cycling traffic, to more intimate and convivial urban spaces, to ecologically-oriented "dark design" (Edensor 2017), to re-envisioning ideas of the nocturnal sublime within urbanized areas (Stone 2018a). As mentioned above, such an approach does not imply a goal of eliminating all nighttime lighting, but a better balance of light and dark that is attentive to functional needs and environmental values. In sum, autonomous vehicles can work towards achieving what Tim Edensor (2015, p. 
436) poetically describes as a "re-enchantment of the night" via a conscientious re-introduction of urban darkness. Realizing Darkness with Autonomous Vehicles If the bolder position articulated above is adopted, how would this steer future innovation? What scenarios and related design requirements would eliminate much of the need for transportation-focused streetlights, thus allowing for a drastic reduction in light pollution and a conscientious re-imagining of (urban) darkness? What does this mean for the design choices for autonomous vehicles themselves, as well as surrounding infrastructure and institutions? Such questions are complex, requiring technical, moral, legal, policy, and design work for a full answer. Here first steps are taken by providing a preliminary sketch of what such a path forward would entail. Accepting the timeline of technology development and adoption as earlier laid out, it can be expected that level 4 automation-where the system has full control for limited tasks-will be available within the next decade, and market saturation may occur over the coming 20-30 years. These tasks, though limited, provide a testbed for the viability of "driving in the dark" scenarios, and represent "low hanging fruit" for immediate positive effects. With this in mind, two scenarios that are candidates for full automation in the near future are introduced: parking lots and highways. These build on the similar scenarios proposed by Walther Wachenfeld and colleagues (2016), adding the potential for substantial, and relatively immediate, positive impact towards the creation of darker nights. Both have a singular functionality and are primarily used by vehicles, thus avoiding issues such as pedestrian and cyclist interactions. 
Equally important, their lighting is singularly focused on vehicle usage, with little or no ancillary benefits (aesthetic, social, etc.), meaning that a drastic reduction would have minimal impact on other nearby types and uses of illumination or nighttime activities. Following these two case studies, the resultant design requirements for the autonomous vehicles themselves are briefly considered. Scenario 1: Parking in the Dark The first scenario builds on the use case "Autonomous Valet Parking" described by Wachenfeld and colleagues (2016, pp. 14-16). As the name indicates, the autonomous system acts as a personal valet. One exits the vehicle at a destination, inputs a nearby parking lot into the system, and the vehicle parks itself. Similarly, a pick-up location would be chosen (similar to ride-sharing services) and the vehicle would come pick you up. Such a scenario typically means a short driving distance for the autonomous program (and in cases such as shopping malls, driving only within the parking lot itself), low speeds, and lighter traffic. Hence, this can be seen as an introductory scenario of level-4 automation for (personally-owned) vehicles. The value-add proposed here is that parking lots designated for autonomous valet parking no longer require constant illumination. This would be cost-saving for the lot owner and reduce energy consumption. And, it would greatly decrease light pollution, especially if this could be introduced in suburban areas around shopping malls etc., where parking lots take up extensive space. With only contingency lighting in place for maintenance, security, and emergencies, the parking lots could be left in the dark. The nature of parking lots also makes the incremental rollout of "parking in the dark" possible-specific lots or sections can be converted gradually, based on demand. 
Thus in the short term designated darkened parking lots can be introduced, with this trend spreading if autonomous parking becomes the norm in future generations of vehicles, and if dark parking gains support and public acceptance. Overall, this scenario is seen as having high impact potential-recall that there are more than 55 million lights used for parking lots in OECD countries alone (International Energy Agency 2006). Furthermore, it can be applied in a variety of settings-commercial areas, urban downtowns, suburban and residential areas, etc.-allowing for a wide distribution of benefits. It may create new concerns about crime, although dark campus programs have reportedly seen reductions in vandalism through reduced nighttime illumination (Henderson 2010). Even so, it would still necessitate new protocols for security, as well as safety considerations regarding barriers to entry or some form of technology-supported surveillance such as infrared cameras or alarm sensors (especially in areas where pedestrian traffic is close by). Scenario 2: Dark Highways The second scenario builds on the use case described by Wachenfeld and colleagues (2016, pp. 12-14) of "Interstate Pilot Using Driver for Extended Availability." In this scenario, once the vehicle has entered the highway the driver can, or must, activate the robot and relinquish driving responsibilities. After a destination is entered, the autonomous system will take over navigation, guidance, and control of the vehicle. At the pre-determined off-ramp or exit, the autonomous system coordinates a safe handover, with backup emergency procedures if the driver is unresponsive. Important to note is that "highways" here is used to describe a broader typology of roadways, which are given different names in different (social and use) contexts: freeways, interstates, expressways, etc. However, the common characteristics of these roadways are most important. First, they are used exclusively for high-speed vehicle traffic. 
Second, access is only possible by special connecting elements, such as on/off ramps (Wachenfeld et al. 2016). This means they will be devoid of pedestrians and cyclists, as well as intersections. Despite their high speeds, the simple surroundings, driving tasks, and minimal "dynamic objects" mean that this can be considered as an introductory use case for autonomous systems (Wachenfeld et al. 2016). Similar to the scenario above, the value-add is that highways would no longer require lighting, save for on/off ramps and emergency situations. While promising, the adoption of darkened highways presents more complications than the dark parking scenario above. First, it would require that all vehicles on the road use autonomous systems; so long as one car has a human driver, all lights are required. Thus, a high level of market saturation, combined with regulatory changes to vehicle requirements, would be necessary. A second issue would be user acceptance, as this would be a somewhat radical change in driving habits. One can imagine some initial hesitation to being a passenger in an autonomous vehicle travelling 120 km/hr with no lights above, no headlights, and no brake lights! Yet these concerns, though certainly well grounded, are not insurmountable. There has been widespread adoption of train and airplane travel, where passengers have eventually come to simply rely on the system to safely bring them to destinations, even if they cannot control or even see what is happening in front of the vehicle. Thus, such concerns do not make darkened highways immediately untenable as an ambitious medium-term goal, and the potential reduction to nighttime lighting offered by such scenarios is great enough to warrant further investigation. Certainly, in line with a design for values approach, this initial normative proposal should be integrated with empirical research into users' behaviour and responses. 
Design Requirements for Dark-Driving Autonomous Vehicles For these and other possible driving in the dark scenarios, a final consideration is the development of autonomous vehicles themselves. When adopting the value(s) of darkness as a design goal, it may seem obvious that efforts towards their realization should focus directly on the brightly lit and extensive road systems, and only secondarily on adaptations of the vehicles for which the roads are meant. Yet, a simultaneous focus on the re-design of the vehicles is also required. For example, efficiency in car lighting helps increase the distance cars can drive, establishing a link to the instrumental values of efficiency and sustainability-something particularly relevant for the introduction of electric vehicles. Also in this respect, it should be emphasized that here we are making general normative proposals for what should be integrated into the technical and empirical work on autonomous driving systems. Thus, these should simply be understood as initial considerations to be assessed, and if possible integrated, via future technical research. Towards the goal of designing autonomous vehicles to function without both streetlights and headlights, a few key requirements can be identified (Table 2). First, it requires a "higher-order" level of automation-level 4 or higher as per the SAE taxonomy (SAE 2016). This will need to be accompanied by social and institutional changes, for it requires some consideration of when (or if) this technology should be "grandfathered" into new vehicles by laws and regulations, and a timeline for turning off lights in parking lots and especially on highways. Both scenarios would also require a re-design of transition zones, as well as new safety and emergency protocols. Another important consideration is the development of sensor technologies and navigation systems. 
The "driving in the dark" scenarios require a continuous investment in systems that require little or no lighting to navigate at night, such as LiDAR ("light detection and ranging") technology coupled with maps, GPS, etc. This can potentially allow autonomous systems to drive in total darkness, as evidenced in an early test by Ford (e.g., Burgess 2016; Korosec 2016). Designing for low-light navigation will undoubtedly raise new technical challenges, including how to detect traffic signs and lane markings, as well as how to detect unexpected objects such as debris or wildlife-an especially important issue for highway safety. Further, the required technologies may be financially prohibitive in their current form. But again, this does not detract from the assertion that this should be one explicit design goal during the current development phase, in order to explore what possibilities are technically, financially, and socially achievable, and at what scale. A more general design consideration is the new user experiences that can be offered by driving in the dark. Cars are often framed as means to give people freedom and access to natural settings, which can hypothetically find its way into the design of cars through features like panoramic transparent roofs, allowing passengers to enjoy natural landscapes and nightscapes. Switching off the lights would substantially increase experiences of the wonder and beauty of the starry night sky, further fostering the intrinsic goods of darkness (see Stone 2018a, b; Table 1). Conclusion This paper proposed that the development of autonomous vehicles should incorporate, among other things, the ethics of nighttime lighting. 
At the least, autonomous vehicles should be designed to reduce the adverse effects of light pollution. More radically, they can strive to create darker nights and play a role in re-imagining urban nightscapes. To frame such a proposal, an ethics of autonomous driving systems broader than the dilemmatic life-and-death questions of trolley problem-style situations is required. A design for values approach to engineering ethics, in which values are pro-actively incorporated into technologies during their development phase, opens up a range of potential issues that can-and arguably should-be addressed in both the design of autonomous vehicles and their surrounding physical and institutional infrastructure. The scenarios for dark parking lots and highways presented in this paper should not be seen as definitive, but are rather starting points for incorporating the ethics of nighttime lighting into a broader ethics of autonomous driving systems. And considered otherwise, it shows how autonomous vehicles, as one example of an emerging technology with profound transformative and disruptive potential, can be inserted into discourse on nighttime lighting. While the development of future roadways may not necessarily adopt these (or similar) scenarios in full, they must at least take this as a prima facie consideration in the development process. Future research should address the technical and financial feasibility of these scenarios, as well as study the possible social and psychological dimensions of their implementation. Future research should also consider what new ethical problems could arise if these proposals are adopted. Langdon Winner (1980) famously showed the embedded-ness of politics in infrastructure by arguing that New York highway overpasses were explicitly made too low for public buses. This prevented public transportation from reaching certain locales, with the goal of hindering access to racial minorities and people of lower socio-economic status. 
One could similarly imagine that the proposals discussed in this paper inadvertently contribute to a similar scenario, where affluent areas or access roads are pro-actively darkened, therefore requiring high-automation vehicles and potentially limiting access based on socio-economic status. Hence, if "driving in the dark" scenarios are to be adopted, the landscape of potential ethical and political impacts must be continually explored alongside technological innovation. What is now needed is an iterative process for if (or how) the value of darkness gets incorporated into autonomous driving systems, and how it fits into the broader landscape of values at stake. If the general argument is accepted, a follow-up question is then what other social and environmental issues should be considered during the development phase of autonomous vehicles. This requires a careful consideration of both institutional (e.g., ownership models) and physical (e.g., the design of mixed-use urban centers) infrastructures, as well as new issues created by autonomous systems, such as data security. And, this could lead to a re-design of various services that make use of vehicles (e.g., ambulances, garbage pick-up, or package deliveries). Put more bluntly, ethicists must continue to critically and creatively explore what a future of "driverless cars" can, and should, entail.
Adolescent Health on Social Media and the Mentorship of Youth Investigators: Five Content Analysis Studies Conducted by Youth Investigators Although the literature on adolescent health includes studies that incorporate youth perspectives via a participatory design, research that is designed, conducted, and presented by youth remains absent. This paper presents the work of 5 youth investigators on the intersecting topics of adolescent health and social media. Each of these youths was equipped with tools, knowledge, and mentorship for scientifically evaluating a research question. The youths developed a research question that aligned with their interests and filled a gap that they identified in the literature. The youths, whose projects are featured in this paper, designed and conducted their own research project, drafted their own manuscript, and revised and resubmitted a draft based on reviewer input. Each youth worked with a research mentor; however, the research questions, study designs, and suggestions for future research were their own. (JMIR Ment Health 2021;8(9):e29318) doi: 10.2196/29318 No Research About Us Without Us Most scientists who work with human subjects are familiar with this tenet, which implies that study participants, including the youth, should not be treated merely as passive subjects and beneficiaries of research but rather as active contributors to the research process [1]. Indeed, a growing body of research suggests that youths and their communities alike benefit from youth participation in research [2][3][4]. However, one systematic review of youth participatory action research found that youths are seldom engaged in the earliest phases of the research process, including the assessment of needs and formation of research questions [3]. Another systematic review on youth participatory action research found that none of the 45 studies reviewed included youth as authors [4]. 
Thus, despite increased recognition of the need for youth inclusion in research, instances of youth defining research questions and authoring empirical manuscripts are rare. Recent commentary has likewise called for transforming "the way that young people engage with designing and implementing adolescent health programs and policies" [5]. This empirical compilation extends this prerogative to suggest that youth should also be invited to author research. This suggestion may raise additional considerations and concerns. For example, teaching and mentoring youth through the basics of the research process is time-intensive, and many research teams do not prioritize the allocation of staff toward training youth. In addition, youths are often unfamiliar with processes that the scientific community has deemed crucial to legitimate scientific contributions, and thus, research produced by youth may differ from that produced by scholars with more training and resources. In this study, given the youths' early stage in their research careers, certain expectations of typical empirical research articles were adjusted. For example, the youths received permission from mentors to collect smaller sample sizes, use simpler analytic approaches, and retain a few instances of less-scientific language to allow their unique voices to be present. These approaches are similar to professional researchers' early investigations or pilot work, which often focus on detecting early findings to fuel larger hypothesis-driven studies [6,7]. We suggest that the need for adjustments such as these is not a sufficient reason to exclude youth from publishing research; rather, youths' developmental and educational stages should be considered as important context for evaluating their work. This compilation of youth research presents the work of five young investigators; 4 of the youth authors were in high school at the time of writing (JJ, EK, AM, and OT), and one was in her first semester of college (SG). 
All youth participated in the Summer Research Scholars (SRS) program, a program that uses a tested and empirically supported curriculum to guide adolescents through the steps of the research process to complete and present their own independent research project [8]. The youths whose projects were featured here experienced 3 months of training in the SRS program. They were provided with tools, knowledge, mentorship, and supervision to scientifically evaluate a research question. They selected their questions through a review of the literature, incorporation of their own areas of interest, and discussions with their peers and mentors. The youths then had approximately 6 months to design and conduct their own research project, draft their own manuscript, and revise and resubmit that draft based on reviewer input. Although each youth worked with a research mentor, the research questions, study designs, and suggestions for future research are their own. Natural Language Processing In this compilation, two of the five studies used natural language processing to evaluate the text. For these analyses, the Linguistic Inquiry and Word Count (LIWC) program was used [9]. This software program analyzes bodies of text for the frequency of keywords associated with psychologically meaningful categories, including thinking styles, attentional focus, and emotionality in a variety of experimental settings. LIWC builds on several decades of research to understand narrative voice in health [10][11][12][13][14][15] and uses validated internal dictionaries developed by a rigorous process-in which groups of judges reviewed 2000 words or word stems and determined how the reviewed words related to specific categories (eg, word count, total first-person usage, and negative emotion). During the LIWC analysis of a document, every word is compared with dictionaries of up to 74 dimensions across these categories. 
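The dictionary-based scoring just described can be sketched in a few lines of Python. The two-category dictionary below is a made-up illustration, not the validated LIWC lexicon, and the tokenization is deliberately simplistic:

```python
# Toy sketch of LIWC-style dictionary scoring (illustrative only; the real
# LIWC program uses validated dictionaries built from ~2000 reviewed words).
from collections import Counter

# Hypothetical category vocabularies; word-to-category assignments are
# invented for this example.
CATEGORIES = {
    "negative_emotion": {"sad", "angry", "afraid", "worried"},
    "social": {"friend", "family", "talk", "share"},
}

def score_text(text: str) -> dict:
    """Return the proportion of words in the text falling into each category."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    counts = Counter()
    for w in words:
        for category, vocab in CATEGORIES.items():
            if w in vocab:
                counts[category] += 1
    # Proportions are computed against the total word count, as LIWC does.
    return {category: counts[category] / len(words) for category in CATEGORIES}

scores = score_text("I felt sad and worried until I could talk with a friend")
```

Here both categories score 2 out of 12 words; the real software additionally handles word stems, multi-category membership, and dozens of dimensions.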
LIWC calculates the proportion of words falling into different categories, ranging from emotional words to words about social context [16]. LIWC has been validated for content and construct validity [11,17]. Interrater reliability discrimination of categories has been found to range from 86% to 100% depending on the dimension being assessed, supporting content validity. Content Analysis All five studies included content analyses as all or part of the approach. Content analysis is a systematic technique for developing categories into which data are sorted based on explicit rules [18]. Content analysis allows for the examination and quantification of social media content, such as original posts (text, images, and videos) and engagement with such posts (likes, comments, and shares). Content analysis has previously been used to evaluate discussions of health-related topics on social media [19,20]. Content analysis may produce more objective data on social media behavior than, for example, self-reports or interviews, which may be vulnerable to social desirability or recall bias. Content analysis approaches allow for the testing of hypotheses through the development of a deductive codebook based on theory or clinical guidelines. Content analysis also allows for the exploratory evaluation of novel phenomena through the inclusion of inductive codes. Furthermore, the content analysis of publicly available social media data is often granted exempt status by institutional review boards, as was the case with each of these projects. This research method allowed adolescents to conduct research on an accelerated timeline. Background As of February 26, 2021, the COVID-19 pandemic was associated with more than 2.5 million deaths globally, with 508,127 confirmed deaths in the United States alone [21]. Nonpharmaceutical interventions have been used to curb the spread of the virus. 
Social distancing measures (physical distancing, quarantines, and remote work or school) have been found to be one of the most effective methods for reducing COVID-19 transmission [22], and in compliance with these measures, many communications have shifted to a virtual format. In turn, social media may play a significant role in communication regarding the risks associated with COVID-19 and social distancing measures. In the early days of the first outbreak (January 31, 2020, to February 2, 2020), social media use among Chinese citizens aged 18 years or older was correlated with a 22.6% increase in anxiety and a 48.3% increase in depression [23]. Before the pandemic, anxiety and depression were not uncommon among adolescents, with 16.5% of US youth aged 6-17 years experiencing a mental health condition in 2016 [24]. In 2020, 98.1% of US adolescents reported compliance with social distancing [25]. Adolescents may be at a unique risk for mental health challenges because of a combination of social media use, prepandemic rates of anxiety and depression, and reduced social contact. The World Health Organization (WHO) declared COVID-19 a global pandemic on March 11, 2020. By April 2, 2020, the WHO launched their #HealthyAtHome campaign, followed by advice on the use of face masks by April 6, 2020. All these announcements were communicated in a tweet from the WHO [26]. Objectives Currently, associations among social media use, mental health, and the language surrounding COVID-19-related posts remain unclear. The shifting state of the institutional response to COVID-19 may be reflected in the broader discussion of the pandemic on Twitter. Thus, changes in information found on social media may be associated with related shifts in mental health. In light of this, our study aims to explore the independent and co-occurring mentions of social distancing and mental health on Twitter between March and April 2020, as well as the content and language featured in these posts. 
Study Design We conducted an exploratory content analysis and linguistic analysis of Twitter. Our social media unit of analysis was the tweet, which we defined as a single post created by a user account containing between 1 and 280 characters of text. This study was exempt from human subjects review by the University of Wisconsin-Madison Institutional Review Board. Search and Sampling Strategy We sought to obtain a representative sample of COVID-19-related tweets from March 2020 to April 2020. We manually sampled posts from the Top category of a custom Twitter search for #COVID19 from both months. We selected the first 50 tweets from the list for each month. Twitter's proprietary sorting algorithm governs the retrieval of tweets via this mechanism. This design sought to replicate how adolescents might encounter COVID-19-related tweets available to the general public. Tweets favored by the Twitter algorithm are given more exposure on the platform and thus are most relevant to the average adolescent Twitter user. Social Media Inclusion Criteria Sampled tweets were included for analysis if they contained English language text and the hashtag #COVID19. If either of these criteria were not met, the corresponding tweet was excluded from the analysis. Similarly, duplicate tweets were eliminated from the sample set. Measures Our study examined the following three categories of data: descriptive profile data, post content, and LIWC scores. Descriptive Profile Data Descriptive profile data for each tweet included its post date, number of likes, number of account followers, and account verification status. These data points offer context regarding the size of the audience and the types of accounts posting the tweets. Verified accounts usually belong to users of public interest, ranging from celebrities to institutions such as the WHO and accredited professionals, including epidemiologists. 
Post Content Post content data consisted of the multimedia attached to the tweet in the form of a hyperlink, image, or video, as well as references to social distancing and mental health. Multimedia data were recorded to provide context for social media engagement (interaction with posts through likes, comments, and other means), and by extension, its weighting by the algorithm. In previous research, social media users reported that they would preferably engage with posts containing an image (68%), a video (50%), and a hyperlink (16%) [27]. Thus, greater multimedia presence indicates more relevance to typical adolescent users because of the relationship between high social media engagement and Twitter's algorithm. References to social distancing and mental health were based on keywords established in the codebook, which are defined in Table 1; tweets containing one or more keywords from both the social distancing and mental health categories were coded as Both. The LIWC Program We used the most recent version of the LIWC program, which is a text analysis program described in the Introduction section of this compilation [28]. LIWC has previously been used in studies on news media coverage of cyberbullying [29], gender differences in pediatric residency personal statements [30], and linguistic convergence among friend groups [31]. Output variables, referred to as LIWC scores in this paper, represent the frequency of keyword occurrence. Each of these numerical scores can be compared between datasets to illustrate the relative trends in the written materials. The only LIWC score that differs from this design is emotional tone, which is evaluated as a percentile between 0% and 100% [28]. Sample keywords from LIWC dictionaries are listed in Table 2. Our LIWC dictionaries of interest were anger and sadness (emotions), anxiety and risk (perceptions), and first-person singular and first-person plural (pronouns).
The emotions and perceptions dictionaries correspond to several relevant components of adolescent mental health (eg, symptoms of stress, anxiety, and depression), whereas the singular and plural pronoun dictionaries provide insight into the framing of tweets. All these factors help inform our understanding of the possible mental health implications for adolescent users. Data Collection Procedures Data collection began with Twitter's Advanced search under the Search filters section of the website. This type of search allows a user account to filter tweets using hashtags and dates. #COVID19 was entered into the hashtag field, with March 1, 2020, as the start date and March 31, 2020, as the end date for March. For April, we set the date parameters to April 1, 2020, for the start date and April 30, 2020, for the end date. Age-related filters were not available, so our selected tweets were produced by accounts associated with organizations and individuals of various ages. Tweet text was copied verbatim into a spreadsheet before the number of the tweet's likes, the number of account followers, and account verification status were recorded. Tweet text was analyzed using the LIWC program to determine LIWC scores. We did not collect any personally identifiable data, such as full names or Twitter handles. Data were collected between July 22, 2020, and July 26, 2020. Analyses Data were separated by month to represent March 2020 and April 2020 to interpret our findings through comparisons. Descriptive statistics were calculated and included means and SDs for tweet likes and followers, percentages for account verification, and percentages for multimedia. A chi-square test was used to assess the relationship between the month and the proportion of mentions of social distancing, mental health, and both or none. Independent t tests were conducted to compare LIWC scores between tweets posted in March and April.
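The two tests named above are standard and can be sketched in plain Python (the study itself ran them in STATA). The helper functions below compute only the test statistics, not p values, and the input tables and score lists would be the month-by-category counts and per-tweet LIWC scores.

```python
import math
import statistics

def chi_square_stat(table):
    """Pearson chi-square statistic for a two-way contingency table
    (rows = months, columns = mention categories)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

def t_stat(a, b):
    """Pooled-variance independent-samples t statistic for two groups
    of per-tweet LIWC scores."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * statistics.variance(a) +
              (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        pooled * (1 / na + 1 / nb))
```

A table whose rows have identical proportions yields a statistic of 0; larger values indicate a stronger association between month and mention category, which is then compared against the chi-square distribution to obtain a p value.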
Statistical analyses were conducted using the software STATA 15.1 (StataCorp LLC), and statistical significance was set at P<.05. Descriptive Profile Data We identified a sample of 100 tweets associated with the COVID-19 pandemic, with 50 sourced from March 2020 and another 50 sourced from April 2020. The March tweets averaged 586. LIWC Results An independent-samples t test showed that March 2020 tweets had significantly more singular (I) pronouns (mean 2.23, SD 4.32) than April 2020 tweets (mean 0.77, SD 1.66; t 98 =2.24; P=.03). All other results were not statistically significant (P>.05). The complete LIWC scores and t test results are listed in Table 2. Principal Findings We found that references to social distancing increased, whereas references to mental health and both categories decreased between March and April. In addition, first-person singular pronoun usage decreased significantly during the same timeframe. On the basis of these findings, we drew cautious inferences regarding adolescent mental health amid the COVID-19 pandemic. Social Media Discourse Our results suggest that social distancing may have become a more salient topic of social media discourse as the pandemic progressed. In the early months of the crisis, fears surrounding the coronavirus remained relatively consistent, supported by the lack of significant differences in anxiety and risk scores between the March and April data sets. In addition, social distancing remained prominent in both months, whereas references to mental health decreased from March to April. These findings may provide some insight into adolescent mental health amid the COVID-19 pandemic. Previous studies have shown that adolescents are especially sensitive to changes in social stimuli, particularly in the reduction of interaction with peer groups [32]. References to social distancing may emphasize this unfulfilled need.
Adolescents who increasingly come into contact with social media content involving social distancing may be at an elevated risk of mental health consequences. Individualism Versus Collectivism The decrease in the prevalence of I pronouns suggests less frequent use of individualistic language as the pandemic progressed. However, our findings do not support an increase in the prevalence of we pronouns. One possible explanation is a shifting focus to more factual and objective content phrased in the third person (eg, public health statements). In the future, larger studies may be able to detect trends in language to examine changes in individualistic or collectivist sentiment over time or across groups. Collectivism emphasizes the priorities of the group over the individual [33], which may reduce the frequency with which individual mental health is referenced. For this reason, fewer references to mental health may indicate a societal paradigm shift away from the perspective of individuals. The collectivist perspective is associated with positive mental wellness under certain circumstances. In a study, cultural collectivism was correlated with a reduction in suicidal ideation among grieving women [32]. In contrast with the isolation associated with social distancing measures, a collectivist outlook may have a positive effect on adolescent mental health. Limitations A few considerations must be taken into account when interpreting the results of our study. The breadth of our content analysis provides insight into macrolevel social trends, although our sample size was small. For a few of our LIWC measures, statistical significance may have been achieved in a larger sample. Similarly, selected posts were prioritized by the Twitter algorithm rather than being produced or shared by the adolescent population.
With these caveats in mind, our data can be used to draw cautious inferences regarding specific population segments, although they are not a perfect metric of individual attitudes and perceptions. In the future, surveys or focus groups can be used to corroborate the content analysis findings. In addition, the web-based or offline divide may have played a role in our study. Opinions expressed on the web can differ from personal beliefs offline. Interviews with adolescents would provide an understanding of offline perceptions of social distancing and mental health. Conclusions In light of our study, clinicians should consider the type of content being consumed on social media when addressing adolescents' mental health during the COVID-19 pandemic. Mass media sources (television and newspaper stories) have the capacity to shape public responses to crises. Previous research has demonstrated that as few as 4.6% of these sources express empathy in response to crises [34]. Mass media messaging is related to social media, as Twitter serves as a platform that forwards these sources to a wider audience. When discussing mental health challenges, clinicians should ask their adolescent patients about the tone of the media they consume and how it affects their mood and thought processes. These considerations will be especially relevant in treating the long-term mental health consequences of the pandemic. Future studies should investigate the offline mental health implications of COVID-19 on social media using more targeted methods, such as interviews and focus groups. These studies can be regularly administered for a timeframe longer than 2 months to better track the evolution of the adolescent mental health response to COVID-19. Opioid Use Among Adolescents In 2018, almost 70,000 people died from drug overdose [35]. Two out of three overdose deaths involved opioids such as prescription opioids, heroin, or synthetic opioids (eg, fentanyl). 
A large number of opioid users are adolescents [36]. Research suggests that 5.5% of those aged 17 years endorse opioid misuse [37]. Investigating opioid-related content on social media platforms commonly used by adolescents has the potential to reveal patterns of opioid abuse at a national scale, understand the opinions of adolescents and young adults, and provide insight to support prevention and treatment [38]. Social Media and Opiate Use Despite the large number of adolescents who misuse opioids [36], little is known about how social media influences adolescents' use or misuse of opiates. One study found an association between a participant tweeting about opioids and offline opioid overdoses [39]. Furthermore, previous research has shown that engagement with alcohol-related and e-cigarette-related social media is associated with more offline use of these substances [40,41]. Therefore, it is important to understand the messages that adolescents view on social media regarding the risky behavior of using opiates, as these messages may predict their behavior. Reddit is a forum-based social media platform in which subcommunities, or subreddits, are built based on people's interests [42]. A study on regular news consumers found that about half of Reddit users were young adults aged between 18 and 29 years [43]. To date, few empirical studies have discussed opioid use in Reddit communities. A previous study evaluated posts from anonymous and nonanonymous users in an opioid-related Reddit thread [44]. This study found that nonanonymous users were more likely to use words related to the past than anonymous users, who may have felt more comfortable discussing present actions. This study supports the usefulness of applying linguistic analysis to Reddit posts in an effort to understand opioid users. However, this study did not examine differences between pro- and antiopioid Reddit posts, which may further reveal attitudes among opioid users.
This Study Little is known about how Reddit users post about opioids in web-based communities, the degree to which engagement occurs, and the themes present in pro- versus antiopioid posts. The aim of this study is to conduct a content analysis evaluating engagement and linguistic elements of pro- and antiopioid use posts on Reddit. Study Design We conducted a content analysis using the natural language processing of publicly available Reddit posts. This study was exempt from human subjects review by the University of Wisconsin-Madison Institutional Review Board. Reddit Post Selection We identified a sample of Reddit posts by searching for opioid on Reddit and clicking on the first subreddit, which also had the largest membership. This selection approach allowed us to replicate how an adolescent might naturally look for information about opioid use on Reddit. Posts within this subreddit were sorted by most recent, and the first 100 posts were pulled for analysis. Posts were included for analysis if they discussed opioid use and took a positive or negative stance regarding opioid use. Neutral posts were excluded from the analysis. To determine whether posts were pro- or antiopioid use, we developed a codebook based on previous research [45]. Pro-opioid posts mentioned usage of drugs, questions about usage and sourcing, and addiction without a stated intent or desire to recover. Antiopioid posts mentioned seeking help for recovery, withdrawal, and sobriety. One investigator (SG) categorized posts as positive or negative using a deductive approach. A second investigator coded a 19.61% (20/102) subsample of both Reddit (n=10) and Twitter (n=10) posts to calculate the interrater agreement.
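Percentage agreement of the kind used to check double coding can be computed in a few lines of Python. The codes below are hypothetical labels, not the study's actual data.

```python
def percent_agreement(coder_a, coder_b):
    """Share of items (as a percentage) that two coders labeled identically."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same items")
    matches = sum(x == y for x, y in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Hypothetical codes for 10 posts (1 = pro-opioid, 0 = antiopioid);
# the two coders disagree on exactly one post.
first = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
second = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
agreement = percent_agreement(first, second)  # 90.0
```

Percentage agreement is simple to compute and report, though it does not correct for chance agreement the way statistics such as Cohen's kappa do.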
Interrater agreement was calculated as the percentage of Reddit and Twitter posts referencing each category, ranging between 85% (17/20) and 100% (20/20), with a mean of 92.7% (SD 0.05). The LIWC Program Pro- and antiopioid posts were evaluated using the LIWC software described in the Introduction section of this compilation [28]. LIWC software scans text and calculates the percentage of text words that fall within a given dictionary. LIWC has been used in previous studies to evaluate publicly available text and understand differences in content and tone [29]. LIWC Dictionaries The selected LIWC variables were aligned with the specific aims of our study. Previous research has shown that the results of opiate use include changes in physical and mental health, isolation from family and friends, and financial difficulties [46]. Therefore, we included variables related to tone, positive emotion, negative emotion, health, relationships, and focus (past, present, and future). The three focus variables show whether the text describes past events (used, ago), present events (now, today), or future events (will, soon). These variables were selected based on their relevance to opioid use [47]. Table 3 shows the full list of LIWC variables, with example words coded for those variables. Engagement We collected data on how individuals may interact with pro- or antiopioid posts by logging the number of upvotes and comments on each post. The upvote feature on the Reddit platform is typically used as an indication of the support or approval of a post. Data Collection Procedure Data were collected on Reddit posts from July 11 to 12, 2020. Interrater agreement for positive posts was 80% (8/10). All posts were copied and pasted verbatim into Google Sheets. Analysis Descriptive statistics were used to understand the engagement with pro- and antiopioid posts. A two-tailed t test was used to compare LIWC scores for each variable between pro- and antiopioid Reddit posts.
Statistical significance was set at P<.05. Sample Characteristics At the time of the analysis, this subreddit had approximately 116,000 members. A total of 100 posts were included in the analysis. All posts were dated between July 8, 2020, and July 26, 2020. In this sample, 65% (65/100) of the posts were pro-opioid, and 35% (35/100) were antiopioid. An example pro-opioid post was the following: "Hey guys, was just wondering based on your experiences, what is the best oxycodone brand." An example antiopioid post was the following: LIWC Results The scores on the LIWC variable Focus Present were significantly higher for antiopioid posts (mean 15.58, SD 9.81) than for pro-opioid posts (mean 11.19, SD 5.79; t 98 =2.82; P=.02). The Focus Future variable was also significantly higher for antiopioid posts (mean 1.63, SD 1.86) than for pro-opioid posts (mean 0.83, SD 0.96; t 98 =2.84; P=.02). There were no other statistically significant differences between pro- and antiopioid posts (Table 4). Principal Findings The purpose of this study was to examine the differences between pro- and antiopioid posts on Reddit. We found that there were more comments and upvotes on antiopioid posts than on pro-opioid posts. Antiopioid posts were more likely to contain linguistic elements related to the present and future than pro-opioid posts. Our results show that there may be differences in how antiopioid and pro-opioid Reddit users relate to the present and the future. Our results further show that although pro-opioid posts may be more common in some Reddit communities, antiopioid users who post in these communities are likely to experience support (in the form of upvotes) and engagement (in the form of comments). Most posts on the Reddit opioid community were pro-opioid. However, there were more upvotes for antiopioid content than for pro-opioid content.
As the upvote feature is typically used to indicate support or approval, this finding suggests that users who wish to discuss the negative effects of opioid use may find support (in the form of upvotes) on Reddit. More frequently upvoted posts are sorted to the top by the Reddit algorithm. Therefore, antiopioid posts, which received more upvotes in this study, may also be seen by more users of this subreddit. There were also fewer comments under pro-opioid posts than antiopioid posts. Comments are a primary way for users to interact with each other in the community and may indicate engagement, support, and discussion. Our results suggest that people who share pro-opioid posts may be met with less conversation than those who share antiopioid posts. However, in this study, only the number and not the content of comments were evaluated. Future studies should examine whether comments on antiopioid posts are positive or negative. Of the 16 LIWC variables measured, only the Focus Present and Focus Future variables were significantly different between pro- and antiopioid use posts. This finding is somewhat consistent with a previous study that found differences in time focus between anonymous and nonanonymous Reddit users [44]. That study found that nonanonymous users used more words related to the past, which may be due to concerns about nonanonymously disclosing current or planned activities. In this study, a difference in focus was also identified. Words such as today, now, and will were more frequent in antiopioid posts than pro-opioid posts. It may be that antiopioid Reddit users are more likely to comment on the negative present or anticipated future outcomes associated with opioid use. No other variables were significantly different between pro-opioid and antiopioid posts. This finding was surprising, as the variables were chosen based on outcomes of opioid use, and social media displays of drug use have been shown to reflect real-life use [41].
We had expected that antiopioid posts, for example, would describe negative emotions (scared) or describe the topics of family, friends, or financial problems [46]. It may be that pro-opioid users chose to focus on other aspects of drug use. For example, past research on e-cigarette use shows that Reddit conversations are dominated by conversations about how to access these products [48]. In contrast, antiopioid Reddit users may have chosen not to share negative outcomes related to health, family, or work. Limitations This study's external validity was limited by the small sample size and the relatively short length of the period for which the posts were collected. Reddit does not provide demographics for its users, so we cannot confirm that the study participants were adolescents or young adults; however, research suggests that Reddit is more popular among young adults than any other demographic [43]. Therefore, young people are likely to encounter the pro- and antiopioid content described in this study. Conclusions This study was one of the first to examine the engagement and linguistic elements of opioid use in a popular subreddit. This study showed that there is still much that needs to be researched about how individuals engage in web-based communities about opiates. The findings of this study indicate that, except for verb tense, word usage in posts about opiates may not distinguish pro- from antiopioid Reddit users. Future studies should investigate how members of opioid discussion groups on Reddit and other social media interact with opioid-related content, including through likes or upvotes and in the comment sections. If Reddit's algorithm can identify youths who misuse opioids, Reddit can suggest resources and assist its users in finding help. Self-esteem in Adolescence Adolescence is the stage between 11 and 21 years of age, and it is a critical period for the development of self-esteem [49].
Self-esteem is defined by how one positively or negatively views oneself, and low self-esteem is associated with depression, anxiety, and suicidal ideation [50][51][52][53]. In adolescence, self-esteem is especially vulnerable to protective or harmful influences [54]. Therefore, it is important to understand influences on adolescent self-esteem, and social media may present one such influence. Previous studies have found that social media use is associated with lower self-esteem [55,56]. Social Media Influencers and Self-esteem One way in which social media influences adolescents' self-esteem is through social media influencers. Social media influencers are content creators with a large social media following. Individuals with low self-esteem are more likely to make upward comparisons between themselves and influencers [54,57]. Beauty-related YouTube content creators are influencers that share aspects of their personality, esthetics, and preferences, allowing viewers to relate to them [57]. Specifically, beauty-related YouTube content creators post videos that review and promote makeup products while also entertaining their viewers through sharing tutorials and trends [57,58]. Such content may potentially impact adolescent viewers' self-esteem, given the focus on physical appearance and that YouTube is used by 77% of adolescent internet users [59]. These youths may seek advice on purchases, enjoy video entertainment, or watch videos to relax [60]. In addition, the popularity of beauty-related videos on YouTube has increased drastically as the annual viewership increased from 59 billion in 2016 to 169 billion in 2018 [58]. This large viewership suggests that these videos have the potential to reach and influence a wide audience, which may include many adolescents. Thus, it is important to understand the influence of viewing beauty-related YouTube content creators' videos on adolescents' self-esteem. 
This Study Previous studies have examined content shared by beauty-related YouTube content creators and identified methods creators use to gain followers and influence their viewers [57]. Beauty-related YouTube videos may affect the self-esteem of viewers, and it is possible that these viewers describe the effects on their self-esteem in comments on these videos. However, previous studies have shown that Instagram users may post positively toned comments while self-reporting negative effects on their body image [61]. Thus, the presence of self-esteem-related discussions in beauty-related YouTube content creators' video comment sections and their tone remain unclear. The aim of this study was to examine the expression of and overlap between self-esteem and tone in comments on beauty-related YouTube videos. Study Design We conducted a content analysis of publicly available YouTube comments, which is described in the Introduction section of this compilation. This method allowed for the objective evaluation of conversations in beauty-related YouTube video comment sections. This study was exempt from human subjects review by the University of Wisconsin-Madison Institutional Review Board. Search Strategy We identified a sample of 6 beauty-related YouTube content creators whose content focuses on makeup. To identify beauty-related YouTube content creators likely to be viewed by adolescents, we used the search term most popular makeup youtubers in the Google search engine. The first four relevant results were reviewed, and beauty-related YouTube content creators who were represented on more than one website were selected [62]. Beauty-related YouTube content creators were included if, within their top 16 most recently posted videos, at least two videos included makeup or palette in the title of the video. For each beauty-related YouTube content creator, the two most recent videos including the word makeup or palette in the title were included in this study.
Comment Inclusion Criteria The 20 most recent comments for each beauty-related YouTube content creator's selected video were evaluated if they contained more than a username, were written in English, and included words (not just emojis). For comments that received responses, only the initial comments were included, not the responses. Positive and Negative Self-esteem We developed a codebook adapted from the Rosenberg Self-Esteem questionnaire to evaluate the presence of positive and negative self-esteem references [63]. The 10 items from the questionnaire were used to define positive and negative self-esteem references. The questionnaire included both positively and negatively framed statements. Each item was used for both positive self-esteem and negative self-esteem codes by reframing the statements. The positive self-esteem code was created using positively framed statements and by converting negative questionnaire items to positive statements. Similarly, the negative self-esteem category was developed using negatively framed statements and by converting the positive questionnaire items to negative statements. Table 5 shows the full definitions of positive and negative self-esteem. 
Table 5. Codebook definitions of self-esteem and tone.
Positive self-esteem: satisfied with self; thinking they are good; having good qualities; feeling they are just like or better than others; feeling proud of themself; feeling important; feeling worthy; respecting themself; feeling successful; and positive about self or life.
Negative self-esteem: unsatisfied with self; thinking they are no good at all; feeling they have little/no good qualities; feeling nothing compared with others; having little to be proud of; feeling useless; feeling like a person of little worth; disrespectful to self; thinking they are a failure; and negative attitude toward self.
Positive tone: good, pleasant, happy, joyful, contented, impressed, love, gratitude, inspiration, fabulous; and suggestions or constructive suggestions offered by commenters.
Negative tone: bad, unpleasant, sad, afraid, angry, dislike, out of control, boring, disgusted, hate, terrible, canceled; and corrections offered by commenters.
Positive and Negative Tone We developed a codebook adapted from the Positive and Negative Experience Scale to evaluate tone [64]. The initial codebook was pilot tested and refined using the study data. The initial codebook defined positive tone using the following adjectives from the scale: good, pleasant, happy, joyful, and contented. Additional phrases suggesting a positive tone were added to the codebook after pilot testing, including adjectives (impressed, love, gratitude, inspiration, and fabulous) and constructive suggestions offered by commenters. Negative tone was also defined using adjectives from the Scale of Positive and Negative Experience, including bad, unpleasant, sad, afraid, angry, and dislike. Additional phrases suggesting a negative tone were added to the codebook after pilot testing, including adjectives (out of control, boring, disgusted, hate, terrible, and canceled) and corrections offered by commenters.
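Although this study applied its codebooks by hand, the keyword lookup the codebook describes can be sketched as follows. The word sets are abbreviated, hypothetical samples of the tone adjectives listed above; a human coder would also weigh context (eg, whether a remark is a constructive suggestion), which a bare keyword match cannot capture.

```python
# Abbreviated, illustrative samples of the tone keywords; the study's
# actual coding was done manually with the full codebook.
POSITIVE_TONE = {"good", "pleasant", "happy", "joyful", "love", "impressed"}
NEGATIVE_TONE = {"bad", "unpleasant", "sad", "angry", "hate", "terrible"}

def code_tone(comment):
    """Return the set of tone codes a comment matches (may be empty or both)."""
    words = set(comment.lower().split())
    codes = set()
    if words & POSITIVE_TONE:
        codes.add("positive")
    if words & NEGATIVE_TONE:
        codes.add("negative")
    return codes
```

A comment such as "love the look but the lighting was terrible" would be coded as both positive and negative, mirroring the small fraction of comments in this study that carried both tones.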
Audience Engagement We recorded the number of likes, dislikes, comments, and views on each video, as well as the follower count for each beauty-related YouTube content creator's channel. Data Collection Procedures Data were collected from each video, and comments were coded in October 2020. A second investigator coded a 10% (24/240) subsample of the comments. Interrater agreement was calculated for each codebook measure as the percentage of YouTube comments coded the same between the two investigators. Interrater agreement ranged between 88% (21/24) and 100% (24/24), with a mean of 95.8% (SD 0.06). Analysis Descriptive statistics were calculated to assess the prevalence of references to self-esteem and tone. Positive and Negative Self-esteem A total of 240 comments were evaluated. Among these comments, 5.4% (13/240) reported positive self-esteem. An example of a positive self-esteem comment was: I'm not really a brand...yet.But, I do want to take over the world of vintage clothing. You're an inspiration. [beauty-related YouTube content creator's name] Of all the comments, 6.3% (15/240) referenced negative self-esteem. For example, a person commented: I would be terrible friends with them, I don't like make-up on me, I'm broke, I'm not popular, and I don't keep up with trends. No comments referenced both positive and negative self-esteem. Of the comments, 88.3% (212/240) did not refer to self-esteem. Positive and Negative Tone Among the comments evaluated, 65.4% (157/240) exhibited a positive tone. An example of a comment with a positive tone included, "HE LOOKS GOOD WTF." Of all comments, 17.5% (42/240) displayed a negative tone. For example, a person said, "You look like You did an absolutely terrible kawaii Makeup." Of the comments evaluated, 3.3% (4/240) exhibited both positive and negative tone. For example, a commenter wrote: As a victim of the One Chip Challenge...DON'T DO IT! hahahahaha!!! 
I am a wuss and totally had a panic attack...I did get some pretty funny footage though LMAO. Of all comments, 18.7% (45/240) displayed neither positive nor negative tone. Principal Findings Through this study, we found that few comments on beauty-related YouTube videos referenced self-esteem. Among the self-esteem comments, a similar number of positive self-esteem and negative self-esteem comments were observed. Most comments on the sampled beauty-related videos showed a positive tone. The first main finding was that 11.7% (28/240) of comments referenced self-esteem. A possible explanation for this finding is that many feel uncomfortable discussing self-esteem on the web. Previous research suggests that social media users feel pressured to only post what would make them, as an individual, look good [65], which may influence viewers to avoid commenting negatively about their self-esteem. It is also possible that the viewer avoids commenting on their self-esteem if the rest of the comment section appears to include few comments referencing others' self-esteem. Finally, it may be that few viewers experienced changes in their self-esteem. Our second main finding was that there were a similar number of comments that referenced positive and negative self-esteem. One possible reason for this finding is that beauty-related YouTube videos can impact a viewer's self-esteem both positively and negatively. This finding aligns with a previous study that suggested that YouTube videos can both hurt and help the viewer's self-esteem based on their understanding and relatability to the beauty-related YouTube content creators [66]. However, another possibility is that YouTube users comment on their existing self-esteem levels when viewing YouTube content and may not be influenced by the videos themselves. Therefore, it is not clear what prompts positive and negative self-esteem discussions within the comments of beauty-related YouTube content creator videos. 
An additional finding was that most comments displayed a positive tone. This aligns with previous studies that suggest that beauty-related YouTube content creators build their platform to spread positivity [67] and that viewers are more likely to comment with supportive material [68]. A positive tone may indicate a positive viewing experience and positive influences on self-esteem; however, research conducted on Instagram suggests that users may comment positively on a post but self-report negative effects on their own body image [61]. Thus, it is possible that a positive tone may not reflect a positive change in the viewer's self-esteem. In addition, it is possible that the positive tone conveyed in the comment section may influence the viewer to avoid commenting negatively or commenting on their self-esteem. Limitations One limitation of this study was that self-reported self-esteem was not measured. Future work should examine the self-reported self-esteem of adolescents who view beauty-related videos on YouTube. In addition, those who comment on YouTube videos may not be representative of all viewers of the video, and it is not clear how findings generalize across all video viewers. In an attempt to review a broader set of viewer comments, multiple beauty-related YouTube content creators were included in this study. Similarly, the ages of the commenters are unknown; thus, it is possible that there are comments not shared by adolescent viewers. Nonadolescent viewers may be less vulnerable to negative self-esteem as a result of watching beauty-related YouTube content creators. However, as there are a large number of adolescent viewers on YouTube [69], it is possible that many comments were posted by adolescents. Furthermore, we excluded emojis from our coding processes as emojis may have multiple interpretations based on their context and may be difficult to code objectively. Conclusions Despite these limitations, our study has several important implications. 
Given the high frequency of comments on beauty-related YouTube videos with a positive tone, coupled with the low frequency of self-esteem disclosures, it is possible that adolescents would feel uncomfortable discussing negative effects on their self-esteem in this web-based environment. Future studies should examine avenues in which adolescents discuss their self-esteem in connection with beauty-related videos on YouTube. Future research should also explore effective approaches for parents to engage in conversations with their children about beauty-related videos on YouTube. Furthermore, some self-esteem references in comments suggest the possibility that these videos could affect adolescents' self-esteem. Further studies should investigate the effects of beauty-related YouTube videos on viewers' self-reported self-esteem. Social Media and Depression The most prominent mental illness affecting adolescents is depression, with 4%-5% of adolescents impacted worldwide each year [70]. If depression symptoms are not treated, they can lead to recurrence later in life [70]. The most extreme cases of depression can also lead to suicide, a major cause of death among adolescents [71,72]. Oftentimes, people with depression will post their feelings and inner thoughts on social media platforms, giving others a chance to respond and support them [73]. A study of Facebook accounts found that participants who showed symptoms of depression on the web also self-reported symptoms of depression [74]. Previous research has evaluated how depression is discussed on social media and has investigated ways to identify users with depression [75]. Computer algorithms can detect depression-related content in posts on social media with an accuracy of more than 90% [75]. Being able to consistently identify symptoms of depression on social media could lead to earlier treatment for adolescents. 
Reddit Versus Twitter Social media platforms, such as Reddit and Twitter, provide spaces for adolescents to discuss the triumphs and tribulations of their daily lives on the web, including personal information about their school, family, and friends [76]. Reddit is a social media platform divided into distinct communities to foster discussions among users [42]. These communities, often called subreddits, are created and moderated by users rather than the Reddit platform itself. Users can post in specific subreddits, and others can respond by continuing the thread. Previous research has suggested that an anonymous platform such as Reddit encourages the discussion of more emotional or sensitive information [77]. In contrast to Reddit, Twitter is a platform designed around short statements made by users to convey information in real time [78]. A study from 2017 found that users who tweet about mental health do so because Twitter provides a sense of community, a safe space for expression, and a means of coping [79]. Users also use Twitter to spread awareness. Little is known about the differences in how adolescents talk about their depression on an anonymous, forum-based platform such as Reddit compared with a personal, newsfeed platform such as Twitter. Therefore, this study aims to compare depression posts on Reddit, a forum-based platform, and Twitter, a newsfeed platform, to understand how users talk about depression on the web. Study Design We conducted a content analysis of publicly available Reddit and Twitter posts on October 28, 2020, to determine the number of posts that showed symptoms of depression and other themes related to youth. Reddit posts were defined as the first posts in the depression subreddit r/depression. Twitter posts were defined as posts that used the hashtag #depression. This study was exempt from human subjects review by the University of Wisconsin-Madison Institutional Review Board. 
Search Strategy We identified a national sample of publicly available Reddit and Twitter posts. Reddit posts were taken directly from the subreddit r/depression. Posts were evaluated under the new tab to view a wide range of recent posts, rather than only the most popular. Twitter posts were collected using the search term #depression. The latest tab was used on Twitter to ensure a variation of posts. Using the search terms r/depression and #depression, we sought to replicate search strategies adolescents would use when discussing depression on social media. Post Inclusion Criteria Reddit posts were included if they were the most recent posts in r/depression. Twitter posts were included if they were some of the most recent posts made with #depression. On Twitter, posts were included if they contained content in addition to hashtags. Posts with pictures were considered on both platforms. However, posts written in a language other than English, with videos, and made by accounts that stated that they were bots were not considered. Duplicate posts or identical posts published by either the same or different accounts were excluded from the sample. Measures An investigator categorized Reddit and Twitter posts into major topic categories using deductive and inductive approaches. The investigator reviewed each post to determine if symptoms of depression and youth topics were discussed. Open coding was used to generate codes for promotional posts and medical topics, as these themes emerged while coding these posts. A codebook that used the Diagnostic and Statistical Manual of Mental Disorders (DSM)-IV criteria from a previous study was adapted to determine whether posts contained symptoms of depression [74]. A full list of codes and their prevalence is shown in Table 6. Each post included in the study was coded for the following categories: at least one symptom of depression; at least one youth topic; promotional posts; and medical topics. 
A symptom of depression was defined by mentioning at least one of the nine DSM-IV criteria, including categories such as depressed mood, insomnia or hypersomnia, and recurrent thoughts of death. A complete list of DSM-IV categories can be found in Table 6. Youth topic variables included specific references to school, family, and social activities; they were included to discern whether adolescents were likely posting using #depression and r/depression, as adolescents might discuss such topics. Promotional posts included posts endorsing any material or content. Examples included promotions for blogs on exercise, books on veterans, and seminars on meditation. Finally, medical topics included medical references, such as medications, hospital visits, and therapy. An investigator coded the samples of the Reddit and Twitter posts. A second investigator coded a 10% (n=40) subsample of both Reddit (n=20) and Twitter (n=20) posts to calculate the interrater agreement. Data Collection Procedures Data were collected on each Reddit and Twitter feed on October 28, 2020. Interrater agreement was calculated as the percentage of Reddit and Twitter posts coded the same for each category, ranging between 85% (34/40) and 100% (40/40), with a mean of 92.7% (SD 0.05). Analysis Descriptive statistics were calculated for all measures. A chi-square test was used to analyze the relationship between each platform and the following main categories: the proportion of mentions of at least one symptom of depression; at least one youth topic; promotional content; and medical topics. Statistical significance was set at P<.05. The average DSM-IV scores were calculated by summing the total number of DSM-IV symptoms that appeared on each platform and dividing by the number of posts collected from that platform. Overview A total of 53 posts were selected from the subreddit r/depression. 
At the time of coding (October 2020), this subreddit had approximately 700,000 members. A total of 49 tweets were selected from the social media platform Twitter. Symptoms of Depression and Youth Codes We found that 92% (49/53) of Reddit posts and 24% (12/49) of Twitter posts mentioned at least one symptom of depression, and that 62% (33/53) of Reddit posts and 10% (5/49) of Twitter posts referenced at least one youth topic. Promotional and Medical Codes We found that promotional content appeared in 0% (0/53) of Reddit posts and 53% (26/49) of Twitter posts (P<.001). We also found that the percentage of posts referencing medical topics was 28% (15/53) on Reddit and 18% (9/49) on Twitter (P=.24). For the full results describing the codes and frequency, see Table 6. Principal Findings This study compared Reddit and Twitter posts discussing depression. Results suggested that the discussion of depression was significantly more common on Reddit than on Twitter, with 92% (49/53) of Reddit posts and 24% (12/49) of tweets mentioning at least one symptom of depression. Furthermore, Reddit posts received an average DSM-IV score of 2.4, whereas Twitter posts received an average DSM-IV score of 0.2. This difference in expression fits with existing literature that adolescents are more or less willing to reveal certain emotions based on the type of social media platform they are using [80]. Our results also suggested that Reddit posts may be more likely to be posted by adolescents, with 62% (33/53) of posts on Reddit referencing and discussing at least one youth code compared with 10% (5/49) of posts on Twitter mentioning such subjects. Another important finding is that Twitter posts were significantly more likely to contain promotional content than Reddit posts. None of the Reddit posts investigated contained promotional content. In comparison, 53% (26/49) of tweets contained promotional content. Our findings may help understand the potential differences in discussions of depression on Reddit and Twitter. 
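The chi-square comparisons above can be checked directly from the published counts. The sketch below is illustrative rather than the authors' analysis code: `chi2_2x2` is a hypothetical helper computing a plain Pearson chi-square (df=1, no continuity correction) using only the standard library.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]],
    df = 1, no continuity correction."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # upper tail of chi-square with df=1
    return chi2, p

# Depression symptoms: Reddit 49/53 vs Twitter 12/49 -> reported P < .001
print(chi2_2x2(49, 4, 12, 37))
# Promotional content: Reddit 0/53 vs Twitter 26/49 -> reported P < .001
print(chi2_2x2(0, 53, 26, 23))
# Medical topics: Reddit 15/53 vs Twitter 9/49 -> reported P = .24
print(chi2_2x2(15, 38, 9, 40))
```

Run on the reported tables, the first two comparisons give P values far below .001, and the medical-topics table is consistent with the reported P=.24.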
These findings suggest that users are more likely to elaborate on their experiences with depression when posting on the subreddit r/depression rather than with #depression on Twitter. This may be because of the anonymity of r/depression [77]. Reddit communities are also moderated by users who volunteer as moderators. Moderating powers include the ability to remove posts, comments, and users from the community. It is possible that the heavy moderation of r/depression by users compared with Twitter encourages others to be more open with their posts on Reddit [42]. A moderated Reddit thread could become a safe place for adolescents to discuss mental health and depression. They might find it comforting that a moderator would be able to remove hurtful or harmful posts or users from the subreddit. Both Reddit and Twitter have features to report posts or messages; however, having a moderator could take the burden of reporting off the shoulders of adolescents and onto a third party. Another possibility could be that subreddit moderators can remove posts that do not align with the subreddit's mission, such as promotional content, resulting in longer posts in which users can expand on their experiences with depression. The finding that more Reddit posts referenced at least one youth code compared with Twitter could suggest that there is a wider audience of adolescents using r/depression to discuss mental health than Twitter. This finding is consistent with the demographics of each platform. In recent years, Reddit's age demographic has been trending younger, with 21% of users aged between 18 and 24 years [81]. Conversely, Twitter's age demographic has been trending older, with 28% aged between 35 and 49 years [82]. 
Finally, the larger percentage of posts with promotional content on Twitter than on Reddit suggests that tweets with symptoms of depression are diluted by tweets on other topics such as promotions for blogs on exercise or seminars on meditation, sometimes unrelated to clinical depression. The promotional nature of the content on Twitter could be why references to youth-based topics such as school, family, and social activity were more common on Reddit than on Twitter. There is a chance that the promotional use of #depression could deter adolescents experiencing symptoms of depression from discussing their experiences on Twitter. Limitations The sample of Reddit and Twitter posts included in this study was small. However, patterns in the data were still identified. Another limitation was the timeframe for the collection of posts. All posts were collected during the COVID-19 pandemic, when stay-at-home orders and isolation could have influenced the data. Although this limitation might skew the number of posts that show at least one symptom of depression, it should affect both platforms equally without affecting the overall comparison. Conclusions Future studies should consider investigating other moderated communities for users experiencing depression, such as Facebook groups. Future studies should also consider comparing Instagram, a photo-based social media platform, with Reddit, a forum-style platform, which could yield important information on how the inclusion of photographs makes users more or less likely to discuss mental health topics. Comparisons should also be made with platforms that target foreign audiences, as both Reddit and Twitter have a majority of US users. Further understanding of how adolescents discuss depression on the web could help inform guidelines for social media support communities. 
Although computer algorithms could be used to detect posts about depression, supporting web-based communities where human detection is taking place could provide adolescents with the resources they need to get help. It could also help clinicians understand youth experiences with depression, what web-based resources they access for support, and lead to earlier treatments. Climate Change and Adolescent Mental Health Climate change can result in mental illnesses, such as depression, anxiety, and posttraumatic stress disorder [83]. Climate change can impact mental health directly through exposure to traumas, such as forest fires and floods, and indirectly, as people hear news about climate change and its associated deaths [83,84]. The effects of climate change will disproportionately affect the youth and young people. Some of the decisions and mistakes made by previous generations (some necessary, some not) have, in turn, led to the ill effects of climate change. It is the younger and future generations who will have to answer for the mistakes of the past and live a greater proportion of their lives in a steadily degrading environment [85]. However, young people also have the ability to advocate against climate change. Climate Change Advocacy on Social Media Young people can advocate against climate change through new technological developments. Currently, the younger generation uses social media as a context to promote climate change activism [86,87]. Research shows that climate change advocacy on social media promotes knowledge and behavioral changes around climate change [88]. Social media personalizes the issue of climate change by adding photographs, conveying information through friends, and catering to users' preferences for receiving information [88]. However, it is unknown whether climate change advocacy on social media is associated with depression or anxiety, as viewers are exposed to negative news and messages. 
Furthermore, research on climate change advocacy on social media disproportionately focuses on Twitter, excluding popular sites such as Instagram [89]. It is crucial to study Instagram because it is one of the most popular social media platforms among adolescents [90]. This Study Previous studies have confirmed the negative mental health effects of climate change and studied climate change activism on social media. However, it is unknown whether positively versus negatively framed posts receive more engagement in the form of likes, comments, or followers on the posters' accounts. It is also unknown whether comment sections of positively and negatively framed posts include sentiments consistent with depression, anxiety, or positive affect. The aim of this study is to evaluate whether positively or negatively framed climate change Instagram posts receive more engagement and whether their comment sections demonstrate sentiments consistent with depression, anxiety, and/or positive affect. Study Design We conducted a content analysis of publicly available, positively and negatively framed climate change Instagram posts to understand whether sentiments consistent with depression and anxiety as well as sentiments consistent with positive affect could be found in their comment sections. This study was exempt from human subjects review by the University of Wisconsin-Madison Institutional Review Board. We selected Instagram because many adolescents actively use it. Although we could not confirm that commenters were adolescents, 37.1% of Instagram users are aged between 13 and 24 years, and Instagram is the most widely used social media platform among American teens [90,91]. Search Strategy and Post Eligibility Our goal was to identify a sample of positive and negative posts on Instagram to evaluate the comments on these posts. We chose to search for posts using a hashtag page instead of a single Instagram profile so that we could see a variety of posts from different users. 
We performed a series of pilot tests to identify the most popular hashtags. From these pilot tests, we selected the hashtag #climatechange, as it is highly popular and contains a mix of positively and negatively framed posts on climate change. Next, we identified a sample of both positively and negatively framed posts by opening Instagram and searching using the hashtag #climatechange. Posts were sorted by the most popular feature on Instagram and considered for inclusion in chronological order, with the most recent viewed first to replicate how an adolescent might come across such posts. We defined a negatively framed post as a post that expressed a negative event or consequence of climate change (Figure 1). Inversely, a positively framed post was not one that advocated against climate change but rather gave good news about climate change, provided a solution, or was explicitly uplifting (Figure 2). Each selected post had to be identified as positive or negative by two investigators (OT and AJ) to be eligible for comment evaluation. Posts for which two investigators did not reach the same verdict, neutral posts, and duplicate posts were not included. Posts with fewer than five comments were excluded. The top five comments below each post, regardless of whether they earned a code, were included in the analyses. Audience Engagement We documented the number of likes and comments on each post. We also documented the number of followers for each account that shared a post. Depression and Anxiety We developed a codebook adapted from the DSM-V criteria to define criteria for sentiments consistent with depression and anxiety [92]. Emojis that expressed sentiments consistent with depression or anxiety were added inductively. For example, a crying emoji reflected the sad or tearful sentiment within depression, and the nervous emoji reflected the worry or feeling on edge sentiment of anxiety (see Table 7 for the full codebook). 
Positive Affect We developed a measure of positive affect based on sentiments consistent with positive affect items from the Positive and Negative Affect Schedule-Expanded Form [93]. Emojis that expressed sentiments consistent with positive affect were added inductively. For example, the applause emoji was considered to reflect the enthusiasm sentiment within positive affect (see Table 7 for the full codebook). Procedures Data were collected for each post from October 29, 2020, to November 2, 2020. An investigator (OT) coded all comments using a combination of deductive and inductive approaches. A second investigator (AJ) coded a 10% (10/100) subsample of comments. Interrater agreement was calculated as the percentage of comments on which both coders agreed (9/10, 90% of the subsample). Analysis Two-sided t tests were used to understand differences in the number of likes and comments between positively and negatively framed posts and the number of followers of accounts that shared positively versus negatively framed posts. Statistical significance was set at P<.05. Descriptive statistics were used to describe the prevalence of mental health sentiments in the comments. Mental Health Sentiments in Comments Of the 100 comments, 17 (17%) referenced sentiments consistent with depression. An example comment was, "Yes we are and Shame on us." Furthermore, 5% (5/100) referenced sentiments consistent with anxiety; an example was, "You're [sic] weekly posts always make me feel so much better about my eco anxiety." This comment was also coded for positive affect. Finally, 32% (32/100) referenced sentiments consistent with positive affect, and an example was, "Good work ‼" Table 8 shows the frequency of depression, anxiety, and positive affect sentiments in comments under positively and negatively framed climate change posts. Principal Findings This study examined positively and negatively framed climate change posts and their comment sections on Instagram. 
Overall, both positively and negatively framed posts received thousands of likes and an average of >50 comments. There were no statistically significant differences between the number of likes or comments on positively versus negatively framed posts. This finding suggests that climate change posts can reach thousands of adolescents and positively influence young people, raising awareness without negative mental health consequences. First, we found that all posts received thousands of likes and comments, and the accounts posting content all had thousands of followers. However, none of these numbers were significantly different between positively and negatively framed climate change posts. This finding suggests that climate change posts have a broad reach and receive engagement regardless of whether their message is positive or negative. Previous research shows that positive climate change messages may inspire action, whereas negative messages can result in passivity or helplessness [94,95]. Instagram should continue to allow both positively and negatively framed climate change posts on their platforms, as long as they are credible news statements, but should censor videos of murder, self-harm, violence, and other triggering content. As there was an equal engagement in positive and negative posts in this study, climate change advocates may wish to focus on sharing positively framed climate change posts. Second, out of 100 comments, we found that 17 referenced sentiments consistent with depression, and 5 were related to anxiety. There was a total of 18 references to depression and anxiety under negatively framed posts and a total of 4 references under positively framed posts. This finding is consistent with previous research that noted a negative association between climate change and mental health [83,84]. However, only 22 sentiments were consistent with depression and anxiety in this study. 
We hypothesize that viewers may have been hesitant to share their true feelings on the internet, possibly because Instagram is not anonymous, and people may be nervous about expressing what seems like political views. Finally, we found that 32% (32/100) of the comments referenced sentiments consistent with positive affect. This is a promising finding, as it suggests that climate change posts may inspire positive affect in some viewers. It is also possible that individuals are more likely to express positive opinions on social media, perhaps to seem more likable. Limitations We were limited by the small sample size of only 20 posts and 100 comments. In a larger sample, using a greater number of posts, it is likely that our numbers would be more generalizable. Furthermore, although we looked at the most recent comments on negatively and positively framed posts, responses to comments were not investigated. Another interesting challenge we encountered was the use of emojis in comments. It was surprising to see the number of comments that strictly used emojis, and interpretation may have been more subjective than the interpretation of words. Finally, we focused only on #climatechange. Searching for a larger variety of hashtags may have yielded different results. Conclusions This study has implications for the display of mental health sentiments in comment sections. Many comments showed sentiments of positive affect. Future research should aim to understand whether exposure to positively framed climate change posts positively affects mood and activism. Although this study focused on climate change advocacy on Instagram, future studies should examine climate change advocacy on other social media platforms. The high engagement in climate change advocacy Instagram pages in this study and the many positive messages may show the eagerness of the adolescent generation to approach climate change issues. 
Instagram should therefore be used as a platform to raise awareness on climate change issues as adolescents commonly use it, and this population is imperative for addressing climate change.
v3-fos-license
2021-06-09T07:01:56.855Z
2021-04-22T00:00:00.000
235371599
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "bafd07747ce4b320f7db2f2027346b0c8fe64659", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1534", "s2fieldsofstudy": [ "Medicine" ], "sha1": "bafd07747ce4b320f7db2f2027346b0c8fe64659", "year": 2021 }
pes2o/s2orc
Hypofractionation and Stereotactic Body Radiation Therapy in Inoperable Locally Advanced Non-small Cell Lung Cancer Background and Aim: Radiotherapy (RT) plays a key role in the control of locally advanced non-small cell lung cancer (LA-NSCLC). Throughout the years, different doses and fractionations of RT have been used in an attempt to optimize the results. Recently, special interest has been given to hypofractionation (hypoRT) and stereotactic body radiation therapy (SBRT). HypoRT is a relatively widespread treatment, although the accompanying level of evidence is limited. For its part, SBRT has been used specially to overdose specific areas of the disease as a boost after radiochemotherapy. In both cases, the study of how to integrate these RT tools with chemotherapy and immunotherapy is fundamental. In addition, the 2020 COVID-19 pandemic situation has sparked increased interest in hypofractionated treatments. In this review, we analyze the role of SBRT and hypoRT in the management of LA-NSCLC in accordance with current scientific evidence. Relevance for Patients: The objective of this article is to introduce professionals to the role that hypoRT and SBRT can play in the treatment of LA-NSCLC to offer the best treatment to their patients. Introduction The Pacific study showed how the administration of durvalumab after concurrent chemoradiotherapy (CRT) significantly improved survival in patients with unresectable Stage III non-small cell lung carcinoma (NSCLC). It was an important milestone in the treatment of this disease [1]. Studies have shown that the most common type of recurrence in LA-NSCLC is distant [2,3]. Despite this, locoregional control is still essential to increase survival, so it is of great interest to optimize RT treatments [4]. Over the years, different strategies have been used to improve the results with RT by altering doses and fractionations and by applying the latest technological advances. 
In recent times, special emphasis has been placed on the use of hypofractionated treatments. The progressive knowledge and expansion of stereotactic body RT (SBRT) has prompted the study of its application in this disease. In addition, with the impact of the COVID-19 pandemic, different scientific societies gave recommendations aimed at shortening treatment times by paying special attention to hypofractionated treatments. Finally, the emergence of immunotherapy has meant a true paradigm shift in the management of lung cancer. The best way to combine RT with these new treatments remains to be defined. In this context, in this publication, we intend to review the status of hypoRT and SBRT in the changing scenario of locally advanced and unresectable lung cancer. The updated American Society for Radiation Oncology guidelines were also reviewed. The following search strategy was performed on the PubMed database in July 2020: (lung AND (non-small cell OR NSCLC) NOT metast*[TI]) AND (stage III OR locally advanced OR locally-advanced) AND (radiation therapy OR radiotherapy) AND (hypofract* OR hyperfract* OR adaptive RT OR SBRT) NOT case reports. Clinical studies, clinical trials, meta-analyses, and reviews were selected. References were also analyzed. We classified studies as hyperfractionation (hyperRT), hypofractionation (hypoRT), adaptive therapy, or SBRT. State of the art: Current management, conventional dose escalation, and hyperfractionation Treatment of Stage III N0 or N1 NSCLC consists of surgery followed by adjuvant treatment. When there is mediastinal involvement, the role of surgery is controversial. A 2007 Phase III study of the European Organization for Research and Treatment of Cancer (EORTC) randomized N2 patients to CT followed by either RT or surgery and showed no differences between both treatments [5]. Regarding neoadjuvant treatment, there is evidence of the benefit of neoadjuvant CT in Stage III [6]. 
Considering neoadjuvant CRT, a Phase III intergroup study randomized patients to CRT followed by surgery versus exclusive CRT. Although no differences in survival were demonstrated between the two groups, an unplanned analysis found that those patients treated with lobectomy had a longer survival when compared to patients treated with CRT [7]. In any case, the role of surgery in N2 patients is still subject to debate, in part due to the great diversity of this group of patients. There is evidence to suggest that the prognosis after surgery depends on the burden of mediastinal involvement [8]. In this way, in cases with few affected lymph nodes or with a single station involved, surgery could be indicated. In unresectable situations, the evidence favors treatment with RT at doses of 60 Gy-70 Gy in 2 Gy fractions concomitant with CT and followed by durvalumab, based on the results of the Pacific trial [1,9,10]. In this study, patients received concurrent CRT and the administration or not of adjuvant durvalumab was randomized. The latest data reported at ESMO 2020 maintained impressive results with increased follow-up. In the experimental arm with durvalumab, median overall survival (OS) was 47.5 months and 4-year OS was 49.6%, while in the placebo arm, the median survival and 4-year OS were 29.1 months and 36.3%, respectively [10]. In any case, in Europe, durvalumab is only approved by the European Medicines Agency (EMA) for patients with PD-L1 >1% [11]. In non-randomized studies, RT dose escalation has shown improvements in survival and local control (LC) both with exclusive RT and when administered with CT [12][13][14][15][16]. OS in these studies ranged from 19 to 24 months. In an analysis by Machtay et al., 1356 patients drawn from Radiation Therapy Oncology Group (RTOG) clinical trials were retrospectively analyzed. This study showed that a 1% increase in biologically effective dose (BED) was associated with a 3% relative improvement in LC and a 4% relative improvement in OS [17].
Based on these encouraging results, the Phase III RTOG 0617 trial explored the benefits of increasing the dose of RT by randomizing patients to receive 74 Gy versus 60 Gy. This trial was closed prematurely due to the low survival in the experimental arm. Median OS in the control arm was 28.7 months (the same as in the control arm of the Pacific study) and the 2-year OS was 58%, while in the experimental arm, median OS and 2-year OS were 20.3 months and 45%, respectively. Surprisingly, LC and locoregional control were also worse in the experimental arm. It was concluded that increasing the dose to 74 Gy provided no benefits [18]. There were several confounding factors in the experimental arm that may have contributed to these results, such as the increased doses received by the heart, less adherence to RT protocols, worse compliance with the CT schemes, and more toxicity-related deaths. Finally, the use of intensity-modulated RT (IMRT) and the volume of patients treated by the participating centers may have influenced the results [19][20][21][22]. Although the main clinical guidelines propose conventionally fractionated RT as the standard of care [9,23,24], the controversy about the benefit of dose escalation remains open. In 2016, Ramroth et al. published a meta-analysis examining studies in which patients were randomized to different RT schemes, including regimens with split courses, hypoRT, hyperRT, or dose escalation with conventional fractionation. The results showed that increased BED administered without CT improved survival [25]. On the other hand, it is known that prolongation of the overall treatment time has a negative impact on the control of the disease as a consequence of accelerated repopulation of tumor cells, which could also have contributed to the results of the RTOG 0617 trial. For each day that RT treatment is prolonged beyond 6 weeks, the chances of survival are reduced by 1.6% [20,21,26,29].
Based on this, other ways to elevate BED without prolonging the overall treatment time have been explored. Hyperfractionation HyperRT consists of increasing the total number of fractions by delivering two (or more) fractions per day at a reduced dose per fraction, and it has shown uneven results. Treatment with continuous hyperfractionated accelerated RT (CHART), administering 54 Gy in 12 days in three 1.5 Gy fractions per day, showed an increase in survival [30]. However, the same scheme without treatments on the weekends (CHARTWEL) showed no benefit over conventional treatment [31]. Similarly, an Australian study on hyperRT and the RTOG 9410 study comparing various treatment schemes including hyperRT did not show benefit from this strategy [32,33]. A meta-analysis of 2000 patients from 10 trials comparing conventional fractionation with hyperRT showed a survival benefit with hyperRT of 3.8% and 2.5% at 3 and 5 years, respectively, but with a significant increase in acute esophagitis (9% vs. 19%) [34]. The diffusion and widespread implementation of hyperRT has been hampered by logistical barriers. Furthermore, it is not well known how these treatments should be supplemented with CT. Finally, hyperRT has been consistently observed to increase esophageal toxicity. Therefore, although the 2019 National Institute for Health and Care Excellence (NICE) guidelines propose hyperRT as an alternative to conventional fractionation in patients who are not candidates for CRT, its use remains limited [35,36]. Hypofractionation Nowadays, there is greater interest in exploring fractionations aimed at shortening the overall treatment time by increasing the dose per fraction and thus increasing the BED. It has been observed that there is a moderate linear relationship between BED and OS when using hypoRT, so that for each 1 Gy increase in BED, there would be a benefit of 0.36-0.7% in OS [37].
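The fractionation schemes discussed throughout this review can be compared through the linear-quadratic model. The sketch below uses the standard textbook BED formula (not code from the review itself; α/β = 10 for tumor is the value the authors quote later for SBRT) to compute the BED of some of the schedules mentioned:

```python
def bed(total_dose_gy, n_fractions, alpha_beta=10.0):
    """Biologically effective dose under the linear-quadratic model:
    BED = n*d * (1 + d / (alpha/beta)), where d is the dose per fraction."""
    d = total_dose_gy / n_fractions
    return total_dose_gy * (1 + d / alpha_beta)

# Conventional fractionation: 60 Gy in 30 fractions of 2 Gy
print(bed(60, 30))   # 72.0 Gy10
# SOCCAR-style moderate hypoRT: 55 Gy in 20 fractions of 2.75 Gy
print(bed(55, 20))   # 70.125 Gy10, delivered in 4 weeks instead of 6
# Aggressive hypoRT: 60 Gy in 15 fractions of 4 Gy
print(bed(60, 15))   # 84.0 Gy10
```

Note how 60 Gy in 15 fractions raises the BED well above that of the conventional 60 Gy in 30 fractions while halving the overall treatment time, which is precisely the rationale for hypofractionation described above.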
Most published studies use moderate hypoRT with doses of 2.5-3 Gy per fraction. Hypofractionation without concurrent CT It is important to highlight that patients in the Pacific and RTOG 0617 trials had a performance status (PS) of 0 or 1 and a median age of 64 years, which undoubtedly has implications for the results. In routine clinical practice, patients' PS is frequently ≥2, the median age at diagnosis is 70 years, and patients often have comorbidities that jeopardize the treatment strategy. In fact, it is estimated that between 55% and 59% of patients are not candidates for concurrent CRT [38,39]. Some experiences have shown good results with acceptable toxicity using hypoRT in this group of patients [40]. A retrospective Spanish study analyzed the results of treating patients who were not candidates for CRT with hypoRT (66 Gy in 24 fractions). In the Stage III subgroup, 2-year OS was 37.5% and no Grade 3 toxicity was reported [41]. Din et al. published a retrospective analysis of 609 patients treated with hypoRT (55 Gy in 20 fractions of 2.75 Gy) without concurrent CT (28% received sequential CT). Median OS and 2-year OS for locally advanced disease were 20 months and 40%, respectively, with no Grade 3 or 4 toxicities [42]. Another retrospective analysis by Amini et al., with 300 patients, compared hypoRT (45 Gy in 15 fractions of 3 Gy) with conventional fractionation in patients who were not candidates for CRT. They concluded that hypoRT was an acceptable treatment option for poor-PS patients, with results similar to those achieved with conventional RT [43]. Recently, an analysis of the National Cancer Database (NCDB) comparing hypoRT versus conventionally fractionated RT in patients treated with RT alone has been published. A total of 6490 patients were evaluated, 5378 with conventional RT (median dose of 66 Gy in 2 Gy fractions) and 1112 with hypoRT (median dose of 58.5 Gy in 2.5 Gy fractions).
HypoRT was associated with older age, lower BED, academic facility type, higher T-stage, and lower N-stage. After adjusting for these covariates, no difference in OS was observed between the two groups [44]. In a review by Kaster et al. analyzing studies of hypoRT without concurrent CT, the reported weighted mean acute esophageal and pulmonary toxicities were 1.9% and 1.2%, respectively. Late esophageal and pulmonary toxicities were 1.4% and 6.9%, respectively. Two-year survival ranged from 18% to 42%. Toxicities were defined as events that could be scored as Grade 3 or more [37]. In a 2019 review, Parisi et al. analyzed up to 29 studies published since 2007. In hypoRT treatments without CT, the dose ranged from 45 to 85.5 Gy. Acute grade ≥3 esophageal toxicity was 0-15% and acute pulmonary toxicity was 0-44%. The late esophageal and pulmonary toxicity found was 0-16% and 0-47%, respectively, with pulmonary toxicity most commonly being ≤Grade 3. Two-year OS ranged from 22% to 68.7% [45]. More aggressive hypoRT schemes have been employed, although, as with SBRT, the central location of lesions is of particular concern. In a Phase I dose escalation trial, 55 patients with poor PS were treated with doses of 50, 55, or 60 Gy in 15 fractions. It was concluded that precision hypoRT with 60 Gy in 15 fractions is generally well tolerated [46]. The same group developed a Phase III trial in which patients with a PS ≥2, not candidates for concurrent CRT, were randomly assigned to either 60 Gy in 30 fractions or 60 Gy in 15 fractions. In an interim analysis, the median OS was 11.5 months with no intergroup differences. The authors concluded that hypoRT may be an alternative for these groups of patients [47]. A retrospective analysis published in 2020 of 42 patients with a Karnofsky index ≥70% treated with doses of 60 Gy in 15 fractions (mostly with sequential CT) showed a 2-year survival of 69%, with 14% esophageal toxicity ≥G3 and 14% pulmonary toxicity ≥G3 [48].
With the emergence of immunotherapy in the treatment of LA-NSCLC, the use of hypoRT treatments is controversial. On the one hand, there is concern about a cumulative risk of severe pneumonitis. On the other hand, the immunomodulatory role of hypoRT could increase the effectiveness of treatments. In this sense, the Phase II TRADE-hypo trial will investigate two radiation regimens combining durvalumab therapy with either conventionally fractionated RT or hypoRT (55 Gy in 20 fractions) in patients who are not candidates for CT. Another Phase II study (DUART trial) will explore the role of durvalumab after RT in patients who are not candidates for CT, also including hypofractionated schemes. Both studies are recruiting [49,50]. During the COVID-19 pandemic, many recommendations emerged from different societies with the common message that hypofractionated schedules without concurrent CRT are appropriate [51][52][53]. In the practical guideline issued by ESTRO, there is consensus to recommend hypoRT alone or with sequential CT. The most recommended regimens are 60 Gy in 20 fractions, 60-66 Gy in 24-30 fractions, and 55 Gy in 20 fractions [52]. As we can see, there is great heterogeneity in the doses and fractionation schemes published in the literature. In any case, and despite the difficulty in drawing conclusions about the efficacy of hypoRT in patients who are not candidates for concurrent CRT, these treatments are increasingly becoming part of routine clinical practice. Hypofractionation with concurrent CRT The Phase I trial called Alliance studied RT dose escalation using advanced RT techniques with weekly carboplatin-based concurrent CT. The daily fractionation was escalated from 2.22 Gy to a maximum of 3 Gy per fraction to a total fixed dose of 60 Gy over four planned cohorts. The maximum tolerated dose (MTD) was reached and defined as 60 Gy in 24 fractions of 2.5 Gy [45]. In another small Phase I study with 3DRT, 13 patients were treated with 3 Gy per fraction with concurrent CT.
The MTD was 69 Gy at 3 Gy per fraction, with no treatment-related deaths reported [55]. In the systematic review by Kaster et al., in the hypoRT with concurrent CT subgroup, the weighted mean acute esophageal and pulmonary toxicities were 14.9% and 7.9%, respectively. Weighted mean late esophageal and pulmonary toxicities were 16.6% and 12.2%, respectively. The 2-year OS ranged from 24% to 58% [37]. In the previously referenced review by Parisi et al., in the hypoRT treatments with concurrent CT, acute Grade 2 and 3 esophagitis ranged between 3% and 41.7% and acute pneumonitis ranged from 0 to 23%. Late esophageal and pulmonary toxicity ranged from 0 to 8.3% and from 0 to 47%, respectively. The 2-year survival was 38.6-68.7% [45]. These results reflect, with the limitations inherent in this type of study, an increase in toxicity in combined hypoRT-CT treatments. In 2014, the SOCCAR trial compared concurrent versus sequential hypoRT (55 Gy in 20 fractions of 2.75 Gy) and CT. Initially, it was planned as a Phase III study, but due to poor recruitment, it was restructured into a Phase II study. The main objective was to assess the tolerability of the treatment. The results showed low toxicity, with 9.3% and 8.2% Grade 3 esophageal toxicity in the concurrent and sequential arms, respectively. Two-year OS in the concurrent CRT arm was 50% versus 46% in the sequential arm. The median OS for concurrent versus sequential treatment was 24.3 and 18.4 months, respectively. In 2019, a retrospective analysis was published of 100 patients treated with hypoRT (55 Gy in 20 fractions) with 2 cycles of concurrent CT followed by 2 cycles of adjuvant CT with vinorelbine and cisplatin. The 2-year OS was 58%, higher than in the SOCCAR study, possibly due to the incorporation of advances in disease staging and modern RT techniques [56].
Although this scheme of hypoRT with concurrent CT has never been directly compared with conventional CRT, this has not prevented it from being the most widely employed RT fractionation in the UK, according to a survey of the most common practices in the treatment of NSCLC [36]. In fact, in the NICE guidelines for NSCLC updated in 2019, this scheme is presented as an alternative for radical treatments [35]. The EORTC 08972-22973 study randomized hypoRT (66 Gy in 24 fractions of 2.75 Gy) with concurrent CT (daily low-dose cisplatin) versus sequential CT (two cycles of cisplatin and gemcitabine). The study was discontinued prematurely due to poor recruitment, and no significant differences were seen between the two arms, possibly because the study was underpowered. The median OS and 2-year OS for the concurrent arm were 16.5 months and 39%, respectively. Acute G3 esophagitis was higher with concurrent than with sequential CRT (14% vs. 5%). In any case, this 2007 study employed elective nodal irradiation and old planning techniques [57,58]. This 66 Gy in 24 fractions scheme is still used in common clinical practice in some centers, incorporating new planning and delivery techniques and positron emission tomography (PET)-based nodal treatment. In a Phase II trial, the addition or not of cetuximab to concurrent CRT (66 Gy in 24 fractions and low-dose cisplatin) was randomized. The results were excellent independently of the administration of cetuximab, with a median OS of 31.5 months and a 2-year OS of 59.4%. On the other hand, it was observed that the dose to the esophagus, the PS, and the comorbidities of the patients influenced OS [50]. A retrospective analysis of 469 patients using the same scheme of hypoRT and low-dose CT was published in 2017. The authors found a significant association between heart dose and OS [60]. This shows the importance of selecting cases for hypoRT treatments with concomitant CT, as well as of optimizing RT techniques and adjusting CT treatments.
The use of 4DCT, IMRT, volumetric modulated arc therapy (VMAT), and advanced image-guided RT (IGRT) techniques such as cone-beam CT (CBCT) may be especially necessary when performing this type of treatment to try to minimize esophageal, cardiac, and pulmonary toxicity. The ESTRO recommendations for the COVID-19 pandemic argue against the use of hypoRT with concurrent CT [61]. However, the British group proposes the use of hypoRT with concurrent CT as per the SOCCAR protocol for selected cases [53]. This divergence reflects the limitations of the current evidence for giving firm recommendations in this regard. Going further, a recent systematic review has questioned the benefit of performing concurrent CRT treatments over sequential treatment when the dose is increased through fractionation modifications [62]. Personalized hypofractionated radiation therapy Typically, RT treatments use fixed doses of radiation for a particular disease. However, the possibility of individualizing dose escalation based on the tolerance limits of healthy organs has been explored. This has been called isotoxic radiation therapy [63]. Cannon et al. published a Phase I dose escalation study using isotoxic hypoRT without concurrent CT. In this study, patients were treated in 25 fractions, escalating the dose from 2.28 Gy to 3.42 Gy individually according to the risk of developing pneumonitis. The MTD was defined as 63.25 Gy in 25 fractions of 2.53 Gy, similar to other dose escalation studies. Late Grade 4-5 toxicities were attributable to damage to central and perihilar structures and were correlated with the dose to the proximal bronchial tree [64]. Once again, we see the importance of limiting the dose received by central structures. The importance of employing advanced RT techniques is evident if the dose is to be increased through hypofractionation.
The IDEAL-CRT trial evaluated dose escalation up to 73 Gy in 30 fractions over 6 weeks, with the dose escalation calculated on an individual patient basis according to either lung or esophageal radiation dose. Median OS was 37.5 months and 2-year OS was 62.9% [65,66]. The ADSCAN Phase II study, currently recruiting, includes different forms of fractionation for patients treated with CT and sequential RT, randomizing among five different RT schemes. The aim of this study is to find the most promising way to increase doses in order to subsequently develop a Phase III trial [67]. Other studies have used PET to adapt the treatment volume. In a Phase II study carried out by Kong et al., a PET was performed at 40-50 Gy to redefine the treatment target, and a hypoRT boost was administered to the observed residual disease. The threshold dose was defined as the dose above which the risk of Grade 2 pneumonitis exceeded 17.2% (approximately equivalent to a 20 Gy mean lung dose). In this study, the median dose administered to the tumor was 83 Gy in 30 fractions, and most patients received concurrent CT. LC at 2 years was 82% and median OS was 25 months [68]. The randomized Phase II PET-boost trial aimed to improve LC by boosting either the whole primary tumor (arm A) or the high FDG-uptake region inside the primary tumor (arm B). The boost dose was maximized by normal tissue constraints (isotoxic treatment). The results were presented at the ESTRO 2020 congress, showing a median total dose of 78 Gy for arm A and 84 Gy for arm B and a LC >90% at 1 year in both arms, although the trial did not reach its predefined sample size and many scans were not evaluable [69]. The RTOG 1106 is a Phase II study with a control arm (60 Gy in 30 fractions) and an experimental arm (21 fractions at 2.2 Gy plus 9 fractions applied to the residual disease seen on a PET, at doses between 2.2 and 3.8 Gy, without exceeding a mean lung dose of 20 Gy).
The experimental arm is a hypofractionated, adaptive, and isotoxic scheme [70]. The first results of this study were presented at the World Conference on Lung Cancer 2020. Adaptive RT numerically increased local and locoregional control, but these differences were not statistically significant. There were no differences in Grade 3 or worse toxicity, OS, progression-free survival, or lung cancer-specific survival between the treatment arms [71]. Although this type of treatment can make it possible to increase the dose to the disease, there is still no clear evidence of its clinical benefit. In the future, studies exploring this type of fractionation should include immunotherapy and consider customizing the RT dose based on radiosensitivity profiles [72]. SBRT Conventionally, wide margins have been used to compensate for the movement of thoracic lesions during the respiratory cycle, thus limiting the radiation dose that could be delivered. SBRT consists of the administration of high doses of radiation with high precision, with a narrow margin and a steep gradient to protect the surrounding healthy tissue. To carry this out, it is essential to have 4DCT images, to use strategies to compensate for movement during the respiratory cycle (such as dampening, gating, active breathing control, or tracking), and to have a good IGRT system during treatment. As an IGRT strategy, the use of CBCT and, more recently, 4D-CBCT is widely disseminated [56,57]. In this way, it has become feasible to administer radiation doses hitherto unimaginable, with limited toxicity [58]. It is of interest to note that some of the strategies used in SBRT to improve precision can similarly be used in hypoRT treatments to reduce possible toxicity, as previously noted. SBRT achieves LC in more than 90% of patients with Stage I NSCLC and increases survival when compared to conventional RT, becoming the alternative to surgery in inoperable patients [73,74].
From this experience, the role of SBRT in LA-NSCLC has become an area of great interest. The objective of SBRT in this context would be to optimize LC of the disease while trying to minimize toxicity. However, to this day, it is still unclear how best to integrate SBRT with established treatments. Several publications have proposed different treatment schemes, paying special attention to safety. The University of Kentucky group published a tolerability study in which patients were treated with CRT (60 Gy in 30 fractions) and then SBRT was administered to the residual tumor (<5 cm) observed on PET. The SBRT dose was 20 Gy in two fractions, or 19.5 Gy in three fractions for central lesions, always above 100 Gy BED (alpha/beta = 10). Thirty-seven patients were treated, with a follow-up of 25.5 months. Median OS was 25.2 months and LC was 78%. Grade 3 pneumonitis occurred in 13.5%. Two patients died of fatal hemorrhage, although no dosimetric differences were seen. In any case, the authors proposed 175 Gy (BED10) to the pulmonary artery as an estimated MTD for combined CRT and SBRT boost planning in future studies. The authors concluded that it is a safe treatment resulting in good LC, with no increased risk of toxicity above that of standard radiation therapy [75,76]. Other groups have developed Phase I dose escalation studies with an SBRT boost following CRT, but with small numbers of patients and limited follow-up. Hepel et al. carried out a Phase I dose escalation study after CRT with 50.4 Gy, exploring four dose levels: 16 Gy in two fractions, 20 Gy in two fractions, 24 Gy in two fractions, and 28 Gy in two fractions. One-year locoregional control was 100% with boost doses ≥24 Gy. One patient died of bronchopulmonary hemorrhage associated with the dose applied to the proximal bronchovascular tree. Based on their results, the authors recommend limiting the doses applied to the bronchovascular tree or increasing the number of fractions [77].
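The ">100 Gy BED (alpha/beta = 10)" figure for the Kentucky schedules can be checked by summing the BED10 of the CRT course and the boost with the standard linear-quadratic formula (a simplification that ignores timing and volume-overlap effects, shown here only as a plausibility check, not the authors' calculation):

```python
def bed(total_dose_gy, n_fractions, alpha_beta=10.0):
    # Linear-quadratic BED = n*d * (1 + d / (alpha/beta))
    d = total_dose_gy / n_fractions
    return total_dose_gy * (1 + d / alpha_beta)

crt = bed(60, 30)              # 60 Gy in 30 fractions -> 72.0 Gy10
peripheral_boost = bed(20, 2)  # 20 Gy in 2 fractions  -> 40.0 Gy10
central_boost = bed(19.5, 3)   # 19.5 Gy in 3 fractions -> ~32.2 Gy10

print(crt + peripheral_boost)  # 112.0 Gy10 (> 100)
print(crt + central_boost)     # ~104.2 Gy10 (> 100)
```

Both combinations land above the 100 Gy10 threshold cited in the study, illustrating why even a modest two- or three-fraction SBRT boost adds substantial biological dose on top of conventional CRT.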
In a previous dosimetric pilot study, the authors proposed dose limits for the organs at risk and showed that it is feasible to respect these constraints, but there were no predetermined limits for the bronchovascular tree [78]. Higgins et al. analyzed four dose levels (18 Gy in two fractions, 20 Gy in two fractions, 30 Gy in five fractions, and 35 Gy in five fractions) following concurrent CRT at 44 Gy. Two patients developed Grade 5 toxicities (a tracheoesophageal fistula and one case of hemoptysis). The authors concluded that although 30 Gy in five fractions was the calculated maximum tolerated dose, 20 Gy in two fractions may be a reasonable dose, as no Grade 5 toxicities were observed with this scheme [79]. In a retrospective study, 16 patients received conventional CRT to a median dose of 50.40 Gy followed by an SBRT boost with an average dose of 25 Gy given over five fractions. One-year LC was 76%, 25% developed Grade 2 acute pneumonitis, and no Grade 5 toxicities were observed [80]. In Table 1, we collect and summarize these studies. The methodological limitations of these studies do not allow drawing conclusions, but they open the door to another possible therapeutic application for SBRT. The use of SBRT has also been examined as part of multimodal treatment with CRT and surgery in locally advanced disease. The currently ongoing Linnearre I is a Phase I feasibility study in which SBRT is given as neoadjuvant treatment in N0-N1 patients [81]. In another prospective study, published in 2018 by Singh et al., SBRT was employed as an adjuvant treatment after surgery and before adjuvant CRT. A 10 Gy single fraction was applied to the affected nodal stations, or in cases of positive margins, with good results in terms of LC [82]. The Phase I hybrid study proposes combining CT with hypoRT to the lymph node disease (24 fractions of 2.42 Gy) and SBRT (3 fractions of 18 Gy) to the primary tumor, which must be peripheral and smaller than 5 cm.
In this way, the aim is to increase the dose to the primary tumor without increasing the dose to central structures, to avoid major toxicity. This study has completed recruitment [83]. Following the publication of the Pacific study, interest has grown in how to combine SBRT with CRT and durvalumab treatment. The immunomodulatory role of SBRT makes it a particularly interesting tool in this context [84]. It is known that RT can induce an immune response that acts against the tumor by increasing immunogenic cell death and stimulating systemic immunity against the disease. However, this effect seems to be counteracted by the immunosuppressive capacity of the tumor microenvironment itself. The synergistic role that can be established between RT and drugs that reduce the immunosuppressive capacity of the tumor is currently under investigation. The effect that dose and fractionation may have on the immunomodulatory capacity of RT is also being investigated, although there is evidence that the immunogenic response is greater when high doses per fraction are used [85]. For this reason, it is of the utmost interest to try to combine immunotherapy with SBRT, which may contribute to improving LC of the lesion both through its direct role against the tumor and through its immunoregulatory contribution. In this context, a Phase II study consisting of the administration of CRT followed by durvalumab and a boost of 20 Gy in 2-3 fractions to the primary tumor has started recruitment [86]. This type of scheme is a model of how LA-NSCLC might be managed in the future. Conclusions HypoRT can shorten the treatment time, which may provide clinical benefit by reducing the repopulation of tumor cells during RT. In addition, it allows more efficient use of services. The efficacy of hypoRT regimens with or without CT should be contrasted in prospective studies designed for this purpose.
These studies should incorporate technological advances in the field of RT as well as immunotherapy, as they are already part of LA-NSCLC treatment. For its part, SBRT may increase LC in patients with LA-NSCLC with an acceptable safety profile, although the level of evidence is still poor. Given the immunomodulatory role of RT, and especially of SBRT, it is foreseeable that, in the coming years, new treatment schemes will be proposed that integrate hypoRT and boosts with high doses of RT per fraction together with immunotherapy. Undoubtedly, it is a research area full of possibilities that will bring great changes to the management of LA-NSCLC in the future.
Post-monsoon seasonal variation of prokaryotic diversity in solfataric soil from the North Sikkim hot spring The solfataric soil sediments of the hot springs of Sikkim located at Yume Samdung and Lachen valley were studied to decipher their bacterial diversity. The main aim here is to present a comparative study and generate baseline data on the post-monsoon seasonal variation between the months of October and December, analyzed through 16S rRNA V3-V4 amplicon sequencing. The results showed that there was not much variation at the phylum level in the month of October among the three hot springs, namely the New Yume Samdung (NYS), Old Yume Samdung (OYS), and Tarum (TAR) hot springs. The most abundant phylum was Firmicutes, followed by Proteobacteria, Actinobacteria, and Bacteroidetes. Similarly, in the month of December, Firmicutes, Proteobacteria, Actinobacteria, and Bacteroidetes were prevalent; however, the percent relative abundance of these phyla in December was relatively lower. Despite this decrease in percent abundance, it was interesting that relatively more phyla were found to contribute to the bacterial diversity in December. As at the phylum level, at the genus level there was not much variation among the prevalent genera of the three studied hot springs in both months. The major genera prevalent in both months across all three hot springs included Bacillus, Desulfotomaculum, Lactobacillus, and Paenibacillus. A similar trend was also seen at the genus level in that the relative abundance of the various genera was higher in October, but more genera were found to contribute to the bacterial diversity in December. A few distinct genera, such as Rhodopirellula and Blastopirellula, were more abundant in December.
From the results, it may be concluded that there is not much variation in the abundance and type of bacterial communities during the post-monsoon season between the months of October and December. However, it may be assumed that there is an accumulation or increase of bacterial communities during the winter (when the hot springs maintain a relatively higher temperature than their surroundings), which may favor a few mesophilic as well as more thermophilic communities. Introduction The most prodigious gift of nature, if anyone has to name one, has undoubtedly to be extreme environmental conditions. The word "extreme" has been coined by mankind because the abiotic parameters governing these niches or ecosystems are beyond human adaptive physiological capabilities. Our cognitive functioning and metabolomics cannot explain or survive the ruthlessness of nature's extreme ecosystems, be it on the basis of temperature, pH, salinity, or atmospheric pressure, among other factors. The defining feature of hot springs is their invaluable microbial communities, which have gained impetus in recent decades (Pedron et al. 2019). Hot spring microbiology is regarded as a hotspot of research in the arena of microbial ecology, as the study of life at extremes has challenged the scientific world to reconsider the adaptability and limitations of life (Schmid et al. 2020). Encountering the limitations of culture-dependent methods, new molecular strategies such as amplicon/shotgun sequencing, nanopore chip assays, and omics tools gave a leapfrog advantage for a better understanding of microbial diversity throughout the world (Rawat and Joshi 2019). Aiming at geomicrobiological features, the microbial community structure of different geothermal springs has been determined worldwide, such as in China (Guo et al. 2020), Japan (Martinez et al. 2019), South Africa, Colombia, the solfataric fields of Iceland, the Great Basin hot springs, and Yellowstone National Park (USA) (López-López et al. 2013;Urbieta et al. 2015).
The Indian sub-continent supposedly houses 400 hot springs distributed across seven geothermal provinces, and among them, only about 30 hot springs have been explored with respect to microbiological aspects (Poddar and Das 2018). Out of these 30 hot springs, studies of hot spring soil ecology and bacterial diversity analyzed through high-throughput sequencing (HTS) have been limited and cover only the hot springs of Manikarnan (Himachal Pradesh), Tapovan (Uttarakhand), Jakrem (Meghalaya), and Taptopani and Atri (Orissa). Most of the research has focused on hot spring water only, and so far, there are no studies on the seasonal variation of the microbial diversity in these geothermal systems. Hence, bacterial diversity analysis of hot springs encompassing both the soil and water components is of great significance, as these hot springs have been traditionally used for various balneotherapeutic purposes and recreational activities (Das et al. 2012). In the context of soil microbial diversity studies of the hot springs of Sikkim, there is a dearth of knowledge of their ecology and diversity. Here, we have tried to examine the hot spring soil bacterial ecology to understand its community structure through a culture-independent approach. An attempt was made to generate a first-ever report on the monthly variation in these diversity profiles during the post-monsoon season in the Sikkim Himalayas.

Sampling sites and sample collection

The geographical position of the coordinates and elevation (above mean sea level) of the hot springs were measured with the help of a GPSMAP 78S (Garmin, India). The hot springs located in the North Sikkim district were selected for the current study (Fig. 1). Three hot spring soil samples were chosen for the present study: NYS, OYS, and TAR. The samples were taken in the months of October and December 2019.
The solfataric soil sediments (1000 g) were aseptically pooled in triplicate from different sections encompassing the whole perimeter of the NYS, OYS, and TAR hot springs of the North Sikkim district in sterile sample containers (Wang et al. 2013). The samples were preserved in situ by storing them in a thermally insulated sampling box packed with ice gel bags and were then transported (temperature maintained at 4 °C) for DNA extraction and analysis.

Environmental DNA isolation and quantitative analysis

The environmental DNA (eDNA) was extracted using the NucleoSpin Soil Kit (MACHEREY-NAGEL GmbH and Co. KG, Duren, Germany) in accordance with the manufacturer's protocol. The eDNA extraction and the amplicon sequencing were done at Eurofins Pvt. Ltd., Bangalore. Quality of the DNA was checked on a 0.8% agarose gel, and the DNA was quantified using a Qubit Fluorometer (Thermo Fisher Scientific, USA), with a detection limit of 10-100 ng·μL−1.

16S rRNA amplicon sequencing library preparation

Amplification of the V3 and V4 regions of the bacterial 16S rRNA gene was done using two primers, 16S rRNA-F 5′-GCCTACGGGNGGCWGCAG-3′ and 16S rRNA-R 5′-ACTACHVGGGTATCTAATCC-3′ (Klindworth et al. 2013). The amplicon libraries were prepared using the Nextera XT Index Kit (Illumina Inc.), in accordance with the 16S metagenomic sequencing library preparation protocol (Faircloth et al. 2012). The amplicon library was purified with AMPure XP beads. The amplified libraries were analyzed on a 4200 TapeStation system (Agilent Technologies) using D1000 ScreenTape as per manufacturer instructions, and the concentration was quantified with the Qubit Fluorometer. Based on the data obtained from the Qubit Fluorometer and the Bioanalyzer, 500 μL of the 10 pM library was loaded into the MiSeq cartridge for cluster generation and sequencing. A paired-end sequencing method (read length 2 × 300 bp) was used. After the sequencing, high-quality metagenome reads were trimmed to remove the barcode and adaptor sequences.
Data and statistical analysis

Samples (NYS_MUD_OCT, NYS_MUD_DEC, OYS_MUD_OCT, OYS_MUD_DEC, TAR_MUD_OCT, and TAR_MUD_DEC) were subjected to pre-processing of reads, de-replication, singleton removal, ASV clustering, chimera filtering, and annotation of each ASV to species level, with QIIME2 release 2020.6 (Kuczynski et al. 2011). For quality control of the sequences, the DADA2 plugin in QIIME 2 was used to associate erroneous sequence reads with the true biological sequence from which they were derived, thus producing high-quality sequence variant data. Using DADA2, all reads were trimmed to 260 bp, based on the median quality score. In addition, chimeric sequences were detected and excluded from the analyses. 16S rRNA ASVs were picked using a closed-reference ASV picking protocol against the Greengenes database (https://data.qiime2.org/2020.6/common/gg-13-8-99-515-806-nb-classifier.qza). In the next step, taxonomy assignments were associated with ASVs based on the taxonomy associated with the SILVA reference sequence defining each ASV (Edgar 2013). The taxonomic abundance was classified at several levels: kingdom, phylum, class, order, family, genus, and species. Sequences without a homologous pair were classified as unknown. Statistical analysis of ASV composition between the six samples, based on PERMANOVA, was performed. Taking the samples as a unity factor, EdgeR and DESeq2 were used for the differential abundance analysis; many low-abundance classes or ranks get omitted during computation, so these statistical tools were used to eliminate this discrimination. A Venn diagram and a correlation matrix were used to depict the variation among the ASVs.

Description of sampling sites

Three hot spring soil samples were chosen for the present study: NYS, OYS, and TAR.

Fig. 1 Geographical locations of the sampling sites from which hot spring mud sediments were collected. The geographical position of the coordinates and elevation (above mean sea level) of the hot springs were measured with the help of a GPSMAP 78S (Garmin, India).

NYS (New Yume Samdung) hot spring is situated at an altitude of 4685.8 m above the mean sea level at 27.917302° N and 88.694308° E coordinates. OYS (Old Yume Samdung) hot spring lies just beside the NYS, within a few meters. It is situated at an altitude of 4687.9 m above the mean sea level at 27.918242° N and 88.694935° E coordinates. Both hot springs are situated in the Yume Samdung valley. The third hot spring, Tarum/Takrum Tsha Chuu (TAR), is located at Tarum valley, Lachen, and is also known as Lha Bha Tarum Tsha Chuu. It is located at 2893 m above the mean sea level at 27.703888° N and 88.575277° E coordinates. In the local dialect, hot springs are referred to as "Tsha Chuu/Tatopani" (Das et al. 2012). The temperature of the NYS, OYS, and TAR hot spring soil during the month of October was around 57 °C, 45 °C, and 44 °C, which increased to 61 °C, 57 °C, and 49 °C during the month of December, respectively. Similarly, in the case of pH, NYS, OYS, and TAR were moderately alkaline in the month of October, ranging from 8.5 to 9.2, whereas during the month of December, with the decrease in groundwater aquifer discharge, the pH was reduced to 7.8-9 (Table 1). Thus, it can be said that there were variations in temperature and pH observed in these hot spring soil components. The elemental analysis was done and published previously.

Metagenomic data analysis

The high-throughput sequencing assembly (Table 2) gave 234,612 reads for NYS (October), 209,837 reads for NYS (December), 219,384 reads for OYS (October), 238,388 reads for OYS (December), 231,568 reads for TAR (October), and 254,002 reads for TAR (December). A total of 3803 ASVs were obtained from the six soil samples, with the minimum length of an ASV being 291 bp and the maximum being 492 bp. The mean ASV length obtained was 450.3 bp.
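The de-replication and singleton-removal steps of the read pre-processing described above can be illustrated with a minimal pure-Python sketch. This is only a toy analogue of what the QIIME2/DADA2 pipeline actually does; the reads and the truncation length of 8 bp below are illustrative (the study truncated reads to 260 bp):

```python
from collections import Counter

TRUNC_LEN = 8  # illustrative; the study used 260 bp

def preprocess(reads, trunc_len=TRUNC_LEN):
    """Truncate reads, de-replicate into unique sequences with counts,
    and drop singletons (sequences observed only once)."""
    truncated = [r[:trunc_len] for r in reads if len(r) >= trunc_len]
    counts = Counter(truncated)                             # de-replication
    return {seq: n for seq, n in counts.items() if n > 1}   # singleton removal

# toy reads (hypothetical)
reads = ["ACGTACGTAA", "ACGTACGTCC", "ACGTACGTGG", "TTTTCCCCAA", "GGGGAAAACC"]
table = preprocess(reads)
print(table)  # -> {'ACGTACGT': 3}; the first three reads collapse to one 8-mer
```

In the real pipeline, DADA2 additionally models sequencing errors and removes chimeras before the ASV table is produced.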
Statistical analysis based on PERMANOVA revealed significant differences in ASV composition between the six samples (F value = 3.534, r² = 0.469, p < 0.1).

Diversity indices and rarefaction curves

The diversity indices were calculated using MG-RAST and PAST software. Alpha diversity denotes the species diversity within a sample, and the Chao value depicts the species richness among the environmental samples. The alpha diversity was found to be the highest in the hot spring TAR (143.3) in the month of December and TAR (139.1) in the month of October, as per the Fisher alpha diversity index. The beta diversity was higher in the month of October (0.28) than in the month of December (0.22), as per the Whittaker concept of beta diversity. In the month of October, the Shannon and Chao-1 diversity indices were also higher, 4.68 and 822.3, respectively, for Tarum hot spring. However, the Shannon and Chao-1 diversity indices were relatively higher, 4.66 and 855.2, respectively, in Old Yume Samdung hot spring in the month of December. The alpha diversity for NYS in the months of December (131.6) and October (138) was relatively lower compared to the TAR hot spring samples. The Chao-1 diversity indices for the month of December are given in Supplementary ST1. A rarefaction curve plots the number of species as a function of the number of samples; the calculated rarefaction is represented by a line graph. The rarefaction curve not only deals with the sample coverage but also depicts whether the sampling depth was sufficient to estimate the diversity. The curves showed an initial rapid rise as the most common species are found. However, in both graphs, Tarum hot spring showed the highest peak, representing the accumulation of the rarest species. This result was in line with the results shown by the diversity indices (Fig. 2a, b).
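The alpha- and beta-diversity measures used above (Shannon, Chao-1, and Whittaker's beta) can be sketched in a few lines of Python. The ASV counts and taxon sets below are illustrative, not the study's data:

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over non-zero ASV counts."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def chao1(counts):
    """Chao-1 richness: S_obs + F1^2 / (2*F2), where F1 and F2 are the
    numbers of singletons and doubletons; bias-corrected form when F2 = 0."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 ** 2 / (2.0 * f2)

def whittaker_beta(sample_taxa):
    """Whittaker beta = S / alpha_bar - 1, where S is the total number of
    taxa across samples and alpha_bar is the mean richness per sample."""
    total = len(set().union(*sample_taxa))
    alpha_bar = sum(len(s) for s in sample_taxa) / len(sample_taxa)
    return total / alpha_bar - 1

counts = [10, 5, 2, 1, 1, 1]  # hypothetical ASV counts for one sample
print(round(shannon(counts), 3))   # -> 1.373
print(chao1(counts))               # 6 observed + 3^2 / (2*1) -> 10.5
print(round(whittaker_beta([{"a", "b", "c"}, {"b", "c", "d"}]), 3))  # -> 0.333
```

Dedicated tools such as PAST or MG-RAST, as used in the study, apply the same formulas to the full ASV tables.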
Phylum level bacterial diversity

Firmicutes, Proteobacteria, Bacteroidetes, Actinobacteria, Cyanobacteria, and Planctomycetes were the major abundant phyla present in all the solfataric soil sediments of the studied hot spring ecosystems of Sikkim (Fig. 3a, b). There was a considerable amount of variation in their abundance percentages when observed individually. The results showed that the NYS hot spring possessed Firmicutes as the most dominant phylum in both the months of October (23.2%) and December (29.9%) of the post-monsoon season. The major phyla found to be present in NYS were Firmicutes, Proteobacteria, Bacteroidetes, and Actinobacteria. The results showed that the phylum-level composition during the two months, October and December, remained more or less constant. However, interestingly, it was shown that the relative abundance of various phyla decreases from October to December. The percent abundances of Firmicutes (Oct-23.2%; Dec-29.9%), Proteobacteria (Oct-21.0%; Dec-20.7%), Bacteroidetes (Oct-12.2%; Dec-5.8%), and Actinobacteria (Oct-5.4%; Dec-6.5%) were found in this study. More interestingly, despite the lower abundances, more phyla were found contributing towards the bacterial diversity in the month of December. In the case of OYS hot spring, the percent abundance of Firmicutes (Oct-24.3%; Dec-22.5%) was higher, followed by Proteobacteria (Oct-21.5%; Dec-20.3%) and Bacteroidetes (Oct-8.2%; Dec-5.6%). However, the abundance of various phyla such as Actinobacteria (Oct-5.2%; Dec-7.3%), Cyanobacteria (Oct-3.4%; Dec-4.4%), and Planctomycetes (Oct-2.0%; Dec-4.4%) increased from October to December. The OYS hot spring analysis showed that more phyla contributed towards the bacterial diversity in the month of December, like NYS. Thus, it can be hypothesized that the winter climate favored the ecological growth of microorganisms.
In the case of TAR hot spring, the percent abundance was found to be more variable than that of the other two hot springs discussed above. Proteobacteria (Dec-19.1%) was the most abundant phylum in the month of December; however, during the month of October, the relative abundance of Proteobacteria (Oct-20.8%) was higher than in December. Firmicutes were abundant in the month of October (Oct-22.1%; Dec-16.4%), followed by Bacteroidetes (Oct-9.7%; Dec-7.4%). The abundance of the phyla Actinobacteria (Oct-6.9%; Dec-7.4%) and Planctomycetes (Oct-4.4%; Dec-8.4%) increased from October to December. Similar to the other two hot springs, more phyla were found contributing towards the bacterial diversity in the month of December. Thus, the bacterial diversity is favored by the winter climate. Overall, it can also be said that at the phylum level, TAR hot spring had more variation and diversity compared to the other studied hot springs. Besides the characterized phyla discussed above, unclassified phyla were abundant in both months among all the hot springs. All the data are tabulated in Supplementary ST2a, b.

Genus level bacterial diversity

The genus level diversity showed more unclassified genera, similar to the phylum level classification, which suggests the possibility of discovering novel genera from these hot spring solfataric soil sediments. Proteobacteria, Firmicutes, Bacteroidetes, and Verrucomicrobia were some of the common phyla among which unclassified genera were present. Higher variation among all the individual samples was observed during both months (Fig. 4a, b). The relative abundance of similar genera was found in both months. The results showed that genera such as Clostridium, Bacillus, Paenibacillus, and Desulfotomaculum were prevalent in all the hot springs in both months (Fig. 4b shows the genus level diversity of the December hot spring mud samples); however, their relative abundance varies considerably.
In the case of NYS hot spring, the abundance of Clostridium (Oct-12.0%; Dec-18.9%) was higher in both months, followed by Bacillus (Oct-2.9%; Dec-4.5%), Desulfotomaculum (Oct-2.7%; Dec-2.2%), and Paenibacillus (Oct-1.5%; Dec-2.2%). Planctomyces and Terrimonas had similar abundances. The results also showed that Lactobacillus, which has probiotic properties, was present in both months, and a nitrogen-fixing bacterium, Azospirillum, was also found in both months with moderate abundance. The relative abundance was found to be higher among the discussed genera in the month of December. Similar to the phylum-wise distribution, more genera were contributing towards the bacterial diversity in the month of December. Venn diagram analysis was done to understand the distribution of shared ASVs at the phylum and genus levels of the hot spring soil bacterial diversity (Fig. 5a, b). Small proportions of ASVs were shared between the six hot spring samples. As revealed by the Venn diagrams, 13 phyla were common among all the samples, 19 were common in the December samples, and 15 in the October samples; 10 genera were common in all the hot springs, 8 were common in the October samples, and 20 were common in December. The distribution of shared ASVs across the sediments revealed little overlap, thus showing higher variation and diversity among the samples.

Discussion

In India, geothermal exploration began in early 1973 by the Geological Survey of India, which reported more than 350 hot springs with temperatures ranging from 40 °C to 100 °C throughout the entire sub-continent region (Narsing Rao et al. 2021). Based on tectonic movements, the hot springs of India are categorized into orogenic and non-orogenic. Sikkim naturally hosts many hot springs. It is a major tourist-attracting state of India, where nature is in its juvenile form and a refreshing season greets its visitors.
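The shared-taxa counts reported in the Venn diagram analysis above reduce to simple set operations on the per-sample taxon lists. A brief sketch with hypothetical phylum sets (not the study's actual ASV lists):

```python
from functools import reduce

samples = {  # hypothetical phylum sets per sample
    "NYS_OCT": {"Firmicutes", "Proteobacteria", "Bacteroidetes"},
    "OYS_OCT": {"Firmicutes", "Proteobacteria", "Cyanobacteria"},
    "TAR_OCT": {"Firmicutes", "Proteobacteria", "Planctomycetes"},
}

def shared(sets):
    """Taxa present in every sample (the Venn-diagram core)."""
    return reduce(set.intersection, sets)

core = shared(list(samples.values()))
# taxa found in exactly one sample (the outer Venn regions)
unique_to = {name: s - set().union(*(v for k, v in samples.items() if k != name))
             for name, s in samples.items()}
print(sorted(core))          # -> ['Firmicutes', 'Proteobacteria']
print(unique_to["NYS_OCT"])  # -> {'Bacteroidetes'}
```

The same operations applied per month to the phylum and genus tables yield the counts visualized in Fig. 5a, b.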
Previous culture-independent studies done on water samples from some hot springs of Sikkim, located at Polok, Borong, Reshi, and Yumthang, showed the bacterial diversity at both the phylum and genus levels. Those hot spring water samples were abundant in Proteobacteria (Polok-47%; Borong-63%; Reshi-76%), and Yumthang hot spring was predominated by Actinomycetes (98%). The most abundant genera in the hot spring water of Sikkim were Acidovorax, Acinetobacter, Exiguobacterium, Flavobacterium, Ignavibacterium, Paenisporosarcina, Paracoccus, Pseudomonas, Rhodococcus, Serratia, Sulfuritalea, Thermodesulfovibrio, Thermus, and Thiobacillus (Najar et al. 2018, 2020; Panda et al. 2016; Sharma et al. 2020). Similarly, in the present metagenomic study, we have found microbes belonging mainly to thermophiles, alkaliphiles, and mesophiles. Hot spring soil ecology is usually governed by complex uncultured bacteriomes. The hot spring soil ecology of the Sikkim Himalayas had higher percentages of Gram-negative bacterial phyla such as Proteobacteria and Bacteroidetes as compared to the Gram-positive bacterial phyla Firmicutes and Actinobacteria. Interestingly, in the hot spring water samples of previous studies, it has been found that Gram-positive bacteria such as Firmicutes were abundant (Najar et al. 2018; Sharma et al. 2020). A similar observation was made in various other hot spring soil sediments. The hot spring soils of Tatapani, Bor Khleung, NYS, and Eritrea all had a similar range of temperature, varying from 50 °C to 70 °C. The Tatapani hot spring soil of Orissa and Solfatara Crater, Italy, as studied by Sahoo et al. (2015) and Crognale et al. (2018), had the highest abundances of Proteobacteria (45%) and Bacteroidetes (23.4%); they lie within the same temperature range of 45 °C to 65 °C (Sahoo et al. 2015; Crognale et al. 2018). Globally, hot spring soils are rich in Proteobacteria, which seems to be a common characteristic feature of soil ecology.
Bacteroidetes was also relevantly abundant in Taptapani, Bor Khleung, NYS, OYS, and TAR. The resident signature soil flora, such as Planctomycetes and Chlorobi, which are commonly found in sulfur-rich hot springs, were also found here; however, the abundance of Chlorobi was low. Temperature plays a crucial role in shaping these communities (Sharp et al. 2014). During the month of December, there were drastic changes in the abundance percentages of the majority of the phyla and genera as compared to those of the month of October. At the phylum level, the abundance of the majority of phyla, such as Firmicutes, Proteobacteria, Bacteroidetes, and Actinobacteria, decreased during the month of December, except Planctomycetes, which showed higher abundance; at the genus level, a more or less similar trend was found. Firmicutes and Proteobacteria were the most abundant phyla obtained in various studies in India and around the hot springs of the world. Similar to our study, Firmicutes and Proteobacteria were found to be abundant in Jakrem, Bakreshwar, the hot springs of Odisha such as Taptapani and Athamallik, and Tapovan (Sahoo et al. 2015; Panda et al. 2015; Chaudhuri et al. 2017). At the genus level, our results are distinct from those of other studies on hot springs of India. Chloroflexus, Roseiflexus, Anaerolinea, and Caldilinea were found to be the major genera in the hot springs of Odisha (Sahoo et al. 2015). In another study, the hot spring soil sediments of Manikarnan (Himachal Pradesh, India), as reported by Mahato et al. (2019), were abundant in Acidothermus, Alishewanella, Arthrobacter, Bifidobacterium, Brevundimonas, Burkholderia, Chloroflexus, Frankia, Meiothermus, Nocardia, Rhodothermus, Thermobaculum, and Thermosynechococcus (Mahato et al. 2019). These findings are in contrast to our study, where we found the abundance of Clostridium, Bacillus, Lactobacillus, and Desulfotomaculum.
Similarly, other studies from the hot springs of Northeast India, i.e., from Sikkim, showed results distinct from those of the presently studied hot springs of Yume Samdung and Tarum. Our previous studies on the hot springs of Sikkim, such as Polok, Borong, and Yumthang, have shown the abundance of Acinetobacter (7.69%), Flavobacterium (3.85%), Vogesella (3.85%), Ignavibacterium (2.88%), Sediminibacterium (2.88%), Thermodesulfovibrio (2.88%), and Acidovorax (1.92%), which are totally distinct from our present study (Najar et al. 2018). However, similar to our results, Clostridium has been found to be abundant in the Jakrem and Bakreshwar hot springs (Panda et al. 2015; Chaudhuri et al. 2017). Global comparisons with our findings show that Firmicutes and Proteobacteria are the dominant taxa. A study on Malaysian hot springs shows the abundance of Firmicutes (38.5%) and Proteobacteria (16.3%) (Chan et al. 2017). Similar results were found by Ghilamicael et al. (2018) while studying five hot springs in Eritrea (Ghilamicael et al. 2018). At the global level, the genus Clostridium has been found to be abundant in various hot springs, such as the hot springs of Yunnan-Tibet, Eritrea, Argentina, and Sri Lanka (Liu et al. 2020; Rupasinghe et al. 2022). According to the literature, the variation of genera and phyla depends on various abiotic parameters such as pH, temperature, dissolved oxygen, and other physicochemical components present in the hot springs (Li et al. 2015; Podar et al. 2020). The genera Thiobacillus, Planctomyces, and Arthronema and the phyla Cyanobacteria, Acidobacteria, and Armatimonadetes had the highest variation/fluctuations in their relative abundance percentages in NYS, OYS, and TAR throughout both months (Fig. 6). This may be due to the change in temperature (Wang et al. 2013; Sharp et al. 2014; Badhai et al. 2015) as well as the variability of ground aquifer discharge with the onset of winter during the post-monsoon season.
Many researchers have shown a correlation between temperature and the dominance of various phyla, interpreting dominance as a function of temperature. Subudhi et al. (2017) have shown the predominant shifting of thermophilic cyanobacteria as a function of temperature and also the abundant growth of different strains at different temperatures (Subudhi et al. 2017). Similarly, Sahoo et al. (2015) correlated and linked the dominant nature of Proteobacteria in the hot springs of Odisha, India, as a function of temperature (Sahoo et al. 2015). Firmicutes and Bacteroidetes could easily withstand these climatic variations, which did not have much effect on their relative abundance percentages. This might be due to their physiological and cellular adaptation to the geothermal environment. There was a temperature change from mesophilic (45 °C and 44 °C) to thermophilic (57 °C and 49 °C) from October to December. As the monsoon season lasts from June to September in the Sikkim Himalayas, during the first month of the post-monsoon season, i.e., October, there is a higher abundance of species in the hot spring soil ecosystem. This might be due to the fact that during rain at these high altitudes, frequent geomorphological changes occur that enhance the bacterial diversity. Also, with heavy rainfall, the aquifer discharge is substantially high, and hence more enrichment of the bacterial ecology occurs. With the receding rainfall during the later stages of the post-monsoon, i.e., from November onwards, these high altitudes start experiencing snowfall, and winter sets in. Thus, with less groundwater aquifer discharge and a drastic change in the atmospheric conditions, the bacterial diversity in the month of December changes.
Besides, it was shown that the percent relative abundance of these phyla in the month of December was relatively lower, and relatively more phyla were found contributing towards the bacterial diversity in the month of December. This may also be correlated with the temperature change. One of the reasons may be less water dilution due to reduced rainfall and thus free flow of hot water through the plumbing systems, making conditions favorable for the growth of many mesophilic and thermophilic microorganisms. It may be hypothesized that as the temperature changes from mesophilic to thermophilic, the psychrophilic and mesophilic bacteria start diminishing, resulting in lower abundances. However, as temperature rises, many specific bacteria with optimum thermophilic temperatures start growing and contributing towards the bacterial diversity. In our study, we have seen the accumulation and additional contribution of bacterial genera from the lower temperature (October) to the higher temperature (December), such as Isosphaera, Acidimicrobium, Ruminococcus, Gemmata, and Rhodopirellula. It has been shown that particular bacteria accumulate at different temperatures; for example, only a few genera (e.g., bacterial Caldisericum, Thermotoga, and Thermoanaerobacter and archaeal Vulcanisaeta and Hyperthermus) often dominated in high-temperature environments (Li et al. 2015). Similarly, in another study, it was shown that there is higher diversity at lower temperatures; at the lowest temperature (38 °C), the water and microbial mats of the Hverahólmi lagoon were dominated by a large diversity of mesophilic and mildly thermophilic heterotrophic as well as photosynthetic bacteria, including Alpha- and Betaproteobacteria (Roseomonas, Rhodobacter, and Tepidimonas), Bacteroidetes (Chitinophaga, Saprospira), and Cyanobacteria (Cyanobium, Leptolyngbya).
However, when temperature increases, only a certain portion of bacteria contribute to the microbial diversity, and a few specific microbes start accumulating, such as Pyrobaculum, Aquificae, and Thermi (Podar et al. 2020). A correlation matrix represented through heat map analysis (Fig. 7) was done to compare the uncultured bacterial diversity present in soil among nine different hot springs of the world, including Atri (55 °C-58 °C) and Taptani. The phyla Proteobacteria, Planctomycetes, Bacteroidetes, Chloroflexi, Actinobacteria, and Verrucomicrobia were positively correlated and had similar abundance percentages in these hot spring soil sediments, as reported by various researchers (Panda et al. 2016; Sharma et al. 2020; Sahoo et al. 2015; Crognale et al. 2018; Ghilamicael et al. 2017; Kanokratana et al. 2004). These phyla are commonly associated with the soil flora and are important contributors to the bacterial diversity of hot springs worldwide. Another important aspect was the discovery of many unclassified genera, which suggests that novel flora might be inhabiting these hot springs. The present study was able to gain valuable insights into the microbial diversity of thermal springs in the Sikkim Himalayas, and the results of this study would be valuable in designing future studies on target species found in these springs. Further research should also be conducted on the industrially important thermophiles identified in this study.

Conclusion

In this first-ever report on the two-month comparative study of the bacterial diversity of hot spring soil of the Sikkim Himalayas during the post-monsoon season, there were no remarkable changes in the bacterial diversity at either the phylum or genus level. Although only very few studies have been done on the seasonal variation of hot spring soil bacteriomes, a general trend was found common among the available reports globally. Proteobacteria and Bacteroidetes were the worldwide abundant phyla.
In the case of genera, the Himalayan Geothermal Belt had a comparably similar profile, with the abundance of Thiobacillus, Chloroflexus, and Meiothermus, which was very much distinct from our present study. Moreover, other hot springs of Europe and Africa had varied genera owing to their geological differences. Also, it is evident that temperature differences play a very crucial role in determining the bacterial diversity. With the shift in temperature of about 5 °C-7 °C on average from the month of October to December, a large variation of species occurs in the studied Sikkim Himalayan hot spring soil niches. In the present study, the winter season favors the bacterial diversity, as many psychrophilic and psychrotolerant or mesophilic microbes persist. Thus, this may be the reason we obtained more taxa during the month of December; however, the abundance of these taxa was lower.

Fig. 7 Heat map showing the comparison of bacterial diversity in the hot spring mud samples globally.

Future studies on the pre-monsoon seasonal variation of the bacterial diversity will complete this initiative and present us with a prismatic view of the complete annual uncultured bacteriome profile. These types of comparative analyses are the result of high-throughput sequencing, and they help in understanding these extreme ecologies, where the habitats are fragile and at risk from geomorphological hazards.
A multicriteria evaluation methodology for assessing the impact of COVID-19 in EU countries

Our purpose in this paper is to develop an integrated multicriteria evaluation methodology for assessing the impact of COVID-19 in the 27 countries of the European Union. Initially, a specialized and comprehensive set of normalized criteria metrics that capture several dimensions of the pandemic, such as infection, mortality, recovery and testing rates, is carefully specified. Then, by means of a well-established and intuitive weighting system, which directly takes into consideration the experts' preferences, the gravity of each criterion is determined properly. Next, two of the most popular multicriteria ranking techniques, i.e. the TOPSIS and the PROMETHEE II, are simultaneously exploited in order to derive and integrate the obtained evaluations. Moreover, the value of the suggested decision support system is enriched through the introduction of a novel element in the field of multicriteria analysis, i.e. the '2-dimensions evaluation plane', a data visualization concept which provides rich information to the decision maker, by fruitfully blending the ranking results of the utilized multicriteria methods. The validity of the proposed approach is verified through an indicative illustrative application on Coronavirus data for the European Union countries, deriving a wide spectrum of insightful conclusions. Finally, the flexibility of the suggested framework is also stressed, since it can be fully customized, both in terms of the selected criteria and its weights, and run repetitively under a specific time-step frequency, according to the evolution dynamics and implications of the underlying pandemic.

Introduction & problem setting

The Coronavirus outbreak is first and foremost a human tragedy, affecting millions of people [1].
It is the most infectious disease pandemic, taking into consideration the numbers infected, the mortality rate and the demand for healthcare services [2]. Globally, according to the World Health Organization, as of August 19, 2022, there have been 591,683,619 confirmed cases of COVID-19, including 6,443,306 deaths [3]. But the Coronavirus pandemic is also having a growing impact on the global economy, since the consequences from the organizational shutdowns and other measures are unprecedented. Under these disastrous circumstances, contributions from the field of operations research and management science (ORMS) have played a critical role [4,5]. Choi [6] and Queiroz and Wamba [7] highlight and set the pace on the interconnection of ORMS and COVID-19. Interesting applications in the relevant conjoint field are the ones of Abdin et al. [8], Amaratunga et al. [9], Sinha et al. [10], and Baveja et al. [11]. And in general, the application of operations research techniques, like simulation, optimization and system dynamics, in the healthcare sector is very broad; see Eldabi et al. [12], Kotiadis et al. [13], Viana et al. [14] and Tako and Kotiadis [15]. The ORMS quantitative framework of multiple criteria decision making (MCDM) addresses the inherent complexity of problems with multiple conflicting criteria, in which the decision maker's (DM) preferences must be taken into consideration and incorporated in the decision process. For modern treatments of the MCDM, the interested reader should see Greco et al. [32] and Papathanasiou and Ploskas [33]. Our purpose in this paper is to develop an integrated multicriteria evaluation methodology for assessing the impact of COVID-19 in the 27 countries of the European Union (EU). Initially, a specialized and comprehensive set of normalized criteria metrics that capture several dimensions of the pandemic, such as infection, mortality, recovery and testing rates, is carefully specified.
Then, by means of a well-established and intuitive weighting system, which directly takes into consideration the experts' preferences, the gravity of each criterion is determined properly. Next, two of the most popular MCDM ranking techniques, i.e. the TOPSIS and the PROMETHEE II, are simultaneously exploited in order to derive and integrate the obtained evaluations. Moreover, the value of the suggested decision support system is enriched through the introduction of a novel element in the field of MCDM, i.e. the '2-dimensions evaluation plane', a data visualization concept which provides rich information to the DM, by fruitfully blending the ranking results of the utilized multicriteria methods. The validity of the proposed approach is verified through an indicative illustrative application on Coronavirus data for the EU countries, deriving a wide spectrum of insightful conclusions. Finally, the flexibility of the suggested framework is also stressed, since it can be fully customized, both in terms of the selected criteria and their weights, and run repetitively under a specific time-step frequency, according to the evolution dynamics and implications of the underlying pandemic. To the best of our knowledge, this is the first time a methodology is presented for assessing the impact of COVID-19 in the 27 countries of the EU. We also stress at this point that MCDM has come about because of the need for powerful tools to analyze problems with complex structures. With this work as an example, another purpose of the paper is to encourage all with an in-depth knowledge of the arsenal of MCDM tools that has been built up over the past half-century to work on effective applications, as now is the time they are needed most. The paper proceeds as follows: In Section 2, we present the proposed methodological framework and all the technical aspects of the adopted theoretical modeling. In Section 3, we elaborately discuss the testing procedure.
Finally, the concluding remarks are given in Section 4.

Proposed methodology

In this section, we provide a general description of the suggested framework, we develop the mathematical formulation of the employed multicriteria evaluation methods, i.e. TOPSIS and PROMETHEE II, and finally we briefly present the theoretical background of the weighting system chosen for the methodology's criteria.

General description

The aim of the proposed methodology is to assess the impact of COVID-19 in EU countries. The logical diagram of the approach is graphically depicted in Fig. 1. The basic steps of the methodology are summarized as follows (see Sections 2.2-2.4 for the whole spectrum of the underlying technical implications):
Step 1: Firstly, an elaborate set of ratios, i.e. evaluation criteria, that capture all the pandemic aspects, such as infection, mortality, recovery and testing rates, is determined by the DMs.
Step 2: In the next step, by utilizing a specialized and intuitive weighting method, which fully takes into account the DMs' preference system, the importance of each criterion is specified.
Step 3: Then, two MCDM ranking algorithms, the TOPSIS and the PROMETHEE II, are simultaneously applied for producing two discrete evaluation rankings, based on the selected criteria and weights.
Step 4: Next, the '2-dimensions evaluation plane' is introduced, so as to visualize and integrate the ranking results of the two MCDM methods and provide the DM with comprehensive insights.
Step 5: The proposed approach is applied on Coronavirus data for the 27 EU countries under a rolling period of 4 consecutive weeks and the results are validated by the DMs.
Step 6: The suggested approach is fully customizable, either regarding the criteria and their weights or the set of evaluation alternatives, i.e. countries, considering the pandemic dynamics of each time.
The methodology is proposed for use by epidemiologists, health care officials, and other experts.
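The '2-dimensions evaluation plane' of Step 4 is, in essence, a joint view of each country's position under the two methods. The paper does not spell out the exact construction, so the sketch below assumes the plane's coordinates are simply the TOPSIS rank and the PROMETHEE II rank of each alternative; the country codes and scores are hypothetical placeholders:

```python
# Sketch of the '2-dimensions evaluation plane' (Step 4): each alternative is
# placed at the point (TOPSIS rank, PROMETHEE II rank). Assumed construction;
# all scores and country codes below are hypothetical.

def ranks(scores):
    """Map each alternative to its rank (1 = best = highest score)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {alt: i + 1 for i, alt in enumerate(ordered)}

topsis_closeness = {"AT": 0.71, "BE": 0.44, "DE": 0.63}    # hypothetical C_i
promethee_netflow = {"AT": 0.30, "BE": -0.25, "DE": 0.10}  # hypothetical phi(a)

plane = {alt: (ranks(topsis_closeness)[alt], ranks(promethee_netflow)[alt])
         for alt in topsis_closeness}
print(plane)  # points on the diagonal indicate agreement between the methods
```

Countries near the plane's diagonal are ranked consistently by both methods, while off-diagonal points flag alternatives on which the two techniques disagree.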
People like the authors, that is, people with professional backgrounds in OR-MS and MCDM, would then carry out the roles of 'analyst' and 'facilitator' in the decision making process (as in [25]). In operation, one set of criteria is laid out in Table 1, with six criteria clustered along three major dimensions: (a) the infection dimension, (b) the mortality dimension, and (c) the recovery & testing dimension. There is no doubt that the assignment of importance weightings to each criterion is a crucial issue for the application of multicriteria methods. For example, outranking methods are non-compensatory, thus the interpretation of weights is different than for a compensatory MAUT-based system [34]. Rogers et al. [35] distinguish four methods which can be employed to weight criteria for use within multicriteria methods: (a) the direct weighting system [36], (b) the Mousseau system [37], (c) the allocation system [38,39], and (d) the 'resistance to change grid' weighting method [40]. The method chosen for the determination of weights in the proposed framework is the allocation system of Simos [38,39]. This method offers several advantages [41]. First, it is relatively simple and straightforward. Second, the weights obtained can be directly connected to the DM's concept of personal importance. And third, this method has been widely used in a very large number of real-world applications. The underlying technical details of the Simos weighting system, along with its step-by-step implementation for the COVID-19 impact assessment application, appear in Section 2.4. Finally, we stress that the main reason for utilizing the TOPSIS and PROMETHEE II methods in the current study has to do with their conformity to the nature of the evaluation problem, which calls for a ranking of the alternatives; also, it is associated with the fact that these methods are easy for the DM to perceive.
The second reason for exploiting the above methods is connected with the ease of their implementation; no assignment of parameters, such as indifference, preference and veto thresholds, is necessarily required by the DM. Improper determination of these parameters may lead to inconsistent results that do not actually reflect the DM's preference system. Finally, the choice of these methods is also based on their quite extended applicability to various types of modern decision support problems. The TOPSIS and the PROMETHEE methods can be considered classical MCDM techniques that have received a lot of attention from scholars, researchers and practitioners alike [32,[42][43][44].

The TOPSIS method

TOPSIS is the product of Hwang and Yoon [45] and Chen and Hwang [46]. TOPSIS stands for Technique for Order of Preference by Similarity to Ideal Solution. Representative applications in a number of areas can be found in the reviews of Palczewskia and Sałabun [43] and Salih et al. [44]. Consider a problem with alternatives numbered $1$ to $m$, and criteria numbered $1$ to $n$. Let each alternative be evaluated with respect to each criterion. This yields a decision matrix $X = (x_{ij})_{m \times n}$, wherein $x_{ij}$ is the value assigned to alternative $i$ by criterion $j$. According to TOPSIS, the first step is to make the criteria dimensionless. This is done by normalization, which is accomplished by re-scaling the columns of $X$, that is, by converting each value $x_{ij}$ into an $r_{ij}$ as follows: $r_{ij} = x_{ij} / \sqrt{\sum_{i=1}^{m} x_{ij}^2}$. Then each $r_{ij}$ is converted into a value $v_{ij} = w_j r_{ij}$, where the $w_j$ are the criterion weights obtained from the weighting system. In this way, the $i$th row of $V = (v_{ij})_{m \times n}$ is the weighted normalized criterion vector of the $i$th alternative. The next task of TOPSIS is to construct the ideal (zenith) and anti-ideal (nadir) solutions of the problem.
The simplest case is that the ideal and anti-ideal points are fixed by the DM, but this should be avoided, as it would imply that the DM can actually make a credible elicitation of the two points, and it would add more subjectivity to the procedure. A better approach is to construct the components of the ideal solution $v^+ = (v_1^+, \dots, v_n^+)$ by means of $v_j^+ = \max_i v_{ij}$, and the components of the anti-ideal solution $v^- = (v_1^-, \dots, v_n^-)$ by means of $v_j^- = \min_i v_{ij}$ (for criteria to be maximized). Now it is necessary to calculate how far the weighted normalized criterion vector of each alternative is from the ideal solution. This is done by computing $d_i^+ = \sqrt{\sum_{j=1}^{n} (v_{ij} - v_j^+)^2}$. Similarly, the distance of each alternative from the anti-ideal solution is computed as $d_i^- = \sqrt{\sum_{j=1}^{n} (v_{ij} - v_j^-)^2}$. Using these two distances, we compute each alternative's relative closeness to the ideal solution: $C_i = d_i^- / (d_i^+ + d_i^-)$. After reordering the alternatives from best relative closeness to worst, the alternative at the top of the list is the problem's solution.

The PROMETHEE method

PROMETHEE is the product of Brans and Vincke [47] and Brans et al. [48]. Insightful applications of PROMETHEE are found in the review of Behzadian et al. [42]. One of the creators of PROMETHEE, Bertrand Mareschal, maintains a list of references on his website (www.promethee-gaia.net), which as of January 2020 contained over 2200 references. Once again, consider a problem with $n$ actions or alternatives, which are to be evaluated on a set of $k$ criteria $g_1, \dots, g_k$. Suppose, without loss of generality, that all criteria are to be maximized. For each criterion $j$ and for each pair of actions $(a, b)$, assume the DM is able to express his or her degree of preference in the form of $P_j(a, b) \in [0, 1]$, where the order of notation is that action $a$ is preferred to $b$ based upon the difference $d_j(a, b) = g_j(a) - g_j(b)$. The degree of preference is obtained using a preference function chosen by the DM.
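The TOPSIS steps described above (normalization, weighting, ideal and anti-ideal points, distances, relative closeness) can be sketched in a few lines. The decision matrix and weights below are hypothetical, and all criteria are treated as benefit (to-be-maximized) criteria:

```python
import numpy as np

def topsis(X, w):
    """TOPSIS sketch: rows of X are alternatives, columns are benefit criteria.
    Returns the relative-closeness score C_i for each alternative."""
    R = X / np.sqrt((X ** 2).sum(axis=0))        # vector normalization r_ij
    V = R * w                                    # weighted normalized matrix v_ij
    v_pos, v_neg = V.max(axis=0), V.min(axis=0)  # ideal / anti-ideal solutions
    d_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))  # distance to ideal
    d_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))  # distance to anti-ideal
    return d_neg / (d_pos + d_neg)               # relative closeness C_i

# Hypothetical 3 alternatives x 2 criteria, equal weights
X = np.array([[0.9, 0.8], [0.5, 0.4], [0.1, 0.2]])
C = topsis(X, np.array([0.5, 0.5]))
best = int(np.argmax(C))                         # index of the top alternative
```

Cost-type criteria would need their ideal/anti-ideal roles swapped (min for $v_j^+$, max for $v_j^-$), which the sketch omits for brevity.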
The preference functions that have been proposed are: (a) the usual criterion, (b) the U-shaped criterion, (c) the V-shaped criterion, (d) the level criterion, (e) the V-shaped criterion with indifference region, and (f) the Gaussian criterion. These six types are easy to define and have a clear intuition for the DM. Depending on the function chosen, threshold values may be required. For example, if the DM selects a V-shaped criterion with indifference region, the DM is then required to specify the threshold values of $p$ (strict preference) and $q$ (indifference). If the difference between the evaluations of $a$ and $b$ on the $j$th criterion is smaller than the indifference threshold $q$, then neither action is preferred. If the difference between the evaluations of $a$ and $b$ is greater than the preference threshold, $d_j(a, b) > p$, then action $a$ is preferred to action $b$. In order to evaluate how much action $a$ is preferred to $b$ over all criteria, the preference index $\pi(a, b)$ is calculated using a weighted sum of the degrees of preference $P_j(a, b)$. The weights, $w_j > 0$, reflect the importance of each criterion in the decision; the greater the weight, the more important the criterion. The preference indices are $\pi(a, b) = \sum_{j=1}^{k} w_j P_j(a, b)$ and $\pi(b, a) = \sum_{j=1}^{k} w_j P_j(b, a)$, where $\pi(a, b)$ expresses the degree to which $a$ is preferred over $b$ for all criteria, and $\pi(b, a)$ represents how much $b$ is preferred to $a$. As each action is compared with the other $n - 1$ actions, positive $\phi^+$ and negative $\phi^-$ outranking flows can be defined as follows: $\phi^+(a) = \frac{1}{n-1} \sum_{x \neq a} \pi(a, x)$ and $\phi^-(a) = \frac{1}{n-1} \sum_{x \neq a} \pi(x, a)$. The positive flow $\phi^+(a)$ expresses how much alternative $a$ outranks all other $n - 1$ alternatives; thus it represents the global preference for action $a$ in comparison to all the other actions. The higher the value of $\phi^+(a)$, the better the alternative is. The negative flow $\phi^-(a)$ expresses how much alternative $a$ is outranked by all other $n - 1$ alternatives; thus it represents the global weakness of $a$ in comparison to all the other actions. The smaller $\phi^-(a)$, the better the alternative is.
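The outranking flows can be sketched with the simplest ('usual') preference function, $P_j(a,b) = 1$ if $d_j(a,b) > 0$ and $0$ otherwise; the data and weights below are hypothetical, and the difference of the two flows is the net flow on which the PROMETHEE II complete ranking is based:

```python
import numpy as np

def promethee_flows(X, w):
    """PROMETHEE sketch with the 'usual' preference function.
    X: alternatives x criteria (all maximized); w: criterion weights."""
    n = X.shape[0]
    pi = np.zeros((n, n))                        # preference indices pi(a, b)
    for a in range(n):
        for b in range(n):
            if a != b:
                # P_j(a,b) = 1 where g_j(a) > g_j(b), else 0
                pi[a, b] = np.sum(w * (X[a] > X[b]))
    phi_pos = pi.sum(axis=1) / (n - 1)           # positive outranking flow
    phi_neg = pi.sum(axis=0) / (n - 1)           # negative outranking flow
    return phi_pos, phi_neg, phi_pos - phi_neg   # net flow (PROMETHEE II)

X = np.array([[0.9, 0.8], [0.5, 0.4], [0.1, 0.2]])  # hypothetical evaluations
w = np.array([0.5, 0.5])
pos, neg, net = promethee_flows(X, w)
```

With normalized weights the net flows always sum to zero over the alternatives, which is a handy sanity check on any implementation.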
Based on the positive and negative outranking flows, the PROMETHEE I partial ranking is defined as follows: action $a$ outranks action $b$ if $\phi^+(a) \geq \phi^+(b)$ and $\phi^-(a) \leq \phi^-(b)$, with at least one strict inequality, while $a$ and $b$ are incomparable when the two flows give conflicting indications. The positive and the negative flows can be combined to obtain the net outranking flow, defined as $\phi(a) = \phi^+(a) - \phi^-(a)$. PROMETHEE II exploits the above net flow in order to provide a complete ranking of actions, from best to worst; the higher the value of $\phi(a)$, the better the alternative is.

The Simos weighting system

Simos [38,39] proposed a technique allowing any DM (not necessarily familiarized with multicriteria decision aiding) to think about and express the way in which he or she wishes to hierarchize a set of criteria in a given context. This procedure aims to communicate to the analyst the information needed in order to attribute a numerical value to each criterion when used in ranking-type methods [49]. The system has been used in research and practice [40] and seems eminently suitable to the study at hand. Certain shortcomings in the method have led to a revised approach being proposed by Figueira and Roy [50]. The distinguishing feature of this weighting method lies in the linkage between allocation cards and the criteria. The name of each criterion is inscribed on a card, and the cards are then given to the DM in random order. The DM is asked to physically manipulate these cards in order to rank them, and to insert blank cards where appropriate in order to reinforce ranking differences when necessary. The active participation of DMs in the procedure gives them an understanding of the approach. The Simos method is summarized as follows [40]: i. Allocation cards are handed to the person being questioned, with the name of each criterion on a separate card. Thus, if there are $k$ criteria in total being considered in the decision problem, $k$ cards are initially handed out. In order to avoid influencing the DM, it is advisable not to assign any number to each of the individual cards. Blank cards are also available but are generally not handed out until step iii. ii.
The person being questioned is then asked to order the cards from 1 to $k$ in order of importance, with the criterion ranked first being the least important and the one ranked last deemed the most important. If certain criteria are, in the opinion of the DM, of the same importance (and therefore the same weighting), their cards are grouped together. This physical procedure results in a complete ordering of the $k$ criteria. iii. Finally, the person being questioned is asked to consider whether the difference in importance between any two successively ranked criteria (or groups of criteria graded equally) should, upon reflection, be more or less pronounced. For the weighting process to suitably reflect this greater or smaller gap in importance, one can ask for blank cards for use between two successively ranked cards (or groups of cards), with the number of blanks used reflecting the size of the gap. The step-by-step implementation of the Simos weighting system appears in Table 2, while the final weights as determined by the experts, both per criterion and per dimension, appear in Table 3. Apparently, the mortality-related dimension, i.e. the criteria associated with deaths and intensive care unit patients, is assigned the highest significance, followed by the infection dimension and finally the recovery and testing one.

Indicative application

The proposed methodology has been applied on data concerning the 27 countries of the EU. The EU, a political and economic union of 27 member states located primarily in Europe, has an estimated total population of about 447 million. Its nominal GDP estimate for 2022 is 17.9 trillion Euros, while the GDP per capita is 45,567 Euros [51]. It is critical to note that the usefulness of the proposed methodology is not affected by the fact that it is applied only to the EU area, since it can be considered for any region, upon data availability.
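The Simos card procedure (steps i-iii above) can be sketched as a small routine that turns a card ranking into normalized weights. The grouping, blank-card counts, and criterion names below are hypothetical; tied criteria are given the mean of the positions their cards occupy, which is the usual convention of the original method:

```python
# Sketch of the original Simos card procedure: 'ranking' lists groups of
# criteria from LEAST to MOST important; blanks[i] is the number of blank
# cards inserted after group i (len(blanks) == len(ranking) - 1).
# Criterion names are hypothetical.

def simos_weights(ranking, blanks):
    pos, raw = 1, {}
    for group, nb in zip(ranking, blanks + [0]):
        positions = list(range(pos, pos + len(group)))
        for crit in group:                 # tied criteria share the mean position
            raw[crit] = sum(positions) / len(positions)
        pos += len(group) + nb             # blank cards skip position numbers
    total = sum(raw.values())
    return {c: v / total for c, v in raw.items()}  # normalize to sum 1

w = simos_weights([["tests"], ["cases", "recovered"], ["deaths"]], blanks=[0, 1])
```

Here the blank card between the middle group and "deaths" widens the gap, so "deaths" receives a disproportionately larger weight, mirroring how the experts' card layout translates into numbers.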
The type of data employed in this application is also available to all researchers and scientists through the Worldometer© database (https://www.worldometers.info/coronavirus), in which data are collected from official reports, directly from governments' communication channels or indirectly, through local media sources when deemed reliable. The study period includes 4 consecutive weeks, with the data sets, i.e. the EU countries and their corresponding values on the selected criteria, recorded on April 6, 13, 20 and 27, 2020. Indicatively, the corresponding performance matrix for the first run of April 6, 2020, is shown in Table 4. The obtained evaluation results, i.e. the country rankings according to the TOPSIS and PROMETHEE II methods, are then derived for each of the four weekly runs.

Conclusions

The world is facing a very aggressive pandemic of a magnitude and speed that are almost unprecedented. Under these circumstances, evaluation frameworks for measuring the impact in each country might be extremely helpful decision support tools. The critical features of the approach presented are outlined as follows: (a) Incorporation in the evaluation process of several criteria, which on a realistic basis capture the pandemic impact, (b) Incorporation of the experts' preference system, regarding both the choice of these criteria and their significance, and (c) Incorporation of multicriteria methods, i.e. the TOPSIS and PROMETHEE II, which are well adapted to the nature of the problem, as they provide complete rankings of the alternatives, i.e. the 27 EU countries. More specifically, we developed an integrated multicriteria evaluation methodology for assessing the impact of COVID-19 in the EU region. A comprehensive set of normalized ratios that capture the main dimensions of the pandemic, such as infection, mortality, recovery and testing rates, is carefully specified, in close collaboration with epidemiologists.
By means of a well-established weighting system, which directly takes into consideration the experts' preferences, the gravity of each criterion is determined properly. Further, two MCDM ranking techniques are simultaneously exploited in order to derive and integrate the obtained evaluations. The value of the suggested decision analytics system is enriched through the introduction of an innovative element in the field of MCDM, i.e. the '2-dimensions evaluation plane', a data visualization concept which provides effective information to the DM, by fruitfully blending the ranking results of the utilized multicriteria methods. The results obtained are fully compatible with the experts' heuristic and qualitative assessment and have been fully validated by them. Finally, the flexibility of the suggested framework is a critical benefit, since it can be fully customized, both in terms of the selected criteria and their weights, and run repetitively under a specific time-step frequency. In closing, further work that may be considered for broadening the suggested framework can be related to the expansion of its focus, by assessing more dimensions, such as the financial and/or social impact of the COVID-19 pandemic, thus including additional criteria, or criteria of a not only quantitative but also qualitative nature. Moreover, the suggested approach might be enriched in the future with a new methodological component, beyond the epidemiological data. Expanding the analysis with such information as unemployment, drop in GDP, etc., will provide additional insights to the decision makers, with a focus on policy alternatives and courses of action.
Mechanistic Model and Optimization of the Diclofenac Degradation Kinetic for Ozonation Processes Intensification

This work focused on estimating the rate constants for three ozone-based processes applied in the degradation of diclofenac. The ozonation (Oz) and its intensification with catalysis (COz) and photocatalysis (PCOz) were studied. Three mathematical models were evaluated with a genetic algorithm (GA) to find the optimal values for the kinetic constants. The Theil inequality coefficient (TIC) worked as a criterion to assess the models' deviation. The diclofenac consumption followed a slow kinetic regime according to the Hatta number (Ha < 0.3). However, it strongly contrasted with earlier studies. The obtained values for the volumetric rate of photon absorption (VRPA) corresponding to the PCOz process ($1.75 \times 10^{-6}$ and $6.54 \times 10^{-7}$ Einstein L$^{-1}$ min$^{-1}$) were significantly distant from the maximum ($2.59 \times 10^{-5}$ Einstein L$^{-1}$ min$^{-1}$). The computed profiles of the chemical species proved that no significant amount of hydroxyl radicals was produced in the Oz, whereas the PCOz achieved the highest production rate. According to this, titanium dioxide significantly contributed to ozone decomposition, especially at low ozone doses. Although the models' predictions described a good agreement with the experimental data (TIC < 0.3), the optimization algorithm is likely to have masked the rate constants, as they deviated strongly from already reported values.

Introduction

In the last two decades, an extensive list of pharmaceuticals and personal care products has ranked as contaminants of emerging concern (CECs). They have been frequently found in aqueous systems, disrupting the normal development of the local biota [1][2][3]. The main concerns among these substances are their recalcitrant and cumulative behavior over the ecosystems.
Nowadays, available technologies at industrial wastewater treatment plants (WWTPs) have not been enough to remove these pollutants from municipal effluents [4,5]. Advanced oxidation processes (AOPs) are well-known alternatives for dealing with these types of compounds, which have been widely employed in the degradation of several CECs and proved to yield high degradation rates, even with removals over 99% at appropriate conditions [6][7][8][9]. Even so, AOPs still get challenged when it comes to the mineralization of recalcitrant compounds. This situation increases operational costs, as more energy, reactant amounts, and operation time are required [10][11][12]. However, the simultaneous application of AOPs can substantially improve degradation and mineralization by enhancing the production of highly oxidizing species. These processes are known as intensifications, and they have proved to considerably increase the production rate of hydroxyl radicals HO• [13,14]. These processes have overcome several of the limitations of the individual AOPs.

Mechanisms of Reaction for Ozone Processes

The reaction mechanism for the ozonation process has been widely studied, even for different pH conditions. Staehelin et al. and Tomiyasu et al. proposed the first reaction mechanisms for ozone in water at acidic and alkaline conditions, respectively [44][45][46]. The significant difference between these pathways consisted of the ozone decomposition reactions that occur because of alkalinity. However, because of the available variety of catalysts, the mechanisms for COz processes are less generalized. Jans and Hoigne carried out the first known work studying the effect of a catalyst on the ozonation process in 1998. Since then, different researchers have proposed feasible reaction paths based on the type of catalyst [47]. For the particular case of heterogeneous PCOz with titanium dioxide (TiO2), the available literature information is still scarce.
Moreover, the already proposed mechanisms rely on independent knowledge of the ozonation and heterogeneous photocatalysis mechanisms [24,48]. Thus, this work collected part of this information to present a generalized mechanism, described in Table 1. Table 1 (generalized reaction mechanism for ozone and ozone-catalyst-based processes) groups the elementary reactions into the following categories: homogeneous initialization, homogeneous production of HO•, homogeneous attack of HO•, homogeneous propagation, heterogeneous attack of HO• (e.g. Ti(IV)-HO• + DCF, rate constant $k_{12}$), TiO2 hole trapping, the initializing heterogeneous reaction, and the propagation heterogeneous reaction; one of these reactions involves lattice oxygen sites [24]. With the addition of TiO2, the process was also affected by heterogeneous reactions promoted by adsorption phenomena at the catalyst surface. Then, the COz mechanism was also described by reactions (14)-(18). As the hydroxide ions OH− were adsorbed on the catalyst, they were prone to react with ozone molecules to promote the production of hydroxyl radicals [49]. Furthermore, the heterogeneous hydroxyl attack reactions (16)-(18) additionally promoted the decomposition of diclofenac. However, as not enough information on the adsorption kinetics was available, these reactions were modeled with a single global rate law. Finally, UV radiation exposure led to the PCOz mechanism, which gathered photolysis and photocatalysis reactions. Here, the incidence of photons caused the catalyst excitation, where incident photons were likely to promote electrons at the catalyst valence band (VB) towards the conduction band (CB), resulting in the formation of electron-hole pairs (19). These allowed oxide-reduction reactions to take place. Thus, the holes promoted the generation of hydroxyl radicals by reactions (20) and (21), while electrons reacted with ozone to produce ozonide radicals (24). However, recombination of the photo-generated pair was also possible. Therefore, electrons could return to their VB (25).
Meanwhile, in the homogeneous liquid phase, dissolved ozone was photolyzed (26), and the generated hydrogen peroxide was prone to react with hydroxyl radicals by (27).

Setup of the Flotation Cell

The experimental measurements were obtained in a previous work by Lara et al., employing diclofenac sodium salt (C14H10Cl2NNaO2, >98%) as a precursor for diclofenac [22]. The reaction system comprised an acrylic storage tank with a reaction volume $V_{re} = 4.5$ L, two fluorescent tubular lamps (Repti Glo 5.0 Compact) with a rated power of 20 W, and an Ozonator AZ2 model 5GLAB for ozone supply. This novel reactor has proven to involve turbulent zones which minimize the mass transfer limitations in water treatments with ozone [50]. The flotation cell was modelled for the diclofenac degradation by Oz, COz, and PCOz. The predictions were compared against experimental data from [22], where the Oz process registered measurements at ozone doses ($C_{O_3}^{(in)}$) of 2.66 and 7.40 ppm. Meanwhile, the COz and PCOz processes employed a $2^2$ factorial design including the catalyst load ($C_{mp}$) as a factor with levels 300 and 800 ppm. The levels of the ozone dose factor were kept the same as in the Oz.

Mathematical Models

According to Table 1, the mathematical models for each process were formulated based on: (i) absorption equilibrium, (ii) perfect mixing for both gas and liquid phases, (iii) little volatility for the water-dissolved species, (iv) uniformly distributed catalyst particles, (v) negligible diclofenac photolysis, (vi) adiabatic conditions for the reaction system, (vii) negligible mass transfer resistance in the gas phase, (viii) continuous operation for the gas phase, (ix) batch operation for the liquid phase, (x) the ozone interface concentration described by Henry's law (He), and (xi) water and oxygen concentrations considered constant, as they were in excess.
Assumption (iii) implied that the models neglected the transport rate of the water-dissolved species to the gas phase. Then, ozone and oxygen were the only substances considered in the gas phase. The ozone accumulation rate was described according to (28), where the first and second terms, respectively, represented the net advective and the mixed convective-diffusive effects on the ozone transport to the liquid phase. The enhancement factor $E$ accounted for the increase in the ozone mass transfer rate because of the chemical reactions at the interface. The volumetric mass transfer coefficient $k_L a$ and the gas holdup $\varphi^{(g)}$ were computed with the empirical equations from Inkeri et al. for gassed stirred tank reactors [51], and the ozone gas flow rate $Q^{(g)}$ was kept fixed at 2.0 L min$^{-1}$. Meanwhile, the ozone concentration at the interphase, $C^{*}_{O_3}$, was a function of the logarithmic mean ozone gas concentration $C_{O_3(g)}$, according to Equation (29). Before evaluating the mathematical models, the Hatta number (Ha) was computed for each of the experimental observations based on a first-order rate law, according to Equation (30). Then, depending on the type of regime, the enhancement factor was computed [49]. Furthermore, the computed pseudo-first-order rate constant was employed to roughly estimate the electrical energy per order (EE/O) corresponding to each experimental configuration [52]. Finally, the GA was applied, and the models' predictions for the optimal rate constants were compared against the experimental data. Then, the Theil inequality coefficient (TIC) was employed as a criterion to assess the models' deviation, as proposed by Beltrán et al., Equation (31). Values under 0.3 indicated a good agreement between the experimental measurements $y_e$ and the model's predictions $y_c$ [36].

Ozonation

The degradation rate for the Oz process was mainly affected by the ozone and diclofenac concentrations, $f(C_{O_3}, C_{DCF})$.
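Assuming the usual form of the Theil inequality coefficient (the explicit expression of Equation (31) is not reproduced here), the agreement criterion between measured and computed profiles can be sketched as:

```python
import numpy as np

def tic(y_e, y_c):
    """Theil inequality coefficient between experimental (y_e) and computed
    (y_c) profiles: 0 means perfect agreement, and values below 0.3 are read
    as a good fit. The exact normalization is assumed (usual definition)."""
    y_e, y_c = np.asarray(y_e, float), np.asarray(y_c, float)
    num = np.sqrt(np.mean((y_c - y_e) ** 2))           # RMS of the residuals
    den = np.sqrt(np.mean(y_e ** 2)) + np.sqrt(np.mean(y_c ** 2))
    return num / den

# Hypothetical normalized concentration profiles
perfect = tic([1.0, 0.8, 0.5], [1.0, 0.8, 0.5])   # identical profiles
rough = tic([1.0, 0.8, 0.5], [0.5, 0.2, 0.1])     # poor prediction
```

By construction the coefficient is bounded in [0, 1], which makes the 0.3 acceptance threshold scale-independent across experiments.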
Reactions (1)-(9) described the rate laws for the chemical species. Equation (32) described the resulting system of differential equations $F$. Here, $R$ was a vector of rate laws, and $Z$ was a matrix of stoichiometric coefficients whose rows and columns represented the species and the kinetic rate constants, respectively. Additionally, the element $z_{(n,m)}$ accounted for the ozone in the gas phase. All the remaining elements of row $n$ and column $m$ were zeros, as none of the water-dissolved species could pass to the gas phase. The vector of initial conditions $Y$ was given to the model according to (35).

Catalytic Ozonation

The catalytic ozonation model was also a function of the catalyst load, $f(C_{O_3}, C_{DCF}, C_{mp})$. Reactions (1)-(9) and (14)-(18) described the set of rate laws. According to the general assumptions from Section 3, the adsorption-desorption rates at the catalyst surface were at equilibrium with the liquid phase. Therefore, the concentration of adsorbed species was described in terms of the homogeneous concentration (36). As the heterogeneous reactions were surface interactions, their rate laws relied on the available surface rather than the reaction volume. Therefore, a dimensionality factor was required to represent the global volumetric effect of these interactions, based on the load and specific surface of the catalyst, $C_{mp} S_g$. Then, the $R$ vector was modified according to (37). The corresponding $Z$ matrix was constructed similarly to (34), but rather than $m$ chemical reactions, it was composed of $m + h$, where $h$ was the total number of heterogeneous chemical reactions. The $Y$ vector for the initial condition was the same as that of (35). Although reaction (23) has an active role in heterogeneous photocatalysis processes, it was assumed negligible compared to (24). Hence, it was supposed that most of the active sites Ti(III) would preferentially interact with ozone rather than with oxygen molecules [36].
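The structure of Equation (32), a stoichiometric matrix $Z$ multiplying a vector of rate laws $R$, can be illustrated on a toy two-reaction scheme; the species, rate constants, and time step below are hypothetical stand-ins, not the paper's actual mechanism:

```python
import numpy as np

# Toy illustration of the F = Z @ R structure of Eq. (32): rows are species,
# columns are reactions. Hypothetical scheme: O3 -> P (k1), O3 + DCF -> P (k2).
k1, k2 = 0.05, 0.8                       # hypothetical rate constants
Z = np.array([[-1.0, -1.0],              # d[O3]/dt
              [ 0.0, -1.0],              # d[DCF]/dt
              [ 1.0,  1.0]])             # d[P]/dt

def F(y):
    O3, DCF, _ = y
    R = np.array([k1 * O3, k2 * O3 * DCF])   # elementary rate laws
    return Z @ R

y = np.array([1.0, 1.0, 0.0])            # initial [O3, DCF, P] (arbitrary units)
dt = 1e-3
for _ in range(30_000):                  # explicit Euler over 30 time units
    y = y + dt * F(y)
```

Separating stoichiometry ($Z$) from kinetics ($R$) means the same integrator serves all three process models: adding the heterogeneous reactions only appends columns to $Z$ and entries to $R$.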
The reaction (22) was considered at equilibrium. Therefore, the concentration of active sites was quantified in terms of the electron concentration $C_{e^-}$, Equation (40).

Radiant Field

A fundamental part of modeling photocatalysis-based processes is the quantification of the VRPA. In the literature, researchers have widely addressed stochastic and deterministic approaches for estimating the VRPA [53][54][55]. Although stochastic methods based on Monte Carlo simulations have proved highly precise, their implementation involves a high computational effort, resulting in a time-consuming task [56][57][58][59]. Meanwhile, some deterministic methods have proved to yield predictions that approximate Monte Carlo simulations well. Among these methods, the Six-Flux Model (SFM) has been successfully adapted to different photoreactor geometries [60][61][62][63][64]. As the study of the radiant field was not the main objective of this work, the estimation of the VRPA in the flotation cell followed the SFM approach (see Section S.1). The optical properties of the catalyst were obtained from [61]. Equation (41) describes the local volumetric rate of photon absorption (LVRPA) for each lamp as a function of the intensity $I_0$. As the studied system comprised two non-concentric lamps, Equation (42) re-defined the coordinate relative to the emission source, $r_p$, as a composed coordinate $r_p(r, \theta)$, where the constant $a$ was the distance from the reactor center to the emission source. Then, to account for the effect of both emission sources, the real LVRPA value for a given point inside the flotation cell was computed as the summation of the individual lamps' contributions, Equation (43). The overall rate of photon absorption in the system (VRPA) was given by Equation (44) [63]. Figure 1 depicts the geometry for the studied system.
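A numerical sketch of the two-lamp superposition of Equations (42)-(44) follows. The geometry values and the exponential LVRPA profile are purely hypothetical stand-ins (the actual profile comes from the SFM, Equation (41)), and the source-to-point distance $r_p(r,\theta)$ is assumed to follow the law of cosines for a source at radius $a$:

```python
import numpy as np

# Hypothetical cylinder geometry (m): lamp offset, reactor radius, height
a, R_reactor, H = 0.05, 0.10, 0.20

def lvrpa_single(r_p_val, I0=1.0, sigma=20.0):
    # Hypothetical stand-in for Eq. (41): exponential decay with distance
    return I0 * np.exp(-sigma * r_p_val)

def r_p(r, theta, lamp_angle):
    # Assumed Eq. (42): law-of-cosines distance from the point (r, theta)
    # to a source located at radius a, angular position lamp_angle
    return np.sqrt(r**2 + a**2 - 2 * a * r * np.cos(theta - lamp_angle))

def lvrpa_total(r, theta):
    # Eq. (43): sum of both lamps' contributions (lamps at angles 0 and pi)
    return lvrpa_single(r_p(r, theta, 0.0)) + lvrpa_single(r_p(r, theta, np.pi))

# Eq. (44): VRPA = volume integral of the LVRPA (midpoint rule, cylindrical dV)
rs = np.linspace(0, R_reactor, 80, endpoint=False) + R_reactor / 160
ths = np.linspace(0, 2 * np.pi, 90, endpoint=False)
dV = (R_reactor / 80) * (2 * np.pi / 90) * H
vrpa = sum(lvrpa_total(r, th) * r * dV for r in rs for th in ths)
```

With the lamps placed symmetrically, the total LVRPA field inherits that symmetry, which is a useful check on the composed-coordinate bookkeeping.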
Figure 1 depicts the geometry for the studied system.

Numerical Solution

The kinetic parameters were estimated by the U-NSGA-III algorithm from the Pymoo library of Python [65]. Seada and Deb initially proposed this method in 2016 to deal with mono- and multi-objective optimization problems [43]. The objective function was the weighted least squares shown in Equation (45). The index M accounted for the involved processes (Oz, COz, and PCOz), r_i,j were the residual values, and σ_i,j was the experimental variance for the i-th experiment at the j-th measurement. Algorithm 1 describes the pseudocode for the present optimization problem. Both the population size and the maximum number of iterations were set to 150. First, the algorithm distributed an initial population P of parameter vectors p_i within the problem space and evaluated its associated error. Then, the offspring were generated based on binary tournament selection events. Each winner from a tournament was matched with an individual from another tournament, resulting in two offspring individuals c_1 and c_2 [43].

Algorithm 1 Pseudocode for the application of the U-NSGA-III (degenerated U-NSGA-III). Inputs: mono-objective problem ∑T, experimental data Ŷ, boundaries for the problem space (X_min, X_max), and known physicochemical constants. Output: best explored solution p_best. P = Initialize(N = 150)

Before introducing the obtained pair of candidates to the offspring set Q, the algorithm applied mutation operators over the vectors.
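The tournament-crossover-mutation loop of Algorithm 1 can be sketched in a degenerate mono-objective form as below. This is an illustrative stand-in, not the pymoo U-NSGA-III implementation used in the paper: the one-parameter toy problem (recovering a decay constant from synthetic data), blend crossover, and Gaussian mutation are all assumptions made for the sketch:

```python
import math
import random

random.seed(42)

# Toy "experiment": recover k from noiseless data y = exp(-k * t).
K_TRUE = 0.8
T = [0.0, 0.5, 1.0, 2.0, 4.0]
Y_HAT = [math.exp(-K_TRUE * t) for t in T]

def error(k):
    """Weighted least squares in the spirit of Equation (45), unit variances."""
    return sum((math.exp(-k * t) - y) ** 2 for t, y in zip(T, Y_HAT))

def tournament(pop):
    """Binary tournament: the lower-error of two random individuals wins."""
    a, b = random.sample(pop, 2)
    return a if error(a) < error(b) else b

def evolve(pop_size=30, generations=60, lo=0.0, hi=5.0):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = tournament(pop), tournament(pop)  # two tournament winners
            w = random.random()
            c1 = w * p1 + (1.0 - w) * p2               # blend crossover
            c2 = (1.0 - w) * p1 + w * p2
            # Gaussian mutation keeps diversity in the population
            offspring.append(min(hi, max(lo, c1 + random.gauss(0.0, 0.05))))
            offspring.append(min(hi, max(lo, c2 + random.gauss(0.0, 0.05))))
        # Elitist survival: best pop_size individuals from parents + offspring
        pop = sorted(pop + offspring, key=error)[:pop_size]
    return pop[0]

k_best = evolve()
```

The blend crossover and Gaussian mutation here merely stand in for the library's genetic operators; the survival step mirrors the "best candidates from Q and P" selection described in the text.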
This approach helps the algorithm keep diversity between individuals and improves the exploration of the problem space [66]. Once Q equaled the size of P, the algorithm computed the error for the offspring individuals. Finally, a new population was constituted by selecting the best candidates from Q and P. This methodology continued until the maximum number of generations was met.

Sensitivity Analysis

A sensitivity analysis was employed to study the effect of each kinetic constant on the models' error. The corresponding sensitivity coefficients were used as criteria to neglect poorly influential chemical reactions. Although global sensitivity analysis algorithms are preferred since they provide more valuable information [67], local methods have the advantage of quick computing and easy applicability. Therefore, a one-at-a-time (OAT) approach was implemented to estimate the sensitivity coefficients for the rate constants. Hence, the number of samples was set to M = 500, and the error was evaluated while varying one rate constant and keeping the remaining fixed to their optimal values. Then, the variance of the error for that constant was computed [68,69]. Thus, the corresponding sensitivity coefficients S(k_i) were estimated based on Equation (46).

Results

All the processes described a slow kinetic regime (Ha < 0.3). Therefore, no significant enhancement of the mass transfer occurred by the chemical reactions at the gas-liquid interface (E ≈ 1). The values for the mass transfer parameters and the computed values of Ha corresponding to each experiment were reported in Sections S.2 and S.3 of the Supplementary Material, respectively. The EE/O values in terms of diclofenac degradation for the COz and PCOz were lower than that of the Oz process only when operating at high ozone dose conditions.
COz presented the lowest EE/O, with a value of 7.22 kWh m−3 for a 7.44 ppm ozone dose and 800 ppm catalyst load, which suggested that these conditions promoted the fastest degradation (see Section S.4).

Ozonation

The ozonation model was constructed based on reactions (1)-(9), resulting in the system of ordinary differential equations (ODEs) from Equation (47). Table 2 reported the values for the involved reaction rate constants. Although the estimated values in the present work had a good agreement with the experimental data, their values highly deviated from already reported kinetic constants for the Oz process. The sensitivity analysis showed that only reaction (1) had a significant effect on the Oz model. Its sensitivity coefficient presented a value over 0.9, as depicted in Figure 2. Thus, the mechanism was reduced to a single homogeneous reaction given by (1). The resulting mathematical model was only dependent on the ozone and diclofenac concentrations. Both the complex and simplified models fitted the diclofenac degradation kinetics. Nevertheless, for the high ozone dose conditions (7.40 ppm), the model deviation increased because of the high sensitivity to the ozone concentration. Figure 3 depicted the concentration profiles. According to Figure 3b, no ozone decomposition occurred. It explained the negligible influence of the hydroxyl radicals on the diclofenac degradation.

Catalytic Ozonation

In addition to (1)-(9), the rate laws from (14)-(18) affected the COz process.
Then, the corresponding system of differential equations was formulated as a function of F_Oz, Equation (48). The optimal values for the heterogeneous rate constants were presented in Table 3. Due to the lack of information, their values could not be compared with the literature.

Table 3. Heterogeneous kinetic constants for the catalytic ozonation.

According to the sensitivity coefficients from Figure 4, the reaction mechanism for the catalytic ozonation process was simplified by considering only (1), (5), (8) and (9)-(11) to affect the mathematical model. The low concentration at the ozone dose (2.66 ppm) caused the decomposition at the catalyst surface (14) to compete with the direct diclofenac destruction by ozone (1), Figure 5a. It occurred for both experimental and predicted data. However, for high ozone dose conditions (7.44 ppm), the ozone availability could drive the diclofenac degradation without competing. Additionally, Figure 5b showed that the ozone decomposition at these conditions was negligible.
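The surface-reaction treatment behind the COz model can be sketched as below. Expressing the adsorbed concentration through a Langmuir-type isotherm is an assumption about the exact form of Equation (36), and all constants are hypothetical; the volumetric scaling by the catalyst load and specific surface, C_mp·S_g, follows the dimensionality factor of Equation (37):

```python
# Sketch: adsorption-desorption equilibrium ties the surface coverage to the
# homogeneous (bulk) concentration, and heterogeneous rates are scaled from a
# per-surface basis to a per-volume basis via C_mp * S_g. All values are
# hypothetical illustration numbers.

K_ads = 5.0e3  # L mol^-1, hypothetical adsorption equilibrium constant
C_mp = 0.8     # g L^-1, catalyst load (800 ppm)
S_g = 50.0     # m^2 g^-1, hypothetical specific surface area

def coverage(C_bulk):
    """Langmuir-type fractional coverage from the homogeneous concentration."""
    return K_ads * C_bulk / (1.0 + K_ads * C_bulk)

def volumetric_rate(k_surf, C_bulk):
    """Surface rate (per m^2 of catalyst) scaled to reactor volume."""
    return C_mp * S_g * k_surf * coverage(C_bulk)

theta = coverage(1.0e-4)                  # coverage at 1e-4 mol/L bulk ozone
rate = volumetric_rate(1.0e-6, 1.0e-4)    # mol L^-1 s^-1, hypothetical k_surf
```

This scaling is what makes the heterogeneous terms grow linearly with the catalyst load in the model, which is consistent with the load-dependent ozone consumption discussed next.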
As the change in the number of active sites affects the adsorption equilibrium of catalytic reactions, the increase in the catalyst load led to a lower accumulation of dissolved ozone. Furthermore, the high ozone dose conditions did not promote the formation of hydroxyl radicals. Based on this, the ozone excess conditions could have led reaction (5) to become a scavenging pathway for the hydroxyl radicals. The former suggested that the ozone concentration had a strong influence on the kinetic behavior of the process.

Photocatalytic Ozonation

The catalyst particles near the emission sources strongly screened the photons for outer particles. Thus, the LVRPA for the catalyst particles near the walls of the flotation cell was almost null. Figure 6 depicted the computed distribution of the LVRPA for a transversal view of the reactor at catalyst loads of 10, 50, 100, and 800 ppm. According to these observations, the range of catalyst loads employed was far from the VRPA optimal value.
Figure 7 confirmed it, as it showed that the maximum value for the VRPA was located at 12 ppm. The mathematical model was fed with the dataset from Figure 7 to interpolate the corresponding VRPA value given the catalyst load. The resulting system was constructed in terms of F_COz. The photolysis and photocatalysis reactions from Table 1 were included according to (49). The optimal values for the photocatalysis and photolysis rate constants were summarized in Table 4. According to Figure 8, the simplified mechanism only included reactions (1), (8), (9), (14), (15), (21), (26) and (27). The participation of (25) acted as a route for hydroxyl radical consumption with a higher probability than (16). Again, the ozone excess allowed the direct diclofenac destruction by ozone to dominate the process kinetics. However, the degradation kinetics were even slower for the low ozone dose than observed in the COz process. It was attributed to the ozone photolysis reaction. Figure 9 presents the concentration profiles for each of the experiment configurations. Because of the photolysis, ozone accumulation was slower compared with the COz and Oz processes. Besides, from Figure 9d, the production of hydrogen peroxide was observed, which suggested the active role of the photolysis reaction.
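The load-to-VRPA lookup step mentioned above (feeding the model with the Figure 7 dataset) can be sketched with piecewise-linear interpolation. The table values below are hypothetical, chosen only to reproduce the qualitative shape with a maximum near 12 ppm:

```python
# Sketch of the interpolation step: given a (catalyst load, VRPA) dataset
# like the one behind Figure 7, look up the VRPA for any intermediate load.
# The tabulated values are hypothetical illustration numbers.

LOADS = [0.0, 5.0, 12.0, 50.0, 100.0, 800.0]  # ppm
VRPAS = [0.0, 3.0, 4.2, 2.5, 1.4, 0.2]        # arbitrary energy units

def vrpa_at(load):
    """Piecewise-linear interpolation of the VRPA dataset."""
    if load <= LOADS[0]:
        return VRPAS[0]
    if load >= LOADS[-1]:
        return VRPAS[-1]
    for (x0, y0), (x1, y1) in zip(zip(LOADS, VRPAS),
                                  zip(LOADS[1:], VRPAS[1:])):
        if x0 <= load <= x1:
            return y0 + (y1 - y0) * (load - x0) / (x1 - x0)

peak = max(vrpa_at(x) for x in range(0, 801))
```

Interpolating from a precomputed radiant-field table like this avoids re-solving the SFM equations inside every kinetic integration step.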
Discussion

According to the EE/O estimates in terms of degradation, the PCOz and COz processes could save electrical consumption compared to the Oz process only when there was enough ozone in the system to carry out the direct diclofenac destruction by ozone, reaction (1), without competing with the decomposition reactions (7.40 ppm). Although COz presented the lowest EE/O, the authors encourage further estimations of the EE/O in terms of TOC removal in order to fairly compare these processes, according to the work from Yu et al. [31].

Kinetic Rate Constants

The kinetic rate constants from Table 2 described a high discrepancy with the values reported in the literature. It suggested that the proposed mathematical model overestimated the rate at which the ozone molecules reacted in the system. Thus, the optimization algorithm weighted this situation by giving lower values to the kinetic constants to fit the experimental data. According to Beltran et al., the ozonation of diclofenac describes a fast kinetic regime.
Although this contrasted with the current results, a fast kinetic regime would explain the expected high value of the homogeneous kinetic constants. The former would imply that a significant amount of ozone must react at the gas-liquid interface before dissolving [36]. Then, a different mathematical approach would be required to deal with the microscopic material balance; i.e., Benbelkacem et al. proposed to simultaneously integrate the microscopic and reactor material balances and compute the enhancement and depletion factors for each time step [70]. To the best of the authors' knowledge, no experimental data were reported to compare the heterogeneous and photolysis rate constants. However, it was observed that the catalyst played a significant role in the production of hydroxyl radicals by ozone decomposition. The main reason was the use of a metallic-oxide-based catalyst, as it is known to promote electron transfer because of the formation of functional groups at the surface of the catalyst [71]. On the other hand, the photocatalysis rate constants had negligible values compared to k_10^(het) and k_11^(het). It suggested that photocatalysis yielded a poor influence on the diclofenac degradation. This idea was in agreement with [22]. In their observations, the photocatalysis of diclofenac yielded the slowest degradation. The TiO2 bandgap could explain this behavior, as it limits the catalyst photoactivity [72-74]. Meanwhile, (27) was a potential path for scavenging hydroxyl radicals according to the value of k_17. It was the highest value within the estimated rate constants, with a magnitude even higher than the reported value [36], exhibiting an error above 100% [35,75,76]. Even so, it was not discarded that the optimization algorithm could mask k_17 to fit the experimental data. Therefore, more information about the processes is required to establish more robust constraints over the optimization problem.
No assertion regarding the degradation products could be established, as they were not experimentally measured. However, the occurrence of hydroxylated diclofenac species was expected, mainly 5-hydroxydiclofenac, which is more reactive than its isomer 4-hydroxydiclofenac, produced during the metabolization of diclofenac [77,78]. The major concern about this degradation product of diclofenac is that it could be further oxidized into quinone imine derivatives, which are suspected to be responsible for the diclofenac toxic effects [7,79].

Ozonation Model

The ozone dose condition limited the model accuracy. Nevertheless, the model predicted the experimental data within an acceptable error margin. For the case of the experimental conditions, the Theil inequality coefficient was under 0.3. Thus, there was a good agreement with the studied data. According to Figure 3a, the mathematical model overestimated the ozone effect over the diclofenac degradation. The fact that the mechanism did not consider intermediate species may have influenced the model predictions. Earlier studies on diclofenac degradation with ozone have demonstrated that intermediates such as aminyl radicals and hydroxylated diclofenac derivatives affect the process kinetics [35,77]. The negligible influence over the diclofenac degradation presented by k_3 and k_4 (Figure 2) explained the absence of ozone decomposition in Figure 3b. However, this contrasted with Flyunt et al., as it is expected that ozone processes will yield hydroxyl radicals [80]. Additionally, the hydroxyl radicals are likely to attack the aromatic ring because of their electrophilic nature. According to Sein et al., the hydroxyl radicals can initiate a mechanism for the production of 5-hydroxydiclofenac [35]. Therefore, the optimization problem required more accurate constraints based on the concentration of additional chemical species.
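The agreement metric used above can be sketched as follows. The Theil inequality coefficient (TIC) is computed here in its common form, the RMS of the residuals normalized by the sum of the RMS of the data and of the predictions, which I assume matches the paper's usage; values near 0 indicate good agreement, and the 0.3 threshold follows the text. The concentration data below are hypothetical:

```python
import math

# Theil inequality coefficient: TIC = RMS(residuals) / (RMS(data) + RMS(model)).
def tic(measured, predicted):
    n = len(measured)
    rms_res = math.sqrt(sum((m - p) ** 2
                            for m, p in zip(measured, predicted)) / n)
    rms_m = math.sqrt(sum(m ** 2 for m in measured) / n)
    rms_p = math.sqrt(sum(p ** 2 for p in predicted) / n)
    return rms_res / (rms_m + rms_p)

y_exp = [1.00, 0.72, 0.51, 0.37, 0.26]  # hypothetical measured C/C0
y_mod = [1.00, 0.70, 0.50, 0.35, 0.25]  # hypothetical model predictions

value = tic(y_exp, y_mod)
```

Because both numerator and denominator share the same units, the TIC is scale-free, which makes the single 0.3 acceptance threshold applicable across experiments with different initial concentrations.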
Catalytic Ozonation Model

According to Figure 5a,b, the catalyst supported the ozone decomposition, but this process was susceptible to the ozone dose. Thus, no decomposition occurred at 7.40 ppm of inlet gaseous ozone. Following the sensitivity analysis of Figure 4, the direct diclofenac destruction by ozone (1) was not the most influential reaction in the mechanism for this model. Instead, reaction (15) assumed this role. Then, for the case of a 2.66 ppm inlet ozone dose, ozone was not in excess, and diclofenac molecules had to compete with (14), whose rate constant was three orders of magnitude above k_1. Thus, for these conditions, the diclofenac degradation was slower. In addition, the increase in the catalyst load made the degradation rate even slower, as a large surface was available for reaction (14). Although the ozone excess in the system enhanced the diclofenac degradation rate, Figure 5c showed that such conditions did not promote a significant generation of hydroxyl radicals; however, the high value of k_11^(het) suggested the opposite. According to this, it was likely that a deficit of hydrogen ions limited the rate of reaction (15). Figure 5d supported this hypothesis, as it did not show consumption of the ozonide radicals (O3−•). Then, the rate of reactions (3) and (9) was negligible. The former contrasted with most of the research in COz processes [34]. Nevertheless, it should not be discarded that an unfavorable setting of the operational conditions can lead processes to poor performance. According to this, not all the ozone doses favored mineralization. Based on the model's simplifications, it was stated that the OAT sensitivity analysis did not provide enough information to discard kinetic constants without affecting the model outputs. Thus, the sensitivity analysis required a more robust approach based on global methods to assess the interaction between parameters [67,81].
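The OAT procedure from the Sensitivity Analysis section, whose limitations are noted above, can be sketched as follows. The toy two-constant error surface is hypothetical, and normalizing each constant's error variance by the total is an assumption about the exact form of Equation (46):

```python
import random
import statistics

random.seed(0)

# One-at-a-time (OAT) sensitivity: perturb one rate constant around its
# optimum while holding the rest fixed, record the model error, and take the
# variance of the error as the raw sensitivity measure.

K_OPT = {"k1": 2.0, "k2": 0.01}  # hypothetical optimal constants

def model_error(k1, k2):
    """Toy error surface: steep in k1, nearly flat in k2."""
    return (k1 - 2.0) ** 2 + 0.001 * (k2 - 0.01) ** 2

def oat_sensitivity(samples=500, rel_range=0.5):
    variances = {}
    for name in K_OPT:
        errors = []
        for _ in range(samples):
            k = dict(K_OPT)                 # all constants at their optimum...
            factor = 1.0 + random.uniform(-rel_range, rel_range)
            k[name] = K_OPT[name] * factor  # ...except the one being varied
            errors.append(model_error(k["k1"], k["k2"]))
        variances[name] = statistics.pvariance(errors)
    total = sum(variances.values())
    return {name: v / total for name, v in variances.items()}

S = oat_sensitivity()
```

As the text warns, this per-constant variance cannot detect interactions between parameters, which is exactly what a global (e.g., variance-decomposition) method would add.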
The most significant discrepancy of the simplified model was observed in Figure 5d, where the trend of the concentration profiles described a completely different behavior. However, predictions for the diclofenac remained unchanged, as the hydroxyl attack reactions (4) and (16) did not influence this output.

Photocatalytic Ozonation Model

As observed in Figure 6, the catalyst particles were likely to screen the photon flux for particles near the reactor walls. Consequently, further research should weigh the available number of active sites against the VRPA to find optimal operation conditions. According to Figure 7, the catalyst loads employed in this work were considerably distant from the computed optimum value for the VRPA. However, considering the results for the COz model, a reduction in the catalyst load could also reduce the number of active sites. It was observed that the behavior described in Figure 7 was analogous to the results from Colina et al. [61], which supported the proposed methodology to estimate the VRPA in cases of multiple non-concentric lamps with annular photoreactors. The only considerable photocatalytic reaction was (21). Meanwhile, both reactions induced by photolysis seemed to affect the model outputs. Due to the number of interactions, the diclofenac degradation was slower than in the COz. Figure 9b showed that reactions (14) and (27) produced a fast decay in the ozone accumulation. This effect scaled with the catalyst load employed, as a larger surface was available for reaction (14) to occur. Additionally, from Figure 9c, it was observed for the cases of high ozone dose that the concentration of hydroxyl radicals increased suddenly when the dissolved ozone concentration approached zero. The attained accumulation was higher than that observed in the COz model.
Meanwhile, the accumulation predicted for the hydrogen peroxide described a similar behavior to the observations from Peyton and Glaze for ozone photolysis in distilled water [82]. However, for the present work, the equilibrium concentration dropped to zero, Figure 9d. Finally, as observed in the COz model, the simplification based on the sensitivity analysis added no significant error in the diclofenac concentration, but the outputs for other chemical species changed considerably.

Conclusions

Through this study, a generalized reaction pathway for ozone- and ozone-catalyst-based processes was proposed. The mechanism could describe the degradation kinetics for all the experiments in the modified flotation cell for different configurations of the ozone dose and the catalyst load. The models might be extended for applications with other metal-oxide-based catalysts. However, it is worth noting that the optical and surface properties should be replaced and the kinetic constants accounting for surface reactions should be estimated. Although the employed experimental equations for the calculation were proposed for stirred tanks, they proved to describe unreactive ozone transport from the gas to the liquid phase for the current experimental conditions. However, the computed slow kinetic regime was discordant with the established theories from previous works. Therefore, it was stated that a first-order rate law was not a determinant criterion to characterize kinetic regimes for ozone-based processes applied in the degradation of diclofenac. The proposed approach for the VRPA estimation stood valid within the employed range of experimental variables of the current study and for the proposed mathematical models, although no formal validation was designed. Nevertheless, the selected values of the catalyst load were far from yielding an optimum for the VRPA.
No premature convergence was observed in estimating the rate constants; this suggested a high probability of reaching a global minimum for the objective function (18). In addition, all the experiments presented a TIC < 0.3, which translated into a strong agreement between predictions and experimental data. Nevertheless, the homogeneous rate constants were discordant with the already reported values, which suggested that the optimization algorithm masked the magnitudes of the constants to fit the data. Only the Oz process accepted the simplifications based on the sensitivity analysis without considerable changes in its outputs. However, for predictive purposes of the diclofenac concentration, the three simplified models could be employed within the current experimental conditions. The PCOz promoted a higher amount of hydroxyl radicals than the COz. However, this production only became significant when all the dissolved ozone was consumed. Moreover, more research is required to establish the optimal experimental configuration for the PCOz to exploit photoactivity and surface availability. The use of UVB/UVC could improve the rate of the photocatalysis reactions.
Peripheral nervous system damage in hypothyroidism: current view on the problem (literature review)

The article presents the pathogenesis of polyneuropathy symptoms in patients with hypothyroidism, whereby the approaches to the pathogenetic therapy are substantiated. The article represents the most important aspects of hypothyroid polyneuropathy treatment according to the latest international guidelines and perspective scientific approaches to optimize the medical care for patients with this pathology. Nowadays, thyroid pathology is known to be one of the most widespread in the structure of endocrine diseases [2,15]. Recently, an increase in the occurrence of autoimmune thyropathies, mostly followed by the development of hypothyroidism, has been observed in Ukraine and other countries. At the same time, the number of surgeries for nodular forms of goiter, tumors, etc., frequently resulting in hypothyroidism, grows as well. The results of epidemiological studies show that the overall prevalence of manifested hypothyroidism in the population is 0.2-2 %, and that of subclinical hypothyroidism is 7-10 % among women and 2.3 % among men, and morbidity rates are continuously rising every year [15]. There are more than 98 hundred people with this disease officially registered in Ukraine at the beginning of 2016 [16]. At its early stages, the disease is accompanied by a wide range of neurological syndromes, which often dominate the clinical manifestation of the disease and involve practically all levels of the nervous system [7,11]. However, despite a long history of detection of interconnections between thyroid and neurological pathologies, the study of the damaging mechanisms for the nervous system in hypothyroidism still remains a topical issue of contemporary neuroendocrinology.
Deficiency of thyroid hormones in the body leads to disorders of water and electrolyte balance and of protein, lipid, and carbohydrate metabolism, causing morphological-functional and biochemical changes in various organs and systems [2,15]. Hypothyroidism is accompanied by disturbed synthesis of neurotransmitters, increased levels of blood lipids, lowering of the energy potential of the cells, activation of free radical processes, reduction of nitric oxide synthesis and endothelial dysfunction, disturbance of blood microcirculation, imbalance of proinflammatory cytokines and adipocytokines, etc. [18]. A dramatic inhibition of energy and anabolic processes, typical for hypothyroidism, promotes organic damage of the nervous system [22]. Hypothyroid neurological disorders are various and numerous [14,15]. Marked changes in the peripheral nervous system, typical for hypothyroidism, are implemented in the development of pseudomyotonic and pseudomyasthenic syndromes, radiculopathies, polyneuritis, tunnel neuropathies, and polyneuropathies as well. The latter are found in 18-72 % of patients with hypothyroidism [7,11,22], and symptoms of polyneuropathy can develop not only in manifested hypothyroidism but also in subclinical one [30]. However, there is no consensus concerning the pathogenesis of hypothyroid polyneuropathy (PNP), the correlation between the degree of its manifestation and hormonal status, or the state of the neuromuscular system during the compensation of the underlying disease. Thus, indicating a direct connection between the level of thyroid hormones, the degree of hypothyroidism compensation and polyneuropathy symptoms, some researchers believe that all clinical, electroneuromyographic and histopathological changes in patients with hypothyroid polyneuropathy are reversible in case an adequate replacement therapy is initiated [25,28]. However, according to other studies, clinical and pathomorphological signs of neuromuscular system disorders remain after
the compensation of hypothyroidism [24,27,29]. Degenerative, toxic, metabolic, ischemic and mechanical factors, leading to changes of the connective tissue interstitium, the myelin sheath and the axial cylinder of the nerve, are known to be the underlying factors promoting the formation of PNP, particularly the hypothyroid one. Hence, the division of polyneuropathies into axonopathies, related to the underlying primary damage of the axial cylinders of nerves, and myelinopathies, characterized by the disturbance of nerve conduction due to the loss of myelin sheaths, is generally accepted. However, during the progression of the disease their combinations usually occur [17]. The development of PNP in hypothyroidism is considered to be related to the mucinous infiltration of the perineurium resulting in nerve compression, as well as to disorders of oxidative processes due to thyroid hormone insufficiency. Schwann cells are primarily influenced by the metabolic disorders, and that leads to segmental demyelination. Thus, as morphological studies have demonstrated, glycogen and mucin deposits in Schwann cells, bulbous thickening of the myelin sheath with mucinous inclusions, segmental demyelination, an increased number of demyelinated fibers of small diameter, and a reduced number of myelinated fibers of large diameter are observed in the peripheral nerves [18,22]. Clinically, hypothyroid PNP is manifested by pain and paresthesias in the distal parts of the extremities, muscular weakness, seizures, polyneuritic impairment of sensitivity, and decrease or loss of tendon reflexes [14,17]. Such symptoms are amplified, up to complete immobilization, while staying in a cold room or during winter. The severity of the PNP clinical manifestation depends on the degree of involvement of motor, sensory and autonomic fibers in the pathological process.
Movement disorders are manifested as muscular weakness, located mainly in the distal areas, mostly in the extensor muscles, accompanied by hypo- or areflexia. In severe cases, patients are unable to stand or walk, or hold objects in their hands. Sensory changes are associated with positive (paresthesia, hyperpathia) and negative symptoms (loss of joint, muscle and tendon proprioception, leading to an imbalance when standing and walking, and reduction of skin tactile and pain sensitivity). Autonomic symptoms appear as sympathalgias and vasomotor, trophic and secretory disorders (burning pain, sweating changes, swelling of the distal areas of the limbs, abnormalities of their color and temperature, sores, muscular changes, nail deformation). However, the diagnostics of the motor, sensory and autonomic symptoms of PNP is generally impeded by the multiform clinical manifestations of hypothyroidism and the involvement of numerous tissues. Thus, the development of hypothyroid myopathy, characterized by permanent muscular weakness (more substantial in the muscles of the proximal areas of the limbs), a prolonged period of muscular contraction and relaxation, convulsions, muscular hypertrophy and hardening, myalgia, increased mechanical excitability of muscles during percussion, etc., is associated with the limitation and inhibition of movements, mistakenly considered as paresis or paralysis [1]. Moreover, the boundaries of loss and irritation symptoms and the type of sensory disorders, especially in mixed areas, vary in a wide range due to the variability of the overlapping of innervation zones by adjacent nerves and the variability of autonomous zones, as well as due to the double or triple innervation of certain muscles and skin areas.
In addition to a complete neurological examination and identification of typical neurological symptoms, a significant role in the differential diagnosis of the type of hypothyroid peripheral nervous system disorder is played by electroneuromyography (ENMG). During electromyographic examination, a decrease in amplitude and slowing of the speed of impulse transmission by sensory and motor nerves is recorded; the results of ENMG make it possible to differentiate damage of muscle and nerve fibers, detect the degree of nerve fiber damage, and differentiate between axonal (axonopathies) and demyelinating (myelinopathies) PNP. Thereby, a slow speed of impulse transmission along the nerve, an increased distal latency period, changes of the F-response, blockage of transmission and temporal dispersion are usually indicative of myelin sheath damage, whereas a decrease of impulse level is a sign of axonal degeneration [19]. In doubtful cases nerve biopsy may be very helpful: histological changes in the nerves are absent in case of progressive muscular degeneration [14,18]. Furthermore, neurological diagnosis is primarily polysyndromic: it is made according to the prevalence of clinical signs (sensory, motor, autonomic) and the distribution of lesions (symmetrical/asymmetrical, proximal/distal). These statements are important not only in terms of diagnostics, but also for adequate treatment and prognosis.
A particular algorithm to manage patients with hypothyroid PNP does not exist. Treatment and rehabilitation of patients with hypothyroidism should be based on hormone replacement therapy in doses enabling the achievement and maintenance of euthyroidism [16]. On the other hand, considering the metabolic disorders that develop under thyroid hormone insufficiency and cause damage to the central and peripheral nervous system, the administration of additional medications normalizing the mentioned changes is substantiated. Therefore, in addition to the treatment of the underlying disease and achievement of hormonal status compensation, two main approaches may be suggested in the therapy of hypothyroid PNP: pathogenetic therapy (influence on the mechanisms of nerve fiber damage, stimulation of the regeneration of damaged nerve fibers) and symptomatic therapy, targeted at correction of the symptoms, first of all at pain management and improvement of patients' quality of life. Pathogenetic therapy is aimed at slowing the progression of the neuropathy and correcting neuropathic deficiency. To achieve this, α-lipoic acid and group B vitamins are used [12,26].
α-lipoic (thioctic) acid medications create the basis of pathogenetic treatment of PNP [4,9,12]. Accumulated in nerve fibers, they inactivate free radicals, block their generation and restore endogenous systems of antiradical protection, thereby providing a powerful antioxidant effect; they also restore disturbed endoneural blood flow, normalize the content of NO (a regulator of vascular wall relaxation), improve endothelial function, reduce total blood cholesterol level and increase the antiatherogenic lipoprotein fraction content [4].
Том 13, № 4, 2017. Огляд літератури /Literature Review/
Thioctic acid is a coenzyme of key enzymes of the Krebs cycle, which explains its efficacy in optimizing the energy metabolism of neurons. This action results in the improvement of nerve conduction by motor and sensory nerve fibers. Moreover, a positive effect of the medication on liver cells has been noticed: reduction of the severity of morphological manifestations of fatty liver and normalization of biochemical parameters [4,9,12]. A significant role in the pharmacotherapy of PNP belongs to vitamin therapy. Group B vitamins improve the metabolism in the nervous tissue and the metabolism of mediators, facilitate the transfer of excitation and, as a result, increase the rate of impulse transmission by nerve fibers, as well as exert a moderate analgesic effect and promote the processes of regeneration and remyelinization of nerve fibers [5,6,13]. As a result of phosphorylation processes, thiamin (vitamin B1) is converted in the body to cocarboxylase, which is a coenzyme of numerous enzymatic reactions and plays an important role in carbohydrate, protein and fat metabolism. Vitamin B1 is involved in the synthesis of neurotransmitters that modulate the transmission of nerve impulses in the synapses and possesses anticholinesterase activity, hence supporting neuromuscular conduction; it is one of the components of nucleic acid synthesis and stimulates the plastic and reparative processes in the nervous tissue [10].
Vitamin B2 (riboflavin) is a catalyst for cell respiration, known to play an important role in redox processes of the nervous system; it regulates the metabolism of carbohydrates, proteins and fats, potentiates the effect of pyridoxine and tryptophan, and stimulates the regeneration of tissues. Pyridoxine (vitamin B6) reduces blood levels of cholesterol and lipids and promotes the conversion of folic acid to its active form; it is a coenzyme in the metabolism of amino acids and proteins in the central nervous system cells, and in the synthesis of biogenic amines, components of the myelin sheath of neurons and neurotransmitters of the central and peripheral nervous system, thus providing synaptic transmission. Cyanocobalamin (vitamin B12) provides hematopoietic, erythropoietic, anti-anemic and metabolic action, normalizes blood clotting processes, diversely influences liver function, including the hematopoietic one, as well as the digestive system, activates the metabolism of carbohydrates and fats, and affects the synthesis of RNA and DNA. In addition, vitamin B12 suppresses the abnormal changes in case of degenerative atrophy of the nerve cells, causes resynthesis of myelin, creates the myelin sheath, and thereby provides the restoration of normal nerve fiber structure and functions [5,10]. Hence, the use of neurotropic group B vitamins can be considered as an important element of pathogenetic therapy of PNP, promoting the regression of sensitivity disturbances, vegetative symptoms and pain syndrome. Since their simultaneous use is essential for treatment efficacy, the use of combined B vitamin medications is practically expedient and simplifies patients' treatment significantly [5,6,13].
In addition to the mentioned drugs, pathogenetic therapy of hypothyroid PNP is reasonably complemented by reparants (Actovegin, Solcoseryl), which demonstrate antioxidant, antihypoxic, neurotrophic and neuroprotective action and are widely used in the rehabilitation of patients with various nervous system diseases of vascular, atrophic, infectious, traumatic and other genesis [8,20,23]. In case of motor disorders, anticholinesterase agents are used: neostigmine methylsulfate (proserine), ipidacrine (neiromidin), etc. These medications can restore and stimulate neuromuscular transfer, restore impulse transmission by the peripheral nerves, enhance contractility of the smooth muscles, and improve memory and learning ability through stimulation of nervous impulse transmission in the central nervous system as well; by moderately stimulating the CNS, they also provide an analgesic effect due to the ability to block sodium permeability of the membranes. The basis of symptomatic therapy of PNP is preferably related to the correction of the pain syndrome, addressed by the first-line drugs, anticonvulsants of a new generation (gabapentinoids: pregabalin, gabapentin), soluble amino acids chemically similar to the endogenous inhibitory neurotransmitter γ-aminobutyric acid (GABA), which is involved in the transmission and modulation of pain [3,12].
Gabapentin has a number of biochemical properties that enable its influence on the pathogenesis of chronic neuropathic pain syndrome [3]:
- interaction with the α2-δ subunits of voltage-dependent calcium channels: suppression of the entry of Ca2+ ions into neurons inhibits excessive excitability of cell membranes and reduces sensitization of nociceptors;
- an increase of GABA synthesis stimulates the activity of glutamate decarboxylase, resulting in the enhancement of antinociceptive system activity;
- inhibition of the synthesis of glutamate (a stimulating neurotransmitter with excitotoxicity) leads to a decrease in the excitability of nociceptive system structures and prevents neuronal death;
- modulation of the activity of NMDA (N-methyl-D-aspartate) receptors affects the processes of «pain memory» formation.
Effective and safe for all types of spontaneous and stimulus-dependent neuropathic pain, gabapentin has a flexible dose-titration scheme that provides highly individualized selection of therapy based on the clinical characteristics of the patient and his pain syndrome [3]. Pregabalin is a modern anticonvulsant that has shown its efficacy regarding any type of neuropathic pain, fibromyalgia and seizures, with high analgesic activity and a positive impact on concomitant emotional-depressive manifestations [21]. The treatment algorithm for neuropathic pain in PNP also includes tricyclic antidepressants (e.g. amitriptyline), serotonin-norepinephrine reuptake inhibitors (duloxetine, venlafaxine), etc. [12], but their psychotropic side effects and cholinolytic action significantly limit their administration in patients with hypothyroidism. Non-pharmacologic strategies for the management of painful neuropathy include physiotherapeutic techniques (acupuncture, transcutaneous electrical stimulation, high-wave external muscle stimulation, etc.).
Thus, neuromuscular disorders in case of thyropathies, including hypothyroidism, are known to be polymorphic and still create significant diagnostic and therapeutic challenges. The processes of demyelination with secondary axonal damage, resulting from metabolic and bioenergetic disorders initiated by thyroid hormone insufficiency, underlie the development of dysmetabolic polyneuropathy in hypothyroidism. In this regard, early diagnosis, adequate replacement therapy and achievement of hormonal status compensation remain the major concerns to prevent and postpone the progression of hypothyroid polyneuropathy. However, pathophysiologically targeted therapy is a meaningful approach in management strategies for neuropathic changes, which emphasizes the importance of further study of the pathogenesis, clinical and neurophysiological manifestations of hypothyroid polyneuropathy, particularly during the compensation period of the underlying endocrine disease, and of developing algorithms for differential treatment.
An optimised one-way wave migration method for complex media with arbitrarily varying velocity based on eigen-decomposition
The classical one-way generalised screen propagator (GSP) and Fourier finite-difference (FFD) schemes have limitations in imaging large angles in complex media with substantial lateral variations in wave velocity. Some improvements to the classical one-way wave scheme have been proposed with optimised methods. However, the performance of these methods in imaging complex media remains unsatisfactory. To overcome this issue, a new strategy for wavefield extrapolation based on the eigenvalue and eigenvector decomposition of the Helmholtz operator is presented herein. In this method, the square root operator is calculated after the decomposition of the Helmholtz operator into the product of its eigenvalues and eigenvectors. Then, the Euler transformation is applied using the best polynomial approximation of the trigonometric function based on the infinite norm, and the propagator for one-way wave migration is calculated. According to this scheme, a one-way operator can be computed more accurately with a lower-order expansion. The imaging performance of this scheme was compared with that of the classical GSP, FFD and the recently developed full-wave-equation depth migration (FWDM) schemes. The impulse responses in media with arbitrary velocity inhomogeneity demonstrate that the proposed migration scheme performs better at large angles than the classical GSP scheme. The wavefronts calculated in the dipping and salt dome models illustrate that this scheme can provide a precise wavefield calculation. The applications to the Marmousi model further demonstrate that the proposed approach can achieve better migration results in imaging small-scale and complex structures, especially in media with steeply dipping faults.
Introduction
The pre-stack depth migration can better deal with lateral velocity changes compared with the pre-stack time migration.
The one-way wave depth migration (OWDM) has played a vital role in seismic migration in past decades. Because the one-way wave equation is separated from the full-wave equation, the OWDM method inherits the advantage of the correct phase information of the two-way wave method and compensates for the shortcomings of the ray migration method. Therefore, the OWDM method has been widely used for processing practical seismic data. Extensive research has been carried out on one-way wave imaging algorithms. Approximation expressions have been implemented extensively to improve the OWDM method. The initial phase-shift method, proposed to solve the wave equation in the frequency-wavenumber domain, is limited by the geophysical assumption of a homogeneous medium (Gazdag 1978). This assumption is too stringent, as actual media feature significant lateral changes in velocity. To overcome this issue, some improved schemes have been derived, such as the phase-shift plus interpolation scheme and the subsequently developed split-step Fourier (SSF) scheme (Gazdag & Sguazzero 1984; Stoffa et al. 1990). Because of the limited compensation for low-order perturbation, these schemes can only offer correct imaging when the media present mild lateral velocity variations. To further improve the ability to image media with significant lateral velocity changes and large-angle structures, the Fourier finite-difference (FFD) and the almost simultaneously proposed generalised screen propagator (GSP) schemes have been applied to seismic data imaging (Ristow & Rühl 1994; Wu 1994; Jin & Wu 1999; De Hoop et al. 2000; Huang & Fehler 2000; Le Rousseau & De Hoop 2001). Both methods use a relatively higher-order correction term to process the lateral velocity variability. Consequently, better imaging results can be obtained in imaging steep dip angles than with the SSF method. Cheng et al.
(2001) proposed the finite-difference depth migration method in the frequency-space domain to better adapt to velocity variations. Various propagators with distinct precision and efficiency can be produced by different approximation strategies, such as pseudo-screen operators, complex screen operators, windowed screen propagators, optimised product propagators and optimal approximate propagators. These methods, based on the decomposition of the velocity into background and disturbance velocities for wavefield extrapolation, lead to better results compared to the SSF method, yet they remain unable to correctly image the wavefield in complex media. Based on the global optimum theory, some scholars have attempted to use more accurate multinomial coefficients in approximate expansions to increase the imaging accuracy of complex media with large angles, and developed optimisation strategies for the OWDM method (Biondi 2002; Zhu et al. 2008; Jia & Wu 2009; Chen 2010; Sun et al. 2010; Zhang et al. 2010). By using methods for approximating the propagator of the one-way wave equation, these approaches produced several improvements in large-angle imaging. However, they did not eliminate the limitation of the classical multistep calculation method in a mixed domain to suit arbitrarily varying velocity. Therefore, a new method of using approximation theory to approach the correct velocity in the one-way wave equation needs to be further developed. Eigen-decomposition is a mathematical method generally used for seismic data processing and inversion (Luo et al. 2003; Luk et al. 2005; Gheimasi et al. 2010; Wen et al. 2017). Some studies used this method to perform wave extrapolation in OWDM, but the performance was still inadequate. Kosloff & Kessler (1987) proposed an accurate depth migration method using eigen-decomposition to improve the conventional phase-shift method.
Yan & He (1994) derived a seismic wave propagating matrix in two-dimensional transversely isotropic media using the eigen-decomposition method and pointed out that this scheme can yield an accurate and straightforward propagator. Grimbergen et al. (1998) proposed an eigen-decomposition extrapolation method based on modal expansion, which divided the wavefield into guided and radiated wavefields. You et al. (2018) used the eigen-decomposition method to achieve an amplitude-preserved calculation in OWDM. You et al. (2019) developed a scheme using matrix multiplication for wavefield extrapolation in laterally varying media, and obtained calculation results of the Helmholtz operator similar to those of the eigen-decomposition scheme. However, these methods are limited by their finite calculation accuracy unless a higher-order approximation with extensive computation is used. Trigonometric function approximations based on the Legendre, Taylor series and Chebyshev polynomial expansions have been used in physics, seismic modeling, and data processing and inversion (Dikhaminjia et al. 2015; Naghadeh & Morley 2017; Araujo & Pestana 2019). Nevertheless, few studies have exploited the infinite-norm approximation of the trigonometric function to improve seismic wave exploration and imaging. In this study, a strategy for eigenvalue and eigenvector decomposition of the Helmholtz matrix was investigated for OWDM. A novel scheme was proposed for computing the one-way propagation operator with the best polynomial approximation based on the infinite norm. The computing accuracy of the Taylor series and the least-squares (LS) approximation was used for comparison with our scheme. The imaging performance was verified in the impulse responses of three typical models in the imaging of media with significant lateral inhomogeneity and complex structures.
The migrated sections computed by the classical one-way GSP and staggered-grid finite-difference (SFD) methods were compared with those obtained by the proposed scheme. Meanwhile, full-wave-equation depth migration (FWDM) methods deal with the wavefield with less approximation of the wave equation and can image waves at large angles compared to the OWDM methods (Kosloff & Baysal 1983; Sandberg & Beylkin 2009; You et al. 2016). The method proposed in You et al. (2016) overcame the surface constant-velocity assumption and obtained a better imaging amplitude than the conventional RTM method. In their study, the Helmholtz operator was used to achieve the wavefield extrapolation based on the first-order stress-velocity acoustic equation in matrix form, which relates to the proposed scheme. Therefore, we adopted this method as a reference to verify the proposed scheme. Finally, to further demonstrate the ability of the proposed scheme for imaging complex media with dipping angles, detailed comparisons with the Marmousi model were carried out.
2. Depth extrapolation with the best approximation algorithm
One-way wave-equation depth extrapolation
The frequency-domain full-acoustic wave equation in the case of two-dimensional media with constant density can be defined as equation (1), where x denotes the horizontal orientation of the model, z denotes its vertical orientation, v(x, z) is the velocity and p(x, z, ω) is the pressure field. Assuming that the medium is homogeneous, the two one-way wave equations divided from equation (1) can be expressed in terms of p_u and p_d, which represent the up-going and down-going pressure wavefields, respectively; s(x, ω) denotes the source wavelet, d(x, ω) represents the recorded seismic data at a certain depth and Λ is the square root operator, which is regarded as the most crucial parameter for the OWDM. The typical approach to calculate wavefields in the adjacent layer using a one-way wave migration operator is p(x, z + Δz, ω) = e^(±iΛΔz) p(x, z, ω).
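For a laterally homogeneous medium, the phase-shift step p(x, z + Δz, ω) = e^(iΛΔz) p(x, z, ω) above can be carried out in the wavenumber domain. The following is an illustrative NumPy sketch (not the paper's code; grid parameters are arbitrary), which also zeroes the evanescent components for which the vertical wavenumber would become imaginary:

```python
import numpy as np

def phase_shift_step(p, dz, v, omega, dx):
    """One depth step of phase-shift extrapolation for a single
    frequency slice p(x) in a laterally homogeneous medium."""
    nx = p.size
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)   # horizontal wavenumbers
    kz2 = (omega / v) ** 2 - kx ** 2              # vertical wavenumber squared
    kz = np.sqrt(np.maximum(kz2, 0.0))
    prop = np.exp(1j * kz * dz)
    prop[kz2 < 0] = 0.0                           # suppress evanescent waves
    return np.fft.ifft(np.fft.fft(p) * prop)
```

Because the propagator has unit modulus on the propagating components and zero on the evanescent ones, each step can only preserve or reduce the energy of the slice.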
In this work, an eigen-decomposition method for matrices is proposed, which uses the best polynomial approximation to calculate the operator Λ. In the frequency-space domain, the Helmholtz operator L is defined as L = ω²/v²(x, z) + ∂²/∂x² (equation (6)). In this equation, the diagonal elements of the matrix ω²/v²(x, z_i) consist of all velocity points of the horizontal coordinate at the same depth, which is why the proposed method can deal with lateral velocity variations; the matrix ∂²/∂x² is a second-order finite-difference discretisation, which can be substituted by a higher-order discretisation to improve the calculation precision. In terms of matrix transformation theory, the self-adjoint matrix L in equation (6) can be decomposed into a multiplication of eigenvalues and eigenvectors, L = VMVᵀ (equation (7)), where the superscript T represents the matrix transpose, the matrix M denotes the eigenvalues of L, and the matrix V denotes the eigenvectors of L. Given the n-order square matrix L, if there exists a set of constants {λ1, λ2, ⋯, λn} ∈ M and nonzero vectors V satisfying LV = λV, then M is defined as the eigenvalues and V as the eigenvectors of L. In the framework of matrix eigen-decomposition theory, the eigenvalues of matrix functions are functions of the eigenvalues of the original matrix and their eigenvectors remain unchanged; thus, the eigenvalues and eigenvectors of the sub-functions can be expressed accordingly. Next, by introducing the Euler formula, the one-way wave propagator can be further expressed in terms of the cosine and sine of the square root operator. Finally, the proposed propagator for the OWDM utilising the matrix eigen-decomposition can be expressed as in equation (13b). Schleicher et al. (2008) compared the cross-correlation and the deconvolution imaging conditions. From their conclusions, the most precise imaging condition is the latter.
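The eigen-decomposition extrapolation described above (diagonal velocity term plus a second-order difference matrix, symmetric eigen-decomposition, then a one-way step) can be sketched as follows. This is an illustrative NumPy sketch under those definitions, not the paper's implementation; modes with non-positive eigenvalues are zeroed, matching the evanescent-wave treatment discussed in the text:

```python
import numpy as np

def helmholtz_propagator(v_row, omega, dx, dz):
    """One-depth-level propagator V diag(exp(i sqrt(lam) dz)) V^T,
    built from L = diag(omega^2 / v^2) + d^2/dx^2 at depth z_i.
    v_row: velocities along x at this depth."""
    nx = v_row.size
    # second-order finite-difference approximation of d^2/dx^2
    D2 = (np.diag(np.full(nx - 1, 1.0), -1)
          - 2.0 * np.eye(nx)
          + np.diag(np.full(nx - 1, 1.0), 1)) / dx**2
    L = np.diag((omega / v_row) ** 2) + D2        # symmetric Helmholtz operator
    lam, V = np.linalg.eigh(L)                    # L = V diag(lam) V^T
    phase = np.where(lam > 0,
                     np.exp(1j * np.sqrt(np.maximum(lam, 0.0)) * dz),
                     0.0)                          # keep propagating modes only
    return (V * phase) @ V.T

def extrapolate(p, v_row, omega, dx, dz):
    """Apply one depth step to the frequency slice p(x)."""
    return helmholtz_propagator(v_row, omega, dx, dz) @ p
```

Because V is orthogonal and every retained eigenmode is multiplied by a unit-modulus phase, the step is non-expanding, which is the stability property the text attributes to discarding the negative eigenvalues.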
In the method proposed herein, the following imaging strategy is adopted: where R(x, z, ω) and S*(x, z, ω) are the receiver wavefield and the conjugate of the source wavefield, respectively. An issue that must be addressed is the evanescent waves in the depth wavefield extrapolation. It is well known that whether the square root operator is a positive real number depends on the operator L. When the operator L is greater than zero, physically significant waves propagate in the process of wavefield extrapolation. Conversely, when the square root operator becomes complex, the evanescent waves grow exponentially in the depth direction in the one-way wave migration algorithm. Evanescent waves must be suppressed during depth extrapolation because they cause instability in practice. Considering equation (10), negative eigenvalues lead to evanescent waves, and positive eigenvalues lead to physically significant propagating waves. In this study, the evanescent waves were removed by retaining the positive eigenvalues and discarding the negative ones.
Best approximation algorithm for trigonometric function calculation
Many methods can achieve a fast approximation of trigonometric functions. For instance, the table-lookup method works by setting a specific interval of function sampling with a fixed resolution, calculating the function values, and establishing a table to be stored in memory. The corresponding address can then be accessed to obtain the required trigonometric values when needed. The table-lookup method is efficient and straightforward, but it requires a certain amount of memory. Taylor series can also be used to evaluate the trigonometric functions. According to the Taylor theorem, a differentiable function can be approximated by the limit of its expansion. However, the convergence speed of the Taylor series is slow. Clearly, it is impossible to achieve high precision using a low-order approximation.
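The deconvolution-type imaging condition adopted earlier in this section (with receiver field R and conjugate source field S*) is not reproduced in this excerpt; such conditions are commonly stabilised with a small constant in the denominator. The following is a hedged sketch of one such stabilised form, with `eps` a user-chosen regulariser rather than a value from the paper:

```python
import numpy as np

def deconvolution_image(R, S, eps=1e-6):
    """Stabilised deconvolution imaging condition (sketch):
    I(x, z) = sum over omega of R * conj(S) / (|S|^2 + eps).
    R, S: complex arrays of shape (n_freq, nz, nx)."""
    num = R * np.conj(S)
    den = np.abs(S) ** 2 + eps
    return np.real(np.sum(num / den, axis=0))
```

When R is an exact scaled copy of S, the image recovers the scale factor at every point, summed over frequencies, which is the normalisation advantage of deconvolution over plain cross-correlation.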
Another method to approximate trigonometric functions uses least-squares polynomials, which aim to minimise the squared distance between the polynomial and the target function. The calculation of the LS polynomial coefficients is straightforward. Moreover, this method is easier to implement, and the approximation efficiency is much higher than that of the Taylor series. However, the LS polynomial method is not optimal, as some lower-order methods can achieve the same accuracy. In this study, the best approximation method for the cosine function is discussed and derived based on the infinite norm. Because the function f(m) = cos(m) is continuous on the essential period [0, 2π], an N-order polynomial approximation on this interval can be expressed with coefficients p_n (n = 0, 1, ⋯, N). To determine the polynomial coefficients, the maximum value of the difference between the polynomial g(m) and f(m) is minimised (Zhou & Guo 2013). According to numerical analysis theory, assuming that there is an extreme point m_i ∈ [0, 2π], if m = m_i, it is symmetric and an additional staggered extreme point exists; otherwise, they must be equal. In this way, the polynomial coefficients can be determined by the algorithm based on the Remez iterative method (Remez 1934). To achieve an accuracy of 10⁻³, the order of the approximation polynomial was set to six. Considering the properties of the cosine function, the interval range of the approximating function can be limited to within [0, π/2]. Thus, a piecewise approximation strategy was applied to ensure the most optimal approximation over the definition interval [0, 2π]. This way, the same accuracy can be achieved by a fifth-order polynomial approximation. Then, using the relation m = √X·Δz, the cosine term of the propagator in OWDM can be obtained, with the coefficients in equation (15) obtained by the method described previously.
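As a stand-in for the Remez exchange described above (which is more involved to implement), Chebyshev interpolation yields a near-minimax polynomial with very similar error behaviour. The sketch below fits cos on [0, π/2] and applies the piecewise symmetry reduction to cover [0, 2π]; the function names are illustrative, not from the paper:

```python
import numpy as np

def cheb_cos_approx(deg=5):
    """Near-minimax degree-deg polynomial fit of cos on [0, pi/2] via
    Chebyshev interpolation (a stand-in for the Remez exchange)."""
    return np.polynomial.chebyshev.Chebyshev.interpolate(
        np.cos, deg, domain=[0, np.pi / 2])

def cos_reduced(m, approx):
    """Piecewise evaluation: fold any angle into [0, pi/2] using the
    quadrant symmetries of the cosine."""
    m = np.mod(m, 2 * np.pi)
    quad = np.floor(m / (np.pi / 2)).astype(int)   # quadrant index 0..3
    r = m - quad * (np.pi / 2)
    vals = np.where(quad % 2 == 0, approx(r), approx(np.pi / 2 - r))
    sign = np.where((quad == 1) | (quad == 2), -1.0, 1.0)
    return sign * vals
```

With this reduction, a fifth-degree fit on the quarter period already stays well inside the 10⁻³ tolerance quoted in the text, illustrating why restricting the interval lowers the required polynomial order.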
Finally, wavefield extrapolation can be performed by substituting equation (18) into equation (13). As seen in figure 1, the 10th-, 15th- and 20th-order Taylor series and the ninth-order LS approximations are used to compare the approximation degree with a standard cosine function. It can be seen that the fifth-order infinite-norm polynomial approximation can achieve the same accuracy as the ninth-order LS approximation and the 20th-order Taylor series. To understand the calculation procedure of the proposed migration scheme more intuitively, the implementation process of the algorithm is as follows. Initialise the imaging data I(x, z) = 0.
(i) Loop over all depth levels.
(ii) Loop over all frequencies:
(a) compute the operator L using equation (6);
(b) compute the eigenvalues and eigenvectors of the matrix L using equation (7).
(1) For each source: perform depth extrapolation using equation (13b) with equation (18).
GPU technology was used to speed up the algorithm evaluation and achieve high computational efficiency. Eigen-decomposition is a computationally intensive part of the extrapolation process. When the wavefields at a certain depth were deduced for the initial shot gather, the adopted strategy included the storage of the eigenvalues and eigenvectors of the computed matrix using equation (7). Then, the decomposition results were applied to the other shot gathers at the same depth; this way, the repeated computation of the same eigenvalues and eigenvectors of the operator L was avoided.
Impulse response testing in media with arbitrarily varying velocity
The inhomogeneous medium model measured 1000 × and the classical one-way GSP scheme was used to compute the impulse responses for comparison with the proposed eigen-decomposition scheme using the fifth-order infinite-norm polynomial approximation. In this experiment, the impulse response of the SFD technique was used as a typical curve, and the FWDM scheme also served as a reference.
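The loop structure above, together with the caching of per-depth eigen-decompositions for reuse across shot gathers, can be outlined schematically. This is a sketch only (a simple cross-correlation image is used for brevity, and `propagator_fn` is an assumed callable that builds the one-way operator), not the paper's GPU implementation:

```python
import numpy as np

def migrate_shots(shots, vel, omegas, propagator_fn):
    """Schematic shot loop: build each depth level's propagator once per
    frequency (i.e. one eigen-decomposition) and reuse it for every shot.
    shots: list of (src, rec) dicts mapping omega -> complex slice of shape (nx,).
    vel: (nz, nx) velocity model. propagator_fn(v_row, omega) -> (nx, nx) operator."""
    nz, nx = vel.shape
    image = np.zeros((nz, nx))
    for iz in range(nz):
        # one decomposition per (depth, frequency), shared by all shots
        props = {w: propagator_fn(vel[iz], w) for w in omegas}
        for src, rec in shots:
            for w in omegas:
                src[w] = props[w] @ src[w]           # down-going source field
                rec[w] = props[w].conj() @ rec[w]    # up-going receiver field
                # simple cross-correlation image for brevity
                image[iz] += np.real(rec[w] * np.conj(src[w]))
    return image
```

The dictionary of propagators plays the role of the stored eigenvalue/eigenvector results in the text: the expensive decomposition is performed once per depth level and frequency, then applied to all remaining shot gathers.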
The typical wavefronts are outlined with white dotted lines in figure 3. On comparison, it was found that both the one-way GSP impulse response curves and the proposed matrix eigen-decomposition method with the best approximation could provide almost accurate phase information when the imaging angle was less than roughly 75°. However, for greater angles, the phase information generated by the eigen-decomposition method and the FWDM scheme matched the theoretical impulse response curve much better than the classical GSP scheme. One of the advantages of FWDM is that there is no limitation on the imaging angle. In figure 3, it can be seen that there were no noticeable differences in imaging angles among the SFD, FWDM and proposed methods. Although the proposed eigen-decomposition method uses a low-order polynomial approximation, the highly accurate approximation of the square root operator compensates for the errors of using the minimum or average velocity as a reference velocity to approach the actual velocity. In summary, the proposed method can generate an accurate wavefield calculation of one-way propagators, especially in the case of large angles with strong velocity differences.
Impulses in media with laterally varying velocity and dipping reflection
The velocity model measured 3000 × 3000 m with a 10 × 10 m grid. A variable velocity between 1500 and 2500 m s⁻¹ above the dipping reflector was introduced in the model. To generate a standard wavefield simulation, the SFD technique was used, and the results of the classical one-way GSP algorithm and the FWDM method were compared with those obtained by the proposed scheme. In this test, the forward-simulated wavefield calculated by the SFD technique was used as the base for comparison. When the wave passes through the inclined interface, the incident and transmitted waves separate in response to the velocity difference between the two sides of the interface, as indicated by the black arrow in figure 5a.
The extrapolated wavefield calculated by the migration method proposed in this study and the FWDM method matched the reference standards. However, the extrapolated wavefields calculated by the classical one-way GSP scheme show that the incident and transmitted waves do not separate at the interface and almost stick together, as indicated by the black arrow in figure 5b. The extrapolated wavefield computed by the classical one-way GSP scheme is significantly different from that obtained from the theoretical analysis and seems to yield more imaging artifacts in the migration section. Therefore, it was concluded that the proposed migration algorithm can achieve a more precise imaging performance compared to the classical one-way algorithm.
Impulse responses in the SEG salt dome model
The SEG salt dome model has strong velocity variations in the vertical and horizontal directions, which makes it an excellent test for forward modeling and pre-stack depth migration. The description of wavefields in a medium with significant velocity changes is always the focus and main challenge of seismic inversion and imaging. An accurate description of wavefield propagation is the basis of seismic imaging and inversion; thus, it is necessary to study the wavefield calculation, and especially the description of the wavefield in complex media. In the numerical experiments, the SFD technique, the classical GSP algorithm, the FWDM method used here and the migration algorithm proposed in this paper were all used to calculate the wavefield propagation in the SEG salt model. As seen in figure 6, the salt dome body velocity (4500 m s⁻¹) is approximately twice that of the surrounding media (2200 m s⁻¹), and the velocity model shows strong velocity variations. In this experiment, a Ricker wavelet of 30 Hz was chosen as the source and was placed at z = 0 m and x = 3000 m. The mesh interval of the velocity model was 10 × 10 m.
Figure 6. The salt dome velocity model.

When the seismic waves pass through the salt dome, the extrapolated wavefield calculated by the classical one-way GSP scheme presents more significant errors than that calculated by the SFD technique. As indicated by the black arrows in figure 7b, the one-way GSP scheme does not accurately calculate the wavefield propagation, which is detrimental to imaging the structure below the salt dome. In contrast, the extrapolated wavefield calculated by the algorithm proposed in this study is consistent with the results of the SFD forward simulation and closer to those of the FWDM method used here, as indicated by the arrows in figures 7a and 7d. Even allowing for the more abundant wave types generated by the FWDM method, including down-going waves, turning waves and prismatic waves, it can be concluded that the migration method presented herein provides a more stable wavefield calculation in media with strong velocity variations.

Multiple shots in the Marmousi model

The Marmousi model is commonly used to verify the capabilities of imaging methods. It features steeply dipping faults with large angles, a cutting layer with sharp velocity variations and multiple anticline structures, and was used in this study to test the imaging performance with multiple shots. The size of the model was 3.5 × 7.5 km, with a grid of 560 × 1200 nodes. The accurate velocity model was used to produce shot gathers with the SFD technique, as shown in figure 8a, and a smoothed velocity was used for migration, as shown in figure 8b. In this test, a wavelet with a center frequency of 60 Hz (serving as a moving seismic source) was used to compute the wavefield with a sampling interval of 0.0005 s.
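A smoothed migration velocity such as the one in figure 8b is typically obtained by local averaging of the exact model. The following box-filter sketch is our own stand-in for illustration, not the smoother actually used in the paper:

```python
import numpy as np

def smooth2d(v, half_width):
    """Box-filter smoothing of a 2-D velocity model: each output sample
    is the mean of a (2*half_width+1)^2 neighbourhood, with edge padding."""
    k = 2 * half_width + 1
    pad = np.pad(v, half_width, mode="edge")
    out = np.zeros(v.shape, dtype=float)
    for dz in range(k):
        for dx in range(k):
            out += pad[dz:dz + v.shape[0], dx:dx + v.shape[1]]
    return out / (k * k)

# A sharp velocity contrast is blurred while staying within physical bounds
v = np.full((60, 60), 1500.0)
v[:, 30:] = 2500.0
vs = smooth2d(v, 4)
```

Averaging keeps the smoothed model inside the range of the original velocities while softening the sharp interface, which is the property migration schemes rely on.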
The imaging sections generated by the classical one-way GSP scheme based on the accurate velocity model, and those generated by the proposed eigen-decomposition method with the best polynomial approximation based on the infinite norm using either the exact or a smoothed velocity model, are plotted in figure 9a-c, respectively. No evident difference can be seen among the migration results, which indicates that our method does not depend strongly on the accuracy of the velocity model. Low velocity dependence is essential for imaging real seismic data, because the actual underground velocity distribution is difficult to estimate correctly. The migrated results show that all migration methods give a good overall image of the underground structures. Hence, to pinpoint the differences between the two methods, several areas were chosen for a more detailed comparison, as shown in figure 9d-f. In particular, figure 9d shows the amplified part of the Marmousi velocity model containing the most complex structure, and figures 9e and 9f show the corresponding partial enlargements for the classical GSP scheme and the infinite-norm approximation migration scheme, respectively. This detailed comparison clearly shows the advantage of the proposed migration method: it handles wavefields at large angles and steep dips better than the classical one-way GSP scheme. As shown in the black dotted slanted box, the steeply dipping fault at the contact between different strata is imaged more clearly by the proposed migration scheme than by the classical GSP scheme, whose image is distorted and inaccurate. On the other hand, the region in the black dashed rectangle at the middle-deep level (a representative complex area for testing migration algorithms) remains a challenge.
However, the result of the proposed migration scheme in this region appears more explicit and has fewer artifacts than that of the classical GSP method. The dark arrows in the figure show that the proposed migration scheme produces migration results more consistent with the accurate velocity model than the one-way GSP scheme. Overall, these results demonstrate that the proposed migration scheme can indeed image complexes with large dipping angles and multi-interbedded formations.

In addition, the computational cost of the proposed scheme deserves attention. Table 1 lists the GPU device used and compares the computation times with those of the conventional one-way FFD propagators and the FWDM scheme used here. Note that, for a fair comparison, the most time-consuming parts (the quadratic-term corrections of the FFD method, the eigen-decomposition of the proposed method and the spectral projector suppressing evanescent waves in the FWDM method) were all computed on the GPU, with the results stored in RAM. Two computing strategies for implementing the proposed algorithm were compared, named strategy A and strategy B. Strategy A has already been introduced. Strategy B prepares all the source and receiver data at the same size in the frequency-space domain and extrapolates all the wavefields simultaneously. By not storing eigenvalues and eigenvectors, and by saving one loop over the shot gathers, strategy B achieves higher computational efficiency than strategy A, at the cost of greater memory requirements.

Discussions

The square-root operator is a fundamental quantity in OWDM, as it determines the imaging accuracy. The values of the Helmholtz operator depend on the medium velocity at a given depth for a single frequency point.
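The dependence of the depth propagator on velocity and frequency noted above can be illustrated with the simplest member of the one-way family, the constant-velocity phase-shift step. This is a generic textbook sketch, not the authors' eigen-decomposition implementation; the evanescent cut-off loosely mirrors the spectral projector mentioned for the FWDM comparison:

```python
import numpy as np

def phase_shift_step(u, v, w, dx, dz):
    """Extrapolate a single-frequency wavefield u (complex, nx samples)
    one depth step dz through a laterally homogeneous layer of velocity v.
    w is the angular frequency; evanescent components with |kx| > w/v
    are zeroed."""
    kx = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    kz2 = (w / v) ** 2 - kx ** 2
    op = np.where(kz2 > 0.0, np.exp(1j * np.sqrt(np.abs(kz2)) * dz), 0.0)
    return np.fft.ifft(np.fft.fft(u) * op)

# A vertically travelling plane wave (kx = 0) only accumulates phase w/v*dz
u = np.ones(64, dtype=complex)
w_ang = 2.0 * np.pi * 30.0
out = phase_shift_step(u, 2000.0, w_ang, 10.0, 10.0)
```

Because the operator is unimodular on propagating components and zero on evanescent ones, the step never amplifies the wavefield, which is what makes such propagators stable in depth continuation.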
Therefore, existing parallel computing techniques can be used to improve the computational efficiency by storing and reusing the values produced by the eigen-decomposition algorithm. In the three-dimensional case, the computational load rises sharply: the wavefield extrapolation operator then involves 3D tensors and their decompositions, which deserves further study. The proposed best polynomial approximation of trigonometric functions can be applied not only in OWDM, as in this study, but also in other depth migration schemes involving one-way wave operators, such as two-way depth migration (TWDM) and amplitude-preserving FWDM, to improve the imaging abilities of those methods. Moreover, although migration methods based on the full-wave equation have advantages in seismic exploration, including the extensively studied RTM and the recently fast-developing TWDM, they still have problematic or unresolved issues, such as the colossal memory overheads and artifacts of RTM and the evanescent-wave and boundary-condition problems of FWDM. That is, different migration schemes have their own characteristics and applicable conditions, and no single method is competent for every imaging condition. (Table 1 notation: f denotes the number of frequency points, x the number of horizontal grid points, sn the total number of shots, and z the number of depth grid points.) Thus, as an optimised one-way wave method with high-angle imaging ability and lower computational cost than FWDM, the proposed scheme is worth developing further.

Conclusions

In this study, a new scheme was introduced that integrates the Euler transformation and the best polynomial approximation of the cosine function based on the infinite norm into the matrix eigen-decomposition migration framework. From this, a one-way wave-equation migration scheme was deduced for imaging complex structures using a high-accuracy one-way propagator with a low-order polynomial approximation.
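The best polynomial approximation in the infinite norm that underpins the scheme can be approached in practice by interpolation at Chebyshev nodes, which is near-minimax. The sketch below is our own illustration (the degree and interval are arbitrary choices, not taken from the paper) of how small the uniform error of a low-order polynomial for the cosine can be on [−π, π]:

```python
import numpy as np

def cheb_cos_coeffs(deg, half_range=np.pi):
    """Polynomial coefficients (np.polyval ordering) interpolating cos at
    Chebyshev nodes on [-half_range, half_range]; near-minimax in the
    infinity norm."""
    k = np.arange(deg + 1)
    nodes = half_range * np.cos((2 * k + 1) * np.pi / (2 * (deg + 1)))
    return np.polyfit(nodes, np.cos(nodes), deg)

c = cheb_cos_coeffs(10)
x = np.linspace(-np.pi, np.pi, 2001)
max_err = np.max(np.abs(np.polyval(c, x) - np.cos(x)))
```

Raising the degree drops the uniform error quickly, which is why a low-order polynomial can already deliver a high-accuracy approximation of the propagator's trigonometric kernel.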
The reliability of the proposed scheme was demonstrated by analysing the propagation of wavefields in several models with significant lateral inhomogeneity in velocity. In the impulse response experiments involving an inhomogeneous medium, more accurate phase information was obtained from the proposed scheme than from the classical GSP scheme when both were compared with the theoretical curves generated by the SFD technique and the FWDM method. Similarly, the wavefields transmitted from a dipping reflector in a partially variable-velocity model showed that the imaging results obtained by the proposed scheme agreed with the theoretical propagation laws. Improving the accuracy and stability of propagators in complex media is a fundamental problem, and the more refined, higher-quality imaging results produced by the proposed method in the impulse response of the SEG salt model and in the multiple-shot migration results of the Marmousi model provide strong evidence of the ability and potential of the proposed scheme for solving it.
Primary extranodal marginal zone B-cell lymphoma of hard palate: A case report

Introduction: MALT (mucosa-associated lymphoid tissue) lymphomas comprise a heterogeneous group, originally thought to be derived from the marginal zone B-cells that are found surrounding B-cell follicles and within the adjacent lymphoepithelium. They arise most commonly in extranodal organs such as the stomach, major salivary glands and thyroid. Thus, they can be precisely described as extranodal marginal zone B-cell lymphomas (ENMZL). Here, we report a case of MALT lymphoma arising in the palatal minor salivary glands. Case Report: A 55-year-old woman presented with a two-year history of a left posterior palatal mass. Clinical investigations of the case included a computed tomography (CT) scan for the assessment of bone destruction and possible intra-maxillary extension. Histopathological features and immunohistochemistry findings were consistent with the diagnosis of extranodal marginal zone B-cell lymphoma of MALT type. The lesion was treated by complete surgical excision and followed for two years. Conclusion: Even though oral localization of ENMZL is rare, it should be included in the differential diagnosis of benign-appearing swellings of the oral cavity.

INTRODUCTION

Malignant lymphomas of the oral cavity are uncommon and account for 3.5% of all oral malignancies [1]. Lymphomas arising from the mucosa-associated lymphoid tissue (MALT) were originally described by Isaacson and Wright in the gastrointestinal tract, who specified them as B-cell lymphomas [2]. MALT lymphomas account for a significant proportion of extranodal lymphomas. In the head and neck region, apart from the salivary glands, the occurrence of these lymphomas is very rare; hence, very few cases of primary intraoral MALT lymphoma have been reported so far [3,4]. In the present paper, we report the case of a 55-year-old female with a soft tissue swelling involving the posterior hard palate.
Clinicoradiological profile was suggestive of a chronic inflammatory lesion or a benign neoplasm of the palatal minor salivary glands. Microscopic examination

CASE REPORT

A 55-year-old female presented with a swelling in the posterior palatal region of two years' duration. Intraoral examination revealed a swelling involving the posterior hard palate on the left side, which was soft in consistency (Figure 1). Computed tomography (CT) scan confirmed an oval soft tissue density lesion (19 × 14 mm) along the left posterolateral aspect of the hard palate, partly extending along the soft palate with no evidence of intramaxillary extension of the lesion (Figure 2). Diagnostic incisional biopsy of the lesion revealed benign lymphoid hyperplasia. After complete surgical excision, microscopic examination of the lesion showed diffuse infiltration of monomorphic small round cells within the connective tissue. Sheets of lymphoid cells were separated from the overlying epithelium by a well-defined band of fibrovascular tissue (Figure 3). Some lymphoid follicles with follicle center and mantle zone were seen. Many minor salivary gland acini and ducts were observed, some of which showed infiltration by lymphoid cells, forming lymphoepithelial lesions (Figure 4). On higher magnification, the round cells were centrocyte-like, with scant cytoplasm and nuclei containing clumped chromatin. Some plasma cells were also noted. In addition to routine hematoxylin and eosin staining, a panel of immunohistochemical markers was used to arrive at the final diagnosis. Leukocyte common antigen (LCA) was used to confirm that the neoplasm was indeed composed of lymphocytes. Monoclonal intracytoplasmic immunoglobulin was detected by immunohistochemistry, which confirmed the neoplastic origin of the B-cells (Figure 5). The tumor cells were PanCK−, CD5−, CD3−, CD10−, CD20+, and CD23−.
Immunohistochemistry for additional markers was not performed because the aforementioned histopathological findings were considered diagnostic for MALT lymphoma. Because fresh tissue was not available for study, flow cytometry and cytogenetic analysis were not performed. Peripheral blood examination and protein electrophoresis were within normal limits. A bone marrow biopsy was negative for the presence of neoplastic cells. The present case is consistent with the diagnosis of low-grade MALT lymphoma. The patient had an uneventful postoperative course.

DISCUSSION

About 20% of oral non-Hodgkin's lymphomas arise in palatal soft tissues [5]. Extranodal marginal zone lymphomas (ENMZL) constitute a heterogeneous group showing neoplastic cells resembling normal marginal zone B-cells [5]. These lymphomas are characterized by their mucosal and glandular tissue localization and are commonly referred to as mucosa-associated lymphoid tissue (MALT) lymphomas [6]. ENMZL have a peculiar clinicopathological profile that sets them apart from other lymphomas. They tend to remain localized for prolonged intervals, and the lesion is commonly confused with an inflammatory process (as evident in the present case). However, the presence of monoclonality in these lesions correlates with the risk of dissemination and therefore supports their designation as lymphoma [7]. The etiopathogenesis of MALT lymphoma is poorly understood. Chronic inflammatory conditions such as chronic sinusitis, Sjogren's syndrome, benign lymphoepithelial lesion or myoepithelial sialadenitis (MESA) have been suggested as precursors for the development of MALT lymphoma in this location [3,7]. The indolent nature of the disease in these cases is manifested by persistence of the lesion without overt clinical evidence of distant spread even after 14 months [8]. Persistence of the lesion for a duration of 24 months in the present case is suggestive of MALT lymphoma.
Histopathological distinction between a reactive lymphoid infiltrate and MALT lymphoma can be difficult. According to Vega et al., the larger the lymphoid infiltrate, the greater the likelihood of lymphoma [7]. A monomorphous lymphoid population exhibiting centrocyte-like or monocytoid morphology, forming wide zones surrounding epimyoepithelial islands, is a useful histological finding suggestive of MALT lymphoma [7]. Kojima et al. described two distinct histopathological patterns of primary oral MALT lymphoma. The first pattern is characterized by occasional follicular colonization and the presence of lymphoepithelial lesions. The other pattern shows prominent follicular colonization resembling the "floral variant" of follicular lymphoma [4]. The present case showed the histological features described in the former pattern. In contrast to the relatively poor outcome of other B-cell lymphomas, MALT lymphomas have a better prognosis, making them amenable to complete surgical removal [3]. Survival in the present case, two years after diagnosis, is consistent with this finding.

CONCLUSION

The rarity of MALT lymphoma in the oral cavity, a clinical profile resembling a benign process, peculiar histopathology and varied immunohistochemistry findings make such cases interesting to study. Considering their low-grade malignant behavior and indolent clinical course, ENMZL should be included in the differential diagnosis of benign-appearing swellings of the oral cavity.
Distinct functions of chemokine receptor axes in the atherogenic mobilization and recruitment of classical monocytes

We used a novel approach of cytostatically induced leucocyte depletion and subsequent reconstitution with leucocytes deprived of classical (inflammatory/Gr1hi) or non-classical (resident/Gr1lo) monocytes to dissect their differential role in atheroprogression under high-fat diet (HFD). Apolipoprotein E-deficient (Apoe−/−) mice lacking classical but not non-classical monocytes displayed reduced lesion size and macrophage and apoptotic cell content. Conversely, HFD induced a selective expansion of classical monocytes in blood and bone marrow. Increased CXCL1 levels, accompanied by higher expression of its receptor CXCR2 on classical monocytes and inhibition of monocytosis by CXCL1 neutralization, indicated a preferential role for the CXCL1/CXCR2 axis in mobilizing classical monocytes during hypercholesterolemia. Studies correlating circulating and lesional classical monocytes in gene-deficient Apoe−/− mice, adoptive transfer of gene-deficient cells and pharmacological modulation during intravital microscopy of the carotid artery revealed a crucial function of CCR1 and CCR5, but not CCR2 or CX3CR1, in classical monocyte recruitment to atherosclerotic vessels. Collectively, these data establish the impact of classical monocytes on atheroprogression and identify a sequential role of CXCL1 in their mobilization and of CCR1/CCR5 in their recruitment.

***** Reviewer's comments *****

Referee #1:

This paper from Soehnlein and colleagues probes some of the mechanistic consequences of the observation that classically activated monocyte levels increase in hypercholesterolemic mice. By a positive-selection, adoptive transfer approach in animals made leukopenic by cyclophosphamide administration, they showed that classically activated monocytes aggravate early atherogenesis.
In an extensive series of experiments, the authors explore the contribution of chemokines and their receptors to this phenomenon. Their data support a role of CCR1 and CCR5 and, conversely, do not find that CCR2 or CX3CR1 contribute to monocyte recruitment. Aspects of this work are novel, and the mechanistic insights add value to the field of atherogenesis in mice. Cells for reconstitution were prepared by positive selection. Could this treatment have had confounding effects by activating WBC or altering their fate upon transfer? Does the cyclophosphamide treatment affect lymphocyte functions related to atherogenesis? Does cyclophosphamide have a general effect on collagen? All cyclophosphamide-treated animals had lower levels of collagen regardless of the monocyte population reconstituted (Fig 1G). Why is apoptosis greater in the CM-depleted lesions (Fig 1F)? With respect to the experiments in Fig 3, only one time point is reported for the effects of chemokine knockouts on CM amounts in lesions. How long do CM persist in lesions before maturation to macrophages, and could the genetic manipulations alter this rate, such that a kinetic analysis would be more informative than a single time point? The authors are quite aware of what they call "stage-dependent" effects of interventions on atherosclerosis. In view of this important issue, they should try to temper throughout this manuscript the strength of their conclusions that are based on study of only one time point. The authors should omit the speculation regarding targeting of HDL on page 11. This is unjustified given the current state of knowledge. The authors should use a better term than "atherosclerotic endothelium" on page 10. The use of non-standard abbreviations is not helpful to the reader. The field is already confused by use of Gr and Ly6 nomenclature. Is the use of CM and NCM here needed?
Referee #2:

In the manuscript "Distinct function of chemokine receptor axes in the atherogenic mobilization and recruitment of classical monocytes" Soehnlein, Weber and colleagues reexamine the role of classical monocytes in atherosclerosis with an emphasis on chemokine receptors. They conclude that: (1) Ly-6Chigh classical monocytes are selectively atherogenic; (2) the CXCR2-CXCL1 chemokine axis is responsible for the mobilization of classical monocytes to the blood; (3) CCR1 and CCR5 are the main receptors that promote classical monocyte accumulation in lesions. Overall, the study recapitulates previous observations with new methods while challenging other findings. The question is whether the authors provided sufficient evidence to support their conclusions.

1. The first conclusion is that classical monocytes are atherogenic. This is the basis of Figure 1. The conclusion recapitulates previous studies with a new, tour-de-force technique. In fact, it's remarkable that the authors see such effects on disease simply by adoptively transferring monocytes. However, it is unclear from Figure 1 whether the effect has anything to do with the accumulation of monocytes. The authors show in Supporting Figure 8 that adoptively transferred monocytes are indeed found in recipient blood. Do you also see adoptively transferred monocytes and macrophages in the aorta? Given the data in Figure 1, lesional macrophages in group II should be CD45.2 and not CD45.1.

2. Next, the authors conclude on the basis of Figure 2 that the CXCR2-CXCL1 axis is crucial to mobilize monocytes during HFD. The authors argue in Supplemental Figures 2 and 3 that the other chemokines typically associated with monocyte recruitment in atherosclerosis are dispensable to hypercholesterolemia-induced atherosclerosis. The data are interesting. Is there statistical significance in G between HFD-isotype and HFD-anti-CXCL1? There should be if we are to conclude that CXCL1 is important.
The increase shown by flow cytometry of CXCR2 on classical monocytes is modest. The authors should substantiate the finding with another method. Also, is there any impact on atherosclerosis with repeated anti-CXCL1?

3. Finally, the authors show that CCR1 and CCR5 but not CCR2 or CX3CR1 are involved in monocyte accumulation in lesions. This is the most controversial part of the paper and probably the most important. It must therefore be very convincing; unfortunately, it is not. First, the authors should report lesion size and number of macrophages in CCR1 and CCR5 apoE mice. In Figure 3 the authors show reduced numbers of monocytes in the aorta in CCR1 and CCR5 mice, which suggests a problem with influx but could also mean maturation, survival, or exit. Second, the CFSE experiment is not convincing. There are almost no CFSE cells accumulating in lesions, so even with the stars that denote statistical significance in Figure 4B the data are weak. The authors should perform experiments to more convincingly show that CCR1 and CCR5 are the essential chemokine receptors. Overall, this is an interesting paper but it has not yet convincingly proven its most important conclusions.

Referee #3:

Soehnlein et al describe in their study a new and very topical way to functionally differentiate between the contributions of different monocyte subsets to atheroprogression. The authors have also extensively investigated the involvement of various chemokines and chemokine receptors, and their conclusions about the underlying chemokine network seem sound and valid. This reviewer is, however, missing a treatment of non-signalling chemokine co-receptors in this otherwise highly interesting study. Before acceptance for publication it is therefore suggested to:
- include references to decoy receptors such as D6 and discussions thereof
- include references to glycosaminoglycan (GAG) and proteoglycan chemokine co-receptors such as heparan sulfate and discussions thereof
- investigate (e.g.
by real-time PCR or by FACS analyses) the involvement of the above-mentioned non-signalling chemokine co-receptors. Since many chemokine-targeting therapies have failed for various reasons in the past, it is necessary to aim at a fairly complete picture of chemokine/receptor/co-receptor networks before seriously speculating about therapeutic targeting of CCL5/CCR1/CCR5 interactions.

Referee #1:

Cells for reconstitution were prepared by positive selection. Could this treatment have had confounding effects by activating WBC or altering their fate upon transfer?

Reply: This question raised by the referee is certainly an important point that needs to be discussed. However, we would like to point out that groups II through IV received white blood cells exposed to the same cocktail of antibodies. Since FACS-sorting depletion of individual monocyte subsets had distinct effects on lesion sizes, we were confident that our antibody-based selection strategy did not alter the activation of white blood cells. To further investigate whether the positive selection had an effect on leukocyte activation, we FACS-sorted white blood cells based on their FSC/SSC properties. In one instance the cells were incubated with the antibody cocktail used in figure 1 (anti-CD45, anti-CD115, anti-Gr1), whereas in the other instance they remained untouched. To assess leukocyte activation, we measured surface markers (CD11b, CD62L), the production of reactive oxygen species, and the exposure of phosphatidylserine. In none of these measurements did antibody-based selection have a significant impact on the function or phenotype of monocytes or neutrophils (new Supporting Information Figure 1).

Does the cyclophosphamide treatment affect lymphocyte functions related to atherogenesis?

Reply: This is certainly an important question and our reply to it has various complex aspects. As for the question above, we would like to point out that groups I through IV received the same dose of cyclophosphamide (CPM).
Hence, whatever the effect of CPM on lymphocyte or resident cell function may be, it can be considered to be the same in each of the groups. Beyond this, groups II through IV were repopulated with white blood cells (the majority of which are lymphocytes that can also be detected in the circulation, see Supporting Information Figure 2) which were not exposed to CPM and hence are functionally not impaired. Moreover, lesion sizes in groups I through IV are specifically modulated by the presence or absence of classical monocytes, making a major contribution of lymphocytes unlikely. Finally, in models of diet-induced atherosclerosis in Apoe−/− mice, myeloid cells have a dominant role, while lymphocytes are known to play only minor roles (e.g. Dansky et al., PNAS, 1997). To experimentally assess the impact of CPM on lymphocyte function we repeated groups 0 and I with a smaller number of Apoe−/− mice. The capacity of lymphocytes to proliferate was tested with a cocktail containing anti-CD3, anti-CD28, and IL2 (see figure A above). In these experiments lymphocytes from CPM-treated mice proliferated, although the proliferation rate was reduced when compared to saline-treated mice. However, serum levels of IFNγ, a marker cytokine for Th1 polarization, were not reduced in CPM-treated mice (see figure B above). Taken together, it appears that although lymphocyte proliferation is affected by CPM, this does not impact IFNγ production. Together with the minor role of lymphocytes in Apoe−/− mouse models of atherosclerosis, the consistency of CPM treatment in all groups, and the transfer of native lymphocytes upon WBC reconstitution, we believe that the effect of CPM on lymphocytes is of negligible importance in this study.

(Rebuttal figure legend: Cyclophosphamide affects lymphocyte proliferation but not IFNγ production. Apoe−/− mice received HFD for 8 weeks. During the last 4 weeks, mice were treated with cyclophosphamide (100 mg/kg BW, 2x/week, i.p.) or saline. A: Lymphocyte proliferation was assessed following stimulation with anti-CD3, anti-CD28, and IL2 (proliferation cocktail). B: IFNγ concentration in the serum as determined by ELISA.)

Does cyclophosphamide have a general effect on collagen? All cyclophosphamide-treated animals had lower levels of collagen regardless of the monocyte population reconstituted (Fig 1G).

Reply: The direct effect of cyclophosphamide on collagen synthesis is well documented (e.g. Hansen & Lorenzen, Acta Pharmacol Toxicol, 1977) and we refer to this work in the revised version of the manuscript. Despite the decreased collagen synthesis in cyclophosphamide-treated mice, it is interesting to see that individual monocyte populations do not further affect local collagen metabolism.

Why is apoptosis greater in the CM-depleted lesions (Fig 1F)?

Reply: This is a legitimate question raised by the referees. Classical monocytes exhibit a higher capacity to phagocytose bacteria, nanoparticles, as well as apoptotic cells (Settles et al., PLoS One, 2011; Wildgruber et al., PLoS One, 2009; Nahrendorf et al., J Exp Med, 2007; Grage-Griebenow et al., Immunobiology, 2000) when compared to non-classical monocytes. Hence, the accumulation of apoptotic cells in lesions of mice receiving WBC depleted of classical monocytes likely reflects the lack of monocytic cells with higher phagocytic capacity. We have incorporated this explanation into the result section (page 6).

Table 4). These data largely corroborate the data obtained in mice receiving high-fat diet for 8 weeks, which form the foundation of this manuscript. Whereas macrophage accumulation was more markedly reduced in Apoe−/−Cx3cr1−/− mice than in Apoe−/−Ccr2−/− and Apoe−/−Ccr5−/− mice at both time points, consistent with a role of CX3CR1 in macrophage survival (e.g. Landsman et al., Blood, 2009), the absence of CCR1 limited macrophage accumulation at early time points but appeared to favour macrophage accumulation at later stages (page 9).
In the revised manuscript, we have also emphasized that further experimentation is needed to address the role of chemokines in the processes subsequent to arterial monocyte infiltration (page 15). We would further like to point out that figures 3 and 4 exclusively focus on the interface of monocyte transition from the blood stream to the arterial wall. Any subsequent step is subject to multiple complex influences involving maturation, survival, polarization, and egress. We do not believe that the complexity of the post-infiltration cascade can be assessed by correlative data or by assessment of monocyte/macrophage ratios at different time points.

The authors are quite aware of what they call "stage-dependent" effects of interventions on atherosclerosis. In view of this important issue, they should try to temper throughout this manuscript the strength of their conclusions that are based on study of only one time point.

Reply: In conjunction with the new Supporting Information Table 4, we have now integrated discussions regarding stage-dependent effects at various places in the manuscript.

The authors should omit the speculation regarding targeting of HDL on page 11. This is unjustified given the current state of knowledge.

Reply: In accordance with the referee's comment, we have now omitted our statement regarding HDL targeting.

The authors should use a better term than "atherosclerotic endothelium" on page 10.

Reply: We have now changed this phrase to "activated endothelium covering atherosclerotic lesions".

The use of non-standard abbreviations is not helpful to the reader. The field is already confused by use of Gr and Ly6 nomenclature. Is the use of CM and NCM here needed?

Reply: We agree with the reviewer and hence replaced CM and NCM by the terms classical and non-classical monocytes throughout the manuscript. These terms were previously recommended as standard terms (Ziegler-Heitbrock L et al., Blood, 2010).

Referee #2:

1.
The first conclusion is that classical monocytes are atherogenic. This is the basis of Figure 1. The conclusion recapitulates previous studies with a new, tour-de-force technique. In fact, it's remarkable that the authors see such effects on disease simply by adoptively transferring monocytes. However, it is unclear from Figure 1 whether the effect has anything to do with the accumulation of monocytes. The authors show in Supporting Figure 8 that adoptively transferred monocytes are indeed found in recipient blood. Do you also see adoptively transferred monocytes and macrophages in aorta? Given the data in Figure 1, lesional macrophages in group II should be CD45.2 and not CD45.1. Reply: To address this very important point raised by the referee, we have now repeated the experiments outlined for group II in figure 1 with CD45.1 recipient mice and CD45.2 donor leukocytes. To assess the presence of CD45.2 cells in the aorta, we stained aortic root sections for CD45.2 and CD45.1 and assessed the presence of CD45.1+ and CD45.2+ leukocytes in aortas by flow cytometry. In both analyses, we could detect CD45.2 donor-derived leukocytes in abundant numbers. These data are now incorporated as new Supporting Information Figure 3. 2. The authors show in Supplemental Figures 2 and 3 that the other chemokines typically associated with monocyte recruitment in atherosclerosis are dispensable to hypercholesterolemia-induced atherosclerosis. The data are interesting. Is there statistical significance in G between HFD-isotype and HFD-anti-CXCL1? There should be if we are to conclude that CXCL1 is important. The increase shown by flow cytometry of CXCR2 on classical monocytes is modest. The authors should substantiate the finding with another method. Also, is there any impact on atherosclerosis with repeated anti-CXCL1?
Reply: Stimulated by this interesting and highly relevant array of questions, we have now initiated an additional set of experiments, in which we further dissected the role of CXCL1 in hypercholesterolemia-induced monocytosis and subsequent lesion formation. Apoe-/- mice were fed a high-fat diet for 4 weeks, during which they received an anti-CXCL1 or an isotype control antibody. While mice injected with the isotype control antibody developed a classical monocytosis, mice receiving an anti-CXCL1 antibody did not (new Figure 2G). In line with this, classical monocytes in the bone marrow and spleen of mice injected with the antibody directed against CXCL1 exhibited a trend towards increased counts (new Supporting Information Figure 8), indicating a retention of classical monocytes at these two sites of monocyte production. Aortic root lesion sizes of mice treated with anti-CXCL1 were smaller when compared to mice receiving the isotype control IgG, and these mice further displayed reduced accumulation of classical monocytes as well as macrophages in the aorta, as assessed by flow cytometry (new Figure 2H/I). Data from our chemokine receptor PCR array further indicated that CXCR2 expression on classical monocytes is indeed increased under conditions of hypercholesterolemia, thus confirming our flow cytometry analyses. However, we must agree with the referee that the functional significance thereof is not clear, and we hence moved these data into the supplementary information (Supporting Information Figure 7). Next, the authors conclude on the basis of Figure 2 that the CXCR2-CXCL1 axis is crucial to mobilize monocytes during HFD. 3. Finally, the authors show that CCR1 and CCR5 but not CCR2 or CX3CR1 are involved in monocyte accumulation in lesions. This is the most controversial part of the paper and probably the most important. It must therefore be very convincing - unfortunately, it is not.
First, the authors should report lesion size and number of macrophages in CCR1 and CCR5 apoE mice. In figure 3 the authors show reduced numbers of monocytes in the aorta in CCR1 and CCR5 mice, which suggests a problem with influx but could also mean maturation, survival, or exit. Second, the CFSE experiment is not convincing. There are almost no CFSE cells accumulating in lesions, so even with the stars that denote statistical significance in Figure 4B the data are weak. The authors should perform experiments to more convincingly show that CCR1 and CCR5 are the essential chemokine receptors. Reply: In light of previous publications in the field, we certainly agree with the reviewer that the information provided in figures 3 and 4 may be somewhat controversial and hence requires corroboration. However, we would like to point out that figure 4 is a consequence of figure 3, the latter displaying a lack of correlation between circulating and lesional classical monocytes. As this could indicate a defect in recruitment as well as alterations in maturation or egress, we performed the experiments detailed in figure 4. Hence, we designed two alternative strategies that allow us to specifically address the interface of monocyte transition from the blood stream to the arterial wall independently of homeostatic or post-emigration processes. To our knowledge, apart from the three approaches employed here (correlation studies, intravital microscopy using short-term treatment with inhibitors, and adoptive transfer experiments with short circulation time post transfer), there is no additional experimental setup that allows one to specifically investigate infiltration of classical monocytes independently of homeostatic and post-recruitment mechanisms. Even murine parabiosis models, which for ethical reasons are impossible to perform in Europe, have their limitations.
In these setups, the accumulation of monocytes in arterial lesions over several weeks is subject to influences by many mechanisms of monocyte differentiation, polarization, maturation, and egress, and hence no clear-cut conclusion on emigration can be drawn. To further corroborate the data provided in figures 3 and 4, we have now added an extensive table summarizing lesion sizes, circulating monocyte counts, circulating classical monocyte counts, prevalence of classical monocytes and macrophages in the aorta, as well as the correlation of circulating and lesional classical monocytes. All these parameters are provided for Apoe-/-, Apoe-/-Ccr1-/-, Apoe-/-Ccr2-/-, Apoe-/-Ccr5-/-, and Apoe-/-Cx3cr1-/- mice at two different time points of high-fat diet feeding (new Supporting Information Table 4). To further substantiate data from the adoptive transfer experiments, we employed the same strategy but instead used the CD45.1/CD45.2 system to track classical monocytes. Based on improved discrimination of donor cells within the aortas of CD45.1/Ldlr-/- mice, we can corroborate both the number of lesional monocytes as well as the importance of CCR1 and CCR5 for arterial monocyte influx (new Figure 4C/D). The numbers of donor-derived lesional monocytes in both adoptive transfer approaches employed in this study are in the range of what was found in previous studies using similar approaches (e.g. Tacke et al., J Clin Invest, 2007) and may hence truly reflect monocyte recruitment rates. Thus, we believe that further studies are required to dissect rates of arterial monocyte turn-over. Referee #3: Before acceptance for publication it is therefore suggested to - include references to decoy receptors such as D6 and discussions thereof Reply: As suggested by the referee, we have made reference to decoy receptors in the discussion section (page 14).
- include references to glycosaminoglycan (GAG) and proteoglycan chemokine co-receptors such as heparan sulfate and discussions thereof Reply: As suggested by the referee, we have included references to GAGs and chemokine co-receptors in the discussion section (pages 14/15). - investigate (e.g. by real-time PCR or by FACS analyses) the involvement of the above-mentioned non-signalling chemokine co-receptors Reply: Various studies will be required to fully investigate and understand the role of non-signalling chemokine co-receptors in atherosclerosis. We thank the reviewer for giving us the chance to provide initial data on the role of such receptors in atherosclerosis. Here, we investigated the expression of the decoy receptors D6 and CXCR7 and the CCL5 co-receptor CD44 on classical monocytes by flow cytometry. In these experiments we could not find increased expression under conditions of hypercholesterolemia (new Supporting Information Figure 10). Since many chemokine-targeting therapies have failed for various reasons in the past, it is required to aim at a fairly complete picture of chemokine/receptor/co-receptor networks before seriously speculating about therapeutic targeting of CCL5/CCR1/CCR5 interactions. Reply: We fully agree with the referee on this point. We have therefore removed speculations about possible therapeutic targeting of the CCL5-CCR1/-CCR5 axis. Thank you for the submission of your revised manuscript to EMBO Molecular Medicine. We have now received the enclosed reports from the referees that were asked to re-assess it. As you will see, the reviewers are now globally supportive and I am pleased to inform you that we will be able to accept your manuscript pending the following final amendments: 1) Please make sure you modify your Abstract and Discussion as suggested by Reviewer 1. 2) The text in the figures is rather blocky/blurry. Please provide higher resolution versions, and check to make sure that text/line-art remains clear even when zooming in.
3) Where you have not done so, please follow the other instructions listed below. I strongly advise you to submit your revised manuscript within two days to ensure, provided the changes have been satisfactorily applied, acceptance before the holiday season. I look forward to seeing a revised form of your manuscript as soon as possible. ***** Reviewer's comments ***** Referee #1 (General Remarks): The authors have responded appropriately to my major concerns, and provided relevant additional new experimental data. I suggest that they remove the modifier "unequivocally" before "establish" in the abstract and in the last paragraph of the discussion, as it is redundant and unjustified given the contrived nature of their model. I also think their dismissal of T-cells in atherosclerosis on the basis of the Dansky paper ignores a large body of other data regarding modulatory effects of T cells in atherogenesis. Referee #2 (Comments on Novelty/Model System): This is a very strong revision. The authors have addressed all my questions very well. Referee #2 (General Remarks): I have no more remarks.
Replication-induced DNA secondary structures drive fork uncoupling and breakage

Abstract

Sequences that form DNA secondary structures, such as G-quadruplexes (G4s) and intercalated-Motifs (iMs), are abundant in the human genome and play various physiological roles. However, they can also interfere with replication and threaten genome stability. Multiple lines of evidence suggest G4s inhibit replication, but the underlying mechanism remains unclear. Moreover, evidence of how iMs affect the replisome is lacking. Here, we reconstitute replication of physiologically derived structure-forming sequences and find that a single G4 or iM arrests DNA replication. Direct single-molecule structure detection within solid-state nanopores reveals that structures form as a consequence of replication. Combined genetic and biophysical characterisation establishes that structure stability and probability of structure formation are key determinants of replisome arrest. Mechanistically, replication arrest is caused by impaired synthesis, resulting in helicase-polymerase uncoupling. Significantly, iMs also induce breakage of nascent DNA. Finally, stalled forks are rescued only by a specialised helicase, Pif1, but not by Rrm3, Sgs1, Chl1 or Hrq1. Altogether, we provide a mechanism for quadruplex structure formation and resolution during replication and highlight G4s and iMs as endogenous sources of replication stress.
Introduction

Eukaryotic DNA replication is a highly regulated process carried out by a complex molecular machine known as the replisome (Bell & Labib, 2016). Parental duplex DNA is unwound by the replicative helicase, CMG, and nascent DNA is subsequently synthesised by polymerase ε on the leading strand, or polymerase δ on the lagging strand. The replisome must replicate all regions of the genome accurately while encountering various obstacles such as DNA damage, protein barriers and transcription-replication collisions (reviewed in Zeman & Cimprich, 2014). All of these can lead to replication stress, which poses a challenge to genome integrity (Gaillard et al, 2015).

Another potential barrier to replisome progression is the DNA template itself. Aside from the canonical B-DNA conformation, certain DNA sequences can fold into secondary structures, particularly from ssDNA exposed during replication. Examples of well-characterised secondary structures include hairpin (Gacy et al, 1995; Nadel et al, 1995), triplex (H-DNA) (Mirkin et al, 1987), G-quadruplex (G4) (Fry & Loeb, 1994) and intercalated-Motif (iM) (Gehring et al, 1993) structures, which are thought to act as barriers to replication. For example, hairpin-forming repeats cause replication stalling in bacteria, yeast, and human cells (Voineagu et al, 2008). Moreover, two unbiased studies have mapped sites of replication fork collapse in vivo, highlighting poly(dA) sites (Tubbs et al, 2018) and a variety of structure-forming repeats (Shastri et al, 2018). Therefore, secondary structures may be responsible for the fact that repetitive sequences are unstable and give rise to genomic instability (reviewed in Brown & Freudenreich, 2021).
G4 structures are formed from guanine-rich sequences through the stacking of G-quartets formed by Hoogsteen base pairing (Sen & Gilbert, 1988). Their abundance increases during S-phase, suggesting they arise as a consequence of replication (Biffi et al, 2013; Di Antonio et al, 2020). G4-forming sequences are defined by a motif consisting of four consecutive tracts of at least three guanines (G-tracts), separated by one to seven non-G nucleotides (G3N1-7G3N1-7G3N1-7G3) (Huppert & Balasubramanian, 2005), which are referred to as 'loops'. The length and composition of these loops affect the stability of the G4 structure. For example, shorter loops composed of thymine residues correlate with more thermally stable structures, while longer loops result in less stable structures (Piazza et al, 2015). G4s can exist in a variety of structural topologies, depending on the orientation of the G-rich strands relative to one another (parallel, anti-parallel or hybrid; Ou et al, 2008). G4-forming sequences are abundant in the human genome and are known to have a range of physiological roles. Firstly, they have been associated with telomere maintenance (Maiti, 2010). Secondly, they are enriched in promoter regions (Chambers et al, 2015), where they can affect the transcriptional state of genes (reviewed in Robinson et al, 2021). More recently, genome-wide ChIP-Seq of G4 structures revealed that these structures mark sites of actively transcribed genes (Hansel-Hertsch et al, 2016). Finally, they are implicated in human diseases. For example, G4s correlate with telomere fragility and induce both genetic and epigenetic instability (Vannier et al, 2013; Schiavone et al, 2014; Papadopoulou et al, 2015) and have been shown to disrupt repressive chromatin structures (Sarkies et al, 2010). Moreover, G4s, such as those found at the c-MYC gene promoter, are enriched at mutation hotspots in an array of cancers (De & Michor, 2011; Wang & Vasquez, 2017).
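The consensus motif above maps directly onto a simple pattern search. The sketch below scans a sequence for the minimal G3N1-7 motif from Huppert & Balasubramanian (2005); it illustrates the definition only (the function name is ours, not a published tool), and real G4 predictors use additional scoring.

```python
import re

# Minimal G4 consensus (Huppert & Balasubramanian, 2005): four tracts of at
# least three guanines separated by loops of 1-7 nucleotides. Illustrative
# sketch of the definition only, not a published prediction tool.
G4_MOTIF = re.compile(r"G{3,}(?:[ACGT]{1,7}?G{3,}){3}")

def find_g4_motifs(seq: str):
    """Return (start, matched_sequence) for each non-overlapping G4 motif."""
    return [(m.start(), m.group()) for m in G4_MOTIF.finditer(seq.upper())]

# c-MYC Pu22 (TGAGGGTGGGTAGGGTGGGTAA) contains four G-tracts with short loops.
print(find_g4_motifs("TGAGGGTGGGTAGGGTGGGTAA"))  # [(3, 'GGGTGGGTAGGGTGGG')]
```

Note the lazy loop quantifier: it keeps loops as short as possible, matching the observation that short loops correlate with more stable structures.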
i-Motifs arise from cytosine-rich DNA and therefore, in genomic contexts, can often be found on the opposite strand to G4s, for example at telomeres (Zeraati et al, 2018). These structures are held together by hemi-protonated cytosine-cytosine base pairs (C:C+) and are stabilised by low pH (Gehring et al, 1993). Although the existence of iMs has long been established, there has been speculation about their biological relevance due to their apparent requirement for acidic conditions. However, recent studies have shown that certain iMs can form at neutral pH (Wright et al, 2017), requiring longer tracts of cytosines, with a proposed consensus of four consecutive tracts of at least five cytosines, separated by 1-19 non-C nucleotides (C5N1-19C5N1-19C5N1-19C5) (Wright et al, 2017). The thermal stability of iMs can be determined similarly to that of G4s by measuring the melting temperature of the structure. In addition to temperature, iM stability can be affected by the local pH, often described by the transitional pH (pH T): the pH at which the sequence is 50% folded (Wright et al, 2017). As for G4s, a longer tract of Cs and a shorter loop length create a more stable iM. In contrast to G4s, iMs can only adopt an anti-parallel conformation. Although not highly prevalent in the yeast genome, a previous study has described a sequence on chromosome IV of S. cerevisiae that can fold into an iM structure (Kshirsagar et al, 2017). In addition, iMs can be found at transcription start sites and in the promoters of oncogenes, such as BCL2 (Kendrick et al, 2014), HRAS (Miglietta et al, 2015), KRAS (Kaiser et al, 2017), c-MYC (Simonsson et al, 2000) and VEGF (Guo et al, 2008).
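The transitional pH can be turned into a folded fraction under a simple two-state assumption. The Hill-type expression below is our illustrative model, not a fit reported in the paper; it only encodes the definition that the sequence is 50% folded at pH T and more folded below it.

```python
# Two-state sketch of iM folding versus pH. By definition of the transitional
# pH (ph_t), the motif is 50% folded at ph == ph_t; the Hill-type form and the
# default coefficient are illustrative assumptions, not fitted values.
def fraction_folded(ph: float, ph_t: float, hill: float = 1.0) -> float:
    return 1.0 / (1.0 + 10.0 ** (hill * (ph - ph_t)))

print(fraction_folded(6.5, ph_t=6.5))            # 0.5 by definition
print(round(fraction_folded(7.0, ph_t=6.9), 3))  # substantial folding near neutral pH
```

Under this sketch, a motif with a transitional pH near neutral retains a sizeable folded fraction at pH 7, consistent with the selection criterion used for the iM sequences studied here.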
Many studies have indicated that G4s can impact replication in vivo. In the absence of specialised helicases, such as Pif1, G4s positioned on the leading strand template can undergo rearrangements and mutations (Lopes et al, 2011). While the replication machinery stalls at G4-forming sequences in a variety of organisms, there is conflicting evidence of stalling on the leading versus the lagging strand template (Sarkies et al, 2010; Lopes et al, 2011; Paeschke et al, 2011; Dahan et al, 2018). In addition, a recent super-resolution microscopy study demonstrated that replication-associated G4s interfere with replication stress signalling (Lee et al, 2021). Despite the abundance of evidence that G4s can interfere with replication, the underlying trigger remains unknown. Moreover, the mechanism of stalling and which replisome components are affected remain unclear.

Primer extension assays have demonstrated that structure-forming sequences inhibit synthesis. A variety of G4-forming sequences are sufficient to inhibit a range of polymerases in vitro, including the replicative polymerases ε and δ (Lormand et al, 2013; Edwards et al, 2014; Sparks et al, 2019b; Murat et al, 2020). There is some evidence that polymerase inhibition is affected not only by the stability of the G4 but also by its topology (Takahashi et al, 2017). DNA synthesis through G4s can be aided by accessory factors such as specialised G4-unwinding helicases, including Pif1, REV1 and FANCJ (Salas et al, 2006; Fan et al, 2009; Ray et al, 2013; Sparks et al, 2019b). However, these studies were carried out with pre-folded G4 structures using isolated polymerases. Whether such structures can form in the context of dsDNA unwound by CMG, and how G4s affect the complete eukaryotic replisome, is unknown.
Much less is known about the effect of iM structures on eukaryotic DNA replication. The only available data are from primer extension assays, which have demonstrated that iMs can inhibit DNA synthesis by viral or prokaryotic polymerases in vitro (Catasti et al, 1997; Takahashi et al, 2017; Murat et al, 2020).

Recent work from our laboratory demonstrated that a variety of repetitive sequences stall the eukaryotic replisome. These include tracts of poly(dG)n and poly(dC)n sequences, which can form G4s or iMs, respectively (Casas-Delucchi et al, 2022). These results are supported by recent work from the Remus lab, which demonstrated that poly(dG)n tracts stall the eukaryotic replisome when initially stabilised by an R-loop. This study also highlighted the ability of a pre-formed G4 (generated from the sequence (GGGT)4) to inhibit unwinding by the CMG helicase (Kumar et al, 2021). Poly(dG)n and poly(dC)n sequences are unique types of G4- and iM-forming sequences due to their homopolymeric nature, and the stability and type of structures they form are highly polymorphic and thus difficult to study. As such, these sequences may not be representative of physiological G4- and iM-forming sequences. Yeast telomeric DNA alone does not affect replisome progression in vitro, and replication stalling is only seen when the telomeric binding protein Rap1 is present (Douglas & Diffley, 2021). However, yeast telomeres are relatively weak G4-forming sequences. Whether G4- and iM-forming sequences found in the human genome are sufficient to stall replication remains unclear.
Here, we carry out an extensive study on physiologically derived G4 and iM sequences and investigate how they impact eukaryotic replication. Using a reconstituted budding yeast DNA replication system, we find that a single G4- or iM-forming sequence is sufficient to cause replisome stalling. The ability of these sequences to stall replication correlated with their ability to form stable structures, strongly suggesting that secondary structure formation was the underlying trigger for replication stalling. Interestingly, CMG was able to unwind past these sequences, while DNA synthesis by polymerase ε was inhibited, leading to helicase-polymerase uncoupling and exposure of ssDNA. Direct detection of secondary structures by single-molecule sensing using solid-state nanopores established the lack of pre-formed structures prior to replication, whereas conditions that enhanced replication fork uncoupling led to increased fork stalling. Together, these observations support a model whereby replication-dependent structures arise behind CMG on ssDNA exposed during unwinding, resulting in inhibition of polymerase activity. Remarkably, stalling could only be rescued by the G4-unwinding helicase Pif1, but not by other implicated helicases (Rrm3, Sgs1, Chl1 or Hrq1), highlighting the specificity of this enzyme. Moreover, we found that iMs can cause DNA breakage.

Altogether, this study describes the response of the eukaryotic replisome to a variety of physiological structure-forming motifs and highlights their ability to induce replication stress. This may provide a potential mechanism as to why these sequences are both genetically and epigenetically unstable and may explain their high mutational frequencies in cancer.
Results

The replisome stalls at a single quadruplex-forming sequence

To establish how quadruplex-forming sequences affect the eukaryotic replisome, we cloned several well-characterised G4- and iM-forming sequences into a 9.8 kb plasmid. These were used to generate substrates for in vitro eukaryotic replication using purified budding yeast proteins (Yeeles et al, 2015, 2017). The G4-forming sequences tested include one of the most well-characterised G4s found in the human genome, c-MYC Pu22. This is a 22-nt-long segment derived from the human c-MYC promoter region (Dai et al, 2011), which is frequently mutated in cancers. Other G4-forming sequences tested include (i) the human telomeric repeat sequence (TTAGGG)4, (ii) the Bu1 + 3.5 G4 motif (derived from the avian DT40 genome), which induces replication-dependent epigenetic silencing (Schiavone et al, 2014), (iii) the CEB25 L111(T) motif derived from the naturally occurring human CEB25 mini-satellite, where all three loops have been modified to a single thymine, leading to a thermally stable G4 structure (Piazza et al, 2015), (iv) the GGGGCC repeat of the C9orf72 gene, which is associated with familial amyotrophic lateral sclerosis (Thys & Wang, 2015), (v) a repetitive sequence, (GGGT)4, that forms a strong G4 and occurs roughly 1,000 times in the human genome, and (vi) a tract of poly(dG)16, which has previously been shown to induce replication stalling (Casas-Delucchi et al, 2022) (Table 1). Selection of iM sequences was based on their ability to form secondary structures at physiological pH (Wright et al, 2017). These exist in the human genome and are derived from the promoter regions of (i) DAP, (ii) DUX4L22, (iii) SNORD112, (iv) AC017019.1, (v) PIM1 and (vi) ZBTB7B (Table 2). Sequences were cloned 3 kb downstream of the ARS306 origin, from which site-specific replisome loading and firing occurs (Fig 1A). As replication initiates from a defined site in the template, we can infer the identity of the leading
and lagging nascent strands. The structure-forming sequences we refer to throughout the manuscript are positioned on the leading strand template. The templates were linearised by digestion with AhdI before replication to avoid confounding effects of converging replisomes on circular replication templates. Upon replication initiation on the parental control substrate, two replication forks proceed from the origin in either direction, generating one longer leading strand product of 8.2 kb and one shorter leading strand product of 1.5 kb. However, if the replisome stalls at a structure-forming sequence, a 3 kb band appears at the expense of the longer 8.2 kb leading strand product (Fig EV1). Lagging strand maturation factors have been omitted in these experiments. Therefore, lagging strand products remain as Okazaki fragments and run as a smear on denaturing agarose gels (Fig 1B, lane 1).

As expected, the replicated control template generated two major leading strand products (8.2 and 1.5 kb) and Okazaki fragments. To account for variability between experiments, the intensity of the 3 kb stall band was quantified from three independent experiments and normalised to the intensity of the 1.5 kb band (Fig EV2A). A thermal difference spectrum (TDS) was obtained for each sequence to observe the characteristic G4 profile (data not shown). The thermal stability of the G4s was assessed via UV-vis melting, which revealed a positive correlation between replication stalling and G4 thermal stability. This is highlighted in Table 1, which depicts the melting temperatures of the G4s. Some of the G4s we tested, such as (GGGGCC)6, could not be assigned a melting temperature due to aggregation issues, likely due to the formation of multimolecular structures. Similarly, poly(dG)16 could not be accurately characterised. These sequences were therefore excluded from further studies. Replication stalling was also reproducibly induced by a range of physiologically derived single iM sequences, although to a lesser degree than by G4s (Figs 1C and EV2C). In contrast to G4s, the intensity of replication
stalling by iMs did not correlate with their relative thermal stability (Fig EV2D). Rather, there was a weak positive correlation between iM stalling and their transitional pH (pH T) (Fig EV2E, Table 2).

Stalling is dependent on structure formation

To test the hypothesis that stalling is due to structure formation, we introduced mutations that abrogate or disrupt structures (Appendix Fig S1A and B). These included removing a G-tract from (GGGT)4 to produce (GGGT)3 and mutating the central guanine in each G-tract of c-MYC Pu22 to adenine. As an intermediate experiment, we mutated the loop regions in CEB25 L111(T) from thymine to adenine residues, which still allows for G4 formation but reduces its thermal stability (Piazza et al, 2015). Consistent with our hypothesis, we saw a consistent, though not statistically significant, reduction in stalling with the mutated sequences (Fig 1D and E). The reduction in stalling was less significant for CEB25 L111(A), likely because it can still form a structure, albeit weaker. Importantly, biophysical characterisation demonstrated that these sequences either form weak structures, or none at all (Table 1). As expected, the melting temperatures of the mutated G4 sequences were reduced relative to the wild-type sequences (Appendix Fig S1C), resulting in a positive correlation between stalling and thermal stability (Appendix Fig S1E, Pearson correlation r value of 0.7). Although the melting temperature for CEB25 L111(A) remained relatively high, as it was still able to form a G4, the stalling intensity was reduced when compared to CEB25 L111(T), which is consistent with its weakened thermal stability (Fig 1D and Appendix Fig S1C). Testing the effect of mutations on individual iM-forming sequences was challenging, as stalling at a single iM was generally weaker than that observed at a single G4. We therefore tested the effect of two consecutive iMs and observed stronger replisome stalling (Fig 1F), which permitted analysis of mutants. When
the iM-forming sequences DUX4L22 and SNORD112 were disrupted for their structure-forming ability (Appendix Fig S1B), stalling was consistently reduced, albeit not statistically significantly so (Fig 1F and G). This was less evident for DUX4L22, which is reflected in the fact that the mutations have a greater effect on the transitional pH and thermal stability of SNORD112, with a small effect on DUX4L22 (Appendix Fig S1D, F, and G; Table 2).

Replisome stalling is dependent on the probability of structure formation

Stalling induced by a single G4- or iM-forming sequence was reproducible, although relatively weak when compared to other types of structure-prone repeats (Casas-Delucchi et al, 2022). Larger arrays of consecutive quadruplex-forming sequences may exacerbate replisome stalling. To test this, we replicated templates containing an increasing number of consecutive G4- or iM-forming sequences (Fig 2A and B) and found that stalling increased with the number of structure-forming sequences (quantified from three independent experiments in Fig EV3A and B). Intriguingly, upon replication of iM sequences, a novel 5 kb replication product accumulated (Fig 2B). Although this product was sometimes weakly visible with G4-forming sequences, it was consistently prominent upon replication of iMs. This product corresponds to the length of leading strand 1 from the site of the structure-forming sequence downstream to the end of the template (Fig EV1). This could either be a result of intrinsic repriming events, or DNA breakage at the site of the iM. We investigate these possibilities later.
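Correlations like those quoted above (e.g. the Pearson r value of 0.7 between stalling and thermal stability) can be recomputed from paired measurements. The sketch below uses made-up placeholder values, not the paper's data:

```python
# Pearson correlation between paired measurements, e.g. G4 melting temperature
# versus normalised stall-band intensity. The data points are hypothetical
# placeholders for illustration; they are not taken from the paper.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

tm_celsius = [40.0, 55.0, 62.0, 75.0]   # hypothetical melting temperatures
stall_norm = [0.10, 0.30, 0.35, 0.60]   # hypothetical stall intensities
print(round(pearson_r(tm_celsius, stall_norm), 2))
```

With only a handful of sequences per comparison, such r values carry wide uncertainty, which is consistent with the hedged language ("weak positive correlation") used in the text.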
The observed increase in stalling with arrays of quadruplex-forming sequences could either be due to formation of multiple concurrent structures within the same molecule, or due to increased likelihood of structure formation. We noted that poly(dG)60, which could in theory form up to four G4 structures, induced more robust stalling than 16 consecutive G4-forming sequences (Figs 2A and EV3A). In the case of an uninterrupted tract of guanines, a G4 could form from any guanines within the sequence. This is in contrast to other distinct G4-forming sequences we tested (Fig 2A), where structure formation is constrained to a defined window of G-tracts. These observations suggest that in addition to structure stability (Fig 1), stalling efficiency is also dictated by the probability of secondary structure formation. To test this possibility, we interrupted the poly(dG)60 sequence such that it could still support G4 formation but constrained to specific G-tracts. Leading strand stalling was reduced when the sequence was interrupted, and this was more prominent when interruptions were more frequent (Fig 2C, compare lane 2 to lanes 3 and 4, and Fig EV3C). Interrupting the poly(dC)60 tract produced similar results (Figs 2D and EV3D). We conclude that stalling efficiency is determined not only by structure stability but also by the probability of structure formation.

Figure 2. Replisome stalling is dependent on probability of structure formation and is affected by orientation.
A Replication of G4 substrates containing poly(dG)60 or an increasing number of consecutive (c-MYC Pu22)n or (GGGT)n repeats. Products were analysed on a denaturing agarose gel.
B Replication of iM substrates containing poly(dC)60 or an increasing number of consecutive (DUX4L22)n or (SNORD112)n repeats. Products were analysed on a denaturing agarose gel.
C Replication of substrates containing a tract of poly(dG)60, either uninterrupted (lane 2), interrupted every 5th guanine with either a thymine or adenine, or interrupted every 10th guanine with either a thymine or adenine. Replication products were analysed on a denaturing agarose gel.
D Replication of substrates containing a tract of poly(dC)60, either uninterrupted, interrupted every 5th cytosine with either an adenine or thymine, or interrupted every 10th cytosine with either an adenine or thymine. Replication products were analysed on a denaturing agarose gel.
E Replication products from substrates containing a tract of poly(dG)60, or arrays of either (c-MYC Pu22)8 or (GGGT)28 on the leading strand template in the forward (F) or the reverse orientation (R). In the forward orientation, the G-rich strand serves as a template for leading strand synthesis, whereas in the reverse orientation, the C-rich strand serves as a leading strand template.
F Replication products from substrates containing a tract of poly(dC)60, or arrays of (DAP)4, (DUX4L22)4, or (SNORD112)4 on the leading strand template in the forward (F) or the reverse orientation (R).
Source data are available online for this figure.
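The interrupted tracts used above follow a simple rule (every 5th or 10th guanine or cytosine replaced). A minimal sketch of how such substrate sequences could be generated; the function name and interface are illustrative, not the authors' cloning pipeline:

```python
def interrupted_tract(base, length, every, interruptor):
    """Build a homopolymeric tract of `length` nt in which every `every`-th
    position is replaced by an interrupting nucleotide, e.g. poly(dG)60
    interrupted every 5th or 10th guanine with a thymine or adenine."""
    return "".join(interruptor if (i + 1) % every == 0 else base
                   for i in range(length))

# Interrupting every 5th guanine leaves runs of four Gs: each run can still
# contribute a G-tract, but folding is constrained to defined windows.
g4_every5 = interrupted_tract("G", 60, 5, "T")
```

Interruptions every 5 nt leave G-runs of four, which can still support G4 formation but, as noted in the text, constrain structure formation to specific G-tracts.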
Having established that replication is stalled due to a quadruplex-forming sequence on the leading strand template, we next sought to determine how a quadruplex-forming sequence on the lagging strand template affects leading strand replication. To this end, we cloned G4-forming sequences in the forward orientation (leading strand template) or reverse orientation (lagging strand template) and observed leading strand replication products (Fig 2E). It is important to note that although we have cloned structure-forming sequences on the lagging strand template (reverse), we did not include factors required for Okazaki fragment maturation, and as such we only observed the effects on leading strand replication. While leading strand stalling was observed when the G4-forming sequence was in the forward orientation, no stalling was seen when in the reverse orientation (Fig 2E, compare lanes 3 and 5 to lanes 4 and 6). Although we did not analyse lagging strand synthesis, previous work has demonstrated that the replication machinery skips over a lagging strand G4, resulting in a small gap, about the size of an Okazaki fragment, on the nascent lagging strand (Kumar et al, 2021).
We performed similar experiments to determine how an iM-forming sequence on the lagging strand template affects leading strand synthesis. Interestingly, leading strand stalling occurred in both orientations (forward and reverse) (Fig 2F). Given the shorter consensus sequence of stable G4s relative to iMs, we speculate that the stalling observed in the reversed orientation in this scenario is due to G4 formation by the complementary G-rich sequence on the leading strand template. In contrast, the C-rich sequences which are complementary to the G4-forming sequences in Fig 2E are not able to form very stable iM structures (Table 2, both have a low transitional pH of ~5.8). This may explain why we do not observe leading strand replication stalling when these sequences are positioned on the leading strand template (Fig 2E, lanes 4 and 6).

CMG can eventually bypass a pre-formed G-quadruplex structure

Having observed consistent replisome stalling at quadruplex-forming sequences, we next wanted to determine whether the stall was transient or persisted over time. To determine this, we carried out pulse-chase experiments, where newly synthesised DNA is labelled with radiolabelled dATP for 10 min, after which an excess of unlabelled dATP is added. This prevents labelling of newly initiating replication forks and allows specific analysis of forks that have initiated in the first 10 min of the reaction. Replication forks stalled at G4- and iM-forming sequences were not resolved over time and persisted for up to 2 h (Fig 3A). This persistent arrest could either be due to blocked unwinding by the CMG helicase or lack of synthesis by polymerase ε.
To address the first possibility, we carried out unwinding assays to determine the ability of CMG to unwind pre-formed G4 or iM structures. We chose sequences which induced the strongest replisome arrest, namely (GGGT)4 and (SNORD112)1, as well as mutated versions which abrogate structure formation (Table EV1). These sequences were located on the translocating strand. To favour structure formation, we inserted a poly(dT)19 stretch on the opposite strand (Batra et al, 2022). Consistent with previous work (Kumar et al, 2021), time course analysis revealed that CMG unwinding was initially inhibited by a G4 structure, evident within the first 5-10 min of the reaction (Fig 3B). This was particularly evident when compared to a G4 mutant sequence (Fig 3C), duplex (Appendix Fig S2A) or bubble (Appendix Fig S2B). However, inhibition of unwinding was not terminal, and CMG was able to eventually unwind G4 substrates to levels similar to the G4 mutated substrate (Fig 3B and C). Interestingly, an iM structure had little effect on CMG unwinding (Fig 3D and E). We observed that unwinding of mutated G4 and iM sequences was slightly less efficient than fully duplexed and bubble substrates (compare Fig 3C and E to Appendix Fig S2A and B). This may be due to interspersed contacts between adenine residues in the mutated sequences and the poly(dT)19 loop. To bypass a pre-existing quadruplex structure, CMG may either dismantle the structure or 'hop' over and leave it intact. To distinguish between these possibilities, we assessed the effect of a G4-stabilising ligand, PhenDC3 (De Cian et al, 2007). If CMG 'hops' over G4s, stabilising them should have no effect. In contrast, if CMG directly unwinds structures, further stabilising a G4 will inhibit unwinding. In the presence of a low concentration of PhenDC3 (0.25 μM), we observed little effect on duplex (Fig 3F) and G4 mutant unwinding (Appendix Fig S2C). In contrast, unwinding of the G4 substrate was inhibited (Fig 3G). Therefore, CMG bypasses
pre-existing G4 structures by dismantling them rather than 'hopping' over them.

Replication templates do not contain pre-formed secondary structures

All the evidence gained thus far indicates that secondary structures are the cause of replication stalling. However, it was not clear whether structures were pre-formed (for example during substrate preparation) or were forming during replication. To distinguish between these possibilities, we used solid-state nanopore measurements, which can detect a folded structure at any position along a dsDNA molecule (Boskovic et al, 2019). DNA is passed through the nanopores and the ionic current measured. Folded G4s give rise to a variation in topology relative to dsDNA that in turn produces a characteristic signal in the ionic current (Fig EV4A). The DNA molecule can enter the nanopore in either orientation, which would place the G4 either proximally or distally (Fig EV4B and C). Besides current drops indicative of G4s, we also observed larger peaks that can be attributed to naturally occurring DNA knots (Fig 4A, 'knot'). The random position and frequency (~10%) of these knots is within the expectation for DNA molecules of this length (Plesa et al, 2016). The summary scatterplot of the peak positions in the first 100 informative events from our nanopore measurement of the positive control is shown in Fig 4B (top panel). Virtually all events (99/100) included a G4 structure within the expected relative positions of 0.2 and 0.8. We observed a smaller proportion of naturally occurring knots, randomly distributed along the DNA molecules. Importantly, in the absence of potassium ions, only knots were detected, with no discernible G4 signals (Fig 4B, bottom panel).

Having established these positive and negative controls, we next measured our replication substrates, with secondary structures expected at relative positions 0.3 and 0.7 (Fig EV4D and E). In contrast to the positive control, none of the replication substrates produced detectable quadruplex peaks. Rather, all substrates, including the empty control, exhibited random knots with a similar distribution and frequency (Figs 4C and EV4F). We conclude that our replication substrates do not contain any pre-existing structures.

Figure 3.
A Pulse-chase time course experiment with (c-MYC Pu22)4 or (SNORD112)4 substrates. Reactions were initiated with radiolabelled dATP for 10 min, chased with excess 'cold' dATP and samples taken at the indicated time points.
B-E CMG unwinding assays on substrates containing a pre-formed G4 (B), a mutant G4 (see Table EV1) (C), a pre-formed iM (D) or a mutant iM (see Table EV1) (E). CMG unwinding was stimulated by the addition of 2 mM ATP following CMG loading in the presence of ATPγS. Samples were taken at the indicated time points. Products were run on 10% TBE gels. Input and boiled substrates were used as controls to visualise where original and unwound substrates migrate. The proportion of template unwound was calculated by measuring the intensity of the 'unwound' product band as a proportion of the total product intensity for each lane.
F, G CMG unwinding assays on duplexed substrates (F) or substrates containing a pre-formed G4 (G). Reactions were carried out as in (B-E) but with the addition of 0.25 μM PhenDC3 where indicated.
Source data are available online for this figure.
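Because a DNA molecule can enter the pore in either orientation, a structure at fractional position p along the molecule reads out at p or 1 − p. A minimal sketch of how peak positions might be collapsed and tallied; this is a simplified stand-in for the actual event analysis, and the names and tolerance are assumptions:

```python
def fold_position(p):
    """Collapse a relative peak position to account for the two possible
    nanopore entry orientations: a G4 at position 0.2 reads out at 0.2 or 0.8."""
    return min(p, 1.0 - p)

def tally_events(peak_positions, expected, tol=0.05):
    """Count peaks consistent with a structure at the expected (folded)
    position; peaks elsewhere are treated as candidate knots, which occur
    at random positions along the molecule."""
    target = fold_position(expected)
    g4_like = sum(1 for p in peak_positions
                  if abs(fold_position(p) - target) <= tol)
    return g4_like, len(peak_positions) - g4_like
```

With this folding, the positive control's peaks at relative positions 0.2 and 0.8 collapse onto a single expected structure position, while knots remain scattered.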
Replisome stalling at G4s leads to helicase-polymerase uncoupling

As we had observed that our replication templates do not contain any pre-formed structures and that CMG can directly unwind such structures, we inferred that they must be forming as a consequence of replication. We hypothesised that structures forming behind CMG inhibit synthesis by polymerase ε, while CMG continues to unwind downstream. This phenomenon, termed helicase-polymerase uncoupling, occurs in response to various leading strand DNA damage lesions (Taylor & Yeeles, 2018) and other types of repetitive sequences (Kumar et al, 2021; Casas-Delucchi et al, 2022) and leads to the exposure of ssDNA. To determine if the same response occurs at G4-forming sequences, we utilised a previously established approach whereby an exogenous primer complementary to the region 264 nt downstream of the G4 sequence is added to the reaction (Taylor & Yeeles, 2018; Casas-Delucchi et al, 2022). This primer can only anneal if ssDNA is exposed, leading to restart of DNA synthesis, thereby serving as a readout for helicase-polymerase uncoupling. As shown in Fig 4D, addition of this primer led to the appearance of 5 kb restart products for all G4-forming sequences tested, but these were not seen with a scrambled primer. This restart product was absent from the empty vector (Fig 4D, lane 1) but was evident with a template containing a CPD lesion (Fig 4D, lane 3), which served as a positive control. This is strong evidence that unwinding by the CMG helicase continues beyond the G4 sequence, but synthesis by polymerase ε is inhibited. Analysis of the mechanism of stalling at iM sequences using this method is more complex, due to the presence of the intrinsic 5 kb band. However, the facts that CMG can unwind past an iM (Fig 3D), and that our substrates do not contain any pre-existing structures (Fig 4C), suggest that stalling within iMs is also due to inhibition of DNA synthesis by pol ε, which most likely also results in
helicase-polymerase uncoupling at iM-forming sequences.

Given that our substrates do not contain any pre-formed secondary structures (Fig 4C), we considered the possibility that structures arise on the ssDNA template exposed by CMG unwinding. If this were true, then increasing the amount of ssDNA would enhance the likelihood of structure formation and consequently result in more polymerase stalling. Exposure of excess ssDNA is typically limited, as polymerase ε is thought to be tightly coupled to CMG through a direct interaction (Zhou et al, 2017). As polymerase ε is essential for replication initiation, it cannot be omitted from replication reactions. However, deletion of its catalytic domain, which completely eliminates its polymerase activity, is compatible with replication initiation. Under these conditions, polymerase δ carries out leading strand synthesis. Since polymerase δ does not directly interact with CMG, this results in discontinuous synthesis that is not coupled to unwinding. Replication reactions with this polymerase ε mutant resulted in increased replisome stalling when compared to the wildtype protein (Fig 4E, compare odd lanes to even lanes). This was true for both G4- and iM-forming sequences. We conclude that replication fork uncoupling leads to enhanced fork stalling and propose that this occurs due to increased probability of secondary structure formation.
Replication products break at i-Motifs

Upon replication of substrates containing iM-forming sequences, we consistently observed the presence of a novel 5 kb product on the denaturing gels, as highlighted previously. We hypothesised that these products may arise as a result of either re-priming or DNA breakage at the site of the iM during or after replication (Fig EV1). To address the latter possibility, we simulated broken replication products by digesting fully replicated control templates post-replication with an enzyme that cleaves within the insert (Fig 5A). The resulting product harbours a double-stranded break at the position of the iM on newly synthesised DNA. To simplify analysis and reduce the heterogeneity in product length arising as a result of flaps generated by strand displacement, polymerase δ was excluded from these reactions. We verified that this novel 5 kb band was unaffected by the presence of polymerase δ (Appendix Fig S3C). Upon analysis by native gel, we observed that the smaller population of products generated by iM substrates migrated at the same positions as the simulated 'broken products' (Fig 5B, compare lanes 2 and 3 to lane 4).

Figure 4.
A Schematic of a positive control DNA passing through the nanopore in the direction that positions the G4 proximally, and a representative nanopore measurement event with a DNA knot in the middle of the molecule. The G-quadruplex structure and its corresponding current drop are marked in blue. The DNA knot and its corresponding current drop are marked in green. Numbers indicate the fractional position of the G4 along the DNA.
B Summary scatterplots of the peak positions in the first 100 informative nanopore events for the positive control (where K+ is added) and the negative control (no K+ present). The negative control does not contain any G-quadruplex structure without the presence of potassium ions. Numbers indicate the proportion of G4s or knots in 100 unfolded events.
C Nanopore measurement results of the first 100 informative nanopore events for replication templates. Summary scatterplots of the peak positions are shown for an empty control and substrates containing (c-MYC Pu22)16 or (SNORD112)4. Numbers indicate the proportion of G4s or knots in 100 informative events.
D Replication reactions carried out on G4-containing templates in the presence of a primer that anneals 264 nt downstream of the G4 (rp) or a scrambled control primer (scr).
E Replication of G4 or iM substrates with either wildtype pol ε (wt) or a pol ε mutant with a deleted catalytic domain (Δcat). Products were analysed on a denaturing agarose gel.
Source data are available online for this figure.
To further analyse these broken replication products, we carried out two-dimensional (2D) gel electrophoresis. As expected, analysis of replication products of an iM template demonstrated the presence of stalled forks and incompletely replicated forks in the population of replication intermediates (Fig 5C). Full-length products consisted mostly of full-length leading strands 1 and 2 and Okazaki fragments. 2D gel analysis maps the 5 kb band to the products identified on the native dimension as broken (Fig 5C). We also observed a weaker population of products in the denaturing dimension corresponding to the second, faster migrating broken product on the native gel (Fig 5C). Importantly, these bands mapped to the same positions observed with the simulated broken products (Fig 5D), were absent from the empty vector (Fig EV5A) and were also observed with a different iM-forming sequence (Fig EV5B). Together, these results suggest that replication induces breakage within iMs. We note that we cannot discount that in addition to these broken products, some intrinsic repriming events also occur (Fig EV1), which would generate products that migrate as full-length on the native dimension but may be masked by the strong signal of leading strand 1 (Fig 5C).

Stalling at quadruplexes can be rescued by a specialised helicase

Having established the mechanism of fork stalling within quadruplexes, we next wanted to understand how replisomes minimise or resolve stalls. Since increased fork uncoupling enhanced replisome stalling (Fig 4E), we considered the possibility that improved coupling might reduce stalling. CTF18-RFC is an alternative PCNA clamp-loader to the canonical RFC1-RFC that has been proposed to increase coupling of DNA synthesis and unwinding by directly binding polymerase ε (Grabarczyk et al, 2018; Stokes et al, 2020). However, addition of CTF18-RFC, in the absence or presence of RFC1-RFC, did not affect stalling at either G4s or iMs (Appendix Fig S3A).
Previous work from our laboratory had revealed that polymerase δ, as well as high concentrations of dNTPs, can rescue stalling at hairpin-forming sequences, but not at poly(dG)n or poly(dC)n (Casas-Delucchi et al, 2022). Consistent with this, polymerase δ was not able to rescue stalling at any G4- (Appendix Fig S3B) or iM-forming sequence (Appendix Fig S3C). Similarly, an excess of any dNTP alone, or in combination, was not able to resolve stalling at either G4- (Appendix Fig S4A) or iM-forming (Appendix Fig S4B) sequences. We considered the possibility that the relative ratio of dNTPs may be more important than their absolute concentration. This raised the prediction that an increased proportion of dCTP relative to dA/dG/dTTP might be able to rescue stalling at G4-forming sequences. However, stalled forks were not rescued by a further excess of dCTP (2-fold over dATP and 26-fold over dGTP and dTTP; Appendix Fig S4C). Altogether, this suggests that once the replisome stalls at a quadruplex-forming sequence, the stall is persistent and cannot be overcome by replisome-intrinsic mechanisms.
We previously found that Pif1 could rescue forks stalled at poly(dG)n and poly(dC)n sequences (Casas-Delucchi et al, 2022), raising the possibility that it could rescue stalling at all quadruplex-forming sequences. Pif1 is a well-characterised G4-unwinding helicase shown to play a vital role in enabling the efficient replication of G4s both in vivo and in vitro (Ribeyre et al, 2009; Lopes et al, 2011; Paeschke et al, 2011, 2013; Byrd et al, 2018; Dahan et al, 2018; Sparks et al, 2019b; Maestroni et al, 2020). However, there are additional helicases that bind and unwind G4 structures both in vitro and in vivo, such as Rrm3, Sgs1, Hrq1 and Chl1. Rrm3 is another yeast Pif1-family helicase with a high sequence and functional similarity to Pif1 (Bessler et al, 2001). Like Pif1, Rrm3 is a 5′-3′ helicase which has overlapping functions with Pif1 and helps the replisome bypass barriers such as tRNA promoters and telomeric DNA (Ivessa et al, 2002, 2003). Sgs1 is a 3′-5′ RecQ-family helicase shown to preferentially unwind G4 DNA and is the yeast homologue of the BLM and WRN helicases (Huber et al, 2002). Similarly, Hrq1 is the yeast homologue of another RecQ helicase, RecQ4. RecQ4 is one of the five RecQ helicases found in humans, with 3′-5′ polarity. It functions in telomere maintenance (Ghosh et al, 2012) and has been shown to bind and unwind G4 structures (Rogers et al, 2017). Chl1 is the yeast homologue of human ChlR1 (also called DDX11). ChlR1 is a 5′-3′ helicase that directly unwinds G4 structures and is proposed to help process G4s during DNA replication (Wu et al, 2012; Lerner et al, 2020). To test the role of these helicases, we carried out pulse-chase experiments by pulsing for 10 min, during which a persistent stall occurred at G4- or iM-forming sequences. We then added a candidate helicase and allowed replication to continue for a further 10 min to determine if the stall could be resolved. Since Chl1 is recruited to replisomes via an
interaction with Ctf4, Ctf4 was included in these reactions. None of these candidate helicases was able to resolve the stall. In contrast, and in agreement with our previous work, Pif1 was able to rescue forks stalled at both G4s and iMs (Fig 6A and B). The extent of rescue was less evident for expanded sequences such as (GGGT)28 and (SNORD112)4, which may indicate that consecutive G4 or iM structures pose a greater challenge to Pif1. Rescue was dependent on the helicase activity of Pif1, as no rescue was observed with the Pif1 ATPase mutant K264A. The fact that Pif1, but not other implicated helicases, was able to rescue forks stalled at quadruplex-forming sequences demonstrates its specificity, and highlights that only specialised helicases are able to resolve fork stalling at G4 and iM sequences.

Discussion

We have assessed the response of the eukaryotic replisome to a variety of G4- and iM-forming sequences and found that a single quadruplex-forming sequence alone is able to stall DNA replication. This is a significant finding as these classes of sequences are highly prevalent in the human genome. Current estimates are between 370,000 (Huppert & Balasubramanian, 2005, 2007) and ~700,000 (Chambers et al, 2015) G4-forming sequences and 5,125 iM-forming sequences (Wright et al, 2017). These must all be replicated accurately to maintain genome integrity. Importantly, we found that the encounter of a replisome with a single quadruplex-forming sequence can lead to the same mechanistic response as triggered in response to DNA damage. Moreover, in addition to inducing fork stalling, iMs have the propensity to induce DNA breakage. This raises the possibility that there are many physiological DNA sequences within the human genome that have the potential to threaten genome stability through distinct mechanisms.
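Genome-wide estimates of this kind come from computational motif searches. A minimal sketch of such a scan, assuming the canonical four-G-tract consensus with loops of 1-7 nt (the regex is illustrative; the published surveys use more elaborate pipelines):

```python
import re

# Canonical G4 consensus: four runs of >=3 guanines separated by 1-7 nt loops.
# The lookahead lets overlapping candidate motifs be reported.
G4_PATTERN = re.compile(r"(?=(G{3,}(?:[ACGT]{1,7}G{3,}){3}))")

def find_g4_motifs(seq):
    """Return (start, motif) for each G4 consensus match on the given strand."""
    seq = seq.upper()
    return [(m.start(), m.group(1)) for m in G4_PATTERN.finditer(seq)]

def revcomp(seq):
    """Reverse complement; scanning it reports G4 motifs on the other strand."""
    return seq.upper().translate(str.maketrans("ACGT", "TGCA"))[::-1]
```

A genome-wide count would sum matches over both strands of each chromosome; note that motif counts over- or under-estimate folded structures, which is why sequencing-based surveys (e.g. Chambers et al, 2015) report larger numbers than pure pattern matching.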
The ability of a G4-forming sequence to stall DNA replication correlates with its structure-forming potential. Consistent with this, mutated sequences that cannot form structures do not impact replisome progression. Together, these observations provide strong evidence that it is a G4 structure that causes replication stalling, and not the sequence itself. Importantly, we have characterised the effects of iMs on replisome progression. Here, we observed consistent replisome stalling at a variety of physiological iM sequences. Although the response of the replisome to iMs appears largely similar to G4s, we do observe some differences. First, replication stalling at iMs was generally weaker than that observed at G4s. We speculate that this may be because all iMs examined here are weaker structures than those formed by the G4s. This could mean they are less likely to form during replication, although once formed they appear to be a persistent and stable block to replisome progression. Second, the ability of iMs to stall replication seems to be less influenced by melting temperature than G4s and may correlate better with transitional pH. Interestingly, previous studies have found that iMs with a higher transitional pH have a higher potential for iM formation in vivo and in turn are more likely to undergo mutations and deletions in human cells (Martella et al, 2022). Moreover, the biophysical characterisation of these sequences was done on short oligonucleotides, and the actual thermal stability of these structures within the context of duplexed DNA may in fact be higher. Still, the fact that these structures are more influenced by pH than G4s may explain why we observe a weaker stall in our assays carried out under physiological pH conditions. The fact that stalling occurs in our in vitro system under physiological pH strongly supports the idea that iMs are indeed able to form and can be a robust block to the replisome, highlighting their biological relevance.
Importantly, we have observed that iMs can also induce DNA breakage. This is a significant finding as it has the potential to threaten genome stability if not repaired correctly. How and when this DNA breakage occurs in the context of replication remains to be seen. Although we have obtained evidence that a proportion of replication products break at iMs, we cannot rule out the possibility that endogenous re-priming may occur preferentially at iM-forming sequences. All primases use a purine to initiate primer synthesis and therefore require a pyrimidine on the template strand. As such, polymerase α has been suggested to prime preferentially at CCC sequences (Davey & Faust, 1990). This may be a mechanism to preserve DNA integrity downstream of forks stalled at iMs, as has been suggested for polymerase α and the 9-1-1 complex downstream of forks stalled at G4s (van Schendel et al, 2021). Another possibility is that the broken products we observe are a consequence of stall-driven repriming events, which somehow promote breakage at iMs. We cannot distinguish whether breakage occurs after unperturbed replication or after stalling and repriming, as both events would yield identical products.

Single-molecule solid-state nanopore experiments demonstrate that our substrates do not contain any significant levels of pre-formed structures. Therefore, secondary structures must be forming during replication, resulting in inhibition of synthesis by polymerase ε. As ssDNA is the precursor for structure formation, we favour a model whereby structures form on ssDNA exposed behind the CMG helicase (Fig 6C). This is consistent with a recent super-resolution microscopy study which detected the presence of G4 structures between the CMG helicase and either PCNA or nascent DNA (Lee et al, 2021). Although CMG and polymerase ε are usually tightly coupled in the replisome, the length of ssDNA running between the exit channel of CMG and the active site of polymerase ε is unknown. The most recent structural models of the eukaryotic replisome suggest a gap of at least 16 nt (Yuan et al, 2020). This would be sufficient ssDNA to allow G4 formation, or to nucleate iM formation.

Figure 6. Pif1 rescues stalling at both G4s and iMs.
A, B Pulse-chase experiments carried out with the indicated templates. Reactions were initiated with radiolabelled dATP. After a 10 min pulse, either wildtype or ATPase-dead K264A (mut) Pif1 was added with the chase and samples taken after another 10 min.
C Model illustrating the effects of G4s and iMs on replication. CMG unwinds past a leading strand quadruplex-forming sequence and secondary structures form from the exposed ssDNA. These structures inhibit synthesis by polymerase ε, leading to helicase-polymerase uncoupling. Pif1 can unwind both G4s and iMs and allow synthesis to resume. iMs can be resolved in this manner or lead to nascent DNA breakage in the absence or presence of a repriming event. Lagging strand products may remain intact.
Source data are available online for this figure.
Although we observed consistent replisome stalling at G4- and iM-forming sequences, we never observed a complete block to all replication forks and only saw a proportion of replication forks stalling. This was true even in the presence of up to 16 consecutive G4-forming sequences. Therefore, a major determinant of replication fork stalling is the likelihood of structure formation. This explains why we observed only a marginal increase in the proportion of replication forks stalling with an increasing number of consecutive G4s or iMs. In the presence of a larger number of structure-forming repeats, a structure is more likely to fold due to sequence availability, but once a single structure has formed it is sufficient to block the replisome, and additional structures downstream would have no additional effect on synthesis. Similarly, the fact that we consistently observed a greater proportion of replication forks stalling at poly(dG)60 and poly(dC)60 may be because a structure can form in any given window and is not constrained by loop sequences. This is consistent with previous biophysical characterisations of poly(dC)n sequences, which found that the optimum transitional pH peaks at poly(dC)28 and gets lower as the number of cytosines increases (up to poly(dC)40) (Fleming et al, 2017). This suggests that iM structures formed by longer tracts of cytosines are not inherently more stable, but rather are more likely to fold. However, once formed, this structure is a robust block to the replisome that cannot be resolved by any replicative polymerase. In addition, structures which fold more quickly may be more likely to fold and in turn stall replication. There are known differences in the kinetics of folding between the two different iM topologies, the 3′E and 5′E conformations, which are distinguished by the position of the outermost C:C+ base pair. In the human telomeric iM structure, it has been shown that the 3′E conformation forms faster, but this may not be
necessarily applicable for all iMs and will depend on the sequences within the loop regions (Malliavin et al, 2003; Lieblein et al, 2013). Folding kinetics usually correlate with structure stability, which may also contribute to the fact that both structure stability and probability of formation play a role in inducing replication stalling. In addition to the dynamics of the structure itself, the dynamics of how tightly each replication fork is coupled will determine whether there is sufficient time and space for structures to form.

In some cases, G4s and iMs can form on complementary strands of the same sequence. In a physiological context, a stable G4 requires four tracts of three guanines. However, the complementary C-rich sequence would not form a stable iM at physiological pH. Therefore, a stable G4 does not necessarily equate to a stable iM on the opposite strand. However, a stable iM requires four tracts of five cytosines at physiological pH (Wright et al, 2017). The complementary G-rich sequence would conform to the requirement of a stable G4, and as such it is more conceivable that a stable G4 structure would be found opposite an iM. This may explain why we observed orientation-dependent stalling for G4s, but orientation-independence for iMs.
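This strand asymmetry can be made concrete by encoding the "four tracts of ≥3 G" and "four tracts of ≥5 C" thresholds from the text as heuristics and comparing a sequence with its complement (a sketch; the thresholds and assumed 1-7 nt loops are simplifications of the cited criteria):

```python
import re

def has_stable_g4(seq):
    """Heuristic from the text: a stable G4 needs four tracts of >=3 guanines
    (1-7 nt loops assumed)."""
    return re.search(r"G{3,}(?:[ACGT]{1,7}G{3,}){3}", seq.upper()) is not None

def has_stable_im(seq):
    """Heuristic: a stable iM at physiological pH needs four tracts of >=5 cytosines."""
    return re.search(r"C{5,}(?:[ACGT]{1,7}C{5,}){3}", seq.upper()) is not None

def complement(seq):
    return seq.upper().translate(str.maketrans("ACGT", "TGCA"))

# A minimal stable G4 leaves only C3 tracts on the opposite strand (no stable
# iM), whereas a stable iM always leaves G5 tracts opposite it (a stable G4).
```

Under these thresholds, every stable-iM sequence has a stable-G4 complement, but not vice versa, matching the orientation asymmetry argued above.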
Similar to our previous work with repetitive sequences (Casas-Delucchi et al, 2022), we observe uncoupling between helicase unwinding and DNA synthesis in response to quadruplexes. This is consistent with previous work which demonstrated an enrichment of G4-forming sequences within 6 kb of uncoupled forks in tumour cells (Amparo et al, 2020). This response is akin to the response to leading strand DNA damage (Taylor & Yeeles, 2018) and leads to the exposure of ssDNA. In a physiological context, exposure of large amounts of ssDNA as a result of replication stress can lead to the activation of checkpoint pathways (MacDougall et al, 2007). Further studies are required to explore if such processes occur in response to G4s and iMs during replication. The ssDNA exposed during replisome uncoupling can also be a substrate for recombination and mutation events, which may explain why G4s and iMs are frequent mutation hotspots and undergo rearrangements (Lopes et al, 2011). A recent high-throughput primer extension assay using T7 DNA polymerase demonstrated that polymerase stalling at structure-prone sequences generates point mutations with a higher frequency than slippage events. Consistent with their higher propensity to inhibit synthesis, G4s displayed higher mutation rates than iMs and were often mutated in the loop regions (Murat et al, 2020). Whether this same phenomenon occurs with eukaryotic polymerases in the context of the complete replisome remains unclear. Many of these structure-forming sequences have been found at cancer mutation hotspots or breakpoints (De & Michor, 2011; Wang & Vasquez, 2017; Bacolla et al, 2019), and replication stalling could be a potential explanation for mutational events. Indeed, DNA breakage at iMs may directly contribute to their mutagenic potential in cancer.
We have demonstrated that CMG is able to bypass a G4 or iM sequence and established that synthesis by polymerase ε is inhibited. Stabilisation of G4 structures using a small molecule inhibited unwinding by CMG, suggesting that CMG bypasses a G4 by dismantling the structure as opposed to 'hopping' over the structure and leaving it intact. This is in contrast with its ability to bypass intact DNA-protein cross-links (Sparks et al, 2019a), leading strand oxidative lesions (Guilliam & Yeeles, 2021) and lagging strand blocks (Langston et al, 2017). It remains to be seen whether replisome stalling occurs in the same position each time. Higher resolution studies could map the position of replisome stalling within the G4 or iM and decipher whether the replisome consistently stalls at the base of the structure, or if it is able to progress some distance through.

It is unsurprising that we observe rescue of replication fork stalling at G4s by Pif1, given the breadth of data describing its role as a G4-unwinding helicase (Ribeyre et al, 2009; Lopes et al, 2011; Paeschke et al, 2011, 2013; Byrd et al, 2018; Dahan et al, 2018; Sparks et al, 2019b; Maestroni et al, 2020). However, the fact that Pif1 is also able to resolve forks stalled at iMs points towards the broad specificity of Pif1 as a helicase. This is in line with its ability to also resolve forks stalled at hairpins (Casas-Delucchi et al, 2022). Our observation that rescue of replisome stalling was less efficient with an array of consecutive G4 and iM sequences may indicate that multiple structures are formed within these arrays, which may require more extensive helicase activities.
Interestingly, none of the other helicases tested were able to rescue fork stalling at G4s or iMs, despite their demonstrated ability to unwind G4s (Huber et al, 2002; Wu et al, 2012; Rogers et al, 2017). One potential explanation could be unwinding polarity. Pif1 unwinds 5′ to 3′, while some of the other helicases exhibit 3′ to 5′ activity. Our working model is that exposure of ssDNA downstream of the G4 or iM allows accessibility to helicases with 5′ to 3′ activity, but not to those with 3′ to 5′ activity. Another possible explanation could be a preference for certain types of G4s. For example, Chl1 has been shown to have stronger unwinding activity on anti-parallel G4s (Wu et al, 2012), while the G4 sequences we tested all form parallel G4s, which is a more common conformation. Similarly, the fact that Rrm3 was unable to rescue stalling at G4s despite its relation to Pif1 may reflect the fact that these helicases often have different functions despite binding the same substrates (Bessler et al, 2001). This may also explain the large number of quadruplex-unwinding helicases that appear to have some level of redundancy, as they may each have a role in resolving structures in different scenarios. For example, BLM helicase has been shown to suppress recombination at G4s in transcribed genes (van Wietmarschen et al, 2018). The fact that only an accessory helicase was able to resolve stalls at G4s and iMs may also explain why mutations in helicases such as these lead to genome instability diseases, such as Bloom's Syndrome (reviewed in Cunniff et al, 2017), Werner's syndrome (Yu et al, 1996) and Fanconi Anaemia (reviewed in Brosh & Cantor, 2014).
Recent advances in the field have provided the tools to reconstitute human DNA replication in vitro (Baris et al, 2022). Although the core replication machinery is conserved from budding yeast to human, it remains to be seen whether G4s and iMs have the same impact on progression of the human replisome. Using this system to study replication of structure-forming sequences would also enable one to study the roles of other human proteins in this process, such as an array of human helicases including the RecQ helicases BLM (Sun et al, 1998) and WRN (Fry & Loeb, 1999), RTEL1 (Barber et al, 2008; Vannier et al, 2013) and the Fanconi Anaemia protein FANCJ (Wu et al, 2008).

Altogether, we discovered that a range of physiological G4 and iM structures arising as a result of DNA replication can stall the eukaryotic replisome. This study provides further insight as to why these sequences pose a barrier to DNA replication and suggests a potential mechanism for structure formation and resolution during replication. Moreover, we have found that i-Motifs can directly cause DNA breakage. We therefore propose that endogenous DNA secondary structures are a source of replication stress, which may explain their genomic instability and mutation frequencies in cancer.

Constructing repeat-containing replication substrates

Substrates for replication assays were constructed by inserting repetitive sequences into a 9.8 kb parental plasmid (pGC542) containing a single synthetic yeast replication origin (ARS306), which has been previously described (Casas-Delucchi et al, 2022). Repetitive sequences were cloned into the MCS 3 kb downstream of the origin and expanded using a strategy that employs synthetic oligos and type IIS restriction enzymes (Scior et al, 2011). All oligos were ordered from Integrated DNA Technologies (IDT), the sequences of which can be found in Table EV1.

Preparing templates for replication assays

Plasmids (Table EV2) were transformed into NEB Stable Competent E.
coli cells (#C3040I), which are ideal for propagation of repeat-containing plasmids. Cultures were grown at 30°C to reduce potential recombination and mutation events. Plasmids were purified using a QIAGEN HiSpeed Maxi Kit. Subsequently, supercoiled plasmids were isolated from nicked plasmids by PlasmidSelect clean-up. DNA samples were diluted 6-fold in 100 mM Tris-HCl (pH 7.5), 10 mM EDTA and 3 M (NH4)2SO4 (final concentration = 2.5 M) before incubation with 300 μl PlasmidSelect Xtra slurry (pre-washed with 100 mM Tris-HCl (pH 7.5), 10 mM EDTA and 2.3 M (NH4)2SO4) for 30 min to bind. Following binding, nicked plasmids were eluted initially with 1 ml buffer consisting of 100 mM Tris-HCl (pH 7.5), 10 mM EDTA and 1.9 M (NH4)2SO4. This was repeated twice. Then, 1 ml buffer was added to the beads and allowed to incubate for 10 min at RT. Subsequently, supercoiled plasmids were eluted with 100 mM Tris-HCl (pH 7.5), 10 mM EDTA and 1.5 M (NH4)2SO4 by incubation for 10 min at RT. This was repeated once. Samples were de-salted by dialysis against 0.1× TE for 3 h and overnight. DNA was concentrated using a 100 kDa Amicon concentrator followed by ethanol precipitation and resuspended in 1× TE.

CPD substrate

The substrate containing a site-specific DNA damage (CPD lesion) was prepared as previously described (Casas-Delucchi et al, 2022).

Protein purification

All protein expression strains, and expression and purification steps, were as previously described (Casas-Delucchi et al, 2022).

Pol ε-Δcat

The DNA polymerase ε mutant with a deleted catalytic domain (Pol ε-Δcat) was expressed from the budding yeast strain yAJ25, which has been previously described (Yeeles et al, 2017), and purified as per wild-type Pol ε.

Rrm3

Purified recombinant yeast Rrm3 was expressed and purified as previously described (Deegan et al, 2019).
Hrq1

Purified recombinant yeast Hrq1 and the catalytic mutant K318A were expressed and purified as previously described (Rogers et al, 2017).

Chl1 purification

Budding yeast cells overexpressing Chl1 were grown in YP medium containing 2% raffinose as the carbon source to an optical density of 1.0 at 30°C. 2% galactose was then added to the culture to induce protein expression, and the cells were grown for a further 90 min. Cells were collected by centrifugation, washed with deionised water and suspended in Chl1 buffer (50 mM Tris-HCl pH 7, 10% glycerol, 2 mM MgCl2, 0.5 mM TCEP) containing 0.1% Triton X-100, 500 mM NaCl, 0.5 mM Pefabloc and the cOmplete-EDTA protease inhibitor cocktail. The cell suspension was frozen in liquid nitrogen, and the cells were broken in a cryogenic freezer mill. The cell powder was thawed on ice, and further Chl1 buffer containing 0.1% Triton X-100, 500 mM NaCl and protease inhibitors was added. The lysate was clarified by centrifugation at 20,000 g for 1 h. The clarified lysate was transferred to pre-equilibrated IgG agarose beads. 8 μg/ml RNase A (Merck) was added and incubated for 2 h. The resin was washed with Chl1 buffer containing 0.1% Triton X-100 and 500 mM NaCl and then incubated in Chl1 buffer containing 0.1% Triton X-100, 500 mM NaCl, 10 mM MgCl2 and 1 mM ATP for 15 min. The resin was washed again with Chl1 buffer containing 0.1% Triton X-100 and 500 mM NaCl and incubated overnight in the same buffer containing 10 μg/ml PreScission protease. The eluate was collected, and Chl1 dilution buffer (50 mM Tris-HCl pH 7, 10% glycerol, 2 mM MgCl2, 0.5 mM TCEP, 10 mM NaCl) was added to adjust the salt concentration to 160 mM NaCl. The diluted sample was loaded onto a HiTrap Heparin (Cytiva) column equilibrated with Chl1 buffer containing 160 mM NaCl. The column was developed with a linear gradient from 160 mM to 1 M NaCl in Chl1 buffer. The peak fractions were pooled and loaded onto a Superdex 200 Increase (Cytiva) gel filtration
column that was equilibrated and developed with Chl1 gel filtration buffer (20 mM Tris-HCl pH 7.5, 150 mM NaCl, 10% glycerol, 0.5 mM TCEP). The peak fractions were concentrated by ultrafiltration.

In vitro replication assays

MCM loading was carried out for 10 min at 24°C on 3 nM circular DNA that was linearised during loading with 0.3 μl AhdI in a buffer containing 25 mM HEPES (pH 7.6), 10 mM magnesium acetate, 100 mM potassium glutamate, 1 mM DTT, 0.01% NP-40-S, 0.1 mg/ml BSA, 80 mM potassium chloride, 5 mM ATP, 20 nM ORC, 45 nM Cdc6, 75 nM Cdt1-Mcm2-7 and 50 nM DDK. Loading was stopped by the addition of 120 nM S-CDK for 5 min at 24°C. Following loading, samples were diluted in a buffer containing 25 mM HEPES (pH 7.6), 10 mM magnesium acetate, 100 mM potassium glutamate, 1 mM DTT, 0.01% NP-40-S and 0.1 mg/ml BSA to dilute the final contribution of chloride to 14 mM. A nucleotide mix was added to give final concentrations of 200 μM ATP, CTP, GTP and UTP; 30 μM dATP, dCTP, dGTP and dTTP; and 132 nM α-33P-dATP. Subsequently, to initiate replication, a master mix of proteins was added to give final concentrations of 100 nM GINS, 10 nM S-CDK, 10 nM Mcm10, 40 nM Csm3/Tof1, 20 nM Pol ε, 30 nM Dpb11, 40 nM Cdc45, 40 nM Mrc1, 60 nM RPA, 40 nM RFC, 120 nM PCNA, 5 nM Pol δ, 50 nM Pol α, 20 nM Sld3/7 and 20 nM Sld2. Reactions were incubated at 30°C for 40 min. For samples to be run on denaturing gels, 0.5 μl SmaI was added to each 10 μl reaction in the final 10 min of the reaction, which cleaves products approx. 100 bp from the origin of replication. This removes heterogeneity in the length of leading strand products arising due to variability in the exact location where synthesis of leading strands begins, despite origin specificity (Taylor & Yeeles, 2018). Reactions were quenched by adding EDTA to a final concentration of 100 mM.
Pulse-chase experiments were carried out as previously described (Casas-Delucchi et al, 2022). During the pulse, unlabelled deoxyribonucleotide concentrations were as follows: 30 μM dCTP, dTTP and dGTP, and 2.5 μM dATP. To carry out the chase, unlabelled dATP was added to a final concentration of 400 μM, with the addition of 400 μM dGTP, dCTP or dTTP, or 800 μM dCTP where indicated.

Repriming experiments were carried out by the addition of 60 nM oligonucleotide after loading and before initiation of replication (Casas-Delucchi et al, 2022).

Post-replication sample processing

For denaturing gels, after quenching with 20 mM EDTA, 1/10 volume of alkaline loading dye (0.5 M NaOH, 10% sucrose, xylene cyanol in water) was added to samples. Replication products were separated on 0.8% alkaline agarose gels in 30 mM NaOH, 2 mM EDTA at 32 V for 16 h. Subsequently, products were fixed in the denaturing agarose gels by incubation in 5% TCA for 40 min at room temperature.

For two-dimensional (2D) gel electrophoresis, samples were treated as per native gels and then split equally into two lanes on the same native gel as described above (one lane for analysis and one lane for the second dimension). One lane was excised from the native gel and soaked for 2 × 1 h in alkaline running buffer. Gel slices were then horizontally inserted into the top of a 0.8% alkaline agarose gel and run as per standard denaturing gels.

Gels were dried before being exposed to a Storage Phosphor Screen (GE Healthcare, BAS-IP MS 2025) and imaged on a Typhoon scanner (Cytiva). Images were analysed and quantified in ImageJ. Quantifications of stalling intensities were calculated from an average of three replicates in separate experiments. To calculate intensities, the background for each lane was subtracted from each measurement. The 3 kb stalling band intensity was normalised to the intensity of 'leading strand 2' in each lane to account for variation in the efficiencies of reactions for each substrate.
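The quantification described above (per-lane background subtraction, then normalisation of the 3 kb stalling band to the 'leading strand 2' band) reduces to a simple ratio. A minimal sketch follows; the densitometry values in the example are hypothetical, not taken from the paper:

```python
def stalling_intensity(band_3kb, leading_strand_2, lane_background):
    """Background-subtract both band measurements, then normalise the
    3 kb stalling band to 'leading strand 2' to control for differences
    in reaction efficiency between substrates and experiments."""
    return (band_3kb - lane_background) / (leading_strand_2 - lane_background)

# Hypothetical ImageJ densitometry values for one lane:
print(stalling_intensity(500.0, 2100.0, 100.0))  # 0.2
```

In practice this ratio would be averaged over the three replicate experiments, as stated above.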
Preparing templates for helicase assays

Forked DNA substrates were prepared as previously described (Batra et al, 2022). Briefly, dried oligos from IDT were resuspended to 10 μM in 10 mM Tris pH 8.0. The bottom strand of the substrate (GC340, SW051 or SW105) was end-labelled with γ-32P-ATP in a reaction containing 5 pmol oligo, 1× PNK buffer, 1 U of PNK enzyme (NEB, M0201S) and γ-32P-ATP (0.03 mCi). The reaction was incubated for 1 h at 37°C, followed by heat inactivation of PNK for 20 min at 80°C and then 10 min at 90°C. Unincorporated radiolabelled nucleotides were removed by passing the sample through a G50 column (GE Healthcare, 2753002) equilibrated in 10 mM Tris pH 8.0. The concentration of K+ was then adjusted to 50 mM by the addition of 1 M KOAc. To anneal the complementary top strand (GC339 or SW114), 1 μl of a 10 μM stock of oligonucleotide was added and the reaction was incubated at 95°C for 5 min, followed by slow cooling to 10°C at a rate of −1°C/min. Annealed products were run on 10% TBE gels (ThermoFisher Scientific, EC62755BOX) in 0.5× TBE at 150 V for 45 min. Fully annealed substrates were isolated from the gel using a crush-soak method as described in Batra et al (2022). Sequences of all oligonucleotides used to generate helicase unwinding substrates are detailed in Table EV1.
Helicase assays

CMG unwinding assays were carried out using 0.5 nM of labelled substrate and 20 nM of purified CMG. Reactions were assembled in a buffer containing 25 mM HEPES pH 7.6, 10 mM MgOAc, 30 mM NaCl, 0.1 mg/ml BSA and 0.1 mM AMP-PNP. Reactions were incubated at 30°C for 30 min to allow CMG to pre-load onto the template before addition of ATP to a final concentration of 2 mM to stimulate unwinding. At this point, 65 nM of the unlabelled version of the labelled oligo (GC340, SW051, SW069, SW105, SW129 or SW130) was added to trap any unwound oligos and prevent substrate re-annealing. Reactions were incubated at 30°C and time points taken as indicated. Reactions were stopped by the addition of 0.5% SDS and 200 mM EDTA, supplemented with Novex Hi-Density TBE Sample buffer (ThermoFisher Scientific, LC6678) and analysed on 10% Novex TBE gels (ThermoFisher Scientific, EC62755BOX) in 1× TBE at 90 V for 90 min. Gels were exposed to a Storage Phosphor Screen (GE Healthcare, BAS-IP MS 2025) and imaged on a Typhoon (Cytiva). Images were analysed and quantified in ImageJ.

For Chl1, Sgs1 and Rrm3 unwinding assays, reactions were carried out using 0.5 nM of labelled duplex and 50 nM of purified recombinant protein as previously described (Casas-Delucchi et al, 2022). For Hrq1 unwinding assays, reactions were carried out in the same manner except that 0.1 nM labelled duplex and 100 nM of purified recombinant protein were used and reactions were carried out at 37°C.

Biophysical characterisation of G4 sequences

DNA annealing step

DNA sequences (Table 1) were purchased in lyophilised form with standard desalting purification from Integrated DNA Technologies (IDT) and re-dissolved in MilliQ water to a stock concentration of 100 μM. The sequences were then further diluted to 10 μM in the annealing buffer (25 mM HEPES, 10 mM MgCl2, 110 mM KCl/LiCl), heated to 95°C for 15 min and left to cool slowly to room temperature overnight.
Thermal difference spectra (TDS)

100 μl of each of the 10 μM DNA sequences (annealed as above) was transferred into a High Precision Cell (quartz glass, 10 mm light path; Hellma Analytics) and covered with 200 μl of mineral oil to prevent evaporation. The cuvettes were sealed with a plastic lid and transferred into the Agilent Cary 3500 UV-Vis Multicell Peltier spectrometer. A first scan was run at 25°C (scan range = 800-200 nm | averaging time (s) = 0.02 | data interval (nm) = 1 | scan rate (nm/min) = 3,000 | spectral bandwidth (nm) = 2). The samples were then heated to 95°C and left to equilibrate for 7 min before running a second scan. Each TDS curve was obtained by subtracting the 25°C absorbance spectrum from the 95°C absorbance spectrum.

UV-Vis melting curve

100 μl of each of the 10 μM DNA sequences (annealed as above) was transferred into a High Precision Cell (quartz glass, 10 mm light path; Hellma Analytics) and covered with 200 μl of mineral oil to prevent evaporation. The cuvettes were sealed with a plastic lid and transferred into the Agilent Cary 3500 UV-Vis Multicell Peltier spectrometer. The melting curve was obtained by inputting the following parameters (wavelength: 295 nm | averaging time (s) = 0.1 | spectral bandwidth (nm) = 2) and the heating protocol in Table 3. Data were collected only for stages 5 and 6 of each run. Tm was extrapolated for each sample by fitting the relative melting curve as described by Mergny and Lacroix (Mergny & Lacroix, 2003) with the Python script provided by Giacomo Fabrini (Fabrini, 2022).
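The idea behind the Tm extrapolation can be illustrated with a simplified sketch: convert the 295 nm melting curve to an apparent folded fraction and interpolate where it crosses 0.5. This uses the curve extremes as flat baselines rather than the fitted sloping baselines of the Mergny & Lacroix procedure, and it is not the Fabrini script used in the study:

```python
import numpy as np

def estimate_tm(temps, a295):
    """Estimate Tm from a UV melting curve (A295 vs temperature).
    Assumes A295 decreases monotonically on melting (G4 hypochromicity
    at 295 nm); the curve extremes stand in for folded/unfolded
    baselines, and Tm is where the folded fraction crosses 0.5."""
    folded = (a295 - a295.min()) / (a295.max() - a295.min())
    # np.interp needs ascending x values; folded decreases with temperature
    return float(np.interp(0.5, folded[::-1], temps[::-1]))

# Synthetic two-state melting curve with a midpoint at 60 °C:
temps = np.linspace(20, 95, 151)
a295 = 1.0 / (1.0 + np.exp((temps - 60.0) / 2.0))
print(round(estimate_tm(temps, a295), 1))  # 60.0
```

Real data would additionally need baseline fitting, as flat baselines bias Tm when the folded and unfolded absorbances drift with temperature.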
Biophysical characterisation of i-Motif sequences

Oligonucleotides

DNA sequences (Table 2) were supplied by Eurogentec (Belgium), synthesised on a 1,000 nmol scale and purified by reverse-phase HPLC. All DNA sequences were dissolved in ultra-pure water to give 100 μM final concentrations, which were confirmed using a Nanodrop. For all experiments, ODNs were diluted in buffer containing 10 mM sodium cacodylate and 100 mM potassium chloride at the indicated pH. DNA samples were thermally annealed by heating in a heat block at 95°C for 5 min and cooled slowly to room temperature overnight.

Circular dichroism

CD spectra were recorded on a Jasco J-1500 spectropolarimeter using a 1 mm path length quartz cuvette. ODNs were diluted to 10 μM (total volume: 100 μl) in buffer at pH increments of 0.25 or 0.5 pH units from 4.0 to 8.0, depending on the sequence. Spectra were recorded at 20°C between 200 and 320 nm. Data pitch was set to 0.5 nm and measurements were taken at a scanning speed of 200 nm/min, response time of 1 s, bandwidth of 2 nm and 100 mdeg sensitivity; each spectrum was the average of four scans. Samples containing only buffer were also scanned according to these parameters to allow for blank subtraction. The transitional pH (pH_T) for each iM was calculated from the inflexion point of the fitted ellipticity at 288 nm.
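The inflexion-point calculation can be approximated numerically: locate the pH where the slope of the 288 nm ellipticity vs pH curve is steepest. This is a coarse stand-in for the sigmoid fit implied above, offered only as an illustration:

```python
import numpy as np

def transitional_ph(ph, theta_288):
    """Estimate pH_T as the inflexion point of ellipticity (288 nm) vs
    pH: the pH where the absolute slope of a dense linear interpolation
    of the titration curve is maximal. A coarse numerical stand-in for
    fitting a sigmoid to the data."""
    ph = np.asarray(ph, dtype=float)
    theta_288 = np.asarray(theta_288, dtype=float)
    ph_fine = np.linspace(ph.min(), ph.max(), 2001)
    theta_fine = np.interp(ph_fine, ph, theta_288)
    slope = np.gradient(theta_fine, ph_fine)
    return float(ph_fine[np.argmax(np.abs(slope))])

# Synthetic titration sampled at 0.25 pH increments, true pH_T = 6.2:
ph = np.arange(4.0, 8.01, 0.25)
theta = 1.0 / (1.0 + 10.0 ** (ph - 6.2))
print(transitional_ph(ph, theta))
```

Because the curve is only sampled every 0.25 pH units, this estimate is limited by the sampling grid; a proper sigmoid fit recovers pH_T to the ±0.1 precision quoted in Table 2.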
UV absorption spectroscopy

UV spectroscopy experiments were performed on a Jasco J-750 equipped with a Julabo F-250 Temperature Controller and recorded using low-volume masked quartz cuvettes (1 cm path length). Annealed DNA samples (250 μl) were transferred to a cuvette and covered with a stopper to reduce evaporation of the sample. The absorbance of the DNA was measured at 295 nm and 260 nm as the temperature of the sample was held for 10 min at 4°C, heated to 95°C at a rate of 0.5°C per min, then held at 95°C for 10 min before the process was reversed; each melting/annealing process was repeated three times. Data were recorded every 1°C during both melting and annealing, and melting temperatures (Tm) were determined using the first derivative method. TDS were obtained by subtracting the spectrum of the folded structure between 220 and 320 nm at 4°C from that of the unfolded structure at 95°C. The data were normalised and the maximum change in absorption was set to +1 as previously described (Mergny et al, 2005).

Data analysis

Final analysis and presentation of the data were performed using GraphPad Prism version 9.0. All data sets passed the Shapiro-Wilk normality test; P-values were calculated by one-way ANOVA followed by Holm-Sidak post hoc analysis for the melting temperature and thermodynamic data collected from the triplicate measurements for each oligonucleotide.

Nanopore detection of structures

Plasmids for nanopore analysis were pre-linearised at the origin of replication by digestion with SmaI. This generated linear templates with the structure-forming sequence positioned asymmetrically from the ends of the DNA (30% into the template, 70% from the other end). Following digestion, DNA was extracted using phenol:chloroform:isoamyl alcohol 25:24:1 (Sigma-Aldrich, P2069), ethanol precipitated and resuspended in ddH2O.
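The TDS computation in the UV absorption spectroscopy section above (unfolded spectrum minus folded spectrum, maximum change set to +1) is a two-line operation. A minimal sketch with toy absorbance values (the numbers are hypothetical):

```python
import numpy as np

def thermal_difference_spectrum(a_unfolded_95c, a_folded_4c):
    """TDS: spectrum of the unfolded structure (95 °C) minus that of
    the folded structure (4 °C), normalised so the maximum change in
    absorption equals +1 (as in Mergny et al, 2005)."""
    diff = np.asarray(a_unfolded_95c, dtype=float) - np.asarray(a_folded_4c, dtype=float)
    return diff / diff.max()

# Toy spectra at three hypothetical wavelengths:
tds = thermal_difference_spectrum([0.9, 1.2, 1.0], [0.8, 0.9, 1.0])
print(tds.max())  # 1.0 by construction
```

The shape of the resulting curve (peaks and troughs at characteristic wavelengths) is what provides the structural signature referred to in Table 2.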
Synthesis of control DNA molecules

The linear ssDNA scaffold for our positive and negative control designs was obtained by cutting circular M13mp18 DNA (New England Biolabs) at the BamHI-HF and EcoRI-HF (New England Biolabs, 100 units/μl) restriction sites following our published protocol (Bell & Keyser, 2016). Staple 42 and staple 43 in the basic staple set (Bell & Keyser, 2016) were replaced by our customised strand G16, with four repeats of GGGT in the middle, to form the G-quadruplex secondary structure in the presence of potassium cations. Another strand, cG16, was added to form a double helix with the scaffold at the position where the G-quadruplex formed and thus stabilise the structure. All DNA oligonucleotides were purchased from Integrated DNA Technologies, Inc. (IDT). Detailed sequences of the customised oligonucleotides can be found in Table 4. After mixing the modified oligonucleotide set and the M13mp18 scaffold at a 5:1 stoichiometric ratio, the solution was heated to 70°C followed by a linear cooling ramp to 25°C over 50 min. Excess oligonucleotides were removed using Amicon Ultra 100 kDa filters. After quantification with a Nanodrop 2000 spectrophotometer, samples were kept in a freezer at −20°C for later measurements.
Nanopore measurement

Nanopores used in this project were fabricated by laser-assisted pulling (P-2000, Sutter Instrument) of quartz capillaries (outer diameter 0.5 mm and inner diameter 0.2 mm, Sutter Instrument) as in our previous work (Bell & Keyser, 2016), though a higher pulling temperature (HEAT = 500) was used to obtain smaller nanopores with diameters of about 6 nm and a higher signal-to-noise ratio. Current-voltage characteristic curves from −600 mV to 600 mV were recorded to estimate the sizes and the root-mean-square (RMS) noise of the ionic current through the nanopores. Details can be found in Table 5. Once functional nanopores had been identified, the central reservoir of our nanopore chip was filled with our DNA sample (0.2 nM in 200 mM KCl and 4 M LiCl buffer, except for 0.2 nM in 4 M LiCl buffer when measuring the negative control group). A positive voltage of 600 mV was then applied to drive negatively charged DNA molecules through the nanopore, creating characteristic transient changes in the ionic current trace.

The EMBO Journal Sophie L. Williams et al

Data analysis was performed with LabVIEW software and self-written Python programs. It is important to note that a small slope was often observed at the baseline, probably due to a slight change in the buffer concentration as the measurement was carried out. The current baseline was linearly fitted, and the slope was corrected in Figs 4A-C and EV4 for better presentation. When we drew a reference line at 0.15 nA below the current baseline, the two intersections of the reference line and the current trace of one event were taken as its start and end. The event duration Δt0 is the timescale between the start and end, and Δt refers to the interval between any second-level peak beyond the first-level plateau and the start (Fig EV4A). The position of the peak is calculated as Position = Δt/Δt0.
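The event-detection rule above (event start/end where the trace crosses a reference line 0.15 nA below the fitted baseline, peak position reported as Δt/Δt0) can be sketched as follows. The trace in the example is synthetic; the actual analysis used the authors' LabVIEW and Python programs:

```python
import numpy as np

def peak_position(t, current, baseline, threshold=0.15):
    """Locate one translocation event as the region where the current
    drops more than `threshold` nA below the (already slope-corrected)
    baseline, then return the relative position of the deepest
    secondary peak within the event: Position = Δt / Δt0."""
    below = np.flatnonzero(current < baseline - threshold)
    start, end = below[0], below[-1]      # event start/end indices
    dt0 = t[end] - t[start]               # event duration Δt0
    deepest = start + np.argmin(current[start:end + 1])
    return (t[deepest] - t[start]) / dt0  # Δt / Δt0

# Synthetic trace: 1.0 nA baseline, event plateau at 0.7 nA, with a
# deeper secondary peak (0.3 nA) 30% of the way through the event.
t = np.linspace(0.0, 1.0, 101)
current = np.full_like(t, 1.0)
current[20:81] = 0.7
current[38] = 0.3
print(round(peak_position(t, current, baseline=1.0), 2))  # 0.3
```

A real trace would contain many events and noise, so the per-event segmentation would run over each contiguous sub-threshold region rather than the whole recording.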
Figure 4. Secondary structures are not pre-formed and likely arise as a result of replication, leading to helicase-polymerase uncoupling.

Figure 5. Replication products break at i-Motifs. (A) Schematic of replication products arising from replication of a control template (top), a control template digested with NotI post-replication to simulate 'broken products' (middle), and an iM template where potential product breakage occurs at the site of the iM (indicated by a red star) (bottom). (B) Analysis of control or iM-containing replication products on a native gel. 'Broken products' were generated by replication of a control template and post-reaction digestion with NotI, which cleaves the product at the site of the insert. (C, D) Two-dimensional (2D) gel electrophoresis of replication products of an iM-containing template (C) or a 'broken products' control (D) (as described in A). Replication products were run first in the native dimension and subsequently in the denaturing dimension. The inset in (C) shows increased contrast of the region containing the broken products, as indicated by the dashed box. Source data are available online for this figure.

Table 1. Sequences of G4 oligonucleotides and their biophysical properties.

(Fig 1B, lane 1). In contrast, a faint but reproducible 3 kb band appeared in the presence of some of the G4-forming sequences tested, indicating replisome stalling (Fig 1B, lanes 2-8). The efficiency of replication reactions is inherently variable between different experiments and different templates. This can result in variability in the stalling intensities observed for any given sequence.

Table 2. Sequences of iM oligonucleotides and their biophysical properties. Melting and annealing temperatures and their hysteresis measured via UV spectroscopy; data shown as mean ± SD (n = 3); italicised temperatures are less prominent but present. Transitional pH determined by CD spectroscopy, error ± 0.1. Thermal difference spectra profile the DNA structural signature.

Table 4.
Detailed sequences of the customised oligonucleotides.

Table 3. Heating protocol for UV-Vis melting curve of DNA sequences.

Table 5. Parameters of the nanopores used.
Gynecologic oncology patients' satisfaction and symptom severity during palliative chemotherapy

Background: Research on quality and satisfaction with care during palliative chemotherapy in oncology patients has been limited. The objective was to assess the association between patients' satisfaction with care and symptom severity, and to evaluate test-retest reliability of a satisfaction survey in this study population.

Methods: A prospective cohort of patients with recurrent gynecologic malignancies receiving chemotherapy was enrolled after a diagnosis of recurrent cancer. Patients completed the Quality of End-of-Life care and satisfaction with treatment scale (QUEST) once upon enrollment in an outpatient setting and again a week later. Patients also completed the Mini-Mental Status Exam, the Hospital Anxiety/Depression Scale, a symptom severity scale and a demographic survey. Student's t-test, correlation statistics and percent agreement were used for analysis.

Results: Data from 39 patients were analyzed. Mean (SD) quality of care summary score was 41.95 (2.75) for physicians and 42.23 (5.42) for nurses (maximum score 45; p = 0.76 for difference in score between providers). Mean (SD) satisfaction with care summary score was 29.03 (1.92) for physicians and 29.28 (1.70) for nurses (maximum score 30; p = 0.49 for difference between providers). Test-retest for the 33 patients who completed both QUEST surveys showed high percent agreement (74-100%), with the exception of the question regarding the provider arriving late (45 and 53%). There was no correlation between quality and satisfaction of care and symptom severity. Weakness was the most common symptom reported. Symptom severity correlated with depression (r = 0.577, p < 0.01). There was a trend towards a larger proportion of patients reporting pain among those who had received three or more prior chemotherapy regimens (p = 0.075). Prior number of chemotherapy regimens and time since diagnosis were not correlated with symptom severity score.
Anxiety and depression were correlated with each other (r = 0.711, p < 0.01). There was no difference in symptom severity score at enrollment between those patients who have since died (n = 19) and those who are still alive.

Conclusion: The QUEST survey has test-retest reliability when used as a written instrument in an outpatient setting. However, there was no correlation between this measure and symptom severity. Patient evaluation of care may be more closely related to the interpersonal aspects of the health care provider relationship than to physical symptoms.

Background

Understanding patient perceptions of the technical and interpersonal care they receive, and their satisfaction with that care, is essential. Assessments of quality and satisfaction of care in oncology have focused on patients' satisfaction with physicians or the health care system [1,2]. Research on satisfaction during palliative care and care at the end of life (EoL) of cancer patients has been limited [3]. Global measures of quality and satisfaction with care are not completely revealing, because they do not indicate which issues, such as symptom management, the provider should focus on improving [4]. Patients' satisfaction with care may be significantly affected by their symptoms and the physician's response to these symptoms, particularly during the advanced stages of cancer. Gynecologic cancer symptoms are multifactorial in character, as the primary cancer frequently metastasizes to other pelvic and abdominal organs. Women with ovarian cancer present with a constellation of symptoms including back pain, fatigue, abdominal pain and urinary symptoms [5]. Ferrell et al assessed patients with ovarian cancer post-diagnosis; pain, fatigue and gastrointestinal effects were the most problematic [6]. Sun et al revealed that fatigue was a significant problem, including higher levels of distress, in ovarian cancer patients with recurrent disease [7].
However, the extent to which satisfaction with care is related to perceptions of concern and effort by providers, or to underlying patient mood state, rather than to symptoms alone, is not known [8]. The purpose of this study was to examine the relationship between patients' perception of quality and satisfaction with care and symptom severity during palliative chemotherapy for recurrent gynecologic malignancies. In 2004, the National Cancer Institute declared the importance of improving symptom management for cancer patients [9]. However, there is currently no information on the link between this population's symptoms, anxiety, depression and perception of the quality of cancer care directly influenced by clinicians. In addition, we wanted to evaluate the test-retest properties of the Quality of End-of-Life care and satisfaction with treatment scale (QUEST) in this study population.

Methods

Prospective patients with gynecologic malignancies receiving chemotherapy were enrolled after a diagnosis of recurrent cancer in this IRB-approved study. Patients were seen in the oncology clinic office by their treating gynecologic oncologist and chemotherapy nurse specialist, and informed consent was obtained for participation in this study. Patients received a variety of chemotherapy agents depending on prior treatment and patient/physician preferences. Eligibility criteria included age of 18 or greater and a Mini-Mental Status Exam score of 12 or higher. Patients completed the QUEST survey regarding quality of care and satisfaction with care received from both their physicians and nurses. The survey was completed once upon enrollment in an outpatient setting and again a week later. Patient responses were placed in a sealed envelope, and patients were assured that their individual responses would not be revealed to their treating physician or nurse.
Patients also completed the Mini-Mental Status Exam, the Hospital Anxiety and Depression Scale, a symptom severity scale and a demographic survey. Patient charts were reviewed to obtain demographic and clinical variables.

Measures

The QUEST survey contains fifteen items categorized into two sub-scales in which patients rate, separately for their physicians and nurses, the quality of the care they have received and their satisfaction with that care. Quality (nine questions) was rated using a 5-point Likert scale assessing how often particular behaviors or styles of care were true of their health care providers, with ratings ranging from "never" to "always". Similarly, satisfaction (six questions) was rated using a 5-point Likert scale ranging from "very dissatisfied" to "very satisfied". Items for each scale were summated to obtain overall scores for both quality of and satisfaction with care [3]. Folstein et al developed a simplified, scored form to evaluate mental state. The Mini-Mental Status Exam (MMSE) includes eleven questions (maximum score of 30), requires 5-10 minutes to administer and is practical for serial and routine use. The MMSE concentrates on the cognitive aspects of mental function and has documented validity and reliability [10]. This evaluation tool was used to screen patients at enrollment for any mental deficiencies; it has not previously been used in this patient population. The Hospital Anxiety and Depression Scale (HAD) is an established, convenient self-rating screening instrument for anxiety and depression [11,12]. The survey consists of fourteen multiple-choice items scored on a scale of 0 to 3, with questions categorized as measuring anxiety or depression. A score of 8 or higher on either scale indicates that the patient may have an anxiety or depression disorder and should be evaluated further. Previous research in this population has indicated increased levels of anxiety and depression [7].
A symptom severity scale adapted from that of Mercadante et al. was used to analyze the frequency and severity of common gynecologic cancer symptoms [13]. Symptoms included pain, shortness of breath, nausea/vomiting, weakness and drowsiness and were included in a standard form and rated for severity (absent 0, mild 1, moderate 2, severe 3). A brief demographic survey regarding religious affiliation and educational level was completed by patients. Statistical analysis Patient demographics and clinical characteristics were summarized using descriptive statistics. Student's t-test was used to compare QUEST scores between physicians (MD) and nurses (RN) in all patients, and correlation analysis was done between the HADS, symptom severity scale and QUEST surveys in order to determine whether increased symptoms were associated with increased depression or anxiety and decreased satisfaction scores. Patients completing both QUEST surveys (n = 33) were used to compare scores between the initial survey and a second survey administered one week later. Percent agreement, correlations, and paired t-tests were used to compare scores for patients completing the survey at both time points. Symptom severity was analyzed in only ovarian cancer patients by correlation analysis and chi-square statistics. The other gynecologic malignancies were not included in this analysis because the number of cases was small and interpretation may not be applicable to other cancer types. Results Forty-four patients were approached regarding the study and 41 enrolled in this prospective study from September 2003 to March 2006. Two patients withdrew after enrollment due to time constraints. Patient demographics and clinical characteristics of the 39 patients with complete data are summarized in Table 1. The majority of patients were married, Caucasian and had some college or higher education (57%). Gynecologic cancers included 79% ovarian cancer, 18% endometrial cancer and 2.5% vaginal cancer.
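Two of the analyses described above, the correlation between symptom severity and satisfaction and the paired t-test for the test-retest comparison, can be sketched with standard-library arithmetic; the data below are hypothetical, not the study's.

```python
# Standard-library sketch of two analyses described above, on
# hypothetical data: Pearson correlation (symptom severity vs.
# satisfaction) and the paired t statistic for the test-retest
# comparison of QUEST totals.
from math import sqrt
from statistics import mean, stdev

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def paired_t(x, y):
    # t = mean of the pairwise differences / standard error of the differences
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

symptoms = [0, 1, 3, 2, 5, 4, 6, 2, 1, 0]                # hypothetical severity sums
satisfaction = [30, 29, 28, 29, 27, 27, 26, 29, 30, 30]  # hypothetical QUEST scores
survey1 = [40, 38, 42, 41, 39, 44, 37, 40]               # week 0, same patients
survey2 = [41, 38, 41, 42, 39, 43, 38, 40]               # one week later
print(round(pearson_r(symptoms, satisfaction), 3))
print(round(paired_t(survey1, survey2), 3))
```

A p-value for the t statistic would additionally require the t-distribution CDF, which is why a statistics package is normally used for the full test.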
Mini-mental status exam scores were high for all patients (range 27-30) and no patients were excluded based on this exam. Mean scores for both physicians and nurses regarding quality of care and satisfaction with care received were high ( Table 2). There were no differences in scores between providers (MD versus RN). Thirty-three patients completed both QUEST surveys. Mean scores on surveys were compared and there were no differences between scores on the first and second survey. In addition, correlation coefficients were high (Table 3). Percent agreement between surveys for individual questions was calculated. With the exception of question #2 (provider arriving late for appointment), agreement between answers obtained on both occasions was high (Table 4). Thirteen patients (33%) had an anxiety score greater than 8 and 5 patients (13%) had a depression score of 8 or higher. Anxiety and depression were highly correlated with each other (r = 0.711, p < 0.01). Patients with increased scores were referred to a psychologist within our department for possible treatment. Symptom severity data, available only for ovarian cancer patients (n = 31), were used to examine patterns and relationships with satisfaction (Table 5). There was no correlation between quality of care and satisfaction scores on the QUEST with symptom severity (r = 0.085 and r = 0.009 respectively). Weakness was the most common symptom reported, and 10 patients (32%) reported no symptoms whatsoever. Symptom severity was correlated with depression (r = 0.577, p = 0.001), but not anxiety. Prior number of chemotherapy regimens or time since diagnosis was not correlated with overall symptom severity score. When patients were stratified based on number of prior chemotherapy regimens, there was a trend towards more frequent reports of pain in patients who had undergone more chemotherapy regimens. 
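Percent agreement between the two administrations, as used above for individual questions, is simply the fraction of patients giving the identical answer both times; a minimal sketch on hypothetical Likert responses:

```python
# Percent agreement for one survey item across two administrations:
# the share of patients giving the identical answer both times.
# Responses below are hypothetical 5-point Likert answers.
def percent_agreement(first, second):
    if len(first) != len(second) or not first:
        raise ValueError("need two equal-length, non-empty response lists")
    matches = sum(a == b for a, b in zip(first, second))
    return 100.0 * matches / len(first)

week0 = [5, 4, 5, 5, 3, 4, 5, 5, 4, 5]
week1 = [5, 4, 4, 5, 3, 4, 5, 5, 4, 5]
print(percent_agreement(week0, week1))  # -> 90.0
```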
Five out of the nine patients (55%) who had undergone three or more chemotherapy regimens reported pain, compared to four out of the 18 patients (22%) with one or two regimens (p = 0.075). Nineteen patients have died since the study began, all of whom were enrolled during the years 2003-2004. There was no difference in symptom severity score at enrollment between patients who have died versus those still alive. Discussion In this prospective, observational study there was no correlation between perceptions of quality and satisfaction with care and symptom severity. Clinical variables, such as prior number of chemotherapy regimens or time since diagnosis, also were not related to the symptom severity score. There was a trend towards a larger proportion of patients who had multiple prior chemotherapy regimens reporting pain. Weakness was the most common symptom reported. Anxiety and depression were correlated with each other and symptom severity was correlated with depression. We also found the QUEST survey to have test-retest reliability when used as a written instrument in an outpatient setting. Quality cancer care includes provision of the most effective curative therapies, as well as excellent symptom management and sensitive end-of-life care. Symptom management, the core of palliative care, is an integral part of cancer care throughout the disease trajectory, while "end-of-life" care usually refers to care during the terminal phase or last few weeks or months of life. There is no objective dividing line between palliative care and end-of-life care and we use the terms interchangeably [6] in this paper to differentiate them from curative aspects of care. Given that the majority of patients in our sample had ovarian cancer, which typically recurs after an initial remission, goals of palliative therapy include both prolonging survival as well as maintaining or improving quality of life.
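The pain comparison above (5 of 9 versus 4 of 18 patients, p = 0.075) can be re-checked with Fisher's exact test, the usual choice for a 2x2 table with small cell counts. The sketch below implements the two-sided test from the hypergeometric distribution using only the standard library; the paper's p-value likely comes from a chi-square, so the exact p may differ.

```python
# Two-sided Fisher's exact test from the hypergeometric distribution,
# applied to the 2x2 table above: 5/9 patients with >=3 prior regimens
# reported pain vs 4/18 patients with 1-2 regimens.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """p = sum of probabilities of all tables with the same margins that
    are no more likely than the observed table [[a, b], [c, d]]."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):  # hypergeometric probability of x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

odds_ratio = (5 * 14) / (4 * 4)              # sample odds ratio -> 4.375
p = fisher_exact_two_sided(5, 4, 4, 14)
print(odds_ratio, round(p, 3))
```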
Weakness and fatigue are problematic in women with gynecologic cancer, especially ovarian cancer patients who receive multiple chemotherapy regimens. In an interesting research design, Ferrell et al. abstracted data from "Conversations!", a newsletter for those with ovarian cancer in which patients publish their commentary [6]. Data were abstracted from personal stationery, greeting cards, and e-mail. In the pre-diagnostic complaints, fatigue was secondary only to bloating/abdominal swelling. Sun et al. assessed 70 patients with ovarian cancer undergoing chemotherapy for primary or recurrent disease [7]. While nausea and vomiting were the most problematic, fatigue also was a significant problem and higher levels of distress were associated with recurrent disease. These data suggest that there may be a predictable progression of symptoms from the initial abdominal discomfort to progressive weakness and fatigue. Thus, it may be helpful for clinicians to specifically assess for these symptoms and prepare patients for their occurrence. The QUEST survey focuses on the patient's perception of providers' time, access, and communication. In our study, scores were consistently high and there were no differences between nurses and doctors as health care providers. Patients were seen in the oncology clinic office usually by the same gynecologic oncologist and chemotherapy nurse specialist. Patients were assured that their responses to questionnaires would be kept anonymous. It is possible, however, that patients may have provided answers that they felt their health care provider expected to hear and did not feel as if they could express negative feelings. Sulmasy et al. revealed differences with this survey between physicians and nurses. However, this instrument may not be sensitive enough to detect variations in clinics where patients receive consistent care from the same attending physicians and nurses [3].
It is also possible that this questionnaire is not sensitive enough to pick up small fluctuations in care. This study revealed no correlation between satisfaction with care and symptom severity. This may be a function of the limited variance in the quality of satisfaction measures. However, it also may suggest that patient evaluation of care is related more to the interpersonal aspects (trust, caring) of the physician-patient or nurse-patient relationship than it is to physical symptoms. If the patient feels confident in the health care providers and perceives them to be sincerely concerned, even if the symptom management is not completely effective, the patient remains satisfied. This reinforces the importance of providers focusing on interpersonal communication, as well as provision of technically competent care, to improve satisfaction with care. Weaknesses of the study include the limited sensitivity of the QUEST survey with this population of patients. Because of the multidimensional nature of quality of care, a single measure cannot provide a complete assessment of impact. Recent instruments to evaluate symptom severity and satisfaction with care have been developed and may be more appropriate for use in future studies [14,15]. Other options in quality of life (QOL) measures could include FACIT-Pal for palliative care [16], FACIT-TS-PS for treatment satisfaction [16], and possibly the Missoula-VITAS QOL index designed to measure QOL of patients with advanced incurable diseases, weighing each dimension according to patient-reported importance [17]. An additional limitation was the brief measure of fatigue, which was the major symptom in this population. A detailed fatigue measure such as the FACIT-F should be administered to expand on the symptom evaluation for further interventions [16]. Many of the above tools were unavailable at the beginning of this study in 2003.
Future directions include an ongoing intervention trial targeting symptom improvement in ovarian cancer patients during palliative chemotherapy. Conclusion The QUEST survey does demonstrate adequate test-retest reliability when used as a written instrument in an outpatient setting with gynecologic oncology patients. High satisfaction and quality of care scores were obtained; however, it may be that a variety of research instruments should be used to evaluate this health care domain. In this pilot investigation of women receiving palliative chemotherapy, the most common symptom was weakness. In addition, anxiety and/or depression were observed in over a third of patients in this study population. As patients' cancer progresses despite chemotherapy, they should be frequently assessed and offered interventions for cancer symptoms. Competing interests The author(s) declare that they have no competing interests. Authors' contributions VVG and AMR conceived of the study and design. VVG, HG, JH, EE, NF coordinated the study. VVG and HG were responsible for day-to-day conduct of the study and analysis.
Antibiotic in myrrh from Commiphora molmol preferentially kills nongrowing bacteria Aim: To demonstrate that myrrh oil preferentially kills nongrowing bacteria and causes no resistance development. Method: Growth inhibition was determined on regular plates or plates without nutrients, which were later overlaid with soft agar containing nutrients to continue growth. Killing experiments were done in broth and in buffer without nutrients. Results: Bacterial cells were inhibited preferentially in the absence of nutrients or when growth was halted by a bacteriostatic antibiotic. After five passages in myrrh oil, surviving colonies showed no resistance to the antibiotic. Conclusion: Myrrh oil has the potential to be a commercially viable antibiotic that kills persister cells and causes no resistance development. This is a rare example of an antibiotic that can preferentially kill nongrowing bacteria. Most of the antibiotics in use today were discovered in the first five decades after the discovery of penicillin, the first commercial antibiotic. Very few new antibiotics have been developed in recent times and very few are in the pipeline. Meanwhile, antibiotics, which were once called miracle drugs, are no longer as predictably effective today. The main reason for this is that overuse and inappropriate use of antibiotics result in development of resistant bacteria [1]. Another reason for the ineffectiveness of antibiotics is their inability to kill nongrowing cells. Even nonresistant bacteria can withstand antibiotics for varying times, which is the reason why antibiotic treatments need to be continued for several days. Because of phenotypic variations of the infecting bacteria in host tissues, there are always some bacterial cells that are slow growing or nongrowing and less responsive to antibiotics, thus requiring extended treatment [2]. Some examples of infections by slow growing bacteria are urinary tract infections, tuberculosis and leprosy [3,4]. 
Discovery of most of the antibiotics in clinical use today involved testing their activities in vitro on exponentially growing cells. Thus, it is widely believed that most of these antibiotics are effective only on growing cells. In a classical study by Eng et al. [5] it was demonstrated that except for fluoroquinolones and ofloxacin, all antibiotics tested had very little activity on nongrowing cells. It is known that treatment of bacteria with bacteriostatic antibiotics, reduces the efficacy of those bactericidal antibiotics that are effective only on growing cells [6]. Bacteria may be nongrowing for several different reasons: they may enter a stationary phase due to lack of nutrients; in a population of growing cells there may be a genetically identical sub-population of nongrowing cells, which are known as persisters; they may be nongrowing or slowly growing when bound to solid surfaces such as biofilms on prosthetic devices or when inside phagocytes; and their growth may be stopped by treatment with bacteriostatic antibiotics, which inhibit growth of bacteria but do not kill them. In a recent study by McCall et al. [7] it was demonstrated that contrary to popular belief that most antibiotics cannot kill nongrowing cells, there are actually numerous antibiotics that are capable of killing both Gram-positive and Gram-negative cells that are not growing irrespective of the reason for their nonreplication. However, even in this expanded list of antibiotics, there are hardly any that can preferentially kill nongrowing compared with growing bacteria. In this study we report that the resin myrrh from the thorny plant, Commiphora molmol preferentially kills nongrowing bacteria at a fast rate, thus representing one of the first known examples of antibiotics with such property. Many of the top-selling drugs are natural products or their derivatives including those obtained from plants [8]. 
Commiphora molmol (aka Commiphora myrrha, common name: myrrh) is an aromatic plant belonging to the Burseraceae family, also known as the torchwood or incense family [9]. Two members of the family, frankincense and myrrh, have been used as perfume, incense and medicine dating back thousands of years in all parts of the world and have found their place in most religious practices. Myrrh grows naturally in India, East Africa and Saudi Arabia. The reddish gum resin (called myrrh) is the hardened form of the sap that is extracted by making longitudinal cuts in the tree trunk. Myrrh has been used in traditional medicine as a remedy for mouth injuries, colds and healing of wounds [10]. However, systematic scientific study of its antibiotic activity is very limited. One possible reason for this lack of positive results is the traditional methodology for testing of antibiotic activity, which is always performed on growing bacterial cells. However, we have discovered that this antibiotic has the unique property of preferentially killing nongrowing bacteria. We present here a modified method to demonstrate the strong antibiotic activity of myrrh against nongrowing bacterial cells. This is of great significance since the inability to kill nongrowing cells is one of the main reasons for the ineffectiveness of most antibiotics currently in use. Isolation of myrrh extract Commiphora molmol (myrrh) resin was purchased from an herbal store in Riyadh, Saudi Arabia. Myrrh resin (2.0 g) was ground to fine pieces using a mortar and pestle and soaked in 5 ml of 95% ethanol three times and decanted. The combined ethanol extract was centrifuged at 12,000 × g for 10 min and the supernatant was collected. The ethanol was evaporated under reduced pressure, which left 0.97 ml (0.762 g) of oil. Since the oil is insoluble in water, a 20% v/v (corresponding to 15.7% w/v) solution of the oil in ethanol was used as a stock solution for all experiments.
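The equivalence stated above between the 20% v/v stock and 15.7% w/v follows from the measured density of the extracted oil (0.762 g in 0.97 ml):

```python
# Arithmetic behind the stock solution above: 0.762 g of oil in 0.97 ml
# gives the density that links the 20% v/v stock to its reported
# 15.7% w/v equivalent.
oil_mass_g = 0.762
oil_volume_ml = 0.97
density = oil_mass_g / oil_volume_ml        # g/ml

v_v_percent = 20.0                          # ml of oil per 100 ml of solution
w_v_percent = v_v_percent * density         # g of oil per 100 ml of solution
print(round(density, 3), round(w_v_percent, 1))  # -> 0.786 15.7
```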
All percent concentrations in this report are expressed as w/v. Bacterial strains & culture conditions Escherichia coli (MV10) and Staphylococcus aureus (ATCC 25923) were grown in Luria-Bertani (LB) medium at 37°C. Nutrient-free phosphate plates were prepared by the same method as other plates except that sodium phosphate buffer pH 7.0 was added to a final concentration of 10 mM instead of LB. When needed, various amounts of 15.7% (w/v) myrrh oil stock solution were added prior to pouring on plates. Inhibition studies Because of the insolubility of myrrh oil in water, minimum inhibitory concentration (MIC) experiments could not be done in broth. Therefore, all MIC experiments in this study were done on plates. Cells were grown overnight and serial dilutions were spread on plates containing either LB or phosphate buffer and various concentrations of the myrrh oil. LB agar plates were incubated overnight and the colonies that grew were counted. Nutrient-free phosphate plates were first incubated for 1 h, during which time the cells could not grow because the plates contained no nutrients. Growth was resumed for the cells that survived antibiotic treatment by pouring 3.5 ml soft agar (0.6%) containing LB at 45°C on top of the phosphate-agar plates. After incubating the plates for 24 h at 37°C, colonies that grew were counted. For all experiments involving counting of colonies, weighted averages of the different serial dilution plates were calculated. MIC is defined as the concentration of the antibiotic that stopped the growth of >99.9% of cells. Zones of inhibition were determined by the Kirby-Bauer disk diffusion method followed by background staining for greater visualization [11]. Rate of killing of cells by myrrh oil Rate of killing was determined for both growing and nongrowing cells.
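The weighted averaging of serial-dilution plate counts mentioned above can be sketched as follows: total colonies counted divided by the total equivalent volume of undiluted culture plated. The counts and dilutions below are illustrative; the 0.1 ml plated volume matches the spreading volume used in the killing experiments.

```python
# Weighted-average CFU/ml across serial-dilution plates: total colonies
# counted divided by the total equivalent volume of undiluted culture
# plated. Counts and dilutions below are illustrative.
def cfu_per_ml(counts, dilutions, plated_ml=0.1):
    """counts[i] colonies from plating plated_ml of a dilutions[i]-fold dilution."""
    total_colonies = sum(counts)
    equivalent_ml = sum(plated_ml / d for d in dilutions)  # ml of original culture
    return total_colonies / equivalent_ml

counts = [230, 26, 3]            # colonies at the 10^5, 10^6, 10^7 dilutions
dilutions = [1e5, 1e6, 1e7]
print(f"{cfu_per_ml(counts, dilutions):.2e}")
```

This pools all countable plates into one estimate instead of relying on a single plate, which reduces the impact of plating noise at any one dilution.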
From an overnight culture of cells, 0.2 ml was centrifuged, the supernatant was discarded and the cells were resuspended in 2.0 ml of nutrient-free 0.02 M phosphate buffer (pH 7.0) and 1 ml each was distributed into two microfuge tubes. To the first tube, 40 μl of ethanol was added as a control. To the second tube, 40 μl of 15.7% myrrh oil was added to a final concentration of 0.60% in the absence of nutrients. The tubes were incubated at 37°C. Serial dilutions were spread at indicated times on LB plates, which were then incubated overnight at 37°C and colonies that grew were counted. To determine the rate of killing of growing cells, the same experiment was performed, except that phosphate buffer was substituted with LB medium. Myrrh oil preferentially inhibits nongrowing cells Initial attempts to demonstrate antibiotic activity of myrrh oil against E. coli on LB plates were disappointing because the extent of inhibition at 0.24% oil was only about 70% and did not increase at higher oil concentrations as shown in Figure 1. However, when the same experiment was done on nutrient-free phosphate plates as described in materials and methods, a dramatic increase in inhibition was observed. At 0.44% (4.4 mg/ml), all E. coli cells were prevented from growing. Note that a log colony-forming unit of 1, corresponding to 10 cells/ml, is the minimum limit of detection by this method since 0.1 ml of cells was spread. This demonstrates that myrrh oil preferentially inhibits nongrowing cells on phosphate plates but not growing cells on LB plates. The same result was obtained with S. aureus, for which there was very little inhibition on LB plates but all cells were inhibited from growing at 0.24% oil concentration on phosphate plates (Figure 1). The data also demonstrate that myrrh oil had a much stronger activity on S. aureus (MIC 0.079%) than on E. coli (MIC 0.44%).
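The killing-curve arithmetic used throughout these experiments (99.9% killed corresponds to a 3-log drop in viable count; spreading 0.1 ml sets the detection limit at 10 cells/ml, i.e., log CFU = 1) can be made explicit; the cell densities below are illustrative.

```python
# Log-reduction bookkeeping for the killing curves above: 99.9% killed
# is a 3-log drop in viable count. Starting density is an assumed
# typical overnight-culture value, not a measured one.
import math

def log_reduction(initial_cfu_per_ml, final_cfu_per_ml):
    return math.log10(initial_cfu_per_ml / final_cfu_per_ml)

start = 1e8          # assumed overnight-culture density, CFU/ml
after_4h = 1e5       # viable count after a 99.9% kill
print(round(log_reduction(start, after_4h), 2))      # -> 3.0
percent_killed = 100 * (1 - after_4h / start)
print(round(percent_killed, 1))                      # -> 99.9
```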
Myrrh oil contains a bactericidal antibiotic The MIC experiment described above does not differentiate between bacteriostatic and bactericidal antibiotics. In order to demonstrate the bactericidal activity of myrrh oil, a rate of killing experiment was done with E. coli and S. aureus in LB medium, as well as in nutrient-free phosphate buffer. Control tubes contained an equal volume of the ethanol present in the stock solution of the oil. The results in Figure 2 demonstrate that myrrh oil preferentially kills nongrowing cells in buffer compared with growing cells in LB medium. In a nutrient-free buffer, 99.9% of cells were killed in approximately 4 h for E. coli and in <3 h for S. aureus. There were no surviving cells remaining after 6 h for S. aureus, whereas for E. coli there were >200 surviving cells per ml even after 25 h. Thus, myrrh oil has stronger bactericidal activity on nongrowing cells of S. aureus than on E. coli. When the experiment was done in LB medium to allow cell growth, E. coli showed a slight initial increase in the number of cells due to the presence of nutrients. This was followed by a very slow rate of killing of only 1.7 log in 25 h. In contrast, S. aureus cells were killed by myrrh oil even in LB medium, although at a much slower rate than nongrowing cells in a nutrient-free buffer. In LB medium it took 25 h to achieve the same extent of killing as was obtained in 6 h in a nutrient-free phosphate buffer (Figure 2). The bactericidal activity of myrrh oil was found to be temperature-dependent. All experiments described in this report were performed at 37°C. If E. coli cells were exposed to myrrh oil in nutrient-free buffer at 25 or 4°C, there was no significant loss of cell viability even after 24 h (data not shown). Increased bactericidal activity of myrrh oil in combination with a bacteriostatic antibiotic Inhibition studies in phosphate buffer demonstrated that myrrh oil has preferential activity on nongrowing cells (Figures 1 & 2).
However, since a nutrient-free medium is not clinically relevant, it is desirable to have antibiotic activity even in the presence of nutrients. One possible way to have nongrowing cells even in a nutrient-rich medium is to use a bacteriostatic antibiotic, which can stop the growth of the bacteria. Chloramphenicol was selected as the bacteriostatic antibiotic to be used in combination with myrrh extract. First, the MIC of chloramphenicol against E. coli was determined in broth to be 3.5 μg/ml (data not shown). The antibiotic activity of a combination of the two antibiotics against E. coli was determined as follows. An overnight culture of E. coli was centrifuged and cells were resuspended in fresh LB plus either 0.60% myrrh oil or 10 μg/ml chloramphenicol or a combination of the two. Cell viability was determined at indicated times as described previously. Since the concentration of chloramphenicol used (10 μg/ml) is much higher than the MIC (3.5 μg/ml), it is expected that the cells will not be growing even in a nutrient-rich LB medium and thus can be killed by the myrrh oil. The results in Figure 3 demonstrate that neither chloramphenicol nor myrrh oil alone had any bactericidal effect in LB medium. However, a combination of the two had strong bactericidal activity on E. coli cells. Once again, this experiment confirms that myrrh oil has antibiotic activity on nongrowing cells only. Therefore, myrrh oil has the potential to be a clinically useful antibiotic, especially when used in combination with a bacteriostatic antibiotic, even in a nutrient-rich medium. No resistance development against the antibiotic in myrrh oil Resistance development is a major concern for most antibiotics that are currently commercially available. Bacteria that survive antibiotic treatment are either persisters or have developed genetic mutations, which make them resistant to the antibiotic. E. coli cells were tested for their resistance development to myrrh.
First, an overnight culture of cells was centrifuged and the cell pellet was resuspended in nutrient-free phosphate buffer. The cells were then exposed to 0.60% myrrh oil in 1 ml nutrient-free phosphate buffer for 24 h and then 100 μl was spread on an LB plate. This process was designated as the first passage. One of the colonies that grew was again grown overnight in broth and used to repeat the same experiment for the second passage. In this way, the experiment was continued for five passages. If the colonies that grow are resistant mutants, then in each successive passage the percentage of cells killed should decrease. The results in Table 1 show that even after five passages all cells were killed by the myrrh oil. Thus, there is no resistance development even after five passages. It is to be noted that no colonies were obtained after spreading 100 μl from the second passage. So the remaining 900 μl of cells were used to inoculate a fresh culture, which eventually grew and was used to continue with the remaining passages. A similar conclusion was arrived at for S. aureus by doing a zone of inhibition experiment. With 3 mg of the myrrh oil spotted on a disc, a 1.2 cm diameter zone of inhibition was obtained on an LB plate (data not shown). Within the zone of inhibition, there were a few small colonies growing. When one of these small colonies was grown in broth and then used for a second zone of inhibition experiment, a similar-size zone was again obtained (data not shown). This suggests that the few colonies growing within a zone of inhibition are persisters and not resistant colonies. Discussion Persister cells are a major cause for the lack of effectiveness of antibiotics. Their resistance is not due to any genetic mutation but is a result of their lack of metabolism. Once the antibiotic is removed, these persister cells can resume growth [12].
In the resting state of the persisters, cell wall synthesis and protein biosynthesis are downregulated and thus these targets cannot be inhibited by antibiotics. Moreover, the cell envelope's thickness in persister cells often increases, making it difficult for antibiotics to get into the cell [13]. Several methods have been developed to kill persister cells [14]. Examples include using mitomycin C [15] and cisplatin [16], which do not require active transport and thus can enter persister cells. Another approach to make antibiotics more effective is to wake persister cells by adding sugars [17] or cis-2-decenoic acid [18] so that they can then be killed by traditional antibiotics. Yet another reported approach was to re-engineer tobramycin by adding a 12-amino-acid transporter sequence, which allowed it to spontaneously permeate membranes of persister cells [19]. Plant products have been shown to kill persister cells. Essential oils from several spices were found to have activity against the Borrelia burgdorferi stationary phase culture [20]. Treating Lyme disease, which is caused by this bacterium, is often difficult due to the presence of persister cells. Killing effects of antibiotics are known to be dramatically reduced in the presence of foreign bodies such as sutures and implants because bacteria form biofilms on the foreign bodies [21]. For example, catheter-associated urinary tract infections are of serious concern [22]. Cells in biofilms are slow growing and thus are more resistant to antibiotics. It was shown that Aggregatibacter actinomycetemcomitans produces the glycoside hydrolase dispersin B to degrade its own biofilm [23]. The same enzyme can be potentially used to disrupt Pseudomonas aeruginosa biofilms and make the cells more susceptible to antibiotics [24].
Other approaches for making the cells in a biofilm more susceptible to antibiotics are disruption of the biofilm with human DNase I [25], using diarylquinolines that kill both planktonic cells and those growing in biofilms [26], and using a combination of the acyldepsipeptide antibiotic ADEP4 and rifampicin, which completely eradicated S. aureus biofilms [27]. Another approach to boost the bactericidal activity of antibiotics against persisters and bacteria in biofilms is by adding exogenous metabolites to stimulate their central metabolic pathways [28]. Strategies against methicillin-resistant S. aureus persisters have been reviewed [29]. In this study, our discovery of the action of myrrh resin extract on nongrowing cells opens up a different strategy for combating bacterial infection. Unlike other known antibiotics, myrrh oil preferentially kills nongrowing cells, and in combination with a bacteriostatic antibiotic such as chloramphenicol, all cells in the population can be killed. Clinical situations in which bacteria can be nongrowing also ensure that the concentration of the bacterial cells will be low. Thus, antibiotics that kill only nongrowing cells have the added advantage that the amount of endotoxins released after their bactericidal activity will also be less. Since the myrrh oil and chloramphenicol combination is able to kill both growing and nongrowing cells, it will not be necessary to continue the antibiotic treatment for many days. Thus, bacteriostatic antibiotics such as chloramphenicol, whose use was once approved but later discontinued due to minor toxicity, can be reconsidered for use in combination with myrrh. The results in Table 1 demonstrate that in spite of several passages, no resistant mutant could be obtained because after each passage the cells were equally sensitive to the antibiotic effect of myrrh oil, demonstrating close to 100% cell death.
This may have great commercial significance since most of the currently used commercial antibiotics have become less effective due to the development of resistance [28]. It is to be noted that since resistant mutants obtained during growth of any bacteria can only be due to point mutations, the lack of resistant mutants in this experiment does not rule out the possibility that other strains of bacteria in nature may have evolved to form a resistance gene to counteract the effect of myrrh. Such a gene, if it exists, has not yet been discovered. The target of the antibiotic in myrrh is not known. However, it can be expected that the target will be a biochemical process taking place in dormant or nongrowing cells. One possible site of action could be the bacterial membranes or membrane-associated enzymes. Disrupting the bacterial membrane bilayer or proteins in the membrane of dormant bacteria is a strategy for treating persistent infections [30]. Daptomycin, a lipopeptide antibiotic that targets the cytoplasmic membrane of Gram-positive bacteria, was shown to have bactericidal activity on both growing and nongrowing cells and remained bactericidal against cold-arrested S. aureus [31]. The lack of antibiotic activity of myrrh at 25 and 4°C may indicate that the membrane is frozen at these temperatures and thus does not allow transport of the antibiotic into or through the membrane. The facts that myrrh extract is an oil and that no resistant mutants could be obtained (Table 1) are both consistent with the possibility that the site of action of myrrh could be the membrane. A similar observation was made with membrane-targeting AM-0016, which kills mycobacterial persisters and shows low propensity for resistance development [4]. However, these observations cannot explain why the antibiotic in myrrh preferentially inhibits only nongrowing cells. Interestingly, this is similar to a report that P.
aeruginosa cells in the exponential growth phase were resistant to the membrane acting macrolide antibiotic azithromycin, while cells in the stationary phase were susceptible [32]. Further research is needed to understand the mechanism of action of the antibiotic in myrrh. Today as we are going through an antibiotic crisis, scientists are increasingly looking into plant products for a solution [33]. Myrrh resin can be a promising source of a future antibiotic. Other uses of myrrh have also been reported. For example, it was shown that myrrh and vitamin C synergistically minimize the toxic effects of the macrolide antibiotic, tilmicosin, through their free-radical scavenging and potent antioxidant activities [34]. There is some confusion in the scientific community about whether plant products can be called antibiotics. The original definition of antibiotics as proposed by Selman Waksman >70 years ago, required that they have to be of microbial origin. However, that definition was too restrictive. Sulfa drugs, the first commercially marketed antibiotics that have saved millions of lives were synthetic drugs and not of microbial origin. In fact, the majority of successful antibiotics in use today are either synthetic or semi-synthetic and are actually better than the natural ones due to the reduction in resistance development. It is our opinion that the definition of an antibiotic should not be unnecessarily restrictive, but should rather be more inclusive. It should be based on function and not on the source of the antibiotic [1]. Another drawback of the original definition of antibiotic is that it did not address the concept of selectivity or toxicity. Although we have not tested the toxicity of myrrh oil, it is not expected to have significant toxicity since it has already been in use for centuries. This makes it an even more ideal antibiotic. 
Conclusion

We have demonstrated strong antibiotic activity present in myrrh resin, and there is no evidence of resistance development even after repeated passages. It preferentially kills nongrowing bacteria compared with growing bacteria; it is thus different from most other known antibiotics and has the potential to be a commercially viable antibiotic that can kill persister cells.

Future perspective

With very few new antibiotics being developed, plant products represent promising sources of antimicrobial agents. Preferential killing of nongrowing bacteria by myrrh oil makes it a promising candidate for development as a future antibiotic. Although toxicity studies have not yet been done, its use for centuries in traditional medicine suggests that its toxicity will be low. Future studies will focus on purification and identification of the active component in myrrh oil, its mechanism of action and pharmacokinetic properties including toxicity.

Summary points

• Most antibiotics are unable to kill nongrowing bacteria, which is the reason why antibiotic treatments need to be continued for several days.
• Although there are some antibiotics that have activity against both growing and nongrowing cells, there is almost no antibiotic that is specific for nongrowing bacteria.
• We show here that myrrh oil from Commiphora molmol preferentially kills nongrowing cells. Cells are killed much faster in buffer without nutrients than in broth rich in nutrients.
• A possible clinical significance of myrrh oil is that it can also kill bacteria in nutrient-rich media, provided growth of the bacteria is halted by addition of a bacteriostatic antibiotic such as chloramphenicol.
• Another positive aspect of the use of myrrh oil as an antibiotic is that even after repeated use of the antibiotic there is no evidence of resistance development. This property is similar to that of membrane-acting antibiotics.
Authors' contributions

M K Bhattacharjee developed the concept, designed the experiments and wrote the manuscript. T Alenezi performed the experiments as part of her Master's thesis.
Independent prognostic role of human papillomavirus genotype in cervical cancer

Background: Although the correlation of HPV genotype with cervical precursor lesions and invasive cancer has been confirmed, the role of HPV genotype in cervical cancer prognosis is less conclusive. This study aims to systematically investigate the independent prognostic role of HPV genotype in cervical cancer.

Methods: A total of 306 eligible patients provided cervical cell specimens for HPV genotyping before therapy and had a median follow-up time of 54 months after diagnosis. Survival times were measured from the date of diagnosis to the date of cervical cancer-related death (overall survival, OS) and from the date of diagnosis to the date of recurrence or metastasis (disease-free survival, DFS). Log-rank tests and Cox proportional hazards models were performed to evaluate the association between HPV genotype and survival times.

Results: A total of 12 types of high-risk HPV were detected, and the leading ten types belong to two species: alpha-9 and alpha-7. HPV16 and 18 were the two most common types, with prevalences of 60.8% and 8.8%, respectively. In the univariate analysis, HPV16-positive cases were associated with better OS (P = 0.037), and the HPV16-related species alpha-9 predicted better OS and DFS (both P < 0.01). After adjusting for age, FIGO stage, and therapy, HPV16 showed a hazard ratio (HR) of 0.36 (95% CI: 0.18, 0.74; P = 0.005) for OS, and alpha-9 resulted in an HR of 0.17 (95% CI: 0.08, 0.37; P < 0.001) for OS and 0.32 (95% CI: 0.17, 0.59; P < 0.001) for DFS.

Conclusions: HPV genotype poses differential prognoses for cervical cancer patients. The presence of HPV16 and its related species alpha-9 indicates an improved survival.

Electronic supplementary material: The online version of this article (doi:10.1186/s12879-017-2465-y) contains supplementary material, which is available to authorized users.
Background

Cervical cancer is the fourth most common malignancy in females worldwide, with an estimated 527,600 new cases and 265,700 deaths per year [1]. The etiological relationship between human papillomavirus (HPV) and cervical cancer has been well established. To date, more than 170 HPV genotypes have been identified and classified according to their L1 open reading frame [2]. When HPVs have 60-70% genomic nucleotide similarity, they are clustered into the same species. Two HPV species, alpha-7 (HPV18, 39, 45, 59, 68, and 70) and alpha-9 (HPV16, 31, 33, 35, 52, 58, and 67), are responsible for over 80% of all cervical cancer cases [3]. Although there has been much evidence on the role of HPV genotype in cervical precursor lesions and invasive cancer, it remains unclear whether genotype affects the prognosis of cervical cancer. Furthermore, existing results on the relationship of HPV genotype with survival are heterogeneous. For example, early evidence showed that HPV16 positivity predicted poor prognosis and was associated with histological features of prognostic significance such as squamous cell carcinoma, pelvic node metastases, and lymphatic space invasion [4]. But some studies reported that HPV18 positivity, rather than HPV16, is a poor prognostic factor [5,6]. The histologic type of adenocarcinoma, pelvic lymph node metastasis, and deeper stromal invasion were more common in HPV18-caused cervical cancer [6]. In addition, HPV31-related and HPV58-related types were found to be associated with better survival outcomes [7,8]. However, no prognostic value of HPV type was reported by other studies [9,10]. The inconsistency may be attributed to the significant differences in sample size, length of follow-up, assay methods, and adjustment for known prognostic factors.
To better understand the role of HPV genotype in the prognosis of patients with cervical cancer, we assessed the association of HPV genotype with overall survival (OS, the time between the date of diagnosis and the date of cervical cancer-related death) and disease-free survival (DFS, the time between the date of diagnosis and the date of recurrence, distant metastasis, or the last follow-up) among 306 cases of cervical cancer from China.

Patients

Cervical cancer patients were consecutively recruited from Cancer Hospital, Chinese Academy of Medical Sciences from 2010 to 2012. We included patients who had a first diagnosis of histologically confirmed invasive cervical cancer and from whom cervical exfoliated cells for HPV genotyping were sampled by a gynecologist before therapy. Patients were excluded for the following criteria: a history of hysterectomy or conization, recurrent cervical cancer, other preexisting malignancies, and less than two months of survival after completing therapy. The patient's age, FIGO stage, tumor histology, and pathologic variables were retrieved from medical records. This study was approved by the ethics committees of the National Cancer Centre/Cancer Hospital, Chinese Academy of Medical Sciences, and all patients provided written informed consent before study enrollment.

HPV genotyping

Genomic DNA was extracted from cervical cell specimens manually by using the QIAamp DNA Mini Kit, according to the manufacturer's protocol (Qiagen, Valencia, CA, USA). The quality of extracted DNA was assessed by PCR with a set of primers for the housekeeping gene β-actin (forward primer, 5′-GAAATCGTGCGTGACATTAA-3′; reverse primer, 5′-AAGGAAGGCTGGAAGAGTG-3′). All β-actin-positive specimens were tested for HPV DNA by following the manufacturer's protocol of the HPV GenoArray Test Kit (HybriBio, Beijing, China), which is a Chinese FDA-approved assay for HPV genotyping.
A total of 21 HPV types could be detected simultaneously, including 13 high-risk (HR) types (HPV16, 18, and 11 others).

Treatment and follow up

Treatment information was retrieved from medical records and was summarized and grouped as follows: surgery alone (radical hysterectomy and pelvic lymphadenectomy); surgery plus adjunctive chemotherapy (CT), radiotherapy (RT) or chemoradiotherapy (CRT); concurrent chemoradiotherapy (CCRT); and CT or RT only. Each patient was followed up every 3 months in the first year and every 6 months in subsequent years, by personal or family contacts, until June 2016. Hospital medical records were obtained in order to confirm the reported events. Only validated events were included in the analysis. Overall survival (OS) was defined as the time between the date of diagnosis and the date of cervical cancer-related death or the last follow-up. Disease-free survival (DFS) was measured from the date of diagnosis to the date of recurrence, distant metastasis, or the last follow-up.

Statistical analysis

The data were analyzed using Stata version 11.0 (Stata Corporation, Texas, USA). To assess the potential of HPV type as a prognostic biomarker for cervical cancer patients regardless of single or multiple infections, all cases were included in the HPV16 and HPV18 survival analyses. Multiple infections with only alpha-9 types were included in the alpha-9 survival analysis. Survival curves were generated using the Kaplan-Meier method, and comparisons were performed using the log-rank test. Multivariate analyses of the factors associated with OS and DFS were done using the Cox proportional hazards regression model. In the stratified analysis, the chi-square test-based Q-statistic was applied to test the heterogeneity between subgroups defined by age, FIGO stage, and treatment. All P values presented were two-sided and were considered significant at P < 0.05.

Patient characteristics

The demographic and clinical characteristics of cervical cancer patients are summarized in Table 1.
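As a rough illustration of the Kaplan-Meier method used for the survival curves, the following minimal sketch computes the survival estimate from censored follow-up times. This is plain Python with invented toy data, not the authors' Stata analysis or patient data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function.

    times:  follow-up time for each patient (e.g. months)
    events: 1 if the event (death or recurrence) was observed,
            0 if the patient was censored at that time
    Returns a list of (time, survival probability) pairs,
    one per distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # all patients whose follow-up ends at exactly this time
        tied = [e for (tt, e) in data if tt == t]
        deaths = sum(tied)
        if deaths > 0:
            # multiply by the conditional survival at this event time
            survival *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= len(tied)  # both deaths and censored leave the risk set
        i += len(tied)
    return curve

# Invented toy data: 5 patients; 0 marks censoring at that time
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
print(curve)  # survival drops to ~0.8, ~0.6, ~0.3 at times 1, 2, 4
```

The key design point is that censored patients contribute to the number at risk up to their censoring time but never trigger a drop in the curve.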
This study included 306 women with a median age of 48 years (range: 26-71 years). The most common histological type was squamous cell carcinoma (96.7%); the others were adenocarcinoma (AC) and adeno-squamous carcinoma (ASC). Most patients were diagnosed with FIGO stage I-II (81.0%). Seventy-eight patients (25.5%) received surgery alone, 21 (6.9%) surgery plus CT/RT/CRT, 160 (52.3%) CCRT, and 47 (15.4%) CT or RT only.

Survival analysis

The mean number of follow-ups was 5 per patient, and the median follow-up time was 54 (range, 3-75) months. A total of 58 patients (19.0%) had experienced treatment failure, including 27 recurrences and 38 distant metastases (7 patients had both). In addition, 34 deaths (11.1%) were attributed to cervical cancer. The 5-year OS rate for the entire cohort was 87.1% (95% CI: 82.1-90.8%), and the corresponding DFS rate was 78.3% (95% CI: 72.5-83.1%). In univariate analysis (Table 3), FIGO stage IV was significantly associated with poorer OS (P < 0.001) and DFS (P < 0.001), while primary surgical treatment was associated with better OS (P = 0.004) and DFS (P = 0.019). Of note, patients infected with HPV16 had a better OS than those with any other types (P = 0.037) (Fig. 1a). The HPV16-related species alpha-9 also posed a better OS (P < 0.001) and DFS (P = 0.005), compared to alpha-7 (Fig. 1b and c). No significant association with prognosis was found for HR-HPV multiple infections, HPV18, or the other types. To better understand the effect of HPV genotype on cervical cancer survival, stratified analyses based on age, FIGO stage, and treatment were performed. Although the protective effects of HPV16 and alpha-9 were more evident among those with FIGO stage III/IV and those receiving primary RT and/or CT, no significant difference was detected between subgroups (homogeneity test P > 0.05 for all) (Additional file 1: Table S3).
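The P values above come from log-rank tests comparing two survival curves. A minimal sketch of the two-group log-rank chi-square statistic, again in plain Python with invented toy data rather than the authors' Stata analysis, might look like:

```python
def logrank_chi2(times1, events1, times2, events2):
    """Two-group log-rank chi-square statistic.

    At each distinct event time, compare the observed number of
    events in group 1 with the number expected if both groups
    shared the same hazard. Returns (O - E)^2 / Var, which is
    referred to a chi-square distribution with 1 degree of freedom.
    """
    all_times = sorted(set([t for t, e in zip(times1, events1) if e == 1] +
                           [t for t, e in zip(times2, events2) if e == 1]))
    observed = expected = variance = 0.0
    for t in all_times:
        n1 = sum(1 for tt in times1 if tt >= t)  # at risk in group 1
        n2 = sum(1 for tt in times2 if tt >= t)  # at risk in group 2
        n = n1 + n2
        d1 = sum(1 for tt, e in zip(times1, events1) if tt == t and e == 1)
        d2 = sum(1 for tt, e in zip(times2, events2) if tt == t and e == 1)
        d = d1 + d2
        observed += d1
        expected += d * n1 / n
        if n > 1:
            variance += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return (observed - expected) ** 2 / variance

# Invented toy data: group 1 fails earlier than group 2
chi2 = logrank_chi2([1, 2], [1, 1], [3, 4], [1, 1])
print(round(chi2, 3))  # 2.882; the P value comes from the chi-square(1) tail
```

In practice the chi-square value is converted to a P value with a chi-square survival function (e.g. `scipy.stats.chi2.sf(chi2, 1)`); the sketch stops at the statistic to stay dependency-free.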
Discussion

Despite recent progress in multimodal treatments, the clinical outcome of cervical cancer remains unfavorable. TNM or FIGO classification based on cervical pathology has insufficient predictive ability, because significant differences in survival are often observed within the same stage. Thus, it is highly necessary to explore additional biomarkers for the identification of a more effective therapeutic strategy against cervical cancer. In this study, we investigated the prognostic value of HPV genotype for patients with cervical cancer. A total of 12 HR types were identified, and HPV16 positivity was independently associated with a lower risk of cervical cancer death than the group of the other 11 HR types. In addition, the alpha-9 species, including five HR types (16, 52, 33, 31, and 58), was a predictor of better survival compared with the alpha-7 species group, which includes the other five HR types (18, 39, 59, 68, and 45). Substantial differences in risk for high-grade cervical intraepithelial neoplasia (CIN) and cervical cancer have been revealed between HR HPV types, among which HPV16 and HPV18 confer the highest risk [11,12]. However, the relationship between HPV genotype and cervical cancer prognosis has been controversial. Plich et al. identified HPV16 infection as a poor prognostic factor in 204 patients treated by primary radical hysterectomy and pelvic lymphadenectomy [4]. Conversely, another study showed that HPV16 positivity was significantly associated with improved prognosis in the whole series of cervical AC/ASC and also in the subgroup receiving primary RT/CCRT [13]. In our study, the results supported the hypothesis that HPV16 has a favorable impact on the prognosis of cervical cancer.
Further, we demonstrated that the HPV16-related alpha-9 species significantly lowers the risk of cervical cancer-related death and recurrence/metastasis compared with the alpha-7 species, which was consistent with a previous study in patients undergoing primary radiotherapy [14]. Moreover, although several studies found that HPV18 positivity was associated with poorer prognosis in patients receiving primary surgery [5,15,16], other studies [17,18] and ours failed to support the relationship. Given the much lower prevalence of HPV18 in cervical cancer than HPV16, independent studies with large sample sizes are needed to assess the impact of HPV18 on patients' prognosis. In addition, because the HPV genotyping kit used in this study does not cover HPV67 and 70, which are high-risk types for cervical cancer, the impact of HPV67 and 70 on prognosis remains to be determined. The underlying mechanisms that make tumors caused by HPV16 and the alpha-9 species less aggressive are still undetermined. Interestingly, HPV status has been recognized as a strong and independent factor for favorable survival of patients with oropharyngeal cancer (OPC) [19,20]. According to a systematic review, HPV prevalence was 35.6% (95% CI: 32.6-38.7%) in OPC specimens, and HPV16 accounted for a large majority of HPV-positive OPC (86.7%; 95% CI: 82.6-90.1%) [21]. A better response to chemotherapy and radiation was observed for HPV-positive OPC [22-24]. In a worldwide survey of HPV genotype in cervical cancer, 61% of tumors were positive for HPV16 and 83% were positive for the alpha-9 species [3], similar to the data in our study. In vitro studies have revealed significant differences in biological behaviors between HPV types. For example, HPV16 is associated with a higher level of tumor apoptosis than HPV18, affording one possible explanation for more radiosensitive cervical cancer with HPV16 [25].
In addition, HR-HPV E6 proteins can interact with cellular PDZ domain-containing proteins to promote cell immortalization, invasion, and epithelial-to-mesenchymal transition (EMT) characteristics [26,27]. There are significant differences in the interactions of HPV16 and HPV18 E6 with the PDZ domain-containing proteins, because a critical difference exists in the amino acid residue at the PDZ-binding motifs of the two E6 proteins [28]. This difference exists not only between HPV16 and HPV18, but also between the alpha-9 and alpha-7 species. Whether the variation in the PDZ domain-binding capacities determines the observed differential therapeutic response is worth additional exploration.

Fig. 1 The role of HPV genotype in cervical cancer prognosis. (a) Kaplan-Meier overall survival (OS) curves for HPV16 and non-HPV16 types; (b) Kaplan-Meier OS curves for the alpha-7 and alpha-9 species; (c) Kaplan-Meier disease-free survival (DFS) curves for the alpha-7 and alpha-9 species.
"The road from Mandalay to Wigan is a long one and the reasons for taking it aren't immediately clear": A World-System Biography

George Orwell is one of the best known and most highly regarded writers of the twentieth century. In his adjective form, Orwellian, he has become a Sartrean "singular universal," an individual whose "singular" experiences express the "universal" character of a historical moment. Orwell is a literary representation of the unease felt in the disenchanted, alienated, anomic world of the twentieth and twenty-first centuries. This towering cultural legacy obscures a more complex and interesting one. This world-system biography explains his contemporary relevance by retracing the road from Mandalay to Wigan that transformed Eric Blair, a disappointing-Etonian-turned-imperial-policeman, into George Orwell, a contradictory and complex socialist and, later, literary icon. Orwell's contradictory class position, between both ruling class and working class and nation and empire, and his resultantly tense relationship to nationalism, empire, and the Left make his work a particularly powerful exposition of the tension between cosmopolitanism and radicalism, between the abstract concerns of intellectuals and the complex demands of local political action. Viewed in full, Orwell represents the "traumatic kernel" of our age of cynicism: the historic failure and inability of the Left to find a revolutionary path forward between the "timid reformism" of social democrats and the "comfortable martyrdom" of anachronistic and self-satisfied radicals.
of the capitalist epoch with Thomas More's Utopia (1516), Johann Andrea's Christianopolis (1619) and Tommaso Campanella's City of the Sun (1623). These works are an early expression of the faith in humankind's individual and social progress that successively characterized Renaissance humanism in the fifteenth and sixteenth centuries, Enlightenment philosophy in the seventeenth and eighteenth centuries, and nineteenth-century socialism (Fromm 1961: 258). For much of capitalist history, this utopian thinking framed "a brilliant horizon visible to everyone around the world, shining with promises at certain times: modernity, rationality, progress, liberalism, nationalism, socialism" (Quijano 2002: 75). The twentieth century marks this horizon's eclipse: the senseless slaughter of the First World War, the horrors of fascism and Stalinism, the unrealized promises of decolonization and integration, the defeat of socialism, and the triumph of a cynical, unrestrained, and increasingly illiberal capitalism.
While remembered for his anti-authoritarian satires, it is the road from Mandalay to Wigan that transformed Eric Blair, a disappointing-Etonian-turned-imperial-policeman, into George Orwell, a contradictory and complex socialist and, later, literary icon. Three generations previous, his family had married into the landed gentry but, by Orwell's time, the family's fates were mixed up in the dirty work of empire. Orwell followed his father's footsteps into the Indian Civil Service. Emerging out of his experience in Burma, Orwell's self-conscious reinvention as a socialist writer took on a uniquely important and enduring significance. Orwell's own contradictory class position (relative structural subordination in Britain and structural dominance in Burma) personified these dilemmas and led him to confront three unpleasant realities that put him at odds with the dominant currents of the early 20th-century Old Left: (1) the structural implication of the British Left and industrial working class in imperialism; (2) the tension between the cosmopolitan interests and identities of the Left intelligentsia and the more locally rooted concerns of "ordinary people"; and (3) the often retrogressive nature of "progress." Orwell was unable to resolve these structurally determined tensions. In the first instance, Orwell was a product of a particular moment, the hegemonic decline of the United Kingdom, and a member of a particular intellectual formation, the Old Left in Britain. Orwell's contradictory class location and resultantly tense relationship to nationalism, empire, and the Left make his work a particularly powerful exposition of the tension between cosmopolitanism and radicalism, between the abstract concerns of intellectuals and the complex demands of local political action.
In the second instance, he was an active agent in an important historical process, the mid-century collapse and fragmentation of the global Old Left. Orwell's development represents an internal critique with lasting implications for Left politics that are obscured by his enduring power as a cultural icon, the singular universal of our age of cynicism. Viewed in full, Orwell represents the "traumatic kernel" of our age of cynicism: the historic failure and inability of the Left to find a revolutionary path forward between the "timid reformism" of social democrats and the "comfortable martyrdom" of anachronistic and self-satisfied radicals (Orwell 1941: 93-94). Rather than the culminating development of a political position, Animal Farm and 1984 represent Orwell's personal failure to answer his own critiques and the recuperation of his concerns in the form of satire. Indeed, Orwell's politics, however guided by a few steadfast commitments, shifted in relation to personal and historical circumstances: his upper-class socialization, his colonial experience, his self-conscious reinvention as a socialist writer, his service in the Spanish Civil War, and his increasingly visible participation in public debates before, during, and immediately after World War II. Orwell's politics are important not for their programmatic unity but for what they reveal about his own historical conjuncture and its continuing repercussions. While Orwell's life tells us the most about the exhaustion and fragmentation of the Old Left, his enduring popularity speaks to his continuing relevance to a world where imperialism, nationalism, and development have become more complicated but no less salient realities.

Can there be a World System Biography?
There is a large literature on Orwell. It includes a number of biographies (Woodcock 1966/2005; Crick 1981; Shelden 1991; Meyers 2001; Brooker 2004; Taylor 2004; Colls 2013) and a series of critical reflections, ranging from the more explicitly political and polemic (Williams 1971; Hitchens 2002; Lucas 2004) to the more academic and disinterested (Newsinger 1999a; Ingle 2006; Clarke 2007; Bounds 2009). Much of this writing is concerned with the institutionalization of Orwell in political discourse. He is alternatively praised as "the wintry conscience of a generation" (Meyers 2001) and reviled as a self-appointed "policeman of the left" (Lucas 2003; 2004). To his defenders, he is an independent and insightful defender of democratic socialism (Crick 1981; Newsinger 1999a). To critics, he is variously depicted as an ultra-Left dilettante (Williams 1971), an anti-feminist (Patai 1984) and "a sick counterrevolutionary" (Bellow 1970/2004: 40). He is claimed on the far Left by anarchists (Woodcock 1966/2005; Richards 1998); while, on the far Right, neoconservatives maintain that Orwell anticipated their politics (Podhoretz 1983). In the last three decades, more dispassionate scholarship has separated Orwell from the polemics of those who claim or denounce him (Rodden 1989; Newsinger 1999a; Ingle 2006; Clarke 2007; Bounds 2009).
Where most accounts of Orwell consider him within the British national context or the wider world of Anglophone culture, Orwell's world-historical coordinates hold a unique significance that eludes conventional biographers. Specifically, world-system biography centers the analysis on the "complex triangulations" among social processes that animate different temporalities: the historical long term, the generational conjuncture, and the cacophony of events (Derluguian 2005: 82-83). On the level of historical structure, Orwell is unavoidably shaped by the decline of Britain as world hegemonic power. For the intermediate era of Orwell's generation, politics were forged in the crucible of sharp ideological debates around Stalinism, fascism, the threat of war and the hope for revolution. Orwell's political trajectory is also shaped by the immediate associations and events that defined his life on a more quotidian basis. While such decisions look random or fateful, the product of chance or charisma, Orwell's position in the political and literary field structured his reaction to events.
In the "dust" of events, world-system biography reverses the traditional methodological imperatives of historical social science. A focus on biography can reveal the way individual agents internalize and mediate long-enduring global structures, transforming the weight of history into resources practically mobilized in concrete struggles to reshape the world. The generational time of the conjuncture becomes paramount. Orwell's politics are only meaningful when considered in relation to his contemporaries. To this end, a series of "incorporating comparisons" (McMichael 1990) between Orwell and other intellectuals ("Auden & Co.," Raymond Williams, and James Burnham) can reconstruct the Anglo-American Old Left, its exhaustion, and its impact on the New Left. These comparisons break the structural determinism that is seemingly implicit in world-systems analysis. They place Orwell, however conditioned by structural forces, in a shifting web of associations with contrasting political trajectories. True to the biographic form, this article proceeds chronologically. Finally, I argue that Orwell's continuing popularity is a result of the enduring contradictions of Left politics that he himself identified.
Orwell's World-Systemic Position and the Old Left

The long-term decline of British power formed the backdrop of Orwell's life. Born in 1903, Orwell lived a youth bracketed by the signal and terminal crises of British hegemony: the Long Depression of 1873 to 1896 and the 1931 collapse of the British pound's link with the gold standard (Arrighi 1994: 179, 221). Orwell's family experienced the decay of British hegemony as a crisis of class reproduction. Eric Blair, the man who would adopt the penname George Orwell, was born into a downwardly mobile genteel family, what he described as "the lower-upper-middle class…a sort of mound of wreckage left behind when the tide of Victorian prosperity receded" (Orwell 1937/1958: 121). In the late 18th century, Charles Blair (1743-1820), Orwell's great-grandfather, was an absentee landlord of Jamaican slave plantations who married into the landed gentry. Orwell's branch of the family, however, received very little of this wealth (Crick 1981: 45-50). Thomas Richard Blair (1802-1865), Orwell's grandfather, the tenth-born son of Charles Blair, "was under the disagreeable obligation of having, as that last child, to earn his living" (Crick 1981: 46). Distant from any chance to inherit the family fortune, Orwell's grandfather opted for colonial service, a path both Orwell and his father would eventually follow. The Blairs thus joined "the superfluous men" that combined with "superfluous capital" in the "alliance between the mob and capital" that formed the social basis of imperialism (Arendt 1951/1973: 147-157). Unlike Arendt's iconic account, however, this turn to colonial service did not represent a release of "the reserve army of labor" through the safety valve of empire. Instead, colonial service was an attempt to reproduce their privileged position in Britain's class hierarchy and reaffirm their status.
This strategy worked for the first generation.(Stansky and Abrahams 1972: 5-12).By the time Orwell was born in 1903, the family had settled into a "contradictory class location," an ambiguous position in-between classes (Wright 1985).Within the national class structure of the UK, Orwell was part of the "dominated fraction of the dominant class," or the middle classes with aesthetic taste similar to the bourgeoisie but political and economic interests more congruent with the working classes (Bourdieu 1984).Wealthy enough to grasp at a fading gentility but too poor to make the claim enforceable, Orwell inherited a contempt for the poor, while, simultaneously, developing deep insincerities from his exposure to his upper-class peers (Stansky and Abrahams 1972: 23-79, 139-144). 2In his own description, he was "an odious little snob" in his youth (Orwell 1937(Orwell /1958: 137: 137).Orwell's contradictory position in Britain is further complicated by his inherited strategy of class reproduction through colonial service.As a result, Orwell sat between both ruling class and working class and nation and empire: relative structural subordination in Britain and structural dominance in Burma. Personally, Orwell felt British hegemonic decline and his own family's downward mobility as pressure to "make good" and re-establish the Blairs' flagging wealth and status. 3 Family connections and his academic performance secured him opportunities to study at prestigious schools, even though his family could not afford them.Orwell attended St. 
Cyprians, a private preparatory school, and later Eton College, the elite public school. These elite institutions of class reproduction inculcated students in the habitus of the ruling class and the patriotism of "War, Empire and Kipling" (Crick 1981: 85). Orwell's time at Eton, however, was also the period of World War I, the Bolshevik Revolution, and intensified postwar labor unrest (the coal miners' strike of 1919). The "general revolt against orthodoxy and authority" was felt even at "Old Eton," where Orwell and his fellow "public schoolboys" idolized Lenin and "derided the [Officers Training Corps], the Christian religion and perhaps even compulsory games and the Royal Family." No doubt, there were limits to this moment of discontent. In his words, Orwell and his fellow Etonians "retained, basically, the snobbish outlook of our class, we took it for granted that we should continue to draw our dividends or tumble into soft jobs, but also it seemed natural to us to be 'agin the government'" (Orwell 1937/1958: 138-139).

Orwell's performance at Eton was disappointing. He missed his opportunity to study at Oxford or Cambridge and "tumble into [a] soft job." Instead, he followed his father's footsteps, joining the Indian Imperial Police. Here, Orwell departed from the standard, methodologically nationalist accounts of contradictory class locations. As one of 90 police officers deployed to Burma, Orwell was "overseeing life-and-death matters for [Moulmein/Mawlamyine, a city with]

2 Orwell autobiographically detailed these experiences in the posthumously published essay "Such, such were the joys" (Orwell 1952), although the veracity of his claims about public school life has been disputed (Buddicom 1974; Pearce 1992).
3 The term "make good" comes from Orwell's self-parodying novel, Keep the Aspidistra Flying. Gordon Comstock, the protagonist, shares Orwell's predicament as the heir of a respectable family whose wealth has disintegrated. An aspiring but struggling poet, Comstock leaves his position at an advertising agency to pursue his artistic ambitions. Comstock feels pressure to "make good" from his family, his girlfriend and, in a convoluted way, his own neurotic and petty war against "the money god" of commodity fetishism and middle-class respectability (Orwell 1956).

a population which was equal to that of a medium-sized European city" (Shelden 1992: 105; Newsinger 1999a: 3-6). In the 1920s, revolutionary nationalism in Burma escalated from student protests to tax strikes. British rule was contested and Orwell was a visible authority figure. "I was hated by large numbers of people," he later reflected (Orwell 1936/2000: 235).

After five years in the Indian Imperial Police, Orwell resigned and, in a self-conscious act of penitence, submerged himself in the world of poverty in Paris, London and elsewhere in Southern England. From 1932 to 1935, he worked a series of jobs including teaching at a prep school and clerking in a bookstore. Thereafter, writing provided his primary means of income.
During this period, Orwell wrote many essays, Down and Out in Paris and London (1933/1961), a roman à clef concerning urban poverty, and three novels, of which Burmese Days (1934/2000) is the best known. From 1936 to 1948, Orwell published his best known political works: The Road to Wigan Pier (1937/1958), an exposé on coal mining and an idiosyncratic diatribe against the Old Left; Homage to Catalonia (1938/1969), an anti-Stalinist account of the Spanish Civil War; The Lion and The Unicorn: Socialism and the English Genius (1941), his audacious call to turn war mobilization into revolution; and his two best known works, Animal Farm (1945/1995) and 1984 (1950/1961).

Orwell's position at the intersection of the literary and political fields, however, was never a comfortable one.

The main debates of the Left in the UK were structured by the dominance of these parties. They centered on the struggles of organized labor, the fight to form a Labour government, and questions about the nature of the Soviet Union. These parties were surrounded by a series of smaller, more radical organizations. Orwell was closest to one of these, the Independent Labour Party (ILP), a loose party founded in 1893 that lacked any strong theoretical orientations and

Orwell's Double Vision, Tory Anarchism, and British Hegemonic Decline

For Raymond Williams (1971), Orwell's rejection of imperialism and reinvention as a socialist writer did not overcome the contradictions of his position. While Orwell's contradictory class location invested him with a "double vision, rooted in the simultaneous positions of dominator and dominated" (18), it also left him alienated and unable to express meaningful solidarity with oppressed peoples. He always saw "other people…as an undifferentiated mass beyond." His double vision made him a contradictory and confused socialist: "When, however, in any positive way, he has to affirm liberty, he is forced to deny its inevitable social
basis." A closeted liberal, "the only dissent" Orwell offers "comes from a rebel intellectual" (Williams 1983: 310-313). For Williams, Orwell is an ultra-Left dilettante who eventually "reverts to type." The terminus of this return to form is 1984, in which "all modern forms of repression and authoritarian control" were attributed to a single political tendency, socialism, which Orwell misrepresents (77). In contrast, I argue that a world-system biography views Orwell's double vision as the embodiment of the structural contradictions that defined his social position. Instead of a "return to form" via a methodological nationalism, it is the point of departure for a life of political engagement.

Williams is correct in his insistence that Orwell is a paradoxical and contradictory figure, yet he refuses to acknowledge much creativity or conflict in Orwell's politics and work. Instead of reproducing inherited biases, Orwell adopted a dissenting position that, however shaped by his place and time, was not crudely reducible to his social position. Orwell is a "Tory anarchist" or a "cultural dissent[er], out of step with and in opposition to many features of the modern world."
Tory anarchists share common values and practices: "the use of satire…artistic ambition…respect for privacy and the liberty of the individual; a fear of the state…a nostalgic and melancholy temper…criticism of social conformism and a pervasive sense of pessimism" (Wilkin 2013: 199).5 Structurally, Tory anarchism is "a reaction to profound changes in Britain's place in the modern world-system." Personally, it is rooted in "the experiences of a group of relatively privileged men who have been coming to terms with the loss of…power and wealth" associated with the decline of Britain as a world-hegemonic power (Ibid: 200).

drifted between the social democracy of the Labour Party and the more libertarian wings of the radical Left. By the 1930s, when Orwell was a fellow traveler of the ILP, the party had taken a "quasi-Trotskyist path" (Bounds 2009: 24). All these parties, however, shared a common two-step strategy: seize the state apparatus and use it to complete the transition to socialism. In this way, they can be usefully called "Old Left" parties to contrast with the post-1968 "New Left," which rejected both its predecessor's vertical forms of organization and its tactical focus on attaining state power (Arrighi, Hopkins & Wallerstein 1991).

5 Wilkin comes to the term Tory anarchist from Orwell's own writings and biography. Orwell (1946/2000a) used the term to describe Jonathan Swift: "He is a Tory anarchist, despising authority while disbelieving in liberty and preserving the aristocratic outlook while seeing clearly that the existing aristocracy is degenerate and contemptible" (216).
Peter Wilkin defines Tory anarchism as a particularly English counter-hegemonic practice, what Raymond Williams (1977), in his elaboration of hegemony, would call the production of "traditions" or the "shaping a past and pre-shaped present, which is then powerfully operative in the process of social and cultural definition and identification" (113). This attempt at counter-hegemonic tradition-making is clear in Orwell's writing on working-class decency and his wartime efforts to recuperate English patriotism as the basis of a revolutionary movement (Clarke 2007: 13-62, 98-145). In The Road to Wigan Pier, Orwell (1937/1958) romanticized the proletarian household as reaching "perfect symmetry" (117, 178). In the practices of the working class, Orwell saw the true worth of socialism: "justice and common decency." Revolutionary appeals need to be rooted in "a vision of present society with the worse abuses left out, and with interests centering around the same thing as at present-family life, the pub, football and local politics" (Ibid: 176-177). In The Lion and the Unicorn (1941), "Notes on Nationalism" (1945/2000) and "The English People" (1947/2000), Orwell explored the specificities of English national character to identify a revolutionary way forward for the UK. While "the English intelligentsia…t[ook] their cookery from Paris and their opinions from Moscow," Orwell worked to define a "specifically English Socialist movement" (Orwell 1941: 48, 111; see also

past and reinvented himself along with his politics, "renouncing his youthful empiricism in exchange for ideology and social theory, his inherited Welshness and acquired Englishness for Europeanness and his young man's Bevanite socialism for New Left Marxism" (Rodden 1989: 197-198). In this way, Williams serves as Orwell's foil. The son of a railway worker, Williams

Orwell's Politics

While the questions Orwell approached were the questions of his time, the answers he offered were rooted in his contradictory class
location, I argue. This included structural domination in Burma and relative structural subordination in Britain. His position separated Orwell from many of his colleagues and forced him to reckon with three unpleasant realities: (1) the structural implication of the British Left and working class in imperialism; (2) the tension between the cosmopolitan interests and identities of the Left intelligentsia and the more locally rooted concerns of "ordinary people"; (3) the often retrogressive nature of "progress." Orwell thus advocated for a politics that was both socialist and libertarian, both anti-colonialist and anti-communist, global in ambition but sensitive to the "traditional loyalties" of specific cultural systems.

Orwell's political development began with a rejection of his experience in Burma. As such, his first works on colonialism mainly contain an individualist perspective.6 In his later political writings, however, Orwell confronted class and empire more systematically. The Road to Wigan Pier (1937/1958) represents the maturation of his politics and his embrace of socialism. The result of a two-month study of coalminers in Lancashire and Yorkshire, The Road to Wigan Pier was an important political polemic written in the mid-1930s, when the Miners' Federation was struggling against the scab Spencer Union in an effort to improve wages and secure national bargaining (Taylor 1996). With 44,000 copies sold, it was the most successful title of the Left Book Club, a socialist publishing group that peaked in the popular front period (Rodden & Rossi 2012: 56). The book is comprised of two parts: the first, a detailed study of the conditions of coalminers and, the second, an idiosyncratic diatribe against the Labour Party, the world Communist movement, and the Left intelligentsia.
Orwell's double vision led him to approach coal mining from two perspectives. As a member of the "lower-upper-middle class," he focused on the arduousness of mining and the position of miners in Britain. Not only is their work "so exaggeratedly awful," coal mining was "vitally necessary and yet so remote from our experience, so invisible, as it were, that we are capable of forgetting it as we forget the blood in our veins" (34-35).7 Yet as a former servant of Empire, he also took a global view:

For in the last resort, the only important question is, Do you want the British Empire to hold together or do you want it to disintegrate? And at the bottom of his heart no Englishman…does want it to disintegrate. For apart from any other consideration, the high standard of life we enjoy in England depends upon keeping a tight hold on the Empire…Under the

6 "A Hanging" (1931/2000), "Shooting an Elephant" (1936/2000), and Burmese Days (1934/1962) all condemn colonialism as a dehumanizing system that debases both the oppressor and the oppressed and allows only the most ruthless to prosper. Similarly, Down and Out in Paris and London (1933/1961) was successful as a "sympathetic portrayal of the itinerant poor" that "expos[ed] the iniquities of the workhouse systems," but it failed to "measure up to the scale of the economic crisis." Focused on poverty at the individual level, the work showed no awareness of the scope of the problem. At the time, three million workers were officially unemployed in Britain. While moving toward socialism, early Orwell wrote in the tradition of reformist liberalism (Newsinger 1999a: 30-31).
7 He elaborates further: "In a way it is even humiliating to watch coal miners working. It raises in you a momentary doubt about your own status as an 'intellectual' and a superior person generally. For it is brought home to you, at least while you are watching, that it is only because miners sweat their guts out that superior persons can remain superior. You and I and the editor of the Times Lit. Supp., and the Nancy poets and the Archbishop of Canterbury and Comrade X, author of Marxism for Infants-all of us really owe the comparative decency of our lives to poor drudges underground, blackened to the eyes, with their throats full of coal dust, driving their shovels forward with arms and belly muscles of steel" (Orwell 1937/1958: 34-35, original emphasis).

capitalist system, in order that England may live in comparative comfort, a hundred million Indians must live on the verge of starvation-an evil state of affairs, but you acquiesce in it every time you step into a taxi or eat a plate of strawberries and cream. The alternative is to throw the Empire overboard and reduce England to a cold and unimportant little island where we should all have to work very hard and live mainly on herrings and potatoes. That is the last thing that any left-winger wants. Yet the left-winger continues to feel that he has no moral responsibility for imperialism. He is perfectly ready to accept the products of Empire and to save his soul by sneering at the people who hold the Empire together (Orwell 1937/1958: 159-160).
From this point onward, Orwell's writing on socialist strategy would be uniquely characterized by his global approach to class, which led him to conclude that the British working class and Left were implicated in the maintenance of the colonial system. In Adelphi in 1939, he argued "that the overwhelming bulk of the British proletariat does not live in Britain but in Asia and Africa." On these grounds, he criticized the abandonment of anti-imperialism during the Popular Front period, equating it with opportunistic political posturing: "Quakers shouting for a bigger army, Communists waving Union Jacks, Winston Churchill posing as democrat" (1939/2000: 394, 397). In a later reflection on the UK's postwar Labour government, Orwell (1948) identified an "unsolved contradiction that dwells at the heart of the Socialist movement." Socialism promises both "better material conditions for the white proletariat" and "liberation for the exploited coloured peoples. But the two aims, at least temporarily, are incompatible" (Orwell 1948: 346).

In his Tory anarchist view, Orwell's politics also show a deep skepticism of the notion of progress and of the Old Left's assumption that socialist modernization would deliver human liberation. Instead, Orwell insisted that "machine civilization" removes the aesthetic and emotive aspects of life. "If a man cannot enjoy the return of spring," he asked, "why should he be happy in a labour-saving Utopia?"
(Orwell 1946/2000b: 144). For Orwell, the endorsement of "the idea of mechanical progress, not merely as a necessary development but as an end in itself," common to both fascists and communists, was replacing traditional social norms but failing to create a new humanism. "In a healthy world," he writes in The Road to Wigan Pier, "there would be no demand for tinned food, aspirins, gramophones, gaspipe chairs, machine guns, daily newspapers, telephones, motor-cars" (Orwell 1937/1958: 205). Four years later, he wrote that to accept the contemporary world was to accept "concentration camps, rubber truncheons, Hitler, Stalin, bombs, aeroplanes, tinned food, machine guns, putsches, purges, slogans, Bedaux belts, gas masks, submarines, spies, provocateurs, press censorship, secret prisons, aspirins, Hollywood films, and political murders." For Orwell, technical development was retrogressive and dehumanizing: "Progress and reaction have both turned out to be swindles" (Orwell 1940/2000: 500, 527, original emphasis).

Orwell's world-historical imagination and related anti-modernism, however, were not simply limited to critique. Orwell also envisioned socialism as a world order implicitly based in common sense and notions of basic decency:

And all the while everyone who uses his brain knows that Socialism, as a world-system and wholeheartedly applied, is a way out. It would at least ensure our getting enough to eat even if it deprived us of everything else.
Indeed, from one point of view, Socialism is such elementary common sense that I am sometimes amazed that it has not established itself already. The world is a raft sailing through space with, potentially, plenty of provisions for everybody; the idea that we must all cooperate and see to it that everyone does his fair share of the work and gets his fair share of the provisions seems so blatantly obvious that one would say that no one could possibly fail to accept it unless he had some corrupt motive for clinging to the present system. Yet the fact that we have got to face is that Socialism is not establishing itself. Instead of going forward, the cause of Socialism is visibly going back (Orwell 1937/1958: 171).

His anti-modernism translated politically into a call to humanize socialism. He realized that the formation of a professional middle class, mass mediation and mass consumerism in a changing capitalism were making proletarian revolution an anachronistic strategy in places like Britain.8 As

8 "After twenty years of stagnation and unemployment, the entire English Socialist movement was unable to produce a version of Socialism which the mass of the people could even find desirable. The Labour Party stood for a timid reformism, the Marxists were looking at the modern world through nineteenth-century spectacles. Both ignored agriculture and imperial problems, and both antagonized the middle classes. The suffocating stupidity of left-wing propaganda had frightened away whole classes of necessary people, factory managers, airmen, naval officers, farmers, white-collar workers, shopkeepers, policemen. All of these people had been taught to think of Socialism as something which menaced their livelihood, or as something seditious, alien, 'anti-British' as they would have called it. Only the intellectuals, the least useful section of the middle class, gravitated towards the movement.
A Socialist Party which genuinely wished to achieve anything would have started by facing several facts which to this day are considered unmentionable in left-wing circles. It would have recognized that England is more united than most countries, that the British workers have a great deal to lose besides their chains, and that the differences in outlook and habits between class and class are rapidly diminishing. In general, it would have recognized that the old-fashioned 'proletarian revolution' is an impossibility…Labour Party politics had become a variant of Conservatism, 'revolutionary' politics had become a game of make-believe" (Orwell 1941: 93-95).

such, he repeatedly condemned the sectarianism and abstruse theorizing of communist factions as counter-productive blustering. He called on the Left to stop antagonizing the "sinking middle class" before they turned to Fascism. "The job of the thinking person," he concluded, "is not to reject Socialism but to make up his mind to humanise it" (Orwell 1937/1958: 219).
While Orwell's politics evolved in relation to events, he had developed by the mid-1930s a humanist conception of socialism that would remain constant throughout his career. As an item of faith, Orwell believed that "ordinary people" could understand and act upon the world outside of the parameters set by the state, party, media or other powerful institutions. As Stephen Ingle notes, "for Orwell, reality, the external world, could be discerned by the undeceived intelligence of the ordinary individual…ready to do battle with the collective state over the issue of truth" (Ingle 2006: 128). This attitude led him to seek a revolutionary politics built on the prevailing notions of decency found among the working classes. "[T]rue values are not to be created nor old values 'transfigured' by the revolution or in a new revolutionary consciousness." Instead they can be found "already in the decency, fraternity, mutual aid, sociability, tolerance and skepticism towards authority of the working class" (Crick 1981: 33).

These ideas are very attractive. Indeed, they foreshadow both the humanism of the New Left and the affirmation of subaltern identities seen in "new social movements." Like these later movements, however, Orwell also found it exceedingly difficult to turn humanist notions of "common decency" into a workable revolutionary program.

Orwell and the Agonies of the Left

Orwell was more than an individualist dissenter in the mode of a "Tory anarchist." Rather, he was part of a discrete intellectual tendency "showing a broad sympathy for Trotsky's ideas, but eschewing any organisational commitment" (Newsinger 1999b: 25). In the 1930s, this formation centered on the Trotskyist wings of the communist movement but, by the late 1940s, it had evolved into what became known as "the non-communist left."
Orwell's immersion in this intellectual formation shaped both his work and his political development. While The Road to Wigan Pier represents Orwell's decisive move to the Left and his full embrace of socialism, the work "must be seen as an outgrowth from the whole complex of argument and acquaintance within The Adelphi-ILP Left" (Sedgwick 1969). Practically, Orwell's connections to The Adelphi and the ILP gave him the access to make his research possible. Intellectually, this milieu was the most dynamic intellectual force of the British Old Left:

In several directions its concerns closely foreshadowed those of the British New Left of 1957-60: working-class culture and community (this time as an actuality rather than a nostalgia), a broad Socialism scornful of pro-Russian and pro-Labour cant, anti-'literary' literary criticism, an ethical, early-Marxian 'Socialist humanism' (Ibid).

Like his colleagues across the Atlantic who formed the American Workers Party in 1933 as "an authentic American party rooted in the American revolutionary tradition" to rival the CPUSA (Hook 1987: 191), Orwell was part of a larger group that presaged the New Left.
As a line of critique internal to the Old Left, this intellectual formation developed in relation to the agonies of the Left. During Orwell's years as a writer, from 1927 to 1950, the Comintern moved from the aggressively sectarian "class-against-class" politics of the Third Period (1929-1933) to the broad anti-fascist coalitions of the "popular front" (1934-1939). The Molotov-Ribbentrop pact brought a period of revolutionary defeatism (1939-1941) before war with Germany brought renewed collaboration with progressives and liberals. After the war, the reformed Cominform (1947) directed communist parties to again return to a popular front strategy in order to co-opt nationalist and anti-American sentiment against the dollar diplomacy of the Marshall Plan (Claudin 1975). During the sectarian Third Period, the membership of the Communist Party of Great Britain contracted from over 10,000 to only 2,555. Membership rebounded to 6,000 during the popular front period, reached 16,000 at the start of the war and peaked at 56,000 in 1945 (Bounds 2009: 8). During these years, many communists and fellow travelers found it difficult to hold the party line and began to go different ways. For some, the Moscow show trials or the Molotov-Ribbentrop pact was the point of divergence. For Orwell, the Spanish Civil War marked a clear break with the communist movement.

Whereas The Road to Wigan Pier marked Orwell's decisive move into the socialist camp, his service in Spain and his account of the episode, Homage to Catalonia (1938/1969), signaled his increasingly ardent anti-communism. Through his connections to New Adelphi and the ILP, Orwell fought in the militia of the Trotskyist Workers' Party of Marxist Unification (POUM).
Although his time on the frontlines was uneventful, Orwell defended POUM positions in street fighting during the sectarian Barcelona May Days. In May 1937, the Communist Party of Spain and its Catalan wing, the Unified Socialist Party of Catalonia, systematically uprooted the institutions of autonomous working-class power that effectively governed Barcelona, dispersing revolutionary committees and disarming the National Confederation of Labor-controlled revolutionary police. The communists captured and executed many, including Andres Nin, the POUM general secretary (Bolloten 1991: 489-511). After the POUM was banned in June 1937, Orwell and Eileen O'Shaughnessy, his first wife, narrowly escaped Spain, wanted as "known Trotskyists…linking agents of the ILP and POUM" (Shelden 1991: 295).

In the period between his service in Spain and his move toward the Labour Left in 1943,

As the Attlee administration failed to destroy institutions of class privilege (e.g.,

9 He later realized that he "over-emphasised the anti-Fascist character of the war, exaggerated the social changes that were actually occurring and underrated the enormous strength of the forces of reaction" (Orwell 1944/2000: 297). More generally, Orwell was amazed by what he felt was the subdued reaction of the British people to the tremendous changes happening around them: "In the face of terrifying dangers and golden political opportunities, people just keep on keeping on, in a sort of twilight sleep in which they are conscious of nothing except the daily round of work, family life, darts at the pub, exercising the dog, mowing the lawn, bringing home the beer, etc" (Orwell 1945/2000: 384).
the House of Lords, the public schools and titles), undertook a measured program of nationalization with considerable compensation to owners, and increased exploitation in the colonies to offset the postwar depression (despite independence for India, Pakistan and Sri Lanka),

solution to this dilemma, revolutionary patriotism drawn from the decency of the working class, is fraught with contradictions. While Orwell was correct in identifying the power of "traditional loyalties" over the cosmopolitan values preferred by the Left (democracy, human rights and social justice), it is unclear whether "the family" or "the nation" can be the basis of revolutionary politics. Orwell, for his part, was uncritical of the proletarian and middle-class families to which he looked for justice and common decency, the basis of his socialism. Feminist criticism of The Road to Wigan Pier rightfully rebukes Orwell for ignoring women's reproductive labor and describing the existing gender relations as reaching "perfect symmetry"

10 Here, the demographics of Occupy Wall Street are instructive. The Occupy Research working group based in the original Occupy encampment in Manhattan surveyed occupiers and found that 80 percent were white and half identified their class position as middle class or higher (Occupy Research 2012). The Joseph F.
Murphy Institute for Worker Education and Labor Studies at the City University of New York provides independent confirmation of the same basic picture. Researchers surveyed the participants at a joint Occupy-labor movement May Day rally in New York City. They found that two-thirds of those who described themselves as "actively involved" in Occupy Wall Street were white, while 80 percent had a bachelor's degree or higher (Milkman et al. 2013). With this class and racial basis, Occupy's problems expanding beyond the downwardly mobile middle class are not surprising. Emahunn Raheem Ali Campbell, in an essay titled "A Critique of the Occupy Movement from a Black Occupier," argued the movement alienated people of color because it did not challenge white privilege and remained a movement organized by "white people [who] have now decided to rail against capitalism as it currently functions only when it has proven adverse for their financial security" (Campbell 2011: 42, original emphasis).

(Campbell 1984).11 More pervasively, Daphne Patai (1984) finds a gendered framework in Orwell's writing. While Patai's ultimate charge that "Orwell cares more for his continuing privileges as a male than he does for the abstractions of justice, decency and truth on behalf of which he claims to be writing" is an overreach (266), Orwell did not take feminism seriously. In this way, Patai's often vitriolic critique of Orwell "identifies the extent to which his texts encode, and indeed, reinforce a polarised model of gender" (Clarke 2007: 97). This matter is not simply an academic concern. The question remains: what is the basis of a revolutionary movement? The New Right's successful use of the "family" and "nation" to enlist the white proletarian in the work of his own domination raises the question of the extent to which these identities can be divorced from the relations of domination that have formed them.

Finally, there is Orwell's concern for authoritarianism and his anti-modernism. In the 1950s and 1960s, convergence theory, a
derivation of modernization theory, argued that the seemingly opposing development strategies of the Soviet Union and the United States were converging along the lines of other advanced states (Suny 2006: 19-20). Today, the Soviet Union and the United States have converged but, instead of integrating a socialist emphasis on social rights and a liberal emphasis on political rights, the opposite has occurred: the neoliberal attack on the social state has eviscerated the institutional accomplishments of the Old Left; meanwhile, as the 'left hand' of the state (social security) retreats, the 'right hand' (national security and fiscal discipline) advances (Bourdieu 1998). From the perspective of the current moment, then, the broader anti-modernism embedded in Orwell's anti-authoritarianism is easier to appreciate and resonates more strongly in the post-Cold War world. While the Global Left still struggles to find the elusive third path between the "timid reformism" of social democracy and the "comfortable martyrdom" of anachronistic radicals, the Orwell of 1984 endures as the Sartrean singular universal of our cynical world.

First, I explain Orwell's position in global power relations and his reinvention as a socialist in light of his contradictory position between both ruling class and working class, and nation and empire. I develop an incorporating comparison between Orwell and Auden and Co.
to identify Orwell's lasting politicization and subsequent role as an internal critic of the Old Left. Second, I reinterpret Raymond Williams' influential analysis, comparing Orwell's and Williams' modes of dissent within the specificities of their times in order to parse out the tensions between cosmopolitanism and radicalism that complicate the work of Left intellectuals and confound the work of revolutionary transformation. Third, I develop Orwell's thought in relation to his contradictory class location and detail the content of Orwell's critique of the Old Left. Fourth, I explain Orwell's political development in relation to wider shifts in the Old Left, and use James Burnham to put in relief Orwell's anti-communism and ambiguous drift toward the Right.

As a writer, Orwell always remained apart from "the dominant literary movement of the 1930s…'Auden & Co.': W.H. Auden, Stephen Spender, C. Day Lewis, Louis MacNiece, Christopher Isherwood, John Lehmann, Rex Warner and Edward Upward" (Stansky & Abrahams 1972: 180-182). Where "Auden & Co." spent their early twenties ensconced in the university, Orwell had a different education in the periphery of the world-system. His time in Burma radicalized him in a unique way. "Auden and Co." drifted toward the Left along with a wider generational movement that saw fundamental political shifts in Britain: Lloyd George's fall from power in 1922 and the subsequent decline of the Liberal Party; the first, brief Labour government in 1924; and the May 1926 General Strike. Whereas "Auden and Co." were mere carriers of the Old Left, who drifted in and out of the movement in step with the fashions of the time, Orwell's enduring commitment to socialist transformation was lifelong and, through his reflections on his position and experience, he articulated an important internal critique of the exhaustion and fragmentation of the Old Left.
4 During Orwell's life, the Old Left in the UK was dominated by the social democratic Labour Party and the Stalinist Communist Party of Great Britain.

Clarke 2007: 98-146). Williams could not acknowledge Orwell's counter-hegemonic character arguably because Orwell's Tory anarchism speaks to a deeper dilemma that left intellectuals, Williams included, have to confront. Tory anarchism is a culturally and temporally specific accommodation to the competing pressures of "embourgeoisement and radicalization", or the tension between the cosmopolitan ideas of the global Left and the local and immediate concerns of actually existing communities. Specifically, Orwell's Tory anarchism meant a rejection of dominant theories of revolution that saw modernization (understood generally as industrial development and scientific progress) as a means to liberation. This also meant an ambiguous, and potentially reactionary, embrace of nationalism and existing working-class culture. Orwell thus tried to avoid embourgeoisement by abandoning the effort to reverse his family's downslide from nobility into the newly emerging professional middle class. At the same time, his radicalism was grounded in a nostalgic effort to recuperate English nationalism as the basis of revolution. Where Orwell's politicization is well known, Williams' own development is a less dramatic version of Orwell's transformation. As John Rodden explains: "For just as Orwell dropped 'Eric Blair' to become the writer and democratic socialist 'Orwell,' Williams dropped his childhood nickname 'Jim' at Cambridge to become 'Raymond,' the Left Leavisite and then moved steadily left in the 1960s and 1970s - further away from 'Jim.'" Like Orwell, Williams broke with his past and became politicized at Trinity College, Cambridge, where he joined the Communist Party of Great Britain. Academia was the primary site of his politics. For Williams, radicalization and embourgeoisement were complementary movements. For Orwell, they were contradictory, and produced the
creative tension at the heart of his work. These transformations are responses to the problem of affiliation that confronts Left intellectuals. Through both their individual processes of political becoming and their social integration into specific fields of cultural production and political struggle, Left intellectuals attempt to move away from their particular position and articulate a universal discourse for human liberation. In his political writings, Orwell consistently discussed the way class distinctions complicated the relations among the Left intelligentsia, the rising professional middle class, the industrial working class, and the "down and outers." Indeed, "George Orwell" was more than a penname. It was a political project, "the vehicle through which Blair could hone and develop his democratic socialist ambitions, his struggle against the class prejudices of his upbringing" (Wilkin 2013: 201). Today, such class divisions are again as sharp as they were during the 1930s. While changes in capitalism have altered the specific dynamics of class formation, the cultural coordinates of class and the symbolic violence associated with the naturalization of class distinctions still confront Left intellectuals concerned with the legitimacy of their voice. Orwell's work remains a powerful literary exposition of this dilemma.
Orwell entered the political and literary fields through communist cultural circles. His work was first printed in The Adelphi, the unofficial organ of the Independent Labour Party (ILP), an ideologically diverse left party that drifted toward Trotskyism during Orwell's time, and Monde, connected to the Communist Party of France. On the other side of the Atlantic, Orwell wrote for The Partisan Review, a publication that started as the John Reed Clubs' challenge to New Masses, the official publication of the Communist Party USA. After Stalin ordered the John Reed Clubs to disband in 1935, The Partisan Review "became the vehicle for the 'literary Trotskyism' of the New York intellectuals."

Orwell put forward his most radical and programmatic political positions in an attempt to reconcile English patriotism with revolutionary socialism. In The Lion and the Unicorn: Socialism and the English Genius (1941), he outlined a program to this end: nationalization of industries with compensation, limits on income inequality, democratic educational reform, and a "positive imperial policy [that] aim[ed] at transforming the Empire into a federation of Socialist states, like a looser and freer version of the Union of Soviet Republics" (90). Most immediately, he looked to transform the Home Guard into a popular militia and turn the British war effort into revolutionary warfare. It was Orwell's boldest political position, an attempt to navigate a course between the "timid reformism" of a Labour Party that never attempted "any fundamental change" and the outmoded "nineteenth century doctrine of class war" still upheld by the radical left (92-94). The Lion and the Unicorn represents a synthesis of The Road to Wigan Pier's critique of the Left with his positive experience in anarchist Barcelona. It was audacious and, of course, totally unrealistic.
From 1943 to his death in 1950, Orwell's politics moderated and his overall outlook darkened. American intervention and the shifting fortunes of the conflict compelled Orwell to moderate his ultra-left call to transform war mobilization into revolution. In November 1943, he became literary editor of Tribune, a position he held until 1945. Founded in 1937 by Stafford Cripps, Attlee's eventual President of the Board of Trade, Tribune was unofficially tied to the Labour Party. Orwell resigned himself to pragmatic support of Labour as the best Britain could achieve given the constraints of the time. After 1947, Orwell moved away from the immediate politics of the Labour Left and retreated to the isolated island of Jura in the Inner Hebrides to write. His growing obsession with totalitarianism was related to his interpretation of contemporary events. Having abandoned the optimism of his ultra-left phase and disillusioned with the tepid reforms of the Attlee administration, Orwell became convinced that democracy was increasingly under threat. He feared that capitalism and liberal democracy would be replaced not by a libertarian and democratic socialism but by state capitalism and authoritarianism. By this point, his politics had moved away from proto-New Left idealism and toward the cynicism of the nascent Non-Communist Left. In 1945, Orwell founded the Freedom Defence Committee, a civil liberties organization formed to rival the communist-dominated National Council for Civil Liberties. The organization was a clear precursor to the CIA-financed Congress for Cultural Freedom, formed in 1950. After the war, he misunderstood the nature of Labour's reform, thinking that social democracy could deliver on its potential to be a parliamentary road to socialism rather than merely providing the state intervention needed to stabilize capitalism (Newsinger 1999a: xi, 141). In 1946, he failed to appreciate the way mounting Cold War tensions had eliminated the space for independent Leftist politics. Through his
political failures, however, Orwell's politics illuminate the structural tensions that still characterize Left politics in the capitalist core today. On a global level, the Left in the capitalist core is torn between a desire for revolutionary transformation and the more immediate demands and expectations of populations accustomed to life as a labor aristocracy. Like Orwell's individual class position, Left politics in the capitalist core are complicated by their own implication in world-systemic relations of exploitation. To take a contemporary example, then, the Occupy Movement represents the return of class politics from the national perspective of the United States or the UK. From the perspective of the global South, however, Occupy is not a moment of unambiguous revolutionary ferment but, at least partially, the resentful protest of aspiring members of the professional middle class, whose prospects for class reproduction have been undercut by "the Great Recession." While Orwell's work represents a compelling exposition of this enduring tension, Orwell's 1949 decision to hand a list of suspected communist sympathizers to the Information Research Department (IRD), the arm of the Foreign Office concerned with anticommunist propaganda, complicates his legacy. The event underscores the extent to which former radicals found themselves collaborating with the state in the name of anti-Stalinism. Two of Orwell's associates had come to work for the IRD: T.R. Fyvel,
v3-fos-license
2016-05-04T20:20:58.661Z
2012-06-15T00:00:00.000
6590733
{ "extfieldsofstudy": [ "Sociology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.ijic.org/articles/10.5334/ijic.964/galley/1746/download/", "pdf_hash": "9724c0a0d1109754fa692fcb8bcc9d929cab6b92", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1555", "s2fieldsofstudy": [ "Psychology" ], "sha1": "9724c0a0d1109754fa692fcb8bcc9d929cab6b92", "year": 2012 }
pes2o/s2orc
Children’s perspectives on integrated services: every child matters in policy and practice

‘Children’s perspectives on integrated services’ tackles a paramount issue that is still disregarded in many countries because its key proponents do not have a strong voice in our societies: children and youth. Mary Kellett compensates for that by describing the measures taken in England to create integrated services for children and young people, encompassing not only health and social care but every area children come in contact with during their lives. In order to underline the inclusive approach of the author, every chapter is supplemented by children's perspectives and research findings reported by children and youth. As such, this book expands the scope of integrated care for children and young people by acknowledging that it takes more than health and education to develop fully, in accordance with one's talents and preferences. In the first part of the book, the historical and theoretical background is delineated. Chapter 2 gives a comprehensive overview of the development of child services in the UK from the first Poor Law in 1388 to the 'Every Child Matters' (ECM) approach and the Children's Act 2004. Even though the focus is on the UK and England, the story will read similarly for many European countries: beginning with the establishment of the modern health and social care systems in the 20th century, ever more agencies and organizations were tasked with the care for children, which led to mismanagement, poor coordination, lack of communication and inadequately trained professionals. In turn, these failings caused the tragic deaths of individual children, which became the motive and cause for reforms and reports to improve the services. In the UK, this ultimately culminated in the Children's Act 2004 and in the 'Every Child Matters' (ECM) approach. The success of and experiences with this ECM approach are hence the main focus of the book.
The historical evolution of child services is complemented by a brief description of different schools of thought, namely from economics, sociology and psychology, on the roles and responsibilities of children while growing up. This discourse is dominated by different views on power and by an evolution of discourses from the needs, via the rights, to the quality of life of children. The author sets these theoretical concepts into the wider context of the political landscape from the post-WWII era via Thatcher and New Labour to the present-day ambiguity. While the historical background gives a comprehensive insight into the development of child services, the political and theoretical underpinnings fall short of their potential to explore the influence of scientific concepts on political and societal decisions. Part II encompasses the wide array of services and aspects to be considered when dealing with children, detailing the latest changes and reforms that have been introduced for the various professions and service providers: from education, health and social services to safeguarding, supporting families and considering children as active participants of society. Focusing on the child's perspectives and wishes and actively involving them in the decision-making about their care needs is the underlying rationale that runs throughout the reform efforts and the newly established integrated services under the ECM concept and the Children's Act 2004. The ECM concept established five principles which are to be followed and incorporated into the services organized around the child: being healthy, staying safe, enjoying and achieving, making a positive contribution, and achieving economic well-being. Chapters 4-6 cover social work, education and the health services, respectively, always including excerpts of children's views.
With the establishment of the integrated child centers and the lead professional, the necessity for multi-agency cooperation and interdisciplinary teamwork arose. This in turn necessitated a better and more coordinated professional training and education for the service providers involved. A major challenge was the abolition of the distinction between the different areas of care offered to children (education, childcare, social services and health) and the creation of a centralized organization, starting with the Department for Children, Schools and Families on the national level in 2007 down to the Children's Trusts on the local level. The idea was to reduce the number of people involved in servicing children and make it easier for them and their families to know their contact persons. ECM also propagated the active involvement of children in the decision-making process about their care, which is exemplified by simply asking them about their experiences and wishes or by the Pupil Voice initiative. By building the service centers around the child, however, the risk arose that children's organized time and already tight schedules would extend even into early childhood and, via the extended schools model, into their leisure time as well. ECM is intended for every child in the UK; however, special focus is laid on those most vulnerable: children with disabilities or chronic diseases, children who are poor, abused or otherwise marginalized, children with migration backgrounds or asylum-seekers, and looked-after children (Chapters 4-10). Most of the developed programmes targeting these children hence also take a look at the family situation, offering support and assistance to the parents as well. The rationale is that prevention is better than cure and that a safe and stable familial environment will produce healthy, safe and happy children.

International Journal of Integrated Care - Volume 12, 15 June - URN:NBN:NL:UI:10-1-113105 / ijic2012-125 - http://www.ijic.org/
Key components often are educational measures, organizing leisure-time activities and creating a platform for exchange for children and parents with similar needs. Many of these activities are provided by third-sector organizations (Chapter 9), which were actively included by New Labour in the ECM concept. The idea was that the third sector is more flexible and trusted, and that its value-driven ethos may provide easier access to local communities than state agencies. Commissioning services from the third sector hence became a vital part of ECM. However, a third-sector organization does not per se deliver better services than a private or public sector one, and the shift to actively commissioning third-sector services via short-term contracts made them more dependent and vulnerable. Finally, part III of the book ties the knot, describing methods of active involvement of children and children as researchers. Throughout the book, emphasis is laid on presenting children's views, and Chapter 12 describes how these views were collected. Together with children, the author developed research methods, based on the scientific principles of reproducibility and evidence generation, which were suitable for children to conduct: including interviews, surveys and data analyses, as well as more creative ones using photography, building blocks or drawings. Additionally, ECM itself has created various tools to activate children and make their opinions heard. Kellett makes a strong case for these forms of active participation of children to give their voices more strength and credibility. After all, we still live in societies where children's voices usually are regarded as a nuisance or, at best, not taken seriously. Yet, as the author never tires of pointing out, the UN Children's Rights Charter (UNCRC) requires governments and societies to grant children the same rights as adults, along with the necessary protection (Chapter 11).
The book gives a comprehensive and impressive overview of the reorganization and reform process initiated by ECM and makes a strong case for integrating all services concerned with children. At the same time, it does not fall short of describing the pitfalls and dangers of the approach. Integration of services here means much more than merely inter-sector cooperation or the creation of integrated structures. It propagates an understanding that one does not work without the other: it takes a healthy, safe and supportive environment to enable children to learn and grow up to be responsible and self-respecting adults. There is no health without education, no development without encouragement and respect. Even though the book describes the English situation, lessons can be learned for other countries and all professions, and one can only hope that as many professionals and decision-makers as possible read this book.

Institute of Social Medicine, Centre for Public Health, Medical University of Vienna, Rooseveltplatz 3, A-1090 Vienna, Austria
E-mail: katharina.v.stein@meduniwien.ac.at
v3-fos-license
2019-03-18T21:34:14.387Z
2019-03-18T00:00:00.000
81980434
{ "extfieldsofstudy": [ "Medicine", "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.7554/elife.42541", "pdf_hash": "bbfdc655fa31ffb294027b12f6d04e8f685ef59c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1557", "s2fieldsofstudy": [ "Psychology" ], "sha1": "bbfdc655fa31ffb294027b12f6d04e8f685ef59c", "year": 2019 }
pes2o/s2orc
Behavioural and neural signatures of perceptual decision-making are modulated by pupil-linked arousal

The timing and accuracy of perceptual decision-making is exquisitely sensitive to fluctuations in arousal. Although extensive research has highlighted the role of various neural processing stages in forming decisions, our understanding of how arousal impacts these processes remains limited. Here we isolated electrophysiological signatures of decision-making alongside signals reflecting target selection, attentional engagement and motor output, and examined their modulation as a function of tonic and phasic arousal, indexed by baseline and task-evoked pupil diameter, respectively. Reaction times were shorter on trials with lower tonic, and higher phasic, arousal. Additionally, these two pupil measures were predictive of a unique set of EEG signatures that together represent multiple information processing steps of decision-making. Finally, behavioural variability associated with fluctuations in tonic and phasic arousal, indicative of neuromodulators acting on multiple timescales, was mediated by its effects on the EEG markers of attentional engagement, sensory processing and the variability in decision processing.

Introduction

The speed and accuracy with which humans, as well as non-human animals, respond to a stimulus depends not only on the characteristics of the stimulus, but also on the cognitive state of the subject. When drowsy, a subject will respond more slowly to the same stimulus compared to when she is attentive and alert. Central arousal also fluctuates across a smaller range during quiet wakefulness, when the subject is neither drowsy nor inattentive, nor overly excited or distractible. Although these trial-to-trial fluctuations can impact behavioural performance during decision-making tasks (Aston-Jones and Cohen, 2005), it is largely unknown how arousal modulates the underlying processes that support decision formation.
Perceptual decision-making depends on multiple neural processing stages: those that represent and select sensory information, those that process and accumulate sensory evidence, and those that prepare and execute motor commands. Variability in central arousal could affect any one, or potentially all, of these processing stages, which in turn could influence behavioural performance. The neuromodulatory systems that control central arousal state, such as the noradrenergic (NA) locus coeruleus (LC) and the cholinergic basal forebrain (BF), have also been suggested to drive fluctuations in endogenous activity linked to changes in cortical (de)synchronization, that is, cortical state (Harris and Thiele, 2011; Lee and Dan, 2012), and are linked to cognitive functions such as attention (Thiele and Bellgrove, 2018), both known to affect information processing and behavioural performance. These modulatory systems have both tonic and phasic firing patterns that are recruited on different timescales and support different functional roles (Aston-Jones and Cohen, 2005; Dayan and Yu, 2006; Parikh et al., 2007; Parikh and Sarter, 2008; Sarter et al., 2016). Tonic changes in neuromodulator activity occur over longer timescales that can span multiple trials, whereas fast (task-evoked) recruitment through phasic activation occurs on short enough timescales to influence neural activity and behavioural decisions within the same trial (Aston-Jones and Cohen, 2005; Bouret and Sara, 2005; Dayan and Yu, 2006; Parikh et al., 2007). Pupil diameter correlates strongly with a variety of measurements of cortical state and behavioural arousal (Eldar et al., 2013; Reimer et al., 2014; McGinley et al., 2015b; McGinley et al., 2015a; Vinck et al., 2015; Engel et al., 2016), and can thus be considered a reliable proxy of central arousal state.
Indeed, there is a strong correlation between pupil size and activity in various neuromodulatory centres that control arousal (Aston-Jones and Cohen, 2005; Gilzenrat et al., 2010; Murphy et al., 2014a; Varazzani et al., 2015; Joshi et al., 2016; Reimer et al., 2016; de Gee et al., 2017). Both baseline pupil diameter, reflecting tonic activity levels in neuromodulatory centres (tonic arousal), and task-evoked pupil diameter changes (phasic arousal) have been related to specific neural processing stages of perceptual decision-making. Baseline pupil diameter correlates with sensory sensitivity (McGinley et al., 2015a; McGinley et al., 2015b) and is predictive of behavioural performance during elementary detection tasks (Murphy et al., 2011; McGinley et al., 2015a). Pupil diameter also changes phasically in the course of a single decision (Beatty, 1982a; de Gee et al., 2017; Lempert et al., 2015; Murphy et al., 2016; Urai et al., 2017), and has been related to specific elements of the decision-making process, such as decision bias (de Gee et al., 2014; de Gee et al., 2017), uncertainty (Urai et al., 2017), and urgency (Murphy et al., 2016). This suggests that these neuromodulatory systems do not only dictate network states (through tonic activity changes), but that they are recruited throughout the decision-making process (Cheadle et al., 2014; de Gee et al., 2014; de Gee et al., 2017).

eLife digest

Driving along a busy street requires you to constantly monitor the behavior of other road users. You need to be able to spot and avoid the car that suddenly changes lane, or the pedestrian who steps out in front of you. How fast you can react to such events depends in part on your brain's level of alertness, or 'arousal'. This in turn depends on chemicals within the brain called neuromodulators. Neuromodulators are a type of neurotransmitter. But whereas other neurotransmitters enable brain cells to signal to each other, neuromodulators turn the volume of these signals up or down. The activity of brain regions that produce neuromodulators varies over time, leading to changes in brain arousal. These changes take place over different time scales. Sudden unexpected events, such as those on the busy street above, trigger sub-second changes in arousal. But arousal levels also show spontaneous fluctuations over minutes to hours. We can follow these changes in real-time by looking into a participant's eyes. This is because the brain regions that produce neuromodulators also control pupil size. Van Kempen et al. have now combined measurements of pupil size with recordings of electrical brain activity. Healthy volunteers learned to press a button as soon as a target appeared on a screen. The larger a volunteer's pupils were before the target appeared, the more slowly the volunteer responded on that trial. Large baseline pupil size is thought to indicate a high baseline level of brain arousal. By contrast, the larger the increase in pupil size in response to the target, the faster the volunteer responded on that trial. This increase in pupil size is thought to reflect an increase in brain arousal. The recordings of brain activity provided clues to the underlying mechanisms. In trials with large baseline pupil size, and therefore high baseline arousal, the volunteers' brains showed more variable responses to the target. But in trials with a large increase in pupil size, and a large increase in arousal, the volunteers' brains showed less variable responses, as well as stronger signals related to attention. Neuromodulators thus act on different timescales to influence different aspects of cognitive performance, including attention and target detection. Fluctuating levels of neuromodulator activity may help explain the variability in our behavior. Monitoring pupil size is one way to gain insights into the mechanisms that bring about these changes in neuromodulator activity.
Although both baseline pupil diameter and the phasic pupil response have been associated with specific aspects of decision-making, the relationship between pupil-linked arousal and the electrophysiological correlates of decision-making is largely unknown. Recently developed behavioural paradigms have made it possible to non-invasively study the individual electroencephalographic (EEG) signatures of perceptual decision-making described above (O'Connell et al., 2012; Kelly and O'Connell, 2013; Loughnane et al., 2016; Loughnane et al., 2018; Newman et al., 2017). In these paradigms, participants are required to continuously monitor (multiple) stimuli for subtle changes in a feature. Because stimuli are presented continuously, target onset times (and locations) are unpredictable, and sudden stimulus onsets are absent, eliminating sensory-evoked deflections in the EEG traces. These characteristics allow for the investigation of the gradual development of build-to-threshold decision variables as well as signals that code for the selection of relevant information from multiple competing stimuli, a critical feature of visuospatial attentional orienting that impacts evidence accumulation processes. Here, we asked how arousal influences EEG signals that relate to each of the separate processing stages described above.
Specifically, we tested the effects of pupil-linked arousal on pre-target preparatory parieto-occipital α-band activity, associated with fluctuations in the allocation of attentional resources (Kelly and O'Connell, 2013); early target selection signals measured over contra- and ipsilateral occipital cortex, the N2c and N2i; perceptual evidence accumulation signals measured as the centroparietal positivity (CPP), which is a build-to-threshold decision variable demonstrated to scale with the strength of sensory evidence and to be predictive of reaction time (RT) (O'Connell et al., 2012; Kelly and O'Connell, 2013); and motor-preparation signals measured via contralateral β-band activity (Donner et al., 2009; O'Connell et al., 2012). Of these signals, we extracted specific characteristics such as the latency, build-up rate and amplitude, and tested whether these were affected by pupil-linked arousal. Additionally, because the variance and response reliability of the membrane potential of sensory neurons vary with pupil diameter (Reimer et al., 2014; McGinley et al., 2015a), we also investigated whether arousal affected the inter-trial phase coherence (ITPC), a measure of across-trial consistency in the EEG signal, of the N2 and the CPP. We found that both baseline pupil diameter as well as the pupil response were predictive of behavioural performance, and that this relationship was best described by non-monotonic, but not U-shaped, second-order polynomial model fits. Furthermore, we found that both tonic and phasic arousal bore a predictive relationship with the neural signals coding for baseline attentional engagement, early target selection, decision processing as well as the preparatory motor response.
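ITPC, used above as a measure of across-trial consistency, has a compact definition: it is the length of the average of unit-length phase vectors across trials. The sketch below is not the authors' code; in practice the phases would come from a time-frequency decomposition of the EEG, whereas here they are simulated:

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence: magnitude of the trial-averaged unit
    phase vectors. 1 = identical phase on every trial; near 0 = random.
    `phases`: array of shape (n_trials, n_timepoints), in radians."""
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

rng = np.random.default_rng(0)
n_trials, n_times = 40, 200
# Every trial shares the same phase time course -> ITPC of 1 throughout.
consistent = np.tile(rng.uniform(-np.pi, np.pi, n_times), (n_trials, 1))
# Independent phases on every trial -> ITPC near 0.
random_phases = rng.uniform(-np.pi, np.pi, (n_trials, n_times))
```

Note that with a finite number of trials the random-phase ITPC does not reach zero but hovers roughly on the order of 1/√n_trials, which is one reason ITPC comparisons are usually made with matched trial counts across conditions.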
Although neural activity representing all these stages varied with changes in arousal, unique variability in task performance due to tonic arousal (baseline pupil diameter) could only be explained by the amplitude of target selection signals and the consistency of the CPP, reflecting decision processing. In contrast, variability due to phasic arousal (pupil response) was explained by pre-target α-band activity as well as the consistency of the CPP.

Results

80 subjects performed a continuous version of the random dot motion task in which they were asked to report temporally and spatially unpredictable periods of coherent motion within either of two streams of random motion (Figure 1A). We investigated whether the trial-to-trial fluctuations in behavioural performance and EEG signatures of perceptual decision-making could, in part, be explained by trial-to-trial differences in the size of the baseline pupil diameter (reflecting tonic arousal) and the post-target pupil response (reflecting phasic arousal). We quantified this relationship by allocating data into five bins based on the size of either the baseline pupil diameter or the phasic pupil diameter response (Figure 1B & C). Baseline pupil diameter was computed as the average pupil diameter over the 100 ms preceding target onset. The phasic pupillary response was estimated using a single-trial general linear model (GLM) approach (Materials and methods). We first assessed the neural input to the peripheral pupil system by applying multiple models with onset and response components as well as various different shapes for the sustained component (Murphy et al., 2016) across all trials for each subject (Figure 1-figure supplement 1). Next we applied the grand-average best-fitting model (linear up-ramp) on individual trials (Bach et al., 2018). This provided us with a trial-by-trial estimate of the amplitude of each temporal component.
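The two preprocessing steps described here, a 100 ms pre-target baseline average and allocation of trials to five equal-occupancy bins, are straightforward to implement. A minimal sketch, assuming an illustrative sampling rate and array layout (none of these names or values are taken from the paper's code):

```python
import numpy as np

FS = 500  # assumed eye-tracker sampling rate in Hz (illustrative)

def baseline_pupil(pupil, target_idx, window_ms=100, fs=FS):
    """Mean pupil diameter over the `window_ms` preceding target onset.
    `pupil`: array of shape (n_trials, n_samples); `target_idx`: sample
    index of target onset (same for all trials here, for simplicity)."""
    n = int(round(window_ms / 1000 * fs))
    return pupil[:, target_idx - n:target_idx].mean(axis=1)

def quantile_bins(values, n_bins=5):
    """Equal-occupancy bin labels 0..n_bins-1 (quintiles by default)."""
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1))
    # digitize against the upper edges; clip keeps the maximum in the last bin
    return np.clip(np.digitize(values, edges[1:]), 0, n_bins - 1)

rng = np.random.default_rng(1)
sim_baseline = rng.normal(4.0, 0.5, size=1000)  # hypothetical diameters (mm)
bins = quantile_bins(sim_baseline)               # ~200 trials per bin
```

Quantile-based edges guarantee roughly equal trial counts per bin, so comparisons of RT or EEG measures across bins are not confounded by unequal sample sizes.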
Comparison against several other measures of the pupil diameter response (Figure 1-figure supplement 2), controlling for variance inflation factors (Figure 1-figure supplement 3) and applying the same model across bins of trials, or orthogonalizing the predictors (Figure 1-figure supplement 4), provided support for the reliability of the estimated amplitude of the pupillary response. Here we present the relationship of the amplitude of the target onset component to the behavioural and EEG signatures of perceptual decision-making. We then used sequential multilevel model analyses and maximum likelihood ratio tests to test for fixed effects of pupil bin. We determined whether a linear fit was better than a constant fit and subsequently whether the fit of a second-order polynomial, indicating a non-monotonic relationship between pupil diameter and behaviour/EEG, was superior to a linear fit. We furthermore used a variant of the 'two-lines' approach (Simonsohn, 2017) to test whether any non-monotonic relationship was best described by an (inverted) U-shape.

Both tonic and phasic arousal are predictive of task performance

We first investigated the relationship between trial-by-trial pupil dynamics and behavioural performance. As stimuli were presented well above perceptual threshold, our subjects performed at ceiling (mean, 98.7%; range: 92-100%, Newman et al., 2017). We therefore focused on RT and the RT coefficient of variation (RTcv), a measure of performance variability calculated by dividing the standard deviation in RT by the mean (Bellgrove et al., 2004), rather than accuracy. We found that baseline pupil diameter displayed a non-monotonic relationship with both measures of behavioural performance (RT: χ2(1) = 8.84, p=0.003; RTcv: χ2(1) = 4.43, p=0.035). Neither effect was, however, significantly U-shaped (Figure 1B). Rather, RT was slower on trials with higher baseline arousal levels.
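The RTcv measure and the sequential model comparison can be illustrated compactly. The sketch below substitutes plain OLS with a likelihood-ratio test for the paper's multilevel models, so it is a simplification under that assumption, and all names are illustrative:

```python
import numpy as np
from scipy import stats

def rt_cv(rts):
    """RT coefficient of variation: SD / mean (Bellgrove et al., 2004)."""
    rts = np.asarray(rts, dtype=float)
    return rts.std(ddof=1) / rts.mean()

def lr_test_quadratic(pupil_bin, y):
    """Does adding a quadratic term in pupil bin improve on a linear fit?
    Returns (chi2_stat, p) from a likelihood-ratio test with df=1.
    Plain OLS stands in for the paper's multilevel models."""
    def neg2ll(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        return len(y) * np.log(rss / len(y))  # -2 log-likelihood up to a constant
    x = np.asarray(pupil_bin, dtype=float)
    X_lin = np.column_stack([np.ones_like(x), x])
    X_quad = np.column_stack([X_lin, x ** 2])
    chi2_stat = neg2ll(X_lin) - neg2ll(X_quad)
    return chi2_stat, stats.chi2.sf(chi2_stat, df=1)
```

A clearly non-monotonic pattern across bins (e.g. the slowest RTs in the middle bins) yields a large χ2 and a small p, mirroring the form of the χ2(1) statistics reported above; a purely linear pattern leaves the quadratic term unsupported.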
The pupil diameter response, on the other hand, displayed a non-monotonic (but not U-shaped) relationship with RT (χ²(1) = 51.89, p<0.001) and an inverse linear relationship with RTcv (χ²(1) = 45.94, p<0.001). For both measures, best performance was found on trials with the largest pupil responses (Figure 1C). This relationship remained very similar when trial-by-trial fluctuations in the pupil response that are due to variability in the amplitude or phase of the baseline pupil diameter were not removed (Figure 1-figure supplement 5). We furthermore repeated the sequential regression analysis in single-trial, non-binned data, in which we additionally controlled for time-on-task effects, confirming that these effects were not dependent on the binning procedure (Supplementary file 1). Additionally, we noticed that when we band-pass filtered the pupil diameter, rather than low-pass filtered it, the relationship between baseline pupil diameter and task performance was not significant (Figure 1-figure supplement 6). This suggests that slow fluctuations in baseline pupil diameter (<0.01 Hz) are driving the effect on task performance. Having established a relationship between task performance and both tonic and phasic modes of central arousal state, we next focused on the relationship between these pupil dynamics and the neural signatures underpinning target detection on this perceptual decision-making task (Newman et al., 2017).

Phasic arousal has an approximately linear relationship with decision processing

During decision making, perceptual evidence has to be accumulated over time. This accumulation process has long been related to build-to-threshold activity in single neurons in parietal cortex (Gold and Shadlen, 2007; but see Latimer et al., 2015; Latimer et al., 2016; Shadlen et al., 2016).
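The low-pass versus band-pass contrast above — removing drift slower than 0.01 Hz abolishes the baseline effect — can be sketched with standard Butterworth filters. Cutoffs other than 0.01 Hz and the sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass(pupil, cutoff=6.0, fs=60.0):
    """Low-pass filter: keeps the slow drift in pupil diameter."""
    sos = butter(2, cutoff, btype='low', fs=fs, output='sos')
    return sosfiltfilt(sos, pupil)

def bandpass(pupil, low=0.01, high=6.0, fs=60.0):
    """Band-pass filter: additionally removes drift slower than `low` Hz,
    i.e. the component the text identifies as driving the baseline effect."""
    sos = butter(2, [low, high], btype='band', fs=fs, output='sos')
    return sosfiltfilt(sos, pupil)
```

Baselines computed from the band-passed trace no longer carry the <0.01 Hz component, so a comparison of the two preprocessing choices isolates the contribution of slow arousal drift.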
The centro-parietal positivity (CPP) measured from scalp EEG exhibits many of these same properties, including a representation of the accumulation of sensory evidence towards a decision bound (O'Connell et al., 2012; O'Connell et al., 2018; Kelly and O'Connell, 2013). Because in this study we used relatively strong sensory evidence (50% coherence), it is possible that subjects may not have relied upon any temporal integration of this motion signal to reach a decision. Rather, variability in RT could be brought about by variation in the onset transient of target selection due to the temporal and spatial uncertainty of the target stimulus. On single trials, decision formation could be a step-like signal that, averaged across trials, looks like an accumulate-to-bound signal (Latimer et al., 2015). Although we cannot discount this possibility, aligning the visual early target selection signal (N2c) to the response reveals a much lower signal amplitude than aligning it to target onset (Figure 2-figure supplement 1). This indicates that there is no fixed delay between target selection and the response, and that there is variability in the duration of the sustained period of this task. This variation could indicate different trial-to-trial strategies (e.g. comparing motion in one stimulus against the stimulus on the other side of the screen), or in addition some variability in accumulation rate. Because of this uncertainty, we refer to the functional significance of the CPP as decision processing. Here we tested the relationship between the pupil diameter response and the onset, build-up rate, amplitude and consistency (ITPC) of the CPP (Figure 2).
We found that the onset latency of the CPP, defined as the first time point that showed a significant difference from zero for 15 consecutive time points, displayed an inverse monotonic relationship with the size of the pupil response (χ²(1) = 5.60, p=0.018), such that the fastest onsets were found for the largest pupil response bins (Figure 2A). Likewise, the build-up rate (χ²(1) = 4.45, p=0.035), but not the amplitude (p=0.15), of the CPP varied with the pupil response, displaying the steepest slope on trials with the largest pupil dilations. Because the membrane potential of sensory neurons shows the least variance and highest response reliability at intermediate baseline pupil diameter (McGinley et al., 2015a), we additionally investigated the ITPC, a measure of across-trial consistency, of the CPP. We computed ITPC with a single-taper spectral analysis in a 512 ms sliding window computed at 50 ms intervals, with a frequency resolution of 1.95 Hz (Materials and methods). Based on the stimulus-locked grand average time-frequency spectrum, we selected a time (300-550 ms) and frequency window (<4 Hz) for further statistical analyses (Figure 2C). We found an approximately linear relationship between pupil diameter response and the consistency of the CPP signal (χ²(1) = 41.79, p<0.001), indicating that the CPP signal is less variable for larger pupil response bins (Figure 2D). This relationship was also present when we aligned the CPP to the response (Figure 2-figure supplement 2), indicating that this effect is unlikely to solely reflect variability in the onset of the CPP. Thus, we found that the size of the pupillary response was predictive of the onset latency and build-up rate as well as the ITPC of the CPP. Moreover, the relationship with the neural parameters of the CPP resembled the relationship between the pupil response and behavioural performance (Figure 1C).
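The ITPC computation described above can be sketched as a sliding-window spectral analysis: extract the phase of each trial in each window, and take the length of the mean unit phase vector across trials (1 = perfectly consistent, ~0 = random phase). The Hanning taper and exact parameters here are assumptions; with fs = 256 Hz a 512 ms window gives the ~1.95 Hz frequency resolution quoted in the text.

```python
import numpy as np

def itpc(trials, fs=256, win_ms=512, step_ms=50, max_freq=4.0):
    """Inter-trial phase coherence of single-trial EEG (trials x time).
    Returns (window start indices, kept frequencies, ITPC windows x freqs)."""
    n_trials, n_time = trials.shape
    win = int(win_ms / 1000 * fs)
    step = int(step_ms / 1000 * fs)
    taper = np.hanning(win)
    freqs = np.fft.rfftfreq(win, 1 / fs)
    keep = freqs <= max_freq
    starts = np.arange(0, n_time - win + 1, step)
    out = np.empty((len(starts), keep.sum()))
    for i, s in enumerate(starts):
        seg = trials[:, s:s + win] * taper
        spec = np.fft.rfft(seg, axis=1)[:, keep]
        phase = spec / np.abs(spec)              # unit-length phase vectors
        out[i] = np.abs(phase.mean(axis=0))      # resultant length across trials
    return starts, freqs[keep], out
```

A signal that is phase-locked to target onset across trials yields ITPC near 1 in the relevant frequency bin, whereas unrelated activity yields values near the chance floor for the given trial count.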
Large pupil dilations were predictive of faster responses, earlier CPP onset latencies, as well as steeper build-up rates and a more consistent CPP. Next, we asked whether other stages of information processing underpinning perceptual decision making also varied with the pupil response.

The phasic pupil response relates monotonically to spectral measures of baseline attentional engagement, but not motor output

We next investigated pre-target preparatory α-band power (8-13 Hz), a sensitive index of attentional deployment that has been shown to vary with behavioural performance. Specifically, previous studies have found higher pre-target α-band power preceding trials with longer RT, and suggested that fluctuations in α-power may reflect an attentional influence on variability in task performance (Ergenoglu et al., 2004; van Dijk et al., 2008; O'Connell et al., 2009; Kelly and O'Connell, 2013). We first verified the relationship between α-band power and behavioural performance by binning the data into five bins according to α-band power and performing the same sequential regression analysis as described above (Figure 3A). We replicated previous findings (Kelly and O'Connell, 2013) and found an approximately linear relationship between α-band power and RT (χ²(1) = 25.27, p<0.001) but not RTcv (p=0.48). In line with previous research (Hong et al., 2014), the pupil diameter response was inversely related to α-band power (Figure 3B), displaying an approximately linear relationship (χ²(1) = 28.24, p<0.001), suggesting that pre-target attentional engagement is related to phasic arousal. We next focused on response-related motor activity in the form of left-hemispheric β-power (LHB). LHB decreases before a button press and has been shown to reflect the motor-output stage of perceptual decision making, but also to trace decision formation, reflecting the build-up of choice-selective activity (Donner et al., 2009).
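Band-limited power measures such as the pre-target α-power (8-13 Hz) above are commonly estimated from the power spectral density of each epoch. This is a generic sketch, not the paper's pipeline; the Welch estimator, sampling rate and segment length are assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs=256, band=(8.0, 13.0)):
    """Mean spectral power of one epoch in a frequency band
    (default: the alpha band, 8-13 Hz)."""
    f, pxx = welch(eeg, fs=fs, nperseg=min(len(eeg), fs))
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()
```

Computing this over the pre-target window of every trial gives the per-trial α-power values that are then binned and regressed against RT, exactly as done for the pupil measures. The same function with `band=(15.0, 30.0)` over left motor electrodes would give an LHB-style measure.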
Here we investigated the LHB amplitude and build-up rate preceding the response (Figure 3C). We found that neither LHB amplitude (p=0.63) nor LHB build-up rate varied with the pupil response.

Target selection signals do not correlate with the phasic pupil response

Next we investigated the N2 (Figure 3D-F), a stimulus-locked early target selection signal that has been shown to predict behavioural performance and modulate the onset and build-up rate of the CPP. Because of the spatial nature of the task, we analysed the negative deflection over both the contra- (N2c) and ipsi-lateral (N2i) hemisphere, relative to the target location. The pupil response was not predictive of any aspect of the N2. Specifically, phasic arousal was not predictive of N2c latency (p=0.82) or amplitude (p=0.64), nor did we find any relationship between the pupil response and the N2c ITPC (p=0.14). Likewise, the pupil response was not predictive of N2i latency (p=0.64), amplitude (p=0.11) or ITPC (p=0.87).

The impact of phasic arousal on task performance is mainly mediated by the consistency in decision processing

We found that pupil-linked phasic arousal was predictive of specific neural signals at multiple information processing stages of perceptual decision making. To test which of these signals explained unique variability in behavioural performance across the five pupil response bins and subjects, the neural signals were added to a linear mixed effects model predicting either RT or RTcv, with their order of entry determined hierarchically by their temporal order in the decision-making process. This allowed us to test whether each successive stage of neural processing would improve the fit of the model to the behavioural data, over and above the fit of the previous stage. Compared to the baseline model predicting RT with pupil bin, the addition of pre-target α-power significantly improved the model fit (χ²(1) = 10.30, p<0.001).
None of the measures of early target selection improved the fit of the model: neither N2c latency (χ²(1) = 0.14, p=0.70) nor amplitude (χ²(1) = 0.94, p=0.33), nor N2i latency (χ²(1) = 2.39, p=0.12) or amplitude (χ²(1) = 2.39, p=0.12). We found that both the addition of CPP onset (χ²(1) = 8.24, p=0.004) and of build-up rate (χ²(1) = 4.90, p=0.027) significantly improved the model fit. Whereas the addition of CPP amplitude did not (χ²(1) = 1.43, p=0.23), the addition of CPP ITPC substantially improved the fit of the model (χ²(1) = 19.25, p<0.001). Neither the LHB build-up rate nor amplitude improved the fit of the model (LHB build-up rate: χ²(1) = 0.02, p=0.88; amplitude: χ²(1) = 0.64, p=0.42). Overall, this model suggested that pre-target α-power, CPP onset, build-up rate and ITPC exert partially independent influences on RT. Because some variables were highly correlated (e.g. CPP onset and ITPC) we used an algorithm for forward/backward stepwise model selection (Venables and Ripley, 2002) to test whether each neural signal indeed explained independent variability that is not explained by any of the other signals. This procedure eliminated CPP onset (F(1) = 0.06, p=0.80) and build-up rate (F(1) = 1.86, p=0.17) from the final model. Thus, only pre-target α-power and CPP ITPC significantly improved the model fit for predicting RT. These two variables were forced into one linear mixed effects model predicting RT (Statistical analyses), and comparison to a baseline model revealed a good fit (χ²(2) = 38.61, p<0.001). The fixed effects of the model (the neural signals) explained 15.8% of the variability in RT (marginal r²) across the five pupil response bins, and together with the random effects (across-subject variability) it explained 92.6% of the variability (conditional r²). We performed the same hierarchical regression analysis to see which neural signals explained variability in RTcv.
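The marginal and conditional r² reported above follow the standard variance-decomposition definition for mixed models (Nakagawa and Schielzeth, 2013): marginal r² uses the fixed-effects variance only, conditional r² adds the random-effects (here, across-subject) variance. A large gap between the two, as in the 15.8% vs 92.6% reported here, simply means most explained variance sits in subject-level differences. A minimal sketch, assuming the variance components have already been extracted from the fitted model:

```python
def r2_mixed(var_fixed, var_random, var_residual):
    """Marginal and conditional R^2 for a linear mixed model
    (Nakagawa and Schielzeth, 2013): fixed effects only vs fixed + random."""
    total = var_fixed + var_random + var_residual
    marginal = var_fixed / total
    conditional = (var_fixed + var_random) / total
    return marginal, conditional
```

For example, `r2_mixed(1.0, 4.0, 5.0)` gives a marginal r² of 0.1 but a conditional r² of 0.5, mirroring how subject-level random effects can dominate the explained variance.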
We summarised the results of this analysis in Supplementary file 2, and report the most important results here. The hierarchical regression analysis revealed that both CPP onset and CPP ITPC improved the model fit, but eliminated CPP onset after the forward/backward model selection. Consequently, CPP ITPC was the only variable that exerted independent influence on RTcv. Comparison against a baseline model revealed a significant fit (χ²(1) = 15.36, p<0.001) that had a marginal r² of 16.0% and a conditional r² of 45.9%. To test whether our assumptions about the temporal order of the neural signals influenced these results, we fitted a model in which all EEG signatures were added at the same time and investigated their coefficients. This analysis did not identify any additional neural components to those that were found using the hierarchical regression analysis (Supplementary file 3). Table 1 shows the final parameter estimates for the neural signals that significantly predicted variability in RT or RTcv that is due to variability in phasic arousal. From this analysis we can conclude that CPP ITPC was the strongest predictor for RT and the only predictor for RTcv. These results provide novel insight into the mechanism by which the neuromodulators that control arousal can influence behaviour. The impact of these modulators on decision-making is thus mainly mediated by their effects on the consistency in decision formation. Next, we turn to tonic arousal and its relationship to these same EEG components of perceptual decision-making.

Baseline pupil diameter is inversely related to the consistency of decision processing

Figure 4 illustrates the relationship between baseline pupil diameter and the CPP. Unlike the pupil response, baseline pupil diameter was not predictive of the onset (p=0.20) or build-up rate (p=0.12), but it displayed an inverse relationship with both the amplitude (χ²(1) = 7.09, p=0.01) and the consistency of the CPP (χ²(1) = 9.34, p=0.002).
In line with previous research that revealed increased variability in the rate of evidence accumulation during periods with larger baseline pupil diameter (Murphy et al., 2014b), we found an inverse, approximately linear, relationship in which higher baseline pupil diameter displayed lower EEG signal consistency (Figure 4D). Thus, states of higher arousal are characterized by less consistency, that is more variability, in decision processing. Additionally, states of higher tonic arousal also display lower task performance (Figure 1C), indicating that higher variability in decision processing (due to higher tonic arousal) can affect behavioural performance.

Table 1. Parameter estimates for the final linear mixed effects model of RT and RTcv binned by the pupil diameter response or baseline. The only parameters included are the neural signals that significantly improved the model fit.

Baseline pupil diameter relates to spectral measures of baseline attentional engagement and motor output as well as early target selection signals

We found a relationship between baseline pupil diameter and specific characteristics of multiple neural processing stages of perceptual decision-making. Specifically, as observed before (Hong et al., 2014), pre-target α power (Figure 5A) varied with baseline pupil diameter in a non-monotonic, but not inverted-U-shaped, manner (χ²(1) = 4.49, p=0.034). This suggests that with higher tonic arousal, α activity is higher (or less desynchronised). Next, we tested whether baseline pupil diameter was predictive of EEG characteristics representing motor output (Figure 5B). We found an inverse relationship with LHB build-up rate (χ²(1) = 10.99, p<0.001), decreasing with larger baseline pupil diameter, but we did not find a relationship with LHB amplitude (p=0.34). Lastly, we investigated whether baseline pupil diameter affected early target selection signals, the N2 (Figure 5C-D).
Previous studies have revealed that baseline pupil diameter affected the size and variability of neural responses to visual and auditory stimuli (Reimer et al., 2014; McGinley et al., 2015a). Here we found that baseline pupil diameter was not predictive of the peak latency of the N2c (p=0.75), but that it did display a monotonic relationship with the N2c amplitude (χ²(1) = 13.72, p<0.001). Trials with larger baseline pupil diameter displayed smaller N2c amplitudes, suggesting that higher arousal has a negative impact on sensory encoding. N2c ITPC did not vary with baseline pupil diameter (p=0.25), and nor did N2i ITPC (p=0.33), N2i latency (p=0.78) or amplitude (p=0.06). We thus found that, similar to the phasic pupil diameter response, baseline pupil diameter is predictive of specific characteristics of each of the processing stages of perceptual decision-making. Next, we investigated which of these components explained unique variance in task performance across pupil size bins.

N2c amplitude and CPP ITPC are predictive of variability in task performance due to tonic arousal

We again performed the same hierarchical regression analysis as described above, to see which of the neural signals explained unique variability in task performance associated with tonic arousal. The full results of this analysis are summarised in Supplementary file 4. Here we discuss the main findings. After the application of a forward/backward model selection algorithm (Venables and Ripley, 2002), N2c amplitude and CPP ITPC were the only parameters that were predictive of RT (Table 1). These variables were forced into one regression model predicting RT, and comparison against a baseline model with baseline pupil diameter as a factor revealed a significant fit (χ²(2) = 31.6, p<0.001) with a marginal (conditional) r² of 4.1% (94.4%). This same hierarchical regression procedure revealed that CPP ITPC was the only EEG component that explained unique variability in RTcv (Table 1).
Comparison against a baseline model also led to a significant fit (χ²(1) = 26.83, p<0.001), with a marginal (conditional) r² of 11.9% (44.5%). None of the other EEG parameters that were excluded from the final model due to potential false assumptions about their temporal order revealed significant coefficients in a multilevel model analysis in which all components were added simultaneously (Supplementary file 5). Thus, additional to an effect of N2c amplitude on RT, the consistency of the CPP was the only stage of information processing that explained unique within- and across-subject variability in task performance associated with changes in baseline pupil diameter.

Discussion

Here we investigated whether behavioural and neural correlates of decision-making varied as a function of baseline or task-evoked pupil diameter, indexing tonic and phasic arousal, respectively. The perceptual decision-making paradigm employed (Figure 1A) allowed us to monitor the relationship between pupil diameter and independent measures of attentional engagement, early target selection, decision formation and motor output. We found that the trial-by-trial variability in both tonic and phasic arousal were predictive of behavioural performance (Figure 1B & C). For tonic arousal, this relationship was best described by a non-monotonic polynomial fit with slower RT for higher baseline pupil diameter. Higher phasic arousal, on the other hand, was predictive of better task performance. We furthermore established that both tonic and phasic arousal were predictive of a subset of EEG signatures, together reflecting discrete aspects of information processing underpinning perceptual decision-making. A hierarchical regression analysis allowed us to determine which of these processing stages exerted an independent influence on behavioural performance associated with central arousal.
We found that pre-target α power, indexing baseline attentional engagement, and the consistency of the CPP, reflecting decision formation, each explained unique variability in task performance that was due to variability in phasic arousal. Variability in task performance due to fluctuations in tonic arousal was explained by the amplitude of the target selection signal N2c and the consistency of the CPP. We thus revealed a direct relationship between both tonic and phasic measures of arousal and a distinct but overlapping set of EEG signatures of perceptual decision-making, and in particular the CPP.

The functional significance of the CPP during perceptual decision-making

Although the CPP has previously been found to reflect the accumulation of evidence (O'Connell et al., 2012; Kelly and O'Connell, 2013; Loughnane et al., 2016; Loughnane et al., 2018; Newman et al., 2017), as discussed in the results section, our task design does not allow us to unequivocally relate the CPP to a specific characteristic of the decision-making process such as evidence accumulation. Because of the temporal and spatial uncertainty of the target stimulus, rather than accumulating evidence over an extended period of time on trials with slower RT, target onset transients could be delayed or subjects could be employing different strategies for motion detection on different trials (e.g. verifying the presence of coherent motion in one stimulus versus the other stimulus). Decision signals such as the CPP or LHB could on single trials behave analogously to a step-like signal that across trials seems to be accumulating to a threshold (Latimer et al., 2015; but see Shadlen et al., 2016), potentially supported by neural mechanisms in V4 that increase their activity transiently in response to changes in motion coherence (Costagli et al., 2014).
Although we cannot discount that subjects use different strategies on different trials, previous studies in which subjects were required to monitor either one or multiple dot kinematograms revealed no differences in either RT or hit rate, and both the early target selection signals and the CPP scaled with the percentage of coherently moving dots. We additionally showed here that there is no fixed delay between target selection and response (Figure 2-figure supplement 1) and that there is thus variability in the duration of the sustained period of the task. Any relationship between arousal and the CPP is therefore not solely the result of fluctuations in the latency of the target onset transient.

Large phasic pupil responses are predictive of better task performance

We estimated the variability in phasic arousal using the amplitude of the task-evoked pupil diameter. Because of the sluggish nature of the pupil diameter response, pupil dilation after target onset likely reflects a combination of specific aspects of phasic arousal such as a response to target onset, decision formation as well as a motor response. Here we aimed to disentangle these different components by applying a general linear model on a single-trial basis. First we determined the fit for various models for each subject across trials (Figure 1-figure supplement 1), after which we applied the best-fitting (across-subject) model to each individual trial. We addressed the reliability of the estimation of each of the temporal components by comparing their relationship to behavioural performance to those of other measures of the amplitude of the pupil diameter response (Figure 1-figure supplement 2), excluding trials with high VIF values, and orthogonalizing the predictors (Figure 1-figure supplement 3), and comparing the results from the single-trial parameter estimation to those of groups of trials binned by RT (Figure 1-figure supplement 4).
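The single-trial pupil GLM described above can be sketched as follows: event regressors (a target-onset impulse, a sustained up-ramp between onset and response, and a response impulse) are convolved with a canonical pupil impulse response and fitted by least squares. The Hoeks and Levelt (1993) impulse-response parameters and all other specifics here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pupil_irf(t, n=10.1, t_max=0.93):
    """Canonical pupil impulse response (Hoeks and Levelt, 1993), peak-normalised."""
    h = (t ** n) * np.exp(-n * t / t_max)
    return h / h.max()

def fit_trial(pupil, onset_idx, resp_idx, fs=60):
    """Per-trial GLM: target-onset impulse, linear up-ramp (onset to response)
    and response impulse, each convolved with the IRF. Returns the three betas."""
    n_t = len(pupil)
    irf = pupil_irf(np.arange(0, 4, 1 / fs))
    def reg(stick):
        return np.convolve(stick, irf)[:n_t]
    onset = np.zeros(n_t); onset[onset_idx] = 1
    resp = np.zeros(n_t); resp[resp_idx] = 1
    ramp = np.zeros(n_t)
    ramp[onset_idx:resp_idx] = np.linspace(0, 1, resp_idx - onset_idx)
    X = np.column_stack([reg(onset), reg(ramp), reg(resp)])
    beta, *_ = np.linalg.lstsq(X, pupil, rcond=None)
    return beta
```

The first beta plays the role of the target-onset component amplitude analysed in the text; as the text notes, the sustained (ramp) component is harder to estimate reliably because its regressor partially overlaps the others.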
These results revealed that we can reliably estimate the target onset component, but that the estimation of the sustained component might not be as straightforward (Figure 1-figure supplement 3). Although the current measure of the pupil response to target onset is unlikely to be completely independent of the estimation of the sustained component, inclusion of this predictor increased the fit of the model and captured variability in the pupil time course likely to reflect the influence of phasic arousal specific to decision formation. This reduced the influence of this sustained part of the arousal response on the estimation of the target onset component (Figure 1-figure supplement 3). To the extent that we could reliably estimate the amplitude of the target onset component, we investigated its relationship to the behavioural and neural signatures of perceptual decision-making. Larger target onset responses, presumably reflecting a phasic response in neuromodulatory brainstem centres, were predictive of faster and less variable RT (Figure 1C), faster onset, larger build-up rates and higher consistency of the CPP (Figure 2), as well as lower pre-target occipital α-power (Figure 3). These results can be interpreted in light of the relationship between pupil dilations and the activity in brain areas such as the LC or BF (Rajkowski et al., 1994; Rajkowski et al., 2004). Likewise, cue detection is enhanced on trials with a larger cholinergic response (Parikh et al., 2007), and previous studies have found that large pupil responses were predictive of higher behavioural performance (Beatty, 1982b; but see Kristjansson et al., 2009), and decreased decision bias (de Gee et al., 2017). Additionally, poor performance upon pupil constrictions is in line with studies showing that sensory target detection is suboptimal when a transient LC or BF response is absent (Rajkowski et al., 1994; Parikh et al., 2007; Gritton et al., 2016).
Moreover, naturally occurring pupillary constrictions are preceded by transient activity decreases in the LC (Joshi et al., 2016), and are associated with increased synchronization of cortical activity, a signature of cortical down states, as well as suboptimal processing of visual stimuli (Reimer et al., 2014). Our results suggest that event-related pupillary constrictions could be associated with similar neural mechanisms. Trials with large pupil responses, and better task performance, were preceded by lower pre-target occipital α-power, that is more α desynchronization (Figure 3). In line with these results and previous studies (Kelly and O'Connell, 2013), lower pre-target α-power itself was predictive of higher task performance. Fluctuations in α synchronization have previously been related to variation in both arousal and attentional deployment (Ergenoglu et al., 2004; van Dijk et al., 2008; O'Connell et al., 2009; Kelly and O'Connell, 2013; Newman et al., 2016), often interpreted as a neurophysiological correlate of cortical excitability. Here, on trials with both higher phasic arousal and more α desynchronization, behavioural performance was better. This could indicate that fluctuations in phasic arousal and attentional engagement rely on similar neuromodulatory mechanisms. We additionally found that larger pupil responses were predictive of earlier onset latencies, faster build-up and higher consistency of the CPP signal (Figure 2). Thus the effects of the fluctuations in phasic arousal and attentional deployment on task performance are likely mediated by their effect on decision signals, and insofar as the CPP represents evidence accumulation (see above), these fluctuations could influence the build-to-threshold dynamics during perceptual decision-making.

Large baseline pupil diameter is predictive of relatively poorer task performance

We found a non-monotonic relationship between baseline pupil diameter and task performance (Figure 1B).
This relationship was, however, not significantly U-shaped; rather, we found slower RT with higher baseline pupil diameter. This effect was moreover only observed when the pupil diameter data was not high-pass filtered (Figure 1-figure supplement 6), indicating that slow changes (<0.01 Hz) in pupil diameter are driving the effects on task performance. In line with previous research (Hong et al., 2014), out of all the investigated EEG components, only pre-target α power displayed a small non-monotonic relationship with baseline pupil diameter. Approximately linear relationships were found with N2c amplitude and LHB build-up rate, as well as an inverse relationship with CPP amplitude and ITPC. Of these, only N2c amplitude and CPP ITPC explained within- and across-subject variability in task performance (Table 1). It thus seems that the effects of tonic arousal on task performance are mainly driven by an approximately linear relationship with target selection and the consistency of decision formation. These results appear at odds with a U-shaped relationship as predicted by the adaptive gain theory (Aston-Jones and Cohen, 2005), and found during auditory target detection tasks (Murphy et al., 2011; McGinley et al., 2015a). One potential reason that we did not find a U-shaped relationship with task performance is that we might not have observed the full range of possible baseline pupil diameter values, and thus not the full range of possible tonic arousal levels. Trials were presented in blocks of 18, after which subjects were allowed to take a short break, preventing them from becoming overly drowsy or too distracted. However, depending on the behavioural paradigm and task demands, the relationship between central arousal, performance and neural activity may take different forms (McGinley et al., 2015b).
Membrane potential recordings from sensory and association areas, as well as direct electrophysiological recordings from neuromodulatory brainstem centres during decision-making tasks, are needed to gain further insight in the exact mechanisms that drive the relationship between cortical state, sensory encoding, decision formation and task performance.

Variability in task performance due to pupil-linked arousal is best predicted by the consistency in decision formation

During epochs of quiet wakefulness, membrane potential fluctuations of neurons in visual, somatosensory and auditory cortex are closely tracked by baseline pupil diameter (Reimer et al., 2014; McGinley et al., 2015a). These fluctuations in subthreshold membrane potential are characteristic of changing cortical state. Small pupil diameter is characterized by prominent low-frequency (2-10 Hz) and nearly absent high-frequency oscillations (30-80 Hz), whereas larger pupil diameter is characterized by reduced low-frequency, but increased high-frequency oscillations (McGinley et al., 2015a; McGinley et al., 2015b). Thus, the average subthreshold membrane potential is most stable during intermediate pupil diameter, when neither low- nor high-frequency components predominate. States of lower variability are furthermore characterized by more reliable sensory responses, higher spike rates, increased neural gain and better behavioural performance (Reimer et al., 2014; McGinley et al., 2015a; McGinley et al., 2015b). In addition to activity in early sensory areas, there is some evidence that activity in higher-order association areas is also more reliable with intermediate arousal. During auditory target detection, human subjects displayed the least variable RT at intermediate baseline pupil diameter, as well as the highest amplitudes of the P3 component elicited by task-relevant stimuli (Murphy et al., 2011).
Here we found that the consistency of the CPP was the main EEG predictor of variability in task performance associated with both tonic and phasic arousal. For tonic arousal, our findings are largely in line with modelling studies which suggested that higher arousal is specifically predictive of more variability in evidence accumulation (Murphy et al., 2014b). For phasic arousal, higher consistency, and thus less variability, was found for larger pupil bins, which also displayed the best behavioural performance. These results suggest that similar neural mechanisms of cortical state described for sensory cortex (Reimer et al., 2014; McGinley et al., 2015b; McGinley et al., 2015a; Vinck et al., 2015) might also affect neurons in higher-order association areas (e.g. parietal cortex) and thereby influence evidence accumulation and task performance. Simultaneous pupil diameter and membrane potential recordings in parietal cortex during decision-making are needed to confirm this hypothesis.

Target selection signal amplitude is modulated by pupil-linked arousal

In the present study, we used a paradigm in which two stimuli were continuously presented and target occurrence was both spatially and temporally unpredictable. Successful target detection thus relied on locating and selecting sensory evidence from multiple sources of information. Loughnane et al. (2016) have shown that early target selection signals, which occur contralateral to the target stimulus (N2c), modulate sensory evidence accumulation and behavioural performance. Although previous studies have characterised the dependence of the quality of sensory responses on fluctuations in cortical state, as measured by baseline pupil diameter (Reimer et al., 2014; McGinley et al., 2015a; Vinck et al., 2015), to the best of our knowledge, the influence of pupil-linked arousal on target selection signals has not been described before.
Here, we showed that early target selection signals are modulated by tonic arousal such that larger baseline pupil diameter was predictive of smaller N2c amplitudes (Figure 5C). Moreover, the amplitude of the N2c also explained unique variability in task performance across pupil bins and subjects (Table 1). At first glance it seems counterintuitive that target selection signal amplitudes are decreased, whereas visual encoding in early visual cortex is enhanced, on trials with larger baseline pupil diameter (Vinck et al., 2015) or during pupil dilation (Reimer et al., 2014). These differences could be due to the nature of the recordings: the previous studies used invasive electrophysiology and calcium imaging, whereas we used scalp EEG, which limits especially the spatial resolution of our analyses that might be necessary to elucidate these effects (e.g. single-neuron orientation tuning). Alternatively, they could constitute differential effects of arousal on visual encoding and target selection. More likely, however, they are due to specific task demands, in particular our use of multiple simultaneously presented competing stimuli. Indeed, there is some evidence that an increase in arousal, as measured by pupil diameter, can increase the ability of a distractor to disrupt performance on a Go/No-Go task in non-human primates (Ebitz et al., 2014). At high arousal levels, performance might thus be negatively affected when the task requires the successful suppression of distracting information; that is, with higher arousal it is more difficult to focus on the task at hand (Aston-Jones and Cohen, 2005; McGinley et al., 2015b). On the current task, it might thus be more difficult to select and process information from one of the two competing stimuli during states of high arousal, leading to reduced N2c amplitude as well as reduced performance.
The overlap and dissociation between baseline pupil diameter and the pupil response

As in previous studies (Gilzenrat et al., 2010; Murphy et al., 2011; de Gee et al., 2014), we found a negative correlation between baseline pupil diameter and the size of the pupillary response. Both measures were predictive of task performance as well as of a unique, but overlapping, set of EEG signatures of perceptual decision-making. Because of the overlap in their effects on these EEG markers, in particular pre-target α power and CPP ITPC, it is possible that both (in part) reflect the same component of central arousal state. Although we removed (via linear regression) the variance in the pupil response that is due to fluctuations in the amplitude and the phase of the baseline pupil diameter, some variability in the baseline pupil diameter might not be fully dissociable from the pupil response, and both might thus reflect a noisy measure of tonic arousal. This interpretation is further supported by the finding that the relationship between the pupil response and task performance did not substantially change regardless of whether variability in the pupil response due to fluctuations in baseline amplitude and/or phase was removed or not (Figure 1-figure supplement 5). Importantly, however, the dissociation in the effect of baseline pupil diameter and the pupil response on these EEG markers, such as the effect on N2c amplitude, indicates that these measures also capture independent variability in central arousal (tonic and phasic) predictive of distinct information processing stages of decision-making.

Concluding remarks

In this study we investigated the relationship between measures of tonic and phasic pupil-linked arousal and behavioural and EEG measures of perceptual decision-making.
We found that trial-to-trial variability in both tonic and phasic arousal accounted for variability in task performance and was predictive of a unique, but overlapping, set of neural metrics of perceptual decision-making. Specifically, tonic arousal exerted its influence on task performance through its effects on early target selection signals and the consistency of decision formation. Phasic arousal, on the other hand, affected behaviour through its relation with attentional engagement as well as the consistency of decision formation. These results indicate that during decision-making both tonic and phasic activity in the (network of) neuromodulatory centres that control central arousal can affect behaviour during perceptual decision-making. Thus, fluctuations in central arousal, mediated by neuromodulatory brainstem centres, act on multiple timescales to influence task performance through their effects on attentional engagement, sensory processing as well as decision formation.

Task procedures

Subjects (n = 80) and methods are largely overlapping with the details and procedures described elsewhere (Newman et al., 2017). Here we summarise details necessary to understand this study, and we also describe procedures that differ from the previous study. Participants were seated in a darkened room, 56 cm from the stimulus display (21 inch CRT monitor, 85 Hz, 1024 × 768 resolution), and asked to perform a continuous bilateral variant (O'Connell et al., 2012; Kelly and O'Connell, 2013) of the random dot motion task (Newsome et al., 1989; Britten et al., 1992). Subjects fixated on a central dot while monitoring two peripheral patches of continuously presented, randomly moving dots (Figure 1A). At pseudorandom times, an intermittent period of coherent downward motion (50%) occurred in either the left or the right hemifield. Upon detection of coherent motion, participants responded with a speeded right-handed button press.
A total of 288 trials were presented over 16 blocks (18 trials per block). Data were collected under identical experimental procedures at either Monash University, Australia, or Trinity College Dublin, Ireland. The experimental protocol was approved by each University's human research ethics committee before testing (project number Monash University: 3658, Trinity College: SPREC012014-1), and carried out in accordance with approved guidelines. Informed consent was obtained from all participants before testing.

Data acquisition and preprocessing

Electroencephalogram (EEG) was recorded from 64 electrodes using an ActiveTwo (Biosemi, 512 Hz) system at Trinity College Dublin, Ireland, or a BrainAmp DC (Brainproducts, 500 Hz) at Monash University, Australia. Data were processed using both custom written scripts and EEGLAB functions (Delorme and Makeig, 2004) in Matlab (MathWorks). Noisy channels were interpolated, after which the data were notch filtered between 49-51 Hz, band-pass filtered (0.1-35 Hz), and re-referenced to the average reference. Data recorded using the Biosemi system were resampled to 500 Hz and combined with the data recorded with the Brainproducts system. Epochs were extracted from −800 to 2800 ms around target onset and baselined with respect to −100 to 0 ms before target onset. To minimize volume conduction and increase spatial specificity, for specific analyses the data were converted to current source density (Kayser and Tenke, 2006). We rejected trials from analyses if the reaction times were < 150 or > 1700 ms after coherent motion onset, if the EEG on any channel exceeded 100 μV, or if the subject broke fixation or blinked (see Pupillometry) during the analysis period of the trial: the 500 ms preceding target onset (26.59 ± 2.94) for pre-target α power activity, or the interval from 100 ms before target onset to 200 ms after the response (33.66 ± 3.95).
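The filtering and re-referencing steps described above can be sketched as follows. This is a minimal SciPy illustration, not the exact EEGLAB pipeline; the Butterworth filter orders are assumptions of the sketch.

```python
import numpy as np
from scipy import signal

def preprocess_eeg(data, fs=500.0):
    """Sketch of the EEG cleaning steps: notch out 49-51 Hz line noise,
    band-pass 0.1-35 Hz, then re-reference each sample to the average
    across channels. `data` is channels x samples."""
    # zero-phase notch filter around the 50 Hz line frequency
    b, a = signal.butter(2, [49, 51], btype='bandstop', fs=fs)
    data = signal.filtfilt(b, a, data, axis=-1)
    # zero-phase band-pass filter (0.1-35 Hz)
    b, a = signal.butter(2, [0.1, 35], btype='bandpass', fs=fs)
    data = signal.filtfilt(b, a, data, axis=-1)
    # average reference: subtract the across-channel mean at every sample
    return data - data.mean(axis=0, keepdims=True)
```
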
Pre-target α-band power (8-13 Hz), N2 amplitude and latency, CPP onset and build-up rate, and response-related β-power amplitude and build-up rate were computed largely in the same way as in Newman et al., 2017. Briefly, α-band power was computed over the 500 ms preceding target onset using temporal spectral evolution (TSE) methods (Thut et al., 2006), and pooled over two symmetrical parietal regions of interest, using channels O1, O2, PO3, PO4, PO7 and PO8. The N2 components were measured at electrodes P7 and P8, ipsi- and contralateral to the target location (Newman et al., 2017), and the CPP was measured at central electrode Pz. These signals were aggregated to an average waveform for each pupil bin and each participant. We determined the latency of the N2c/N2i as the time point with the most negative amplitude value in the stimulus-locked waveform between 150-400/200-450 ms, while N2c/N2i amplitude was measured as the mean amplitude inside a 100 ms window centered on the stimulus-locked grand average peak (266/340 ms) (Newman et al., 2017). Onset latency of the CPP was measured by performing running sample-point-by-sample-point t-tests against zero across each participant's stimulus-locked CPP waveforms. CPP onset was defined as the first point at which the amplitude reached significance at the 0.05 level for ≥ 15 consecutive points. Because we decreased our statistical power by binning the trials into five bins (see Pupillometry), we did not find an onset for every bin for a subset of subjects (baseline pupil diameter: 13 bins over 11 subjects; pupil response: 16 bins over 12 subjects). Because of our use of linear mixed effects analyses, these subjects could still be included in the analysis, with only the missing values being omitted. Both CPP build-up rate and amplitude were computed using the response-locked waveform of the CSD-transformed data to minimize influence from negative-going fronto-central scalp potentials (Kelly and O'Connell, 2013).
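The CPP onset rule described above (first time point where a running one-sample t-test against zero stays significant for at least 15 consecutive points) can be sketched as follows; the function name and array layout are assumptions of this sketch.

```python
import numpy as np
from scipy import stats

def cpp_onset(cpp_trials, times, alpha=0.05, run_length=15):
    """Estimate CPP onset latency: the first time point at which a
    one-sample t-test across trials is significant (p < alpha) for
    >= run_length consecutive samples.
    cpp_trials: trials x time array; times: matching time axis.
    Returns the onset time, or None if no such run exists."""
    _, p = stats.ttest_1samp(cpp_trials, 0.0, axis=0)
    sig = p < alpha
    run = 0
    for i, s in enumerate(sig):
        run = run + 1 if s else 0
        if run == run_length:
            return times[i - run_length + 1]  # start of the run
    return None
```
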
Build-up rate was defined as the slope of a straight line fitted to this signal in the window from −250 ms to −50 ms before response. CPP amplitude was defined as the mean amplitude within the 100 ms around the response. Response-related left hemisphere β-power (LHB, 20-35 Hz) was measured over the left motor cortex at electrode C3 using a short-time Fourier transform (STFT) with a 286 ms window size and 20 ms step size (O'Connell et al., 2012; Newman et al., 2017). We baselined LHB using an across-trial baseline for each subject. LHB amplitude was measured from the response-locked waveform in the window from −130 to −70 ms preceding the response, whereas the LHB build-up rate was defined as the slope of a straight line fitted to this same waveform in the 300 ms before the response. Inter-trial phase coherence (ITPC) was estimated using single-taper spectral methods from the Chronux toolbox (Bokil et al., 2010) and adapted scripts. We used a 256-sample (512 ms) sliding short-time window, with a step size of 25 samples (50 ms). This gave us a half bandwidth (W) of 1.95 Hz: W = (K + 1)/(2T), with K being the number of data tapers, K = 1, and T (s) being the length of the time window. Frequencies were estimated from 0.1 to 35 Hz.

Pupillometry

Eye movements and pupil data were recorded using an SR Research EyeLink eye tracker (EyeLink version 2.04, SR Research/SMI). Automatically identified blinks and saccades were linearly interpolated from 100 ms before to 100 ms after the event; the interpolated pupil data were then low-pass (< 6 Hz) or band-pass (0.01-6 Hz) filtered (second-order Butterworth). The instantaneous phase of the pupil diameter was calculated by taking the angle of the analytic signal acquired by applying the Hilbert transform to the filtered data. Epochs were extracted from −800 to 4800 ms around coherent motion onset.
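ITPC is the length of the mean resultant vector of the per-trial phase angles. A minimal sketch (the core quantity only, not the tapered Chronux spectral estimate), together with the half-bandwidth calculation quoted above:

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence: the length of the mean resultant
    vector of per-trial phase angles (radians), taken across trials
    (axis 0). 1 = identical phase on every trial, 0 = uniform phases."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases)), axis=0))

# half bandwidth of the single-taper estimate: W = (K + 1) / (2T)
K, T = 1, 0.512            # one taper, 512 ms (256-sample) window
W = (K + 1) / (2 * T)      # ~1.95 Hz, as in the text
```
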
Trials in which fixation errors or blinks occurred within the analysis period, from 100 ms before target onset to 200 ms after response, were excluded from analysis. Fixation errors were defined as gaze deviations of more than 3°. The pupil diameter was normalized by dividing by the maximum pupil diameter on any trial in the analysis window from 100 ms before target onset to 200 ms after the response for each subject, and baselined on a single-trial basis. We computed the baseline pupil diameter by averaging the pupil diameter in the 100 ms before target onset, and the baseline phase was calculated as the average phase angle in the 100 ms preceding target onset. We identified the shape of the neural input to the pupil system by applying various general linear models (GLMs) to the pupil time course (Hoeks and Levelt, 1993; de Gee et al., 2014; Murphy et al., 2016) with two temporal components corresponding to target and response onset (all models), and a third sustained component (models 2-9) whose shape varied across eight candidate models tested previously (Figure 5 in Murphy et al., 2016). In model 1, only the stimulus and response onset were modelled. The sustained component in the remaining models took the shape of: (model 2) a boxcar component with a constant amplitude throughout the decision interval; (model 3) a linear up-ramp that grew in amplitude with increasing decision time; (model 4) a ramp-to-threshold; (model 5) a linear decay with a starting amplitude that was larger for slower RTs but whose amplitude always terminated at zero; (model 6) a linear decay-to-threshold which began at a fixed amplitude and terminated at zero; (models 7-9) versions of the boxcar, up-ramp and down-ramp models in which the sustained component was normalized by the number of samples in that trial's decision interval.
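The baseline pupil measures described above can be sketched for a single trial as follows. The helper name is hypothetical; the text says "average phase angle", and the circular mean used here (which avoids wrap-around artefacts) is an assumption of this sketch.

```python
import numpy as np
from scipy.signal import hilbert

def baseline_measures(pupil, t):
    """Baseline pupil diameter and phase from one trial's trace.
    `t` is time relative to target onset (s); the baseline window is
    the 100 ms before target onset. Phase is the angle of the analytic
    signal (Hilbert transform of the mean-subtracted trace), averaged
    here as a circular mean."""
    phase = np.angle(hilbert(pupil - pupil.mean()))
    win = (t >= -0.1) & (t < 0.0)
    base_diam = pupil[win].mean()
    base_phase = np.angle(np.mean(np.exp(1j * phase[win])))
    return base_diam, base_phase
```
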
We convolved these onset, response and/or sustained temporal components with a pupil impulse response function (IRF) of the form h(t) = t^w · e^(−w·t/t_max), where w is the width (10.1) and t_max is the time-to-peak (930 ms) of the IRF (Hoeks and Levelt, 1993; de Gee et al., 2014; Murphy et al., 2016). Each model was regressed onto the concatenated band-pass filtered pupil diameter time series (from 800 ms before target onset to 2500 ms after the response). The Bayes information criterion, BIC = n · ln(SSR/n) + k · ln(n), was used to assess model fit, where n is the number of samples, SSR is the residual sum of squares, and k is the number of free parameters. The goodness of fit between any two models was assessed non-parametrically by applying Wilcoxon signed rank tests to their difference score. We found that the linear up-ramp model (model 3) provided the best fit to the data. Figure 1-figure supplement 1 illustrates the relative goodness-of-fit of each model, compared to the best-fitting linear up-ramp model, as well as the effect size of each of the components of the linear up-ramp model. To investigate the relationship between pupil-linked arousal and behavioural performance during decision-making, we binned our behavioural and EEG data according to either the baseline pupil diameter or the post-target pupil response (see below) into five equally sized bins (mean 49.63 ± SEM 0.81 trials, minimum bin size = 20 trials) (Figure 1B & C). The division into five bins allowed us to investigate possible quadratic trends in the data. We used linear regression to remove the trial-by-trial fluctuations in single-trial pupil amplitudes that could be due to inter-trial interval, target side, or baseline pupil diameter amplitude or phase, all factors that are known to influence the post-target pupil response and/or behavioural response times (Kristjansson et al., 2009; de Gee et al., 2014; Kloosterman et al., 2015; Newman et al., 2017).
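The IRF and the BIC above can be written compactly as follows; normalising the IRF to unit peak is an assumption of this sketch (the original may use a different amplitude scaling).

```python
import numpy as np

def pupil_irf(t, w=10.1, t_max=0.93):
    """Canonical pupil impulse response (Hoeks & Levelt, 1993):
    h(t) = t^w * exp(-w * t / t_max), which peaks at t = t_max.
    Normalised here to unit peak."""
    h = t ** w * np.exp(-w * t / t_max)
    return h / h.max()

def bic(n, ssr, k):
    """Bayes information criterion for a least-squares fit:
    BIC = n * ln(SSR / n) + k * ln(n)."""
    return n * np.log(ssr / n) + k * np.log(n)
```
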
To partial out the effect of phase, a circular variable, we used the sine and cosine of the phase as orthogonal, linear predictor variables (Fisher, 1993). To verify the (absence of) correlation between pupil baseline phase and response before and after the regression, we made use of the circstat toolbox (Berens, 2009). We estimated the task-evoked phasic arousal response according to various single-trial scalar measurements of the amplitude of the pupil response (Figure 1-figure supplement 2). The relationship between the average pupil diameter around RT and behavioural performance was best described by a non-monotonic, U-shaped relationship (Figure 1-figure supplement 2A). Because of the temporal low-pass characteristics of the peripheral pupil system (Hoeks and Levelt, 1993), trial-to-trial variation in RT can affect the measurement of the size of the pupil response. To remove the trial-to-trial fluctuations in pupil responses due to variations in RT, we removed these components via linear regression (de Gee et al., 2017; Urai et al., 2017). After the elimination of the contribution of RT to the pupil response, we still observed a U-shaped relationship with behavioural performance (Figure 1-figure supplement 2B). This measure of the pupil response, however, likely reflects both the transient response to target onset as well as any activity that occurs thereafter (e.g. during decision formation). Therefore, we aimed to isolate activity specific to the phasic response to target onset. To this end, we computed the mean, slope and linear projection (de Gee et al., 2014; Kloosterman et al., 2015) over a 400 ms time window around the peak of the derivative of the pupil IRF (636 ms using the canonical IRF), a time window in which activity occurring after the target-onset transient is, presumably, not yet reflected.
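Partialling out a circular variable via its sine and cosine, as described above, can be sketched as follows. The helper is hypothetical; residuals are re-centred on the original mean so the scale of the variable is preserved.

```python
import numpy as np

def partial_out_phase(y, phase):
    """Remove the circular influence of baseline pupil phase from y by
    regressing y on [1, sin(phase), cos(phase)] and keeping the
    residuals (plus the original mean of y)."""
    X = np.column_stack([np.ones_like(y), np.sin(phase), np.cos(phase)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid + y.mean()
```
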
We found an inverse relationship between each of these measures and behavioural performance (Figure 1-figure supplement 2C-E), with better behavioural performance for larger pupil responses. Although these results suggest that measurements of the pupil response in this time window reflect a different component of the neural input to the pupil system than the measurements of the amplitude around RT (Figure 1-figure supplement 2A & B), the use of any specific time window can be considered arbitrary. To further disentangle the pupil response into separate temporal components, we applied the best-fitting GLM, the linear up-ramp model (Figure 1-figure supplement 1), to individual trials by considering each individual trial as a separate condition (Bach et al., 2018). Because we reduced the amount of data used for the regression analysis by applying it to single-trial data, we tested whether this led to collinearity amongst the temporal components by computing the variance inflation factor (VIF). Although large VIF values do not necessarily imply that no conclusions can be drawn from regression analysis (O'Brien, 2007), as a rule of thumb, VIF values larger than 5 or 10 indicate that predictors are collinear (Sheather, 2009; James et al., 2017). When applying the GLM across all trials, the average VIF values are within the range of collinearity (Figure 1-figure supplement 3A-B). When we applied the same model to single-trial data, however, the average VIF values were substantially higher (Figure 1-figure supplement 3C-D). It seemed particularly problematic to reliably estimate the sustained and the response component, as their VIF scores were larger than 10. The target onset component, on the other hand, had an average VIF score of approximately 5.
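The VIF used above quantifies how well each predictor is explained by the others: VIF_j = 1 / (1 − R²_j), where R²_j comes from regressing column j on all remaining columns. A minimal sketch (the function name is an assumption):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of design matrix X.
    An intercept is added internally for each auxiliary regression."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```
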
Single-trial VIF estimates larger than five for target onset (39.34 ± 2.84% of trials) were mainly found on trials with short RT (Figure 1-figure supplement 3E), revealing that it is difficult to distinguish between these temporal components on short trials. The overall results were, however, not affected by these trials. Repeating the analysis while excluding trials with VIF values larger than five revealed the same relationship pattern between pupil response amplitude and behavioural performance (Figure 1-figure supplement 3F & G). Sorting the pupil diameter according to the estimate of the amplitude of the sustained component revealed that the largest sustained component occurred on trials with a small (or absent) response to target onset (Figure 1-figure supplement 3H). Rather than solely reflecting phasic arousal during decision formation, the presence of the sustained component could, for instance, indicate a compensatory mechanism for the absence of an early target onset transient. As the relationship with behavioural performance followed a downward trend when plotted against the target onset component (Figure 1C), and an upward trend when plotted against the sustained component (Figure 1-figure supplement 3I), together these effects could explain the U-shaped relationship between behavioural performance and the pupil response when measured as the average pupil diameter around RT (Figure 1-figure supplement 2A-B). Although a target-response-onset-only model was the worst-fitting model across trials (Figure 1-figure supplement 1), we tested whether a target-response-only model could reliably estimate the single-trial target-onset response amplitude.
The relationship between this component and behavioural performance (Figure 1-figure supplement 3K & L), however, strongly resembled the U-shaped relationship between behaviour and the pupil response amplitude when calculated as the mean amplitude around RT (Figure 1-figure supplement 2A), a measure likely to be confounded by both RT and the neural input that occurs after the target onset transient. This supports the notion that the inclusion of a sustained component can make the estimation of the target onset component (amongst others) more accurate, despite the potential collinearity of these predictors. Indeed, the difference in model fit (R²) is significantly larger than 0 for each individual subject (one-sided Wilcoxon signed rank test, data not shown). Figure 1-figure supplement 3J illustrates the average difference in R² values between single-trial models with and without the sustained component. Lastly, Figure 1-figure supplement 3M & N illustrate the actual pupil diameter time course and the single-trial fitted pupil diameter, revealing that this model is able to capture considerable variability in the pupil diameter trace. Next, we applied the same linear up-ramp model to five subsets of trials, binned by RT (average bin size: 50.01 ± 0.82 trials). This analysis revealed that the relationship between RT bin and the estimated amplitude of the target component (Figure 1-figure supplement 4A) follows a pattern that is highly similar to the relationship between single-trial estimates of the phasic pupil response to target onset and RT (Figure 1C), further supporting the notion that the single-trial GLM approach can accurately estimate the target onset transient. We again investigated the VIF values for each of the temporal components of the model applied to the binned data. Although the sustained and response components displayed relatively large values, the target onset component was smaller than 5.
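One way to eliminate collinearity amongst such temporal components is to sequentially orthogonalise the predictors (Gram-Schmidt), as done in the next control analysis. A minimal sketch; note that the column order determines which predictor keeps the shared variance:

```python
import numpy as np

def gram_schmidt(X):
    """Sequentially orthogonalise the columns of X: each column has its
    projection onto every earlier column removed, so later predictors
    retain only variance not shared with earlier ones."""
    Q = np.array(X, dtype=float)
    for j in range(Q.shape[1]):
        for i in range(j):
            v = Q[:, i]
            Q[:, j] -= ((Q[:, j] @ v) / (v @ v)) * v
    return Q
```
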
Again, large VIF values by themselves are not necessarily cause for concern: if a regression coefficient is statistically significant even when its VIF value is large, it is significant 'in the face of that collinearity' (O'Brien, 2007). To further exclude the possibility that large VIF values brought about these results, we repeated this analysis using the data binned according to RT in three or two bins (Figure 1-figure supplement 4B-C). These analyses also revealed smaller target onset component coefficients for larger RT, with progressively lower VIF values. Finally, we investigated the relationship between RT and the target onset component after Gram-Schmidt orthogonalization of the predictors (Figure 1-figure supplement 4D-E), which eliminated collinearity amongst the temporal components. After orthogonalization, we again found that the estimate of the β weights of the target onset component was inversely related to RT, both when estimated across bins of trials (Figure 1-figure supplement 4D) and when estimating this component on a single-trial basis (Figure 1-figure supplement 4E). Altogether, these analyses reveal that although the estimation of different temporal components contributing to a single-trial pupil diameter time course has to be done with caution, in the context of the various measures of the phasic pupil response (Figure 1-figure supplement 2) and the interpretation of VIF factors (Figure 1-figure supplement 3 & Figure 1-figure supplement 4), it is possible (in this dataset) to extract meaningful estimates of the target onset component.

Statistical analyses

We used RStudio (RStudio Team, 2016; http://www.rstudio.com) with the package lme4 (Bates et al., 2015) to perform a linear mixed effects analysis of the relationship between baseline pupil diameter or the pupil response and behavioural measures and EEG signatures of detection.
As fixed effects, we entered pupil bin (see Pupillometry) into the model. As random effects, we had separate intercepts for subjects, accounting for the repeated measurements within each subject. We sequentially tested the fit of a monotonic relationship (first-order polynomial) against a baseline model (zero-order polynomial), and a non-monotonic (second-order polynomial) against the monotonic fit, by means of maximum likelihood ratio tests, using orthogonal polynomial contrast attributes. The behavioural or EEG measure y was modelled as a linear combination of polynomial basis functions of the pupil bins (X), y = β₀ + β₁P₁(X) + β₂P₂(X), with β as the polynomial coefficients and P_k the orthogonal polynomial basis functions. This multilevel approach was preferred over a standard repeated measures analysis of variance (ANOVA) because it allowed us to test for first- and second-order polynomial relationships, as well as to account for missing values in the CPP onset estimation. We used a variant of the 'two-lines' approach (Simonsohn, 2017) to test for the presence of (inverted) U-shaped relationships when a second-order polynomial best fit the data. Using the same multilevel model, we fit two straight lines to the first and last set of two/three bins. For a non-monotonic relationship to be classified as U-shaped, both components needed to have significant coefficients of opposite sign. We iteratively tested the first 3 against the last 2, the first 2 against the last 3, or the first 2 against the last 2 bins (omitting the middle bin), stopping if both criteria were met (p < 0.05, Bonferroni corrected). To verify that the relationship between pupil diameter and task performance was not dependent on the binning procedure, we ran another regression analysis wherein we predicted single-trial RT by sequentially adding the linear and quadratic coefficients for baseline pupil diameter (BPD) and pupil response (PR), RT = β₀ + β₁BPD + β₂BPD² + β₃PR + β₄PR², with β as the polynomial coefficients.
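The 'two-lines' classification described above can be sketched with plain regressions (the paper applies it within the same multilevel model; this simplified, hypothetical helper uses ordinary least squares on the two segments):

```python
import numpy as np
from scipy import stats

def two_lines_u_test(x, y, split):
    """Simplified 'two-lines' check for a U-shape (Simonsohn, 2017):
    fit separate straight lines to the points at or below `split` and
    above it; classify as U-shaped when both slopes are significant
    (p < 0.05) and of opposite sign."""
    lo = x <= split
    a = stats.linregress(x[lo], y[lo])
    b = stats.linregress(x[~lo], y[~lo])
    u_shaped = (a.pvalue < 0.05 and b.pvalue < 0.05
                and np.sign(a.slope) != np.sign(b.slope))
    return u_shaped, a.slope, b.slope
```
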
We compared the first model to a random-intercept-only model including subject ID, inter-trial interval, stimulus side, as well as the trial and block number (to control for potential time-on-task effects), and tested the fit of subsequent models against the previous model fit. This analysis revealed a significant improvement for each step of the sequential analysis, for which the results and parameter estimates are shown in Supplementary file 1. These analyses confirm that both the size of the baseline pupil diameter and the pupil response are predictive of task performance on a single-trial basis. This relationship moreover follows a non-monotonic, quadratic function. After testing the relationship between behavioural and neural signatures of decision-making and pupillometric measures individually, the neural signals were added sequentially into consecutive regression models predicting RT and RTcv. This model had both a random intercept for each subject, allowing for different baseline levels of behavioural performance, and a random slope of pupil bin for each subject, which allowed for across-subject variation in the effect of pupil bin on behavioural performance. The hierarchical entry of the predictors allowed us to model the individual differences in behavioural performance as a function of the EEG signals representing each temporal stage of neural processing, starting with preparatory signals (α-power), then early target selection signals (N2), evidence accumulation (CPP), and motor preparation (LHB). The hierarchical addition of the predictors informed us whether each of the EEG signals reflecting successive stages of neural processing improved the fit of the model predicting the behavioural data. The signals that explained unique variance were then simultaneously forced into a simplified model predicting RT or RTcv, which made it possible to obtain accurate parameter estimates not contaminated by signals that were shown not to improve model fits.
Note that only subjects for whom we could determine the CPP onset latency for all bins were included in this hierarchical model. For this final model, all behavioural and neural variables were scaled between 0 and 1 across subjects according to the formula y_i = (x_i − min(x)) / (max(x) − min(x)), where y_i is the scaled variable and x_i is the variable to be scaled. This scaling procedure did not change the relationship of the variable within or across subjects, but scaled all predictor variables to the same range. Again, significance values were obtained by means of maximum likelihood ratio tests. Data plotted in all figures are the mean and the standard error of the mean (SEM) across subjects. Linear fits are plotted when first-order fits were superior to the zero-order (constant) fit; quadratic fits are plotted when second-order fits were superior to the first-order fit.
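The min-max scaling above can be written as a one-line helper (the function name is an assumption):

```python
import numpy as np

def scale01(x):
    """Min-max scale a variable to [0, 1]:
    y_i = (x_i - min(x)) / (max(x) - min(x)).
    Rank order and relative spacing of the values are preserved."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```
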
Linear array measurements of enhanced dynamic wedge and treatment planning system (TPS) calculation for 15 MV photon beam and comparison with electronic portal imaging device (EPID) measurements

Introduction. Enhanced dynamic wedges (EDW) are known to drastically increase radiation therapy treatment efficiency. This paper aims to compare linear array measurements of EDW with the calculations of a treatment planning system (TPS) and with electronic portal imaging device (EPID) measurements for 15 MV photon energy.

Materials and methods. A range of different field sizes and wedge angles (for the 15 MV photon beam) was measured with the linear chamber array CA 24 in a Blue water phantom. The measurement conditions were applied to the calculations of the commercial treatment planning system XIO CMS v.4.2.0 using a convolution algorithm. EPID measurements were done at an EPID-focus distance of 100 cm, with beam parameters the same as for the CA 24 measurements.

Results. Both depth doses and profiles were measured. EDW linear array measurements of profiles differ from the XIO CMS TPS calculations by around 0.5%. Profiles in the non-wedged direction and open field profiles practically do not differ. Percentage depth doses (PDDs) for all EDW measurements show a difference of not more than 0.2%, while the open field PDD is almost the same as the EDW PDD. Wedge factors for the 60° wedge angle were also examined, and the difference is up to 4%. EPID differs from the linear array by up to 5%.

Conclusions. The implementation of EDW in radiation therapy treatments provides clinicians with an effective tool for conformal radiotherapy treatment planning. If the modelling of the EDW beam in the TPS is done correctly, very good agreement between measurement and calculation is obtained, but the EPID cannot be used for reference measurements.

Introduction

Mechanical wedge filters (hard wedges) are often used in treatment planning as compensators of dose inhomogeneities in photon therapy.
Nowadays, they are often replaced by the Enhanced Dynamic Wedge (EDW). EDW is a technical solution of Varian Medical Systems, but other manufacturers also have solutions which achieve the same result (Elekta: omni wedge; Siemens: virtual wedge). The EDW technique achieves wedge-shaped dose distributions by the computer-controlled movement of one of the collimator jaws under simultaneous adjustment of the dose rate and the speed of the moving jaw. The relationship between the number of delivered monitor units and the position of the moving jaw is governed by lookup tables referred to as "Segmented Treatment Tables". The implementation of dynamic wedges in various radiation therapy planning (RTP) systems has already been described. 1,2 As with any other commissioning activity, great care must be taken to ensure that enhanced dynamic wedges are correctly modelled in the treatment planning system. To directly verify the computational accuracy of a treatment planning system, measurements need to be made with the accelerator set up to the same specifications as already planned. 3 This work was aimed at verifying EDW (described in detail in the literature) 4 in the treatment planning system (TPS) and at using patient set-up equipment to compare dosimetric and calculation results with electronic portal imaging device (EPID) measurements. In addition, a comparison with hard wedges is also presented. The electronic portal imaging device is a sophisticated accessory mounted at the stand of the accelerator; its amorphous silicon detector remains resistant to irradiation after the application of very high doses, and it has certain dosimetric characteristics which were also investigated here and are well described in the literature. 5

Linear array CA 24 measurements

The measurement of enhanced dynamic wedge profiles using a linear chamber array requires the integration of the dose during the entire exposure at each point of measurement.
It was done with the CA 24 (Scanditronix Wellhofer) and two electrometers, MD 240 and CU 500E, connected to the PC and the OmniPro 6.2A software. The linear array CA 24 consists of 23 ionization chambers, each with a volume of 0.147 cm³, a diameter of 0.6 cm and an active length of 0.33 cm. Neighbouring chambers are spaced 2 cm apart, and their long axes are parallel to the central axis of the beam. They are mounted on the holder of the Blue water phantom. The main feature of this linear array is that the profiles are measured directly in water, under the same conditions as the measurements of open field profiles or mechanical wedged field profiles. The beam data were collected according to the guidelines provided by Varian. 5,6 This consists of measurements of cross profiles and depth dose curves for the maximum (60°) and at least one intermediate wedge angle, in addition to measurements of the output factors. The calculated percentage depth dose curves (PDDs) and profiles were compared with data measured for 15 MV photons at a Varian Clinac 2100C. Square field sizes ranging from 4×4 cm² to 20×20 cm² were evaluated with measurements of PDDs and profile curves at several depths (build-up, 5 cm, 10 cm, and 20 cm).

EPID measurements

The features of the EPID are well described in the literature. [7][8][9][10][11][12][13] The aS1000 portal imager was positioned at a source-EPID surface distance (SSD) of 100 cm (not at the standard 140 cm). The standard calibration procedure was then applied under this condition. EDW fields of 4 cm × 4 cm, 10 cm × 10 cm, 15 cm × 15 cm, and 20 cm × 20 cm were imaged (using the EPID portal dosimetry mode) for wedge angles of 15 deg, 30 deg, 45 deg and 60 deg, with the collimator orientation and movement as for the CA 24 measurements. The collimator orientation for all measurements was 90 degrees with the Y1-IN wedge orientation (Y1 being the dynamic jaw).
The linearity of the pixel response with dose was checked, followed by the field measurements. The image acquired by the EPID from each EDW field irradiation is a 2D image with different pixel values and is closely related to the intensity map of the EDW field. The pixel values carry information about the intensity of the signal within the pixel area. Pixels lying on lines crossing the central-axis pixel create the in-plane and cross-plane profiles. One profile is in the direction of the moving jaw, showing the wedged distribution, and the other is perpendicular to the direction of the moving jaw. The remaining pixels lie off axis and can be used to create a 3D image of the wedged field. In order to extract useful information about the profiles, the central-axis pixel is assigned the value 100. All other pixels then receive a relative value, given by the ratio of their original pixel value to the central-axis value, along the in-plane and cross-plane profiles. The series of relative pixel values along both lines creates profiles comparable to those of the other measurement methods.

External beam treatment planning calculations

The treatment planning system used for this purpose was XIO CMS v.4.2.0 with the convolution algorithm. A virtual phantom of the size of the large Blue phantom (used for the measurements in water) was defined in the TPS, and the electron density of water was assigned to the inner space of the phantom. The EDW beam was created with the collimator and gantry orientation as in the water and EPID measurements, and with the appropriate field size, wedge angle, weight point definition, normalization, etc., imitating the measurements under the real conditions in water. The resulting calculated plan was analyzed with respect to the depth dose curve and the profiles at the selected depths (build-up, 5 cm, 10 cm and 20 cm). Dose values were read from the Dose Profile tool in the treatment planning space of XIO, at 5 mm intervals along the profile of the field.
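The relative-profile extraction described above (central-axis pixel set to 100, every other pixel scaled by its ratio to the central-axis value) can be sketched as follows. This is a minimal illustration, not the software actually used; the array shape and pixel values are hypothetical.

```python
import numpy as np

def extract_relative_profiles(image: np.ndarray):
    """Normalize an EPID image so the central-axis pixel reads 100,
    then extract the in-plane and cross-plane profiles."""
    cy, cx = image.shape[0] // 2, image.shape[1] // 2  # central-axis pixel
    cax_value = image[cy, cx]
    relative = 100.0 * image / cax_value      # each pixel as % of the CAX value
    inplane = relative[:, cx]                 # along the moving (wedged) jaw
    crossplane = relative[cy, :]              # perpendicular, non-wedged direction
    return inplane, crossplane

# Hypothetical 5x5 "image" with a wedge-like gradient along the rows
img = np.array([[r * 10.0 + 20.0] * 5 for r in range(5)])
inplane, crossplane = extract_relative_profiles(img)
```

With the gradient running along the wedged direction, the in-plane profile shows the wedge shape (100 at the centre) while the cross-plane profile stays flat, mirroring how the two extracted profiles are used in the comparison.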
These calculated profiles, as well as the profiles obtained with the CA 24 and the EPID, were compared to the profiles of hard wedges obtained using the Blue phantom and CC13 ionization chambers, collected during the commissioning and acceptance tests of this linear accelerator.

Hard wedge measurements and open field measurements

The measured data for open fields and hard wedges, collected during the commissioning and acceptance tests of the Varian 2100C linac, were used for this study. Only additional measurements for the 4×4 cm² field were collected during this survey, for all wedge angles and depths, since the Varian recommendations for commissioning do not include this field size as mandatory.

Percentage depth doses

The percentage depth dose curves of the open fields (measured with CC13 chambers), hard wedged fields (also measured with CC13 chambers), EDW fields (measured with the linear array CA 24, with PDD values extracted from profiles) and those calculated by XIO were compared. Generally speaking, the PDDs of open fields and EDW fields do not differ by more than 0.5%. The PDDs of open fields have a higher surface dose than the PDDs of hard wedged fields (dose extrapolated to the water surface at 0 cm depth) (Figure 1). This comes from the beam hardening under the mechanical wedge. The beam hardening effect is also clearly visible on the tail of the PDD curve of the mechanical wedge and gives a difference of around 2%. The PDDs generated from profiles measured with the CA 24 and those calculated by XIO are practically identical (a result of the EDW modelling in the TPS). PDDs could not be obtained with the EPID at this stage, since only measurements at the build-up depth were possible.

EPID profiles at build-up compared to linear array measurements at build-up

Profiles were obtained in the direction of the moving jaw, showing the wedge-shaped distribution. EDW profiles obtained with the EPID differ from the same profiles measured with the linear array by around 1%, up to a maximum of 2%, within the field (Table 1).
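The point-by-point comparison between EPID and linear array profiles (both normalized to 100 on the central axis) reduces to a simple difference expressed in percent of the central-axis dose. A sketch, with hypothetical in-field profile values (not the measured data behind Table 1):

```python
import numpy as np

def profile_difference(profile_a, profile_b):
    """Point-by-point difference of two relative profiles, each
    normalized to 100 at the central axis, in % of the CAX dose."""
    return np.asarray(profile_a, float) - np.asarray(profile_b, float)

# Hypothetical in-field points of a wedged profile:
epid  = [62.0, 81.5, 100.0, 121.0, 148.0]   # EPID-derived relative profile
array = [61.0, 80.0, 100.0, 120.0, 146.5]   # CA 24 linear array profile
diff = profile_difference(epid, array)
max_diff = float(np.max(np.abs(diff)))      # worst in-field disagreement, in %
```

Because both profiles share the 100% central-axis normalization, the raw difference is already in percent of the CAX dose, which is how the in-field agreement figures quoted in the text are expressed.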
At the edges of the fields, the EPID profiles showed a steeper gradient (dose fall-off) than the profiles obtained by the other methods. This applies to all wedge angles. The dose measured by the EPID outside the field (peripheral dose) was much larger than that measured by the CA 24 linear array. This is characteristic of all wedge angles and all field sizes (Figure 2).

Profiles measured by EPID in comparison with open beam profiles measured by ionization chamber

EDW profiles imaged by the EPID in the direction perpendicular to the jaw movement were also examined and compared to the open field profiles, which were measured with CC13 ionization chambers during the commissioning of the machine. Very good agreement was found (Table 2, Figure 3). This is not the case for the profiles of hard wedged fields measured in the non-wedged direction, where the interaction of the beam with the material of the hard wedge (beam hardening effect) influences the shape of the profile: a hard wedged profile shows a decrease in dose at the field edges in comparison with the EDW and open field profiles in the non-wedged direction.

Profiles of EDW fields measured by linear array in comparison with hard wedge profiles measured by ionization chambers

EDW linear array profiles differ more from hard wedge profiles in all cases, but this was expected due to the physical differences between the two techniques (Figure 4).

Profiles of EDW fields measured by linear array in comparison with the calculation of the XIO CMS TPS

In most cases, the dose values on the profiles differ by around 0.5% within the field, while outside the field XIO appears to underestimate the peripheral doses by a factor of 2.

EDW wedge factors

EDW wedge factors are strong functions of the field size. This is confirmed both by the measurements of wedge factors of EDW fields and by the calculation of the WF in the treatment planning system. This, of course, does not apply to the hard wedge, whose field-size dependence is almost negligible.
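The wedge factor discussed above is conventionally the ratio of the dose (or electrometer reading) with the wedge in place to the open-field dose under otherwise identical conditions. A minimal sketch; the readings below are hypothetical, chosen only to illustrate that the EDW wedge factor varies with field size while the definition itself stays the same:

```python
def wedge_factor(wedged_reading: float, open_reading: float) -> float:
    """WF = reading with the wedge in place / open-field reading,
    taken at the same depth, field size and SSD."""
    return wedged_reading / open_reading

# Hypothetical electrometer readings (nC) for a 60 deg EDW at two field sizes:
wf_4x4   = wedge_factor(wedged_reading=7.90, open_reading=10.00)   # 4x4 cm2
wf_20x20 = wedge_factor(wedged_reading=5.60, open_reading=10.00)   # 20x20 cm2
```

For a hard wedge, the two computed factors would be nearly equal; for the EDW they differ appreciably, which is the field-size dependence reported in Table 3.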
This is because mechanical wedges are always placed in the same position on the tray of the accelerator: since the central axis of the beam always passes through the same thickness of the wedge, it does not matter what field size is actually set (Table 3).

Discussion

For quality assurance (QA) in radiotherapy, in vivo methods or in vitro methods with phantoms can be used. 14 The latter can be used for routine QA or for reference measurements. The basic conclusion of our study is that the EPID aS1000 can be used for routine QA checks and for EDW verification, but not for commissioning. A further conclusion is that the implemented dose calculation algorithm describes the EDW treatment well. The peripheral dose of an EDW field is half the dose of a hard wedged field. The reason lies in the scatter outside the hard wedged field, due to the interaction of the beam with the material of the mechanical wedge. Clinically, this is an advantage of the EDW wedged field. The wedge angle is better preserved for the EDW than for hard wedges at all depths. The profile dose measured by the EPID outside the field (peripheral dose) was much larger than that measured by the CA 24 linear array, for all wedge angles and field sizes. The reason, as explained in the literature, might be the difference in the absorption of low-energy photons that occurs in the high-Z material of the detector. The photon spectrum changes with the distance from the central axis, and the region outside the field contains only scattered radiation. The difference in the profiles outside the field can therefore be assumed to come from the different responses to low-energy photons of the EPID's sensitive material and of the other dosimetric methods. Practically, all measurement techniques for the EDW give very satisfactory results in terms of the agreement of PDDs and profiles (Figure 5).
Still, standard dosimetric measurements cannot be replaced, and the EPID can be implemented as a verification tool when a new technique is introduced in the department.
Finding Wholes in the Metaverse: Posthuman Mystics as Agents of Evolutionary Contextualization

The Metaverse is a pervasive expression of technological culture whose impact will be global. First through knowledge, then through social networks, and now through geo-spatial data, AI (the foundation of the Metaverse) will connect all entities on Earth through digital means, thereby creating a three-dimensional informational and experiential layer across the world dubbed the Metaverse. The Metaverse has four characteristics: augmented reality, lifelogging, mirror worlds, and virtual reality. From the standpoint of Christian cultural engagement, a contextual theology has yet to be developed. In the work that follows, the Metaverse is engaged through a combination of contextualization and wholemaking from the standpoint of posthumanism and mysticism. The study focuses on evolutionary wholemaking as identified by Teilhard/Delio, while being guided by Bevans' five (early) models of contextualization. The method of contextual wholemaking enables new ways of seeing, embracing, communing, complexifying, and creating within the four spheres of the Metaverse. After the nature of the Metaverse is explored in the first half of the paper, insights gathered from the dialogue between contextual theology and culture are discussed in the second half.

Introduction

The Metaverse is a pervasive expression of technological culture whose impact will be global (Hermann and Browning 2021). From the standpoint of Christian cultural engagement, a contextual theology has yet to be developed. In the work that follows, I will engage the Metaverse primarily through contextualization (influenced by Teilhard) to begin to identify what a fruitful engagement between traditions might entail. A Metaverse (Smart et al. 2008, pp. 1-28) may be understood as a three-dimensional transparent space or sphere that will encompass the entire Earth.
Humans will live inside it: it will always be on, always sense human presence, always ready to answer any question, always ready to conduct business, to play, or to arrange a meet-up with friends (Ball 2020). In the Metaverse, everyday life will switch back and forth between virtual and augmented realities seamlessly. Contextualization brings together the experience of the past-namely, the biblical tradition as it has been lived in Scripture and throughout history, with the experience of the present, i.e., a particular context or culture (Bevans 2018, p. 2). Contextualization (or inculturation, local theology, or contextual theology) puts these two experiences in dialogue. These dialogues are the means through which Christian faith has renewed itself in its repeated novel expressions in cultures throughout history. In contextual work, it is prudent to lean into the commonalities between the two dialogue partners while honestly noting differences. Consequently, for our study here, what streams of the Christian tradition might be most closely aligned with the culture of the Metaverse? For reasons I will make clear below, I will engage the Metaverse with the evolutionary wholemaking tools the Christian Teilhardian tradition provides, as represented in the work of Ilia Delio. 1 Her work is compelling in this particular instance of dialogue, as she currently pioneers a stream of the Christian tradition that pays close attention to evolution, technology, posthumanism, and spirituality, all key tools for engaging the Metaverse. When approached with the sensibilities of the posthuman mystic, the method of evolutionary contextualization enables new ways of seeing, embracing, communing, complexifying, and creating within the Metaverse. 
Creating a Method of Contextual Engagement

In this section, I create a method of social engagement by combining wholemaking activities, as rooted in Jesus and continued in the writings of Teilhard (through Delio), with a contextualization model developed by Stephen Bevans. In addition, I bring together these two facets of engagement with the proposed agent of change: Delio's posthuman mystic. Through the convergence of these threads, I explore how the posthuman mystic is equipped to engage in contextual wholemaking in the Metaverse.

Wholemaking and Contextualization as Modes of Engagement

How Jesus performed his mission in ancient Palestine serves as an exemplar for how to engage with others in any context (Wright 1999, pp. 175, 182) and, specifically for this study, the Metaverse. Jesus experienced oneness with God, and he extended this love into the sphere of relationships through granting forgiveness, drawing people together, and healing all who came to him (Delio 2013, Loc 2761). He built on people's everyday experiences to demonstrate how God abides in the mundane (Loc 2780). Connecting Jesus' language to Teilhard (Loc 3154), Delio characterizes Jesus' kingdom activities as wholemaking (Loc 2796). Through his life and ministry, Jesus created wholes where previously there was separateness and division (Loc 2804). Jesus' practice of wholemaking expressed an instance (of the highest degree) of how the mission of God might be embodied in the world. In kind, how might those who seek to follow the pattern of Jesus practice wholemaking in the Metaverse? To situate the practices of Jesus in an evolutionary world, and relying on Teilhard (Delio 2013, Loc 2696), I looked at four dynamics that are constants for evolutionary change and that serve as key facets of wholemaking. 2 The first pattern is attraction. "God is the energy of wholeness and the irresistible lure to greater wholeness.
God is the integral whole that attracts every whole toward greater wholeness" (Delio 2013, Loc 3780). One must pay close attention to how the Spirit attracts one forward into the future (and one must respond) if one is to participate in wholemaking in a culture. Second, the Spirit continually draws all entities to commune with others, to unite and to become whole (Loc 1659). Humans are to fully join in this process. Third, having experienced communion, each entity is empowered to differentiate and complexify, to become even greater and more unique expressions of themselves, both communally and individually (Loc 2719). Fourth, beyond complexity, there exist emergent occasions of absolute novelty and creativity, utterly unpredictable, that arise in many contexts, not only currently but historically (Loc 660). Consistent with the patterns of the life of Jesus, this process of evolutionary change not only forms a fourfold pattern of human convivial life today, but it also describes the very basis by which the universe has been wholemaking from the Big Bang onward (Delio 2013, Loc 1151). In addition to wholemaking, I then explored contextualization as a sphere of engagement. Stephen Bevans originally identified five different models that Christians used throughout history to engage new cultures. 3 The translation model (Bevans 2018, pp. 8-12) describes an approach where the agent brings content (usually the gospel) and inserts it into the receiving culture. The anthropological model (pp. 12-15) reverses the approach: rather than bringing in something from the outside, it describes how agents serve as "treasure hunters" (p. 123), finding the incipient gospel that exists within the recipient culture, e.g., the "seeds of the word" (p. 13). The praxis (or liberation) model (pp.
16-19) works through a praxis-theory-praxis approach, i.e., one first engages the local context, then goes to Scripture and other resources, and finally returns to the culture for theologically informed social action. The synthetic model (pp. 19-22) is a dialogical process between the agent and receiver that, in various ways, integrates elements of the three contextualization models already listed. The final approach, the transcendental model (pp. 22-26), describes a subjective approach whereby either the agent or the recipient connects to God within and describes that experience as particularly revelatory.

Agents of Contextual Engagement

Through her creation of the posthuman mystic, Delio brings together two very different patterns of engagement and merges them into one: her mystic is not only deeply connected to, permeated by, and empowered by the divine, but is also posthuman. Posthuman is not to be confused with transhuman, i.e., going beyond one's biology and improving oneself through technology (p. 115). Instead, the posthuman is much more relational. The posthuman moves beyond the paradigm of the liberal autonomous subject of modernity (p. 115) as well as beyond what is normally considered human (which in Western history established the white male patriarch as normative). For Delio, the posthuman represents a decentered, highly relational mode of reality giving agency to everyone: not only to all other humans, but to natural and technological subjects as well. The posthuman mystic is well suited to be a dialogue partner with the culture and residents of the Metaverse. "The posthuman mystic is the one who has the courage to live in the God-ing moment, connecting and creating the art of life. Every moment is an opportunity to become more conscious of the divine depth in our midst.
To participate in God-ing energy means to be aware of the ineffable divine presence and to act from the energy of this presence, to participate creatively in God's becoming" (Delio 2020, p. 200).

The Posthuman Mystic and Contextualization as Wholemaking

Before beginning the action of evolutionary contextualization, the posthuman mystic must first inculcate a new mode of experiencing reality. They must first see that they are wholes all the way down (Delio 2020, p. 220), i.e., they are made up of wholes, from the energy that exists at the quantum level inside their bodies, up through the elements, molecules, and cells that make up their organs, to the very connections they maintain to all of life (Delio 2020, p. 223), including social groups, societies, the planet, the solar system and the galaxy, to the rest of the universe, and, ultimately, to God (Delio 2020, p. 224). Through an ever-expanding body of Christ, each one is deeply connected with the heart and center of the evolutionary universe. As a full participant in God's work of evolution today, the posthuman mystic sees that "there is nothing profane" (Teilhard, quoted by Delio 2020, p. 183) and that God is immersed in all of creation (xxiii). Posthuman mystics understand that all reality has God as its deepest essence, that God is to be vividly experienced, and that it is God who serves as their energy source to engage day-to-day realities (Delio 2020, p. 200). Equipped with these dispositions, the posthuman mystic is better able to see rightly and discern the Spirit's work in recipient cultures. I will now bring the three previously discussed modes of engagement together. For the posthuman mystic, all five of Bevans' (2018, pp. 8-26) models play a role in the contextual wholemaking task.
In terms of translation, there is a word that the posthuman mystic carries forward (consistent with what was stated above about the posthuman mystic's disposition): that each subject is created in wholeness, is connected to wholeness, and is carried forward in wholeness. The posthuman mystical agent utilizes the anthropological approach when they recognize that God's Spirit has been at work in the residents of the Metaverse: the messenger is the one who sees the wholeness in the context and gives a name to that implicit reality. The posthuman mystic recognizes no secular space, and even the Metaverse, as part of God's connected creation, is undergirded by holiness. The posthuman mystic listens for where God's Spirit, from within the Metaverse, might be whispering for the agent to join in its wholeness. In terms of the praxis model, wholemaking has dynamics of praxis and theory in its repertoire, and it is that agenda which particularly connects to the dispositions of the posthuman mystic (especially in the prophetic sense). In terms of the transcendental model, it is the subjective experience of the transcendent that one brings to the encounter with the other: neither bringing content nor finding content nor taking action is the pattern for the transcendental model; it is simply about sharing authentically from one's experience of God. The agent or recipient shares their process of relating to God, and that "message" communicates significantly more than any static presentation of the gospel (Bevans 2018, p. 25). Following Teilhard's dynamics of wholemaking leads the posthuman mystic to come to the context with empty hands while looking for invitations into wholeness. Questions that drive their contextual encounters might be the following: What does the posthuman mystic feel called to embrace? What is the posthuman mystic attracted to? With whom does the posthuman mystic unify?
After unification, how might the relationships give rise to new complexity? Finally, what creative novelties might the posthuman mystic engineer or facilitate as a consequence?

Towards a Method of Engagement

What are the tools through which posthuman mystics might serve as agents of evolutionary contextualization in the Metaverse? Posthuman mystics will follow Jesus' pattern of wholemaking as illuminated by the evolutionary wholemaking of Teilhard/Delio, while being guided by Bevans' five modes of contextualization. Just as Jesus served as a wholemaker in ancient Palestine, posthuman mystics seek to conduct similar work in these new realities. Having received God in their very depths, they realize that all reality shares the same source. They come to the context with the stance of a wholemaker as they seek to find and listen for wholeness in the four spheres of the Metaverse. As posthuman mystics, they share common cause with wholeness efforts that liberate humans and cultures from historic Western perspectives on whiteness, patriarchy and the notion of the detached liberal subject. For the posthuman mystic, the Metaverse provides unique opportunities for contextual wholemaking through fostering new ways of seeing, embracing, communing, complexifying, and creating. Having developed appropriate tools for engaging the Metaverse, I must first explore exactly what the Metaverse is.

Foundation of the Metaverse

AI (Artificial Intelligence) is the foundation of the four spheres of the Metaverse. What is AI? AI comprises those "digital technologies that perform tasks that traditionally required human intelligence, such as visual perception, speech recognition, decision-making and language translation. Until recently, only developers could develop AI. More recent breakthroughs allow computers to teach themselves by observing and collecting data, without the bottleneck of programming" (Scoble and Israel 2017, Loc 352).
It is also "the simulation of human intelligence using software and accompanying apps or machines" (Cronin and Scoble 2020, pp. 62-63). There would be no Metaverse without AI, as AI is now the foundation of each of the three iterations of the web. In recent history, AI continues to gain more knowledge and establish human connections. Less than 25 years ago, Google 4 set up a question-and-answer algorithm for AI to acquire all human understanding (through its search engine) (Kelly 2010, p. 37). Today, and every day, humans "volunteer" to teach AI through their 3.5 billion searches. 5 Less than twenty years ago, Facebook 6 set up a platform to capture social network data for AI to understand human relationships. Similar to Google, almost four billion people "volunteer" each day on social networks to teach AI all the details of their relational networks. 7 Moving into a third era of the Internet and AI, the next task of AI's development will be to understand the physical Earth (Kelly 2019). Right now, various initiatives exist to capture current global mapping efforts (6D.ai, now part of Niantic Labs, being the most significant). 8 It is expected that in the next few years, users will upload photos of all geophysical spaces to the web in three dimensions so that AI can create a digital twin of the Earth. With such a mirrored world in place, there will be a seamless connection between the physical and digital realms. Both worlds will be connected at all levels for all people, information, creatures, things, and spaces, and "from now on, we will live parallel lives in both the physical earth and the digital earth" (Kim 2021, p. 5). One name for this integrated physical and digital world is the Metaverse (Smart et al. 2008, p. 1). AI cannot support robots, automation, or self-driving cars until digital maps are made of the entire Earth (Cronin and Scoble 2020, pp. 54-55). AI cannot support a three-dimensional world until a digital twin of all reality is created.
As with prior iterations of teaching AI through Google and Facebook, it will not be corporations that upload massive amounts of data; it will happen in mundane ways as people upload pictures freely on their own (Cronin and Scoble 2020, p. 156).

The Metaverse

The term Metaverse was coined in the novel Snow Crash (Stephenson 1992). In that novel, the Metaverse is a follow-on to the internet, a three-dimensional space where one's avatar is a stand-in for one's actions in a simulated world. In 2007, through the facilitation of the Acceleration Studies Foundation, scholars and industry leaders across disciplines developed a 28-page document on the future of the internet, which they called the Metaverse (Smart et al. 2008, pp. 1-28). In that study, the authors develop four aspects of the Metaverse: augmented reality, lifelogging, mirror worlds, and virtual reality.

Augmented Reality

Augmented reality is that aspect of the Metaverse that gives new eyes to see the material world in an entirely different way. It is similar to lifelogging in that it seeks to add an additional layer of perception to the current experience of reality, and it is similar to mirror worlds in that it continually communicates with sensors in the environment and faces outward to an external world. Augmented reality occurs when an individual receives an enhanced view of the physical world through the use of an AR-capable device such as a phone, headset, or glasses. These enhancements most frequently consist of information or virtual items appearing on top of what is seen as physical reality. Moreover, these layers of information can be communicated through sound, and if some of these entities are part of the Internet of Things, they may be addressed through voice. AR seeks to bring humans closer to the world that surrounds them (Fink 2019, p. 31). "Augmented reality . . . has its historical antecedents in tools. Humanity has always sought tools to make people stronger, faster, and smarter.
AR is the ultimate expression of man's (sic) quest for mastery. It is a tool, like a club (Fink 2018, Loc 328)". AR has three components: (1) it is primarily real and has virtual components, (2) the virtual components can be interacted with, and (3) the virtual items are connected to the three-dimensional real world (Fink 2019, p. 27). Smart phones will continue to develop more and more AR capability, but this particular part of the Metaverse will not fully come of age until people use smart glasses. It is thought, at the time of writing, that these will emerge, likely by Apple, in 2023 (Cronin and Scoble 2020, p. 45). After that time, what is stored on phones today will likely move to headsets or glasses (Fink 2019, p. 79). Other companies have introduced versions of AR glasses in the recent past (most notably, Google Glass in 2013), but those product rollouts failed for a number of reasons. Typically, it is not until Apple comes out with its signature technology that large public adoption occurs (Cronin and Scoble 2020, p. 148). "The lenses of smart glasses will look a lot like simple eyeglasses . . . . These will contain tiny nano-technological screens that will appear as 90-inch TV screens six feet in front of you, creating an image density eight times greater than HDTV . . . . They can take something that is really in your field of view and replace it with computer-generated images that you will be able to actually touch and manipulate (Scoble and Israel 2017, Loc 285)". AR includes the Internet of Things. The IoT connects home electronics to the internet by adding technology to them. Every item will have small amounts of AI for communication purposes. "The Internet of Things (IoT) is the rapidly expanding network of physical objects such as devices, vehicles, buildings and other objects that contain embedded electronics, software, sensors and network connectivity. This enables things to collect and exchange data (Scoble and Israel 2017, Loc 2868)". 
These "things" are frequently addressed through smart assistants in the home such as Amazon's Alexa, Google's Assistant or Apple's Siri. 9 Smart assistants are increasing in their ability to successfully complete commands, be it to talk to the lights, the thermostat, the door, or appliances connected to the Internet of Things. Some features include conducting commerce through voice as well. As these assistants become smarter through AI, they will make more and more decisions. "An intelligent agent (IA) is a software agent capable of making decisions on behalf of users. Intelligent agents learn from each task they perform, thus becoming smarter over time, and eventually understanding user patterns so well that they can anticipate requests before users actually make them" (Scoble and Israel 2017, Loc 2865). AR truly offers a new way to see and interact with the world. The complexity of every item is now on display. It is through AR that objects talk and become part of the home (some speak to Alexa as a family member). 10 Everything becomes a subject through AR. There is an opportunity here to raise the value of what was previously considered an object, as each object now has information on it and, in some situations, can now speak. One's divided way of seeing the world can now be overcome through experiencing communication with all things.

Lifelogging

As with augmented reality, lifelogging is augmented as well, meaning that technologies are utilized to enhance the current practice of reality: tools are given to build on the current experience of everyday life. Moreover, lifelogging is similar to AR in that wearables will likely be worn (unobtrusive recording devices) to capture what is going on in people's lives (Kelly 2016, pp. 278-79). Where augmented reality is externally focused, lifelogging is personal and intimate (Smart et al. 2008, p. 14).
When people record their lives for their friend and family networks, they upload how it is they want their lives to be perceived by the world (Kim 2021, p. 11). These "documents" are not purported to be a fair and accurate rendering of their lives, but social media affords them a way to add another angle on reality in regard to their personal lives; hence, lifelogs are not a simulation but an actual representation of their lives. Because lifelogging is subjective, it is an internal view of each one's life rather than an external one as in AR. As with VR, lifelogging is personalized; however, lifelogging coincides with one's personal identity in the real world, whereas agency in VR is mediated through other-worldly avatars. Lifelogging, as a term, has a recent history. Vannevar Bush, Director of the National Institute of Science in 1945, created the term to describe how people may begin to record, through technology such as cameras and recorders, many parts of their lives (Lifelogging 2021). The most common use case of lifelogging is simply uploading elements of one's life for the world to see. Lifelogging occurs when one creates a video on YouTube, shares pictures on Instagram 11 , posts an update on Facebook, mouths a lip-sync video on Tiktok 12 , or writes their own blog post. One growing platform that has become much more significant recently is Twitch. 13 Designed primarily for (video) gamers who want to broadcast and narrate their gameplay, Twitch is a platform that allows users to "life stream" their day-to-day lives for others to follow. These presentations are not necessarily a 1:1 map of the reality of their lives-they unveil the version of themselves that they want the world to see (Kim 2021, p. 11). Many people are starting to conduct live streams 14 of their everyday lives (closer to non-stop recording), and it is likely that this trend will continue.
In the near future, people will likely record everything with a small wearable device (could be their AR glasses, could be a small lapel-like camera). The public is not currently ready for these changes, as there was reluctance to adopt earlier versions of this technology 15 . However, live streaming one's life as the norm is forecasted to grow. 16 In social media, people share very freely about their own lives. The draw for connecting to others is huge, and the amount of self-disclosure is unprecedented. After posting comes the waiting. First one records their life, through words, images, and videos; this is followed by posting these contributions, and, then, after the posting, they wait for their community to comment on these same words, images, and videos. Anticipating the responses of one's friends and followers plays a significant part in many people's lives today (Kim 2021, p. 66). Another way people record their lives is through "tracking" apps. 17 There are apps that focus on fitness, nutrition, wellness, meditation, sleep, and overall health monitoring, to name a few. People track themselves typically through a mobile device such as a digital watch or phone. The recording, tracking, and analysis of all data about themselves has become a regular routine for many. Lifelogging is a core part of the Metaverse going forward and is full of "sacred hotspots" (Delio 2020, p. 180). It truly is a watershed for humanity that so many people have the ability to disclose intimate details of their lives with such a wide global community. Any medium that allows one to see the world through the eyes of another and offers a glimpse of how others see their day-to-day reality is a potential gift to all. For that reason, posthuman mystics see lifelogging as seeds for the creation of holy spaces.

Mirror Worlds

Mirror worlds are the part of the Metaverse that creates a digital twin of the Earth for immersive experiences.
Mirror worlds are similar to virtual worlds in that they are modeling a world as accurately as possible, to be experienced in three dimensions. The only difference is that mirror worlds are modeling the Earth, and VR is modeling an alternative world. A similarity is that both are immersive simulated worlds. In VR, each one's avatar does not represent the real-world version of them, but in mirror worlds each one's avatar does represent their real-world self. Up to this time, Google Earth, presented in 2005, has been the most significant effort to map the Earth digitally (Smart et al. 2008, p. 9). Google Maps has done a huge portion of the work of mapping the world both in two dimensions and in visual aspects of three dimensions, especially helping drivers on the roads (Kim 2021, p. 104). Just as with body tracking apps, geographically based apps are constantly updated and offer the latest information. For many, apps such as Waze 18 and Google Maps give constantly updated information on the world. Mirror worlds are primarily built on Earth maps, and so, similar to AR, the focus is on connecting to a real external world (Smart et al. 2008, p. 9). Mirror worlds are where the real world is mapped in such a way that a three-dimensional rendering can occur. Some call this AR Clouds 19 , others call it "digital twins", "ubiquitous computing", or "onlife", and others call it the mirror world (Floridi 2014, p. 43; Kelly 2019). However, the key aspect of it is that all global public space will be mapped in a three-dimensional representation, and from that, everything digital, in regard to AR, can be built on this layer (Cronin and Scoble 2020, pp. 155-57). There are companies such as 6D.ai (now Niantic Labs) that plan to map everything in the world. They have a technology that can receive photos from a phone and add to the rendering of space already given by many others (Fink 2019, pp. 23, 145).
"Soon every stop sign, tree, pole, lane marker, restaurant sign, and far more insignificant details, will be mapped by many companies. Apple alone drove four million miles with 360-degree cameras and 3D sensors, in an attempt to make its maps better (Cronin and Scoble 2020, p. 56)". Mirror worlds are any type of activity online that mimics something similar in the face-to-face world. Therefore, there are online schools that completely lack physical classrooms, a physical campus, or, in some cases, teachers. These schools may be described as campuses or classrooms, but in essence, these are two-dimensional online versions (twins) of a physical school (Kim 2021, pp. 16-17). The same goes for food apps-people order as they would at a physical restaurant, but what they are doing is ordering from a simulated two-dimensional version of a restaurant. Other two-dimensional "twins" include online fan clubs, or Zoom (for a meeting), or Airbnb (as a hotel). These businesses mimic three-dimensional reality with two-dimensional apps (Kim 2021, pp. 16-17). Airbnb is similar to a hotel that hosts unlimited rooms for two million people a night-in reality, it is a digital twin of what a physical hotel might be (Kim 2021, pp. 110-11). With mirror worlds, object-aware sensors will be in all sorts of public spaces so as to constantly update the three-dimensional twin of the world (Smart et al. 2008, p. 17). Self-driving cars will constantly upload actual images of what is happening, street by street (Cronin and Scoble 2020, p. 55). Autonomous cars rely on highly accurate three-dimensional maps-e.g., the cars interact with the map, not physical reality per se-to conduct their actual driving. Each time they drive, and they see something that does not match the map (a new flagpole, road sign), these sensors update the mirror world (p. 55). Mirror worlds are invisible without special glasses.
With AR glasses, one sees a three-dimensional space with coordinates that one can manipulate (Cronin and Scoble 2020, p. 52). What if someone wants to leave a note on a park-bench? They might write the note through verbal commands to their glasses, or even write a digital note on their phone, and then leave the digital note on the digital twin of the bench. No one will physically see it, unless they are wearing AR glasses too (and how you keep that note secure is another issue!). Currently, warehouses and factories are being scanned with three-dimensional imaging so that everything can be placed digitally. Factories are using AR glasses in significant ways as well, e.g., put on the glasses (e.g., Microsoft HoloLens) and see where to put the box (arrows on floor, arrows on shelf . . . ) (Fink p. 14). Even workers are scanned by sensors as to their locations. Even before they start work, new employees may be trained on a three-dimensional digital twin of the actual factory floor. Robots and automation will work off of the digital twin to conduct their work. Everything will be updated with sensors that track what has been moved since the last images were taken (Cronin and Scoble 2020, p. 194). As the real world becomes mapped (as in a three-dimensional digital twin), each person will be able to go anywhere in the mapped world through their VR headset. They may visit any mapped city, or they can visit any mapped private space if that has been mapped (such as a living room of a distant friend). Imagine when storefronts gain immersive footholds in mirror worlds-instead of visiting a web page on a browser to place an order, one would enter that store (through their avatar) stroll around, look at things, and make purchases in three-dimensional space. In addition to shopping, the idea of going to work and conversing with co-workers, etc., all the while as one's avatar (and never leaving one's living room) is difficult to comprehend. 
Virtual Reality

Virtual reality (or the virtual world) is the fourth building block of the Metaverse (Smart et al. 2008, p. 6), and it is both simulated and internally focused. Similar to lifelogging, it revolves around people and their relationships: it is internal-everything works from each one's point of view and where one has agency. Similar to mirror worlds, it is an immersive simulation; the only difference is that this is gameplay that is based on an alternative world. Virtual reality involves both gameplay and storytelling in an immersive environment. "Virtual reality is about humanity's quest for immersion. It provides presence and agency in other worlds, in stories and myths, and it stretches from Plato's cave to religious rituals, theater, dark rides, theme parks, film, television, and video games (Fink 2018, Loc 328)". Through their avatar, one exists within the game and plays a key role in how the story develops (Smart et al. 2008, p. 6). These types of experiences were reflected in MMORPG 20 games (such as World of Warcraft) 21 that allowed millions of people across the world to join and participate in online gaming. Virtual reality takes those dynamics that much further, putting one in an environment so vivid and so real that one has the experience, with one's whole body, that one is in the game, that one is a real part of the story. VR requires people to suspend their disbelief, just as one does for TV, movies, novels, etc. That suspension allows one to immerse themselves in the story itself (Fink 2018, Loc 328). 22 Most VR games focus on goal-oriented tasks, but there are others that are more focused on the social world (Smart et al. 2008, p. 6). These VR experiences do not contain a story-one simply "hangs out", as their avatar, with others in their spaces.
Online platforms, such as Second Life 23 and Roblox 24 and, increasingly, Fortnite 25 , allow one to spend time with friends as an avatar in addition to any gameplay. Facebook, among others, is creating social VR experiences, where avatars hang out with avatars. 26 In VR, interactions are limited by the avatar one chooses in the game (Smart et al. 2008, p. 6). It governs how one can play, and chances to win are determined by working within the capabilities of an avatar. Roblox is an online game incredibly popular with 6-16 year olds (Kim 2021, p. 155). Children are spending more time in Roblox than any other platform, and what they are doing is "hanging out" together and playing (and creating) games. YouTube, the number one platform for Gen Z (1997-2012), lags far behind Roblox for younger Gen Z and Gen Alpha (2012) (Kim 2021, pp. 155-56). What is different about Roblox is that it is not one game, but hundreds of thousands of games, created not by companies, but by users. The platform itself makes it very easy to create games, and these creators may then charge people to play. Similar to social media before it, Roblox does not make the content; it just hosts the platform and offers tools for regular users to make content. Some envision that this particular platform may soon provide storefronts for an entirely new type of economy, one that would not only impact the gaming world but real-world economies as well. 27 VR is a venue that breeds cooperation, as gamers freely choose to go into another world with many others. They perform things together, as teammates, or as friends, or acquaintances in virtual games. Although every program has rules and constraints, many of these programs are designed to give as much latitude and flexibility as one needs within the game, while keeping to the primary story line. As mentioned, some platforms, however, will be just for socializing, and so the constraints are set to a minimum.
A Contextual Response to the Metaverse through Wholemaking

Having briefly established the characteristics of the four spheres of the Metaverse, I will explore what a posthuman mystic who utilizes contextual wholemaking might discover through engagement in the Metaverse.

Seeing the Metaverse in New Ways (There Is Nothing Profane!)

Before engaging the Metaverse with contextual wholemaking, the first act of the posthuman mystic is to see rightly. Just as the posthuman mystic would understand that God lives at the depths of each person (Delio 2020, p. 183), filling each with God's own self, the posthuman mystic would understand that God abides in the Metaverse as well, removing any sense of secularity. In AR, through the persistent and (now) observable physical connection of all reality, the augmented life might reveal that all reality is of a whole, and, similarly, for those who can see, that it is God-drenched (Delio 2013, Loc 1866). As posthuman mystics listen to the call of wholeness in AR, they would see an animated world filled with God as opposed to a profane technological world. The posthuman mystic would see that reality, both physical and augmented, is of one piece consisting of interrelated wholes. Through AR, the wholeness and holiness of all reality becomes more accessible as all things are sourced in God, connected and filled with intrinsic worth. A posthuman mystic would come to lifelogging prepared to receive the gift it has to offer, moving themselves towards social media with anticipation (Kelly 2010, p. 173). Just as one might participate in God-ing (Delio 2020, p. 200) in offline spaces, the posthuman mystic would realize that social media offers each person the opportunity to serve as witnesses to God's presence.
While recognizing that social media inequitably amplifies some voices while diminishing others, one cannot simply dismiss social media as profane and godless; returning to Teilhard, "there is nothing profane below here for those who know how to see (Delio 2020, p. 183)". The posthuman mystic realizes that at the base of all things they are one with all others. Just as the physical world oozes with God (Delio 2020, p. 194), three-dimensional simulated worlds may ooze as well. For the posthuman mystic, God dwells in all of reality-including mirror worlds. Having re-trained themselves to see beyond dualism, posthuman mystics will perceive that God inhabits both physical and digital reality-they are of a whole. Therefore, posthuman mystics will understand that these three-dimensional alternative spaces can truly be sacred spaces, even if the interactions are avatar-to-avatar. Posthuman mystics see mirror worlds as "pregnant with God" (Angela of Foligno) (Delio 2013, Loc 1872). When posthuman mystics engage in meditation, the inner spaciousness (Delio 2020, p. 181) they develop with God engages the entire universe (Delio 2020, p. 199). They do not leave that expanse when they join another world, such as in VR. The posthuman mystics' inner freedom would fill their avatar(s) as well, and through their avatar they would see and experience God in all things. Through virtual worlds, posthuman mystics learn to see how they are connected and related to everything. Through these mythic spaces, one observes how each thing is its own "I am", i.e., has its own identity and even its own being of some sort. It might be easier for a posthuman mystic to see the interconnectedness of all things through their avatar within a VR experience and, then, when they leave the virtual world, they might see their own world a bit clearer: all things connected in God.
Embracing, Responding to Evolutionary Attraction in the Metaverse

Having acquired a new way of seeing, the posthuman mystic is now ready to engage the Metaverse with the skills of a contextual wholemaker. The first task of wholemaking is to respond to the attraction of the whole. Being attracted to AR might feel a bit similar to magic. If a person can now talk to the stove-what to say when it talks back? What will the conversation be like? These invisible connections to all creation will feel surprising and new. Already humans have a real attraction to technological objects, e.g., to Alexa, to Roomba, and to other automated products in American homes, but these only represent the very beginning. There will be no limit to human communication with material reality in the home, and increasingly elsewhere. The number of things that will have interfaces for communication will continue to grow with no end in sight. The task of posthuman mystics, in these AR contexts, will be to discern where wholeness might be leading them, through technological interfaces. Lifelogging, through social media, draws people to one another as each one yearns for more wholeness in their connections. People share intimate details from their personal lives with the hopes to create new relationships and deepen others. When people are attracted to social media with the hopes for friendship, novelty, connection, and compassion, they are actually being invited into a deeper wholeness by the Holy Spirit. Posthuman mystics may listen to how wholeness might be leading them into these very familiar yet foreign spaces. Mirror worlds resemble their physical locales, but they are distinct because these spaces are accessed through their avatars. Posthuman mystics must ask to what and to whom they are attracted, and how they are being drawn by the Spirit. Do they seek to encounter a people, a culture, a space that is completely unknown?
Who is it that may offer them wholeness in the mirror worlds, and to whom may they return the favor? When posthuman mystics are drawn into VR, searching for novelty and camaraderie, they are being drawn into wholeness. It is the Spirit who leads them to connect with others-even if those others are simulated creatures of one kind or another. They need to listen to what or to whom they hope to encounter, and to pursue these invitations from the virtual worlds. The posthuman mystic must ask, what is it that I am allured to in this alternative world? Where is wholeness calling me?

Communing/Uniting in the Metaverse

Having experienced a new way of seeing as well as a new way to listen for wholeness' call, the posthuman mystic is now prepared to unify with others in the Metaverse. God becomes more, when, through AR, each one becomes more closely unified with others (Delio 2020, pp. 180-81). Posthuman mystics may be filled with God and then extend that grace to all the beings with whom they come in contact. Each one who they encounter in their space is an "I am" and worthy of respect for what they are in themselves. Posthuman mystics commune with others through the technology that is available to them. To participate with evolution in lifelogging through social media is to celebrate the diverse peoples of the world and one's newfound connection to them. God has not brought people to this place to reinforce differences, but for each to hold these differences in a new way as one experiences wholeness with others in a true community. A sense of wholeness is created through telling stories that inspire, through healing, through loving their enemies, and by peace-making, bringing people together, speaking against oppression, opposing hierarchy, and more. Once unified, each one might celebrate the many gifts they have experienced through their many diverse connections. Each one is invited to achieve an expanded consciousness (Delio 2020, p.
178) with others as they enter the space of the other through technology. The mapping of the entirety of the world gives humans access to the Earth in a way that has never been experienced. Through mirror worlds, posthuman mystics will be able to travel to anywhere in the world-through an immersive environment-and experience that space and other people through their avatars. These are locations and peoples that they may never have met in the real world. It will be different from their physical lives, of course, but their experience of diverse peoples and locales through their avatar will be much more meaningful than they ever imagined possible. They will see that, through mirror worlds, they can learn to overcome difference in a way that gives them a wholeness they have never experienced before. Posthuman mystics may conceive of mirror worlds as spaces where God creates a hospitable environment for them to grow and to become one with the world through their avatars. In virtual worlds, team play is paramount. Although there are countless solo games, many of the most impactful games are the MMORPG games of the last twenty years. Players form connections around a quest, and are given certain tools, possible pathways, and specific skill sets, which together engage players at a level of community that many have never experienced in the real world. It is through these connections, avatar to avatar, that one might experience wholeness (Delio 2020, p. 183) through the shared experiences of camaraderie and friendship in virtual worlds.

Complexifying the Metaverse

Having learned to see anew, listen anew, and commune anew, the posthuman mystic is now ready to experience a new level of their own being as they travel more deeply into wholeness in the Metaverse. As each mystic becomes more unified within their AR space, their own consciousness becomes more complex, and they can witness their own evolution as a person here, a wholeness beyond wholeness.
In AR, posthuman mystics connect and unify with other humans and machines at all levels, seeing them as extensions of themselves. Posthuman mystics celebrate the joint becoming and complexification of technology and themselves, together. Just as with AR where posthuman mystics unify with all things, in lifelogging each one seeks to make deeper and more complex connections with people. Through each loving connection that is created from person to person, through technology, the posthuman mystic shares "more being" with God (Delio 2020, p. 191). Complex consciousness is formed when people share the depths of their lives with one another. Through these more compelling interactions, God is born in their midst. When people notice, through their connections, that something more is appearing, an overflow of some kind that they cannot explain, they can be sure that God has complexified these relationships and is creating something new: God is that unaccounted for overflow (p. 191). Mirror worlds make room for a different kind of friendship, connection, and novelty through one's avatars. Through the experience of unity and wholeness that they feel amidst the diversity and beauty of mirror worlds, posthuman mystics awaken to the depths within one another, and they recognize that where diverse peoples, through their avatars, connect and make room for one another, there is God with them. Through these experiences, something new is born among them-they have experienced a shared communion as avatars with those who were formerly strangers-and they have become a new kind of complex people. In VR, one is drawn to experience wholeness with team members as they take on new challenges to go beyond what was previously thought possible. It is in these peak experiences that one is invited into the new that brings a whole way of being to any world. Once a posthuman mystic has collaborated with their team, inhabiting alternative worlds becomes a part of their overall identity. 
Even as an avatar, God works through the team as they participate in God-ing the space. Depending on the constraints on the platform and the individual avatars, one's team may be able to lean towards inclusion, towards the marginalized, towards generativity, creativity, generosity, and, when they connect to another avatar in love, Christ is born. These encounters broaden their world, and they evolve through these life-changing events. One's team may self-empty on behalf of another, whether it is part of a game or simply in a social VR space. Giving oneself over to another in love and creating that hospitable space for the other to be themselves fosters a kind of complex community that extends the deep sense of shared life even further.

Creating in the Metaverse

Having experienced a new way of seeing, hearing, communing, and complexifying, posthuman mystics in AR will co-create as wholemakers in God (Delio 2020, p. 210). As each posthuman mystic personalizes their AR space through creative pursuits, they watch as God creates beauty with and through them in these new digital spaces. These posthuman mystics live out their ultimate purpose in communion with AR, now that all these new tools have become a part of their very identity. Having found their new complex voice, posthuman mystics serve as wholemakers who inspire others to begin a similar path. Having experienced a complex unity in social media through lifelogging, posthuman mystics will then personalize and create freely in that space. Wholeness manifests when people have freely loved, especially across existing social divisions. Through their lifelogging, posthuman mystics might share creatively in these spaces through art, photos, videos, live streaming, or in some other way. They may also inspire the creativity of others through their newfound identity and freedom. Posthuman mystics know they are working as wholemakers when they enlarge the possibilities (Kelly 2010, p.
349) for others to experience their true creative selves. Having experienced wholemaking with diverse avatars in mirror worlds, posthuman mystics will then consider how they might create as they build on their newfound identity. Might these mystical avatars create entirely new kinds of art or new kinds of service to others (or even new kinds of churches)? Their creativity will recognize and welcome God's actions in these spaces as they serve and support the diverse avatars in their midst. Once players have experienced the very depths of trusted team play in a VR game, they move from being new users to being creators. Going further, players might align themselves more fully with the game, giving themselves to areas of the world they may have skipped or ignored before. They become more committed to the game by learning more skills and taking on more responsibilities. They might explore the outer limits of the platform, possibly creating new things not considered possible. At some point, they might even learn the platform so well that they figure out how to modify the very structure of the game, or even how to break it, in a sense, for their own purposes (for example, finding a way to socialize in a game that is very story driven, or writing a "mod" for the game that changes its very structure and makes it "whole" in a new way). In some of these worlds, and in the gaming world in general, moving from an apprentice to a master is the ultimate path, and is, again, a type of experienced wholeness. Through the method of contextual wholemaking, posthuman mystics embrace AR, and they discern how wholeness is drawing them into unity and more complexity, as everything is connected and communicates with one another. New creations emerge from these complex AR connections, as posthuman mystics are equipped and ready to be agents of wholeness in these hybrid spaces.
Posthuman mystics embrace lifelogging as they hear where wholeness is calling them to share themselves deeply with one another so that they become unified and begin to acquire a complex consciousness with their greater community. After losing themselves in community on behalf of others, creativity is born for all who make the journey. As with the other spheres, posthuman mystics embrace mirror worlds as wholeness invites them to make their home in these spaces as new complex relationships emerge through their avatars, ultimately creating a new reality together. Through these new synergies, posthuman mystics are equipped to discern how this particular sphere of the Metaverse may emerge going forward. As with the other Metaverse spheres, posthuman mystics embrace virtual worlds as they transform these spheres from within. Posthuman mystics love virtual worlds, seek community there, fully give themselves to their team to gain an even deeper and more complex community, and then are empowered to create new expressions of life.

Conclusions

In this study, I created a contextual method to engage the four characteristics of the Metaverse: augmented reality, lifelogging, mirror worlds, and virtual reality. Building on Jesus' pattern of wholemaking with the evolutionary paradigm of Teilhard/Delio, while taking its cue from five of Bevans' modes of contextualization, the tool enables the work of the posthuman mystic. The posthuman mystic is one who, having received God in one's depth, realizes that all reality shares the same source. The posthuman mystic comes to the context with a message of wholeness as she seeks to find and listen for wholeness in every space embedded in the Metaverse. Having developed these appropriate tools for engaging the Metaverse, I engaged the Metaverse in dialogue with evolutionary contextualization-after exploring exactly what the four features of the Metaverse are.
Through this brief study, I have demonstrated how posthuman mystics, using the method of contextual wholemaking developed here-joining with machines and avatars in diverse worlds and giving themselves over to new ways of seeing (nothing profane!), responding (to evolutionary attraction), uniting (in community), and complexifying (inspiring a depth of creativity)-reveal an evolutionary approach of Christian engagement with the coming Metaverse.

Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Conflicts of Interest: The author declares no conflict of interest.
Multiple Treatment Cycles of Neural Stem Cell Delivered Oncolytic Adenovirus for the Treatment of Glioblastoma

Simple Summary

The human body's ten trillion cells are constantly assailed with environmental insults and genetic susceptibilities that can initiate tumor formation. Yet, most people live cancer-free for decades. This bewildering feat is due, in part, to the remarkable ability of our immune system to recognize and eliminate tumor cells. Unfortunately, 12,000 Americans/year are diagnosed with a rare, aggressive, and fatal tumor that escapes immune recognition: glioblastoma. Here, we continue efforts to develop a treatment capable of stimulating immune recognition of glioblastoma. The treatment is based on an oncolytic virus that causes tumor-selective infections. Neural stem cells are used to enhance viral distribution throughout the tumor. This study selects a dosing strategy to enable more comprehensive viral inoculation of the tumor than was possible in our previous clinical trial. This research demonstrates to the broader community that multiple-cycle oncolytic virotherapy may be therapeutically beneficial despite an anti-viral response after the first administration.

Abstract

Tumor tropic neural stem cells (NSCs) can improve the anti-tumor efficacy of oncovirotherapy agents by protecting them from rapid clearance by the immune system and delivering them to multiple distant tumor sites. We recently completed a first-in-human trial assessing the safety of a single intracerebral dose of NSC-delivered CRAd-Survivin-pk7 (NSC.CRAd-S-pk7) combined with radiation and chemotherapy in newly diagnosed high-grade glioma patients. The maximum feasible dose was determined to be 150 million NSC.CRAd-S-pk7 (1.875 × 10^11 viral particles). Higher doses were not assessed due to volume limitations for intracerebral administration and the inability to further concentrate the study agent.
It is possible that therapeutic efficacy could be maximized by administering even higher doses. Here, we report IND-enabling studies in which an improvement in treatment efficacy is achieved in immunocompetent mice by administering multiple treatment cycles intracerebrally. The results imply that pre-existing immunity does not preclude therapeutic benefits attainable by administering multiple rounds of an oncolytic adenovirus directly into the brain.

Introduction

Oncolytic virotherapy is a promising approach for treating drug- or radiation-refractory brain cancer. Oncolytic viruses (OVs) kill tumor cells directly via oncolysis and also indirectly, by stimulating anti-tumor immune responses. Among oncolytic viral species used in clinical trials, adenoviruses possess several advantageous properties, including relatively simple genetic modification, inherent immunogenicity, and high viral titer production [1]. To date, OVs from nine different families, including both DNA and RNA viruses, have been successfully transitioned from preclinical studies into 31 early-phase clinical trials in patients with brain tumors. Although a robust survival benefit remains to be shown in larger, randomized phase II/III trials, seven phase I/II trials have demonstrated remarkable tumor regressions in isolated patients. One reason that positive responses were not observed in the majority of OV-treated patients may be that optimal colonization of the tumor by the virus was limited by immune inactivation of the virus. Additionally, there was likely poor viral access to scattered infiltrative GBM cells that were separated from the main tumor mass by normal tissue [2][3][4][5]. Neural stem cells (NSCs) have the intrinsic capability to migrate to invasive primary and secondary brain tumor sites in various preclinical models [6], whether delivered intracranially in the opposite hemisphere, into the lateral ventricle, or intravenously [7].
NSCs can be used to deliver anti-cancer agents, including oncolytic adenoviruses, specifically to brain tumors [8]. Their use confers several advantages, including (1) protection of the virus from immune inactivation en route to tumor sites; (2) improved tumor penetration and distribution; and (3) an ability to carry virus across normal tissue to seed distant invasive tumor foci [9]. Our group uses NSCs as a delivery vehicle to improve oncolytic viral delivery, with a strong focus on a particular virus with potent anti-GBM activity (CRAd-S-pk7) [9]. We have previously demonstrated that NSC-mediated CRAd-S-pk7 delivery provides therapeutic added value when treating human glioma xenografts in immunodeficient mice [10,11]. Most recently, the clinical safety of CRAd-S-pk7-transduced NSCs (NSC.CRAd-S-pk7) was demonstrated in a first-in-human study in newly diagnosed high-grade glioma patients. The highest administered dose was a single, intracavitary injection of 150 million NSC.CRAd-S-pk7 cells (1.875 × 10^11 viral particles) [12,13], the maximum dose deliverable within a single infusion due to volume limitations for intracerebral administration [13]. Further dose escalation may be safe and provide additional efficacy benefits [14]. Here, we report IND-enabling data supporting the use of multiple administrations. A concern with repeated OV dosing is the inactivation of the virus due to the emergence of neutralizing antibodies [15]. Indeed, in the first-in-human study, anti-Ad5 neutralizing antibodies were detected in participants' blood within a week after administration of the single intracerebral dose of NSC-CRAd-S-pk7 [13]. However, it is unclear if these neutralizing antibodies were able to eliminate the OV. We hypothesize that, by being packaged within NSCs, CRAd-S-pk7 will be protected from destruction by neutralizing antibodies and complement while being transported to tumor foci by the NSCs.
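As a quick consistency check on the dose bookkeeping above (my own arithmetic, not a figure reported in the paper), the stated maximum feasible dose implies an average viral load per carrier cell:

```python
# Average viral particles (VP) carried per NSC at the maximum feasible dose.
# The totals come from the trial summary above; the per-cell value is derived here.
total_vp = 1.875e11      # total viral particles in the dose
n_nscs = 150e6           # 150 million NSC.CRAd-S-pk7 cells
vp_per_nsc = total_vp / n_nscs
print(vp_per_nsc)        # 1250.0 VP per cell
```

The paper reports only the totals; the per-cell load is a derived figure.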
Here, we test this hypothesis and investigate the therapeutic advantage of multiple administrations within both immunodeficient and immunocompetent mouse models.

Materials and Methods

Tumor cell culture. All cell lines were cultured in Dulbecco's Modified Eagle's Medium (DMEM) (Invitrogen, Waltham, MA, USA) supplemented with 10% fetal bovine serum (Gemini Bio, West Sacramento, CA, USA), 1% l-glutamine (Invitrogen), and 1% penicillin/streptomycin (Invitrogen), and maintained at 37 °C in a humidified incubator (Thermo Electron Corporation, Waltham, MA, USA) containing 6% CO2. Cells were passaged when they reached 80% confluency using a 0.25% trypsin/EDTA solution (Invitrogen); media were changed every 2-3 days. U251.eGFP was provided by Christine Brown. The GL261.ffluc line used for this study was a murine glioma cell line of C57BL/6J origin further modified in Behnam Badie's laboratory (City of Hope, Duarte, CA, USA) to stably express firefly luciferase. He generously provided some frozen vials to the Aboody laboratory (City of Hope, Duarte, CA, USA). Wild-type GL261 cells express low levels of MHC class I, but not class II, molecules and express some costimulatory molecules, resulting in a classification as moderately immunogenic.

In vitro verification of CRAd-S-pk7 tumor lysis. Tumor cells were plated at 5 × 10^5 cells per well in 6-well plates 24 h prior to exposure to CRAd-S-pk7 (MOI = 10). Incucyte software was utilized to capture time-elapsed photos based on the phase-contrast channel over a period of 3 days.

Neural stem cells. The v-myc-immortalized human HB1.F3.CD21 NSC line (approved by the Food and Drug Administration for human clinical trials via local injection, Identifier: NCT01172964) was obtained from Seung Kim (University of British Columbia, Vancouver, BC, Canada). Permission to use fetal tissue was granted to S. U.
Kim (University of British Columbia, Vancouver, BC, Canada) by the University of British Columbia Clinical Research Screening Committee for Studies Involving Human Subjects. Tissue was obtained from the Anatomical Pathology Department of Vancouver General Hospital. The HB1.F3 immortalized human NSC line was derived from primary cultures of fetal telencephalon (15 weeks gestation) by immortalization with an amphotropic, replication-incompetent retrovirus carrying the v-myc gene [16][17][18]. Clones were isolated, expanded, and designated as HB1 NSC lines [17,19]. One of these clones, HB1.F3, was transduced with the retroviral vector pMSCV-puro/CD, and clones were then isolated and expanded. HB1.F3.CD clone 21 was given to the City of Hope under a Material Transfer Agreement.

Production of HB1.F3.CD21.CRAd-S-pk7 cell banks. The research-grade clinical-equivalent NSC-CRAd-S-pk7 cell banks (Banks 2 and QB53) were manufactured and release-tested in the Aboody Lab (City of Hope, Duarte, CA, USA). For Bank 2, one vial of HB1.F3.CD21 (NSC) passage 26 from Quantum Bank 30 was thawed at 37 °C and plated in T-182 flasks (Genesee Scientific, San Diego, CA, USA) at 2 × 10^4/cm^2. Cells were cultured in complete growth media (DMEM supplemented with 10% Fetal Bovine Serum and 1% GlutaMAX™) and were incubated at 37 °C and 6% CO2. NSCs were passaged twice post-thaw by washing with 1× DPBS (without calcium and magnesium) and detaching with 0.25% Trypsin/EDTA (Gemini). A representative T-182 flask was harvested for a cell count of NSCs per flask. NSCs were transduced with Master Viral Seed Stock CRAd-S-pk7 (Batch #0806-349-0001-1) at a multiplicity of infection (MOI) of 50 with a titer of 6.7 × 10^10 IFU/mL. After a 2-h incubation, a total yield of 2.6 × 10^8 viable cells was harvested with 97% viability.
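For orientation, the viral stock volume implied by transducing at MOI 50 with the stated titer can be sketched. This is my own back-of-the-envelope arithmetic, not a volume reported in the paper, and it assumes the cell number at transduction was comparable to the harvested yield:

```python
# Viral stock volume implied by transduction at MOI 50 with a titer of 6.7e10 IFU/mL.
# Assumption: the cell count at transduction was ~2.6e8 (the reported harvested yield).
cells = 2.6e8
moi = 50                          # infectious units added per cell
titer_ifu_per_ml = 6.7e10
ifu_needed = cells * moi          # 1.3e10 IFU in total
volume_ml = ifu_needed / titer_ifu_per_ml
print(round(volume_ml, 3))        # roughly 0.194 mL of stock
```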
Cells were vialed at 8.5 × 10^6 cells/mL in cryopreservation medium (CryoStor 10), frozen in a Cryo 1 °C Freezing Container (Nalgene) with a cooling rate of −1 °C/min in a −80 °C freezer, and then transferred to the vapor phase of liquid nitrogen for long-term storage. For Bank 53, three vials of HB1.F3.CD21 (NSC) passage 25 from Quantum Bank 50 were thawed at 37 °C and plated into a fibronectin-coated fiber bioreactor, as previously described [20]. Cells were expanded for seven days, then transduced with CRAd-S-pk7 (MOI = 27). One hour after adding the virus, media were exchanged, and cells were harvested for freezing. A separate bank (QB51, MOI = 14.6, 1-h freeze-down) was made using a CRAd-S-pk7 modified to express firefly luciferase (Vector Biolabs, Malvern, PA, USA).

Characterization of HB1.F3.CD21.CRAd-S-pk7 cell banks. All NSC.CRAd-S-pk7 cells were release-tested and characterized as >90% viable, >80% recovered post-thaw, >95% nestin-positive, and free of mycoplasma. Infectious viral load per NSC was determined using a commercially available titer kit per the manufacturer's instructions (Adeno-X from Takara). Viability was determined using a fluorescent permeability dye (ViaCount, Luminex, Austin, TX, USA), and lysis time was monitored via time-lapse photography (Incucyte, Essen BioScience, Newark, UK). Surface hexon expression on transduced NSCs was quantified by flow cytometry using standard procedures. Briefly, the transduced NSCs were washed with PBS (with FBS and sodium azide). A solution containing both fixation and permeabilization reagents (Fix and Perm, Life Technologies, Carlsbad, CA, USA) was then added to treat the cells, and the cells were then re-washed. Anti-hexon (MAB 8052, Millipore-Sigma, Burlington, MA, USA) was then added and incubated for 30 min. After additional washing, an Alexa Fluor-conjugated anti-mouse IgG secondary antibody was added (SAB4600388, Millipore-Sigma).
Positive cells were then assessed using flow cytometry (Guava EasyCyte HT, Luminex).

In vivo orthotopic glioma models. Mice were maintained under specific-pathogen-free conditions in the City of Hope Animal Resource Center, an AAALAC-accredited facility. All procedures were reviewed and approved by the City of Hope Animal Care Committee. For the immunocompetent models, 6-8-week-old C57BL/6J mice (weight 18-21 g) were used in this study. C57BL/6J mice are immunocompetent and can, therefore, develop effective adenoviral clearance responses. For the immunodeficient models, 6-8-week-old athymic nude mice (The Jackson Laboratory, Bar Harbor, ME, USA) were used because they are unable to mount mouse adaptive immune responses, including (1) CD4-dependent antibody formation and (2) CD8-dependent killing of virus-infected or malignant cells.

Tumor implantation. On study day 0, all groups received an intraperitoneal (IP) injection of ketamine-xylazine cocktail (dose 132 mg/kg of ketamine and 8.8 mg/kg of xylazine), followed by an intracranial (IC) stereotactic injection of tumor cells (GL261.dsRed, GL261.ffluc, or U251.ffluc, depending on the experiment) in the right frontal lobe. Surgical coordinates were 2 mm right of bregma and 0.5 mm rostral. Tumor cells were injected at three levels (0.667 µL of tumor cells injected 2.5 mm deep, another 0.667 µL injected at 2.25 mm, and then 0.667 µL injected at 2.00 mm). The skull was sealed with bone wax and the scalp gently closed with surgical glue. Analgesia (slow-release Buprenex) was administered immediately upon waking.

Live animal imaging. To confirm the increased viral load transferred to tumor cells when NSC.CRAd-S-pk7 cells were frozen at 24 vs. 1 h, CRAd-S-pk7.ffluc signal was monitored in vivo for four days after administration via bioluminescence using the SPECTRAL Ami X imaging system.
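The weight-based anesthesia dosing and three-level injection described above translate into small absolute quantities per animal. A quick sketch of my own arithmetic, assuming a 20 g mouse (the midpoint of the stated 18-21 g range):

```python
# Per-mouse anesthesia amounts and total inoculum volume implied by the protocol.
weight_kg = 0.020                 # assumed 20 g mouse (midpoint of 18-21 g range)
ketamine_mg = 132 * weight_kg     # 132 mg/kg IP dose -> mg per mouse
xylazine_mg = 8.8 * weight_kg     # 8.8 mg/kg IP dose -> mg per mouse
total_inoculum_ul = 3 * 0.667     # three 0.667 uL deposits at 2.5, 2.25, and 2.0 mm
print(ketamine_mg, xylazine_mg, total_inoculum_ul)
```

So each mouse receives roughly 2.6 mg ketamine, 0.18 mg xylazine, and a ~2 µL total intracranial inoculum.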
Before imaging, mice were anesthetized with isoflurane (1.5 L/min oxygen, 4% isoflurane) in an induction chamber and injected IP with D-luciferin substrate suspended in PBS at 4.29 mg/mouse. Mice were maintained under anesthesia in a chamber, and 7 min after injection of luciferin, the NSCs were imaged using a charge-coupled device camera (the SPECTRAL Ami X) coupled to Ami X image acquisition and analysis software. Light emission was measured over an integration time of 300 s.

Clinical observations. For long-term survival studies, all study mice were weighed weekly and observed daily Monday through Friday for general good health, e.g., food/water intake, urine/feces production, and no signs of scruffy hair coat, emaciation, or hunched posture. Any debilitating terminal criteria, including misshapen skull, seizures, tremors, labored or difficult breathing, weight loss (>20% body weight), hypo- or hyperthermia, impaired ambulation, obvious illness, or inability to remain upright, warranted immediate euthanasia.

Blood chemistry. In a pilot study involving immunocompetent mice (n = 6), terminal intracardiac blood was collected using heparinized needles and transported to the Aboody Laboratory for chemistry analysis. The blood samples were tested using the VetScan Comprehensive Diagnostic Profile, which consists of 14 analytes, to compare the toxicity profiles of 3 rounds vs. 1 round of NSC.CRAd-S-pk7. See Table 1 for an explanation of each analyte.

Tumor size. In the pilot immunocompetent study (n = 6), two mice from each group were euthanized and their brains harvested on D28 (7 days after the last treatment). Brains were post-fixed in 10% paraformaldehyde for histopathology analysis. The tissue was sent to the City of Hope pathology core (Duarte, CA, USA), where it was processed using routine histological methods: paraffin-embedded, sectioned, mounted on slides, and stained with hematoxylin, eosin, anti-F4/80, and anti-PD1.
Slides were returned to the Aboody lab and scanned via automated light microscopy (Zeiss) to visualize tumor size and immune infiltration.

NanoString analysis. Total RNA was extracted from brain quartiles containing the treated tumor (TRI Reagent, Sigma-Aldrich, St. Louis, MO, USA), following the manufacturer's instructions. RNA was further purified (RNeasy Mini Kit, Qiagen, Hilden, Germany), then quantified (NanoDrop-1000, Thermo Fisher, Waltham, MA, USA). Samples with RNA integrity values of >7.0 were included for gene expression analysis (NanoString nCounter, NanoString Technologies, Seattle, WA, USA). RNA (n = 4/group) was analyzed using the nCounter Mouse PanCancer Immune Profiling Panel. Raw gene expression data were analyzed (nSolver v3.0.22, NanoString Technologies). Pathway scores summarize data from a pathway's genes into a single, normalized, and standardized z-scaled score. Cell profiling uses marker genes stably expressed in immune cell types to estimate relative abundance in sample groups by measuring the average log-scale expression of characteristic genes [21].

Statistical methods. Sample sizes for in vivo studies were powered based on the survival analysis observed in a pilot n = 6 study. A 1-sided exact log-rank test with 11 mice per group was expected to give 90% power at a 0.05 significance level to detect a hazard ratio of approximately 0.12 between an active group (3 rounds or 1 round of NSC.CRAd-S-pk7) and the control group, when the control group had a median survival time of 40 days and the active group had a median survival time of 100 days (2-sided log-rank test). All other data are presented as mean ± SEM unless otherwise stated. Statistical significance (p < 0.05) was determined using two-tailed Student's t-tests unless otherwise stated.

Tumor Line Selection

Our question requires a study within an immunocompetent syngeneic model.
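The sample-size statement in the Statistical methods above can be sanity-checked by simulation. The sketch below is my own illustration, not the authors' statistical code: it assumes exponential survival in both groups with no censoring, parameterizes the active group directly by the target hazard ratio, and counts how often a two-sided log-rank test at α = 0.05 rejects with 11 mice per group.

```python
import numpy as np

def logrank_chi2(t1, t2):
    """Two-sample log-rank chi-square statistic (no censoring, untied times)."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    o1 = e1 = v = 0.0
    for t in np.sort(np.concatenate([t1, t2])):
        r1, r2 = np.sum(t1 >= t), np.sum(t2 >= t)   # at risk in each group
        n = r1 + r2
        d1 = np.sum(t1 == t)
        d = d1 + np.sum(t2 == t)                    # deaths at this time
        if n > 1:
            o1 += d1
            e1 += d * r1 / n
            v += d * (r1 / n) * (r2 / n) * (n - d) / (n - 1)
    return (o1 - e1) ** 2 / v

def simulated_power(median_ctrl, hazard_ratio, n_per_group, n_sim=400, seed=1):
    """Fraction of simulated trials where a 2-sided log-rank test rejects at
    alpha = 0.05 (chi-square critical value 3.841, 1 df)."""
    rng = np.random.default_rng(seed)
    scale_c = median_ctrl / np.log(2)     # exponential scale for the control arm
    scale_a = scale_c / hazard_ratio      # lower hazard -> longer survival
    rejections = 0
    for _ in range(n_sim):
        ctrl = rng.exponential(scale_c, n_per_group)
        active = rng.exponential(scale_a, n_per_group)
        if logrank_chi2(ctrl, active) > 3.841:
            rejections += 1
    return rejections / n_sim

# Settings from the power statement above: HR ~0.12, 11 mice per group,
# control median survival of 40 days.
power = simulated_power(median_ctrl=40, hazard_ratio=0.12, n_per_group=11)
print(power)   # estimated power; well above 0.9 under these settings
```

Under these assumptions the design is comfortably powered for a hazard ratio of 0.12, consistent with the paper's statement.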
Adenoviruses cannot typically lyse and release new infectious particles when infecting mouse cells, including B16 cells (Figure 1) [22]. It was, thus, necessary to identify a mouse tumor line that was permissive for adenoviral replication. The murine glioma cell line GL261 is currently accepted as the gold-standard line for generating rodent glioma models. Here, we demonstrate that GL261 expresses survivin (Figure 1A) and can be infected by CRAd-S-pk7 in vitro. GL261 supports CRAd-S-pk7 replication (Figure 1B), resulting in an apparent cytopathic effect (Figure 1C). While the lysis is not as extensive as that observed in human U251 glioma cells, GL261 cells can serve as a semi-permissive cell line when testing NSC.CRAd-S-pk7 efficacy in immunocompetent GL261 mouse models of glioma. We next prepared research banks of CRAd-S-pk7-transduced NSCs to be used during in vivo efficacy studies. Clinical-equivalent NSC.CRAd-S-pk7 banks were manufactured and release-tested, as previously described [13].

Increasing NSC.CRAd-S-pk7 Dose Using Multiple Administrations

Increasing the administered volume is not a possibility, through either intratumoral or intraventricular injection, due to safety concerns [23]. We therefore considered whether multiple, weekly NSC.CRAd-S-pk7 intracerebral administrations would increase therapeutic efficacy over a single dose, despite the emergence of anti-Ad5 antibodies following the first treatment. We hypothesized that NSCs could protect CRAd-S-pk7 by physically shielding the virus from serum antibodies, thus preventing initiation of the classical complement cascade that rapidly eliminates viruses. Our rationale was that even the first administration is vulnerable to complement-mediated destruction via the alternative pathway (antibody independent) [24], yet substantial therapeutic efficacy is still observed.

NSCs Protect CRAd-S-pk7 from Serum Neutralization

The ability of complement proteins in serum to neutralize free CRAd-S-pk7 independent of pre-existing antibody presence is demonstrated both in vitro and in vivo (Figure 2). We conducted in vitro assays using cultures of U251 brain tumor cells with or without 20% human serum. Although treatment with free CRAd-S-pk7 or NSC-CRAd-S-pk7 led to tumor cell killing in the absence of human serum, only treatment with NSC-CRAd-S-pk7 led to tumor cell killing in the presence of human serum (Figure 2A), supporting our hypothesis that NSCs provide protection from complement-mediated CRAd-S-pk7 neutralization. We also performed an in vivo experiment in immunocompetent mice in which CRAd-S-pk7 presence was visualized 1 day after administering either one, two, or three rounds of intra-tumoral treatment with either free CRAd-S-pk7 or NSC-CRAd-S-pk7.
When delivered as free virus, CRAd-S-pk7 transduction of GL261 glioma cells was prevented even with the first round of treatment, before the naive mouse had anti-Ad5 antibodies present. Furthermore, NSC-mediated CRAd-S-pk7 delivery enabled intra-tumoral CRAd-S-pk7 to be present across all three rounds of administration (Figure 2B). (Figure 2 legend, in part: Representative fluorescence images of day 7 U251 brain cancer cell cultures stained with calcein-AM and ethidium bromide to visualize live (green) and dead (red) cells, respectively. Cultures were treated with either free CRAd-S-pk7 or dose-matched NSC-CRAd-S-pk7, with and without the addition of 20% human serum. Scale bar = 50 µm and applies to all images. (B) Immunocompetent C57BL/6 mice (8-week-old females) bearing 4-day-old intracranial GL261 gliomas (2 × 10^3 cells) received either 1, 2, or 3 weekly rounds of intra-tumoral CRAd-S-pk7 (2.5 × 10^7 IU) or dose-matched NSC-CRAd-S-pk7. Brains were harvested, fixed, and cryosectioned 1 day after treatment. Brain slices were stained with anti-hexon FITC to visualize CRAd-S-pk7, and nuclei were counterstained with DAPI. Scale bar = 100 µm and applies to all images.)

Long-Term Survival Studies

Two GLP pre-clinical studies were conducted, and each was designed to compare the long-term survival of immunocompetent C57BL/6J mice inoculated orthotopically with a firefly luciferase-expressing GL261 mouse glioma cell line and given one vs. three weekly treatments at a dose of 5 × 10^5 NSC.CRAd-S-pk7 cells (Figure 3). The first was a pilot study in which mice with relatively small syngeneic orthografts (5000 GL261 cells per mouse) were randomly placed into the following treatment groups 7 days post-tumor inoculation: (1) one round of perfusion fluid central nervous system (PFCNS) + 2% human serum albumin (HSA) (control); (2) one round of NSC-CRAd-S-pk7 treatment; and (3) three weekly rounds of NSC-CRAd-S-pk7 treatment. Results showed a trend towards improved long-term survival upon administering three vs. one treatment rounds of NSC.CRAd-S-pk7 (Figure 3A). However, a larger study was needed to achieve statistical significance. The pilot experiment was repeated with a study powered for statistical significance (n = 11/group) in a more challenging, larger tumor model (10,000 tumor cells/mouse, treatment initiated on day 12). In the powered experiment, no survival benefit was seen with a single administration of NSC-CRAd-S-pk7. The median survival was 35 days in the group receiving a single dose of NSC.CRAd-S-pk7, while that of mice treated with three weekly rounds was extended to 41 days [log-rank, 3 vs. 1 rounds of NSC.CRAd-S-pk7, p = 0.0067] (Figure 3B). (Figure 3 legend, in part: On day 28, blood samples were collected from select mice in the pilot study and assayed for the following parameters: albumin, alkaline phosphatase, alanine aminotransferase, aspartate aminotransferase, total bilirubin, blood urea nitrogen, calcium, phosphorus, creatine, glucose, sodium, potassium, total protein, and globulin. (D) Post-treatment mouse weights: each measurement is that of an individual mouse normalized to its initial weight on day 0 (W_D/W_0 × 100%, where D is the day post-tumor injection).)

Preliminary preclinical systemic toxicity assessments (i.e., inflammation, immunodeficiency states) for multiple rounds of NSC-CRAd-S-pk7 were performed by monitoring liver and kidney enzymes using GLP standards. There was no observable difference between any of the groups (p > 0.05, one-way ANOVA), suggesting no obvious adverse effects of repeated NSC-CRAd-S-pk7 treatment (Figure 3C). In addition, we did not observe significant differences in body weight (Figure 3D) or in the following daily observations of mouse behavior: posture, seizures, tremors, labored breathing, and ambulation, suggesting that no gross neurological effects resulted from multiple administrations of NSC-CRAd-S-pk7.

Immunological Characterization

Interestingly, there was no survival advantage afforded by three rounds of NSC.CRAd-S-pk7 when administered to tumor-bearing nude mice, which have innate but not adaptive immune functions (Figure S1).
This suggests that repeated NSC.CRAd-S-pk7 treatment efficacy relies on the host's adaptive immune response, not simply an increase in the administered viral load or improved viral distribution. The nature of the immune response that occurs after NSC-CRAd-S-pk7 treatment has not been fully established. To provide a broad characterization of immune-mediated changes in the brain that occur following NSC-CRAd-S-pk7 treatment, NanoString transcriptome analysis was utilized in mice treated with a single dose of either sham saline, free CRAd-S-pk7, or NSC-CRAd-S-pk7 (Figure 4A). Brains were harvested 7 days after treatment in an effort to obtain a transcriptome snapshot when the adaptive immune response peaks (7-10 days). Substantial differences were observed in both pathway signature scores and immune infiltration scores when comparing tumors treated with either free CRAd-S-pk7 or NSC.CRAd-S-pk7 vs. sham (Figure 4A). The only pathway signature score that remained relatively unchanged across all treatment groups was autophagy. The interpretation of how these complex changes impact overall tumor progression is still unclear, given that both suppressive and inflammatory cell types seem to be recruited. Tumor regression was not confirmed prior to harvesting brains for NanoString analysis. (Figure 4 legend, in part: Plots depicting mean normalized expression of NanoString analysis pathway signature and cell-type differentiation scores obtained using RNA isolated from C57BL/6 mice bearing CT-2A glioma seven days after treatment with either sham saline, CRAd-S-pk7, or dose-matched NSC-CRAd-S-pk7 (n = 3 per group). (C,D) Tumors were established using 5.0 × 10^3 GL261 cells/mouse, then treated using either a single dose on day 7 or three doses of 5.0 × 10^5 HB1.F3.CD_CRAd-S-pk7 cells on days 7, 14, and 21 (n = 2 per group). Twenty-eight days after the first treatment, brains were cryosectioned and processed using standard immunological techniques. Positive-cell quantification was automated using ImagePro segmentation software. Results are expressed normalized with respect to Ki-67-positive cells. (E) Representative brain sections stained with F4/80; 10× scanned images were tiled to show the relative sizes of GL261 tumors in the brains of mice that received either sham, one round of NSC.CRAd-S-pk7, or three rounds of NSC.CRAd-S-pk7. Inset enlarged to aid visualization (10×). Sister brain sections stained with anti-Ki-67, anti-PD1, anti-CD3, anti-CD8, and anti-CD4 are shown.)

To assess the spatial distribution of select T-cell subsets, two mice per group were harvested on day 28 of the pilot efficacy study, in which immunocompetent mice were treated with a single dose of either sham or NSC.CRAd-S-pk7, or three weekly doses of NSC.CRAd-S-pk7. Brain slices were immunologically stained for F4/80 to find tumor progression foci and Ki-67 to identify replicating tumor cells, in addition to the following T-cell phenotypic markers (CD4+, CD3+, CD8+, Foxp3, PD1) (Figure 4). F4/80 staining demonstrates that tumors appear noticeably smaller after one round of NSC.CRAd-S-pk7 treatment when compared to the sham control group (Figure 4E). The brains of select mice that received three treatment rounds appeared to have no tumor remaining (Figure 4E). Instead, there was a sizable tissue defect where the tumor was assumed to have engrafted, then cleared. Quantification was, therefore, performed in adjacent tissue populated with tumor foci evident by light F4/80 staining (Figure 4C,D). These pilot data suggest that treating mice with three cycles, rather than a single cycle, may result in increased numbers of CD3- and CD8-positive tumor-localized T-cells. A decrease in PD-1-positive cells was also observed in mice treated with three cycles rather than a single cycle.

Discussion

Novel treatments that can improve survival for glioblastoma patients are urgently needed. NSC.CRAd-S-pk7 is a novel therapeutic product under clinical investigation, but its true therapeutic potential cannot be adequately tested until the dose-limiting volume restrictions are resolved. The results presented here motivated our decision to pursue intracerebral administration of multiple weekly doses. This approach will be clinically tested for safety in an upcoming Phase 1 trial (IND 19532).

Multiple NSC.CRAd-S-pk7 Administration Prolongs Survival More Than a Single Dose

The most important result of these IND-enabling studies is that three weekly intracerebral rounds of a clinical-equivalent NSC.CRAd-S-pk7 dose improved the survival of brain tumor-bearing immunocompetent mice relative to a single administration. Possible mechanisms that may explain the superior efficacy of three administrations are: (1) a higher viral load more effectively competes with replicating tumor cells; (2) temporally spaced intra-tumoral injections improve NSC.CRAd-S-pk7 distribution and enable virus deposition in different and noncontiguous tumor regions [15,25,26]; and (3) anti-tumor immune responses are repeatedly stimulated by successive NSC.CRAd-S-pk7 treatments.
Our observation that multiple treatment cycles afford a survival advantage that may be dependent on, rather than hindered by, the presence of the adaptive immune system is slightly surprising, given precedent literature describing rapid anti-viral clearance upon repeated administration of oncolytic virus [15]. In fact, there is no precedent for administering multiple OV doses within the GBM setting. However, pre-existing viral immunity is emerging as paradoxically beneficial in other tumor settings [27].

Other Reports Showing Pre-Existing Viral Immunity Does Not Preclude Treatment Efficacy

Even though our result is one of only a few studies demonstrating the superiority of repeat dosing versus single-cycle OV therapy, this idea is consistent with a growing body of work indicating that pre-existing immunity does not eliminate the therapeutic efficacy of oncolytic virotherapies, particularly with intra-tumoral administration. In fact, preclinical studies show increased effectiveness due to the anti-viral immune response against oncolytic virotherapy for Newcastle disease virus, Maraba virus, and reovirus, although the mechanisms are unknown [27][28][29]. Similarly, pre-existing antibodies did not affect the anti-tumor efficacy of an oncolytic Ad vector, INGN 007, after intra-tumoral administration in immunocompetent animals (though they did in immunosuppressed mice) [30,31]. Furthermore, in a Syrian hamster model, which is both immunocompetent and permissive for human Ad5 replication [30][31][32][33][34][35], it was observed that pre-existing immunity reduced off-target vector spread [30]. Potential explanations of why anti-viral immunity may be paradoxically beneficial include: (1) adjuvant-like properties of anti-viral innate responses prime for an anti-tumor immune response, or (2) immune responses to the virus within the tumor support the recruitment of anti-tumor effector immune cells [36].
When reflecting on the clinical tumor regressions in select brain tumor patients, it is worth noting that these regressions occurred months after OV administration, when no active viral replication was detectable. Furthermore, the tumors initially increased in size, similar to the pseudo-progression observed after other immunotherapy treatments [37]. Lastly, regressions occurred just as frequently in the adenovirus trials as in the others, even though ~70% of the adult population in the US is seropositive for Ad5 [38]. Pre-existing anti-Ad5 antibodies should have presented a serious hindrance to the clinical application of CRAds, but this was not the case, suggesting that pre-immunization did not preclude therapeutic efficacy after all. Notably, a recent landmark study showed that repetitive systemic OV dosing led to durable responses in several patients with cervical cancer [39,40].

Immunogenicity of NSC.CRAd-S-pk7

Our study differs from others in that we are using a cell delivery vehicle that is not necessarily inert with respect to how the immune system recognizes and clears the virus, or how it interacts with the host immune system. Our NanoString transcriptome analysis suggests that, in general, the immune changes that occur in response to NSC.CRAd-S-pk7 treatment are driven by the virus itself, as only minor fluctuation is seen when it is delivered via NSCs. This finding is consistent with our previous findings that NSCs are generally immune-privileged, characterized by a lack of major histocompatibility complex class 2 (MHC-II) expression and low MHC class 1 (MHC-I) expression [9], and thought to be poorly recognized within human leukocyte antigen (HLA)-incompatible hosts. In a recent phase I trial administering multiple intracerebral doses of HB1.F3.CD21 NSCs, only 3 of the 14 tested patients treated with intracranial NSCs developed anti-NSC antibodies after the third dose.
No anti-NSC antibodies were detected in the other 11 tested patients, even after receiving as many as 10 doses of NSCs. Furthermore, no evidence of anti-NSC T-cell responses or immune-mediated toxicity was observed in study patients [41]. We acknowledge that CRAd infection may alter these favorable immunogenicity results by inducing the expression of stress antigens that serve as markers for the elimination of virally infected cells (by NK cells, NK T cells, gamma-delta T cells, and macrophages). Thus, the possibility of an anti-NSC.CRAd-S-pk7 response developing will be monitored in our upcoming trial.

Study Limitations

Our immediate translational objective restricted our study to a dose equivalent to 150 million NSC.CRAd-S-pk7 cells (1.875 × 10^11 viral particles). In a study of one vs. six cycles of measles virotherapy for ovarian cancer [42], multiple cycles were only advantageous at lower doses. While our study demonstrates improved treatment efficacy after three treatment cycles, it is possible that, at higher doses, no benefit would be seen. Furthermore, long-term repeat dosing regimens might eventually yield no additional therapeutic benefit. The present study also does not address the optimal frequency for repeated dosing; an interval of 1 week was selected in both preclinical and clinical settings to allow sufficient time for an adaptive immune response to develop after each successive treatment. Lastly, these IND-enabling studies assume that the basic features of the immune response to NSC.CRAd-S-pk7 treatments are similar in mice and humans [43]. While this assumption is generally valid for acute viral infections, there are some notable differences, particularly in FoxP3 and CD4 expression levels [44].

Conclusions

These IND-enabling studies set the foundation for a phase I study of intracerebral administration of multiple-round stem cell-based oncolytic therapy in high-grade glioma patients.
This NSC.CRAd-S-pk7 treatment regimen may also enhance the therapeutic efficacy of other anti-glioma oncolytic virotherapies: each successive NSC.CRAd-S-pk7 administration serves to (1) attract further immune surveillance to the otherwise cold tumor microenvironment and (2) alert the immune system with updated information regarding tumor adaptations. Future work will focus on maximizing activation of anti-tumor immunity, perhaps through co-administration with checkpoint inhibitors or rapamycin, or in conjunction with anti-GBM CAR-T/NK cell treatments.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.
Study on Wellbore Stability and Failure Regions of Shale considering the Anisotropy of Wellbore Seepage

The hard and brittle shale formation is prone to collapse and instability, and the penetration of drilling fluid along the bedding reduces the mechanical properties of rock near the borehole wall, resulting in serious downhole accidents. Therefore, in this paper, the geomechanical parameters of the reservoir in the Longmaxi formation of Jiaoshiba were determined by field hydraulic fracturing and laboratory experiments. Then, the stress distribution model of the borehole wall under the condition of underbalanced seepage flow is established based on the results of mechanical experiments on underground cores. The instability zone of the borehole wall under underbalanced conditions is calculated and analyzed. The results show that the two-way horizontal in situ stress of the Longmaxi formation is higher than 2.2 MPa/100 m, and the original in situ stress is high. Moreover, the mechanical parameters of the stratified shale matrix and the weak bedding surface are significantly different: the cohesion (4.7 MPa) and internal friction angle (26.9°) of the bedding plane are significantly lower than those of the matrix (7.77 MPa and 46.7°, respectively). Hard and brittle shale is easily destroyed along the stratification. Under underbalanced seepage conditions, the mechanical properties of borehole shale can remain stable. It is found that when the borehole axis is perpendicular to the stratification, the collapse pressure is the lowest, while in other drilling directions the drilling fluid density needs to be increased by 0.5 g/cm³ to maintain borehole stability. With the increase of the inclination angle of the bedding plane, the wall failure area increases. The results of this study can provide guidance and suggestions for drilling in the Jiaoshiba block and other permeable hard and brittle shale formations.
Introduction

Unconventional oil and gas resources account for about 80% of the world's oil and gas resources, among which shale oil and gas reservoirs account for a large proportion [1]. Therefore, the effective development of shale oil and gas reservoirs is particularly important. Shale accounts for more than 75% of the formations drilled worldwide, and most of these wells suffer wellbore instability, resulting in drilling costs of more than $5 billion per year [2]. Drilling tests and production results show that collapse of the shale section of the Longmaxi formation is very serious in China's Changning-Weiyuan and Fuling shale gas demonstration areas [3], so it is necessary to study the wellbore stability of the shale formation in the Fuling area. Analysis shows that the factors affecting the wellbore stability of stratified shale mainly include hydration-dependent mechanical properties, pore pressure, and in situ stress [4]. Meanwhile, the strength of shale decreases with mud exposure time [5,6], especially in the shale of the Longmaxi formation [7,8]. Moreover, in situ stress is very important for safe drilling [9], and the stress concentration at the borehole wall of stratified shale differs from that of isotropic formations [10]. Among these factors, the maximum horizontal in situ stress has the greatest influence on the stability of the shale formation [11]. In addition, anisotropic seepage will change the stress field near the borehole wall [12,13], which affects the accurate calculation of borehole collapse pressure. Bradley [14] studied borehole wall instability based on the linear elasticity hypothesis for homogeneous rocks. Subsequently, the problem of borehole wall instability attracted the attention of researchers all over the world. However, the influence of strength anisotropy on wellbore stability was not considered in these early models.
However, shale contains natural bedding surfaces, and experimental studies have shown that the failure mode of shale under certain stress states is significantly different from that of homogeneous rock [15][16][17][18][19][20]. Based on these distinctive experimental observations on shale, various mathematical models have been established to study the instability of shale formations. Hikweon [21] considered the anisotropic strength of shale and resolved the in situ stress onto the bedding plane, and the results showed that, compared with an isotropic formation, the critical collapse pressure of a shale formation increases significantly. At the same time, additional stress is generated by the temperature gradient, and wellbore stability models considering thermo-poroelasticity have been established [22][23][24][25]. The influence of anisotropic seepage in shale formations cannot be ignored: drilling fluid penetrates into the formation along the bedding, changing the stress state around the well and reducing the strength of the rock at the borehole wall [26,27]. The Longmaxi formation is a hard and brittle shale, whose water absorption and diffusion coefficient and drilling fluid activity have an obvious influence on wellbore enlargement and show strong sensitivity; drilling fluid immersion then causes instability [28]. For the shale of the Longmaxi formation, water absorption and expansion experiments were carried out [8], and the results showed that the hydration effect permeates along the bedding surfaces, and the shale samples peeled and spalled. When the water activity of the shale is lower than that of the drilling fluid, additional fluid, beyond that predicted from the hydraulic gradient alone, flows into the shale formation due to the applied chemical imbalance [29]. This creates abnormal pore pressure near the borehole, followed by additional fluid-induced stress, and prolonged exposure to drilling fluid in the wellbore results in reduced strength.
Underbalanced drilling (UBD) has high efficiency and low cost. In UBD, the bottom hole is kept at a negative pressure differential [30], which can improve bottom-hole seepage capacity and minimize formation damage [31]. In UBD, when fluid seepage is considered, the calculation of collapse pressure is more accurate, but the mud density window is narrower [32]. The authors of [33,34] proposed a wellbore stability model for formations with anisotropic strength. Before a macroscopic failure surface forms in a stratigraphic weak plane, sporadic failure occurs in the rock [35], but previous wellbore stability models could not predict the damage area around the wellbore. Thus, it is particularly important to accurately calculate and control the region of minor damage. Therefore, this paper obtained in situ stress data for the Longmaxi formation through field hydraulic fracturing combined with laboratory experiments. The mechanical parameters of the shale matrix and the weak surface were obtained by combining direct shear tests and triaxial compression tests. Using these parameters, a wellbore stability model considering anisotropic seepage under underbalanced drilling conditions is established. The collapse pressure and failure area are analyzed, and the pattern of slight wellbore failure is studied within the same horizon at different bedding dip directions and dip angles.

Stress Distribution Model of Shale Formation

2.1. Coordinate System and Transformation. In the process of establishing the anisotropic wellbore stability model, five reference coordinate systems are needed: the global coordinate system (GCS), the in situ stress coordinate system (ICS), the borehole coordinate system (BCS), the polar coordinate system (PCS), and the fracture coordinate system (FCS). The transformation relationships among these coordinate systems are shown in Figures 1 and 2 [36].
In GCS, the positive direction of axis X_n is defined as geographic north, axis Y_n points east, and axis Z_n is perpendicular to the ground. In ICS, this paper stipulates that X_o is the direction of the maximum horizontal principal stress, Y_o is the direction of the minimum horizontal principal stress, and Z_o coincides with that of GCS. It is worth noting that in the studies of [33,34], the positive x-axis is instead the direction of the minimum horizontal principal stress and the positive y-axis is the direction of the maximum horizontal principal stress. For the transformation from GCS to ICS, following the right-hand rule, the frame is first rotated about the Z_n axis by α_o and then about the Y_n axis by β_o. The rotation matrix for the transformation from GCS to ICS can be expressed by Equation (1). By the same token, for the transformation between GCS and BCS, α_o and β_o are replaced by α_b and β_b, where α_b is the wellbore azimuth and β_b is the inclination angle; the transformation matrix is given by Equation (2). In polar coordinates, r is defined as the distance from the borehole axis toward the far-field formation, and θ is the angle measured counterclockwise from axis X_b to axis Y_b in the borehole rectangular coordinate system. Therefore, to go from BCS to PCS, one simply rotates the BCS about axis Z_b by θ; the conversion formula is Equation (3). As shown in Figure 2, FCS is based on the orientation of the weak plane: X_w follows the dip direction of the weak plane, Y_w lies within the weak plane along its strike, and Z_w is perpendicular to the weak plane. Therefore, the rotation matrix that transforms stress components from GCS to FCS can be obtained from Equation (4).

Figure 1: The relationship between different coordinate systems.
Figure 2: The relationship between GCS and WCS.

2.2. Borehole Stress Analysis.
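The successive rotations described above can be sketched numerically. The sign conventions below (right-handed passive rotations, azimuth about Z first, then inclination about Y) are assumptions chosen to match the description, not the paper's exact matrices in Equations (1)-(4):

```python
import numpy as np

def rot_z(angle_rad):
    """Right-handed coordinate rotation about the z-axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, s, 0.0],
                     [-s, c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_y(angle_rad):
    """Right-handed coordinate rotation about the y-axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0.0, -s],
                     [0.0, 1.0, 0.0],
                     [s, 0.0, c]])

def gcs_to_bcs(azimuth_rad, inclination_rad):
    """Compose the two rotations: first about Z_n by the azimuth
    alpha_b, then about the rotated Y axis by the inclination beta_b."""
    return rot_y(inclination_rad) @ rot_z(azimuth_rad)
```

The same composition with (α_o, β_o) in place of (α_b, β_b) gives the GCS-to-ICS matrix; any such composition of rotations is orthonormal with determinant 1, which is a quick sanity check on an implementation.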
As drilling destroys the in situ stress state of the shale formation, a stress concentration forms around the wellbore. The conversion process is to convert the in situ stresses into the GCS and then convert the stress components in the GCS into the corresponding components in the BCS. The transformation matrices above can be used to represent the mutual transformation of in situ stress between different coordinate systems. Therefore, the in situ stress components in the BCS can be obtained using Equation (5), where the in situ stress tensor is σ_ICS = {σ_H, 0, 0; 0, σ_h, 0; 0, 0, σ_v}, and M^T_NtO and M^T_NtB are the transposes of M_NtO and M_NtB. Based on the stress model around the borehole proposed by [14], Equation (6) gives the elastic isotropic wellbore stress expressions. It is worth noting that when r = r_w, this model simplifies to the earlier stress model at the borehole wall [34]. Since the radius r from the borehole axis to the deep formation is retained in this model, the stress state of the formation near the borehole wall can be calculated, where σ_r(θ, r), σ_θ(θ, r), σ_z(θ, r), τ_rθ(θ, r), τ_rz(θ, r), and τ_θz(θ, r) are the normal and shear components of the effective stress near the wellbore, r_w is the borehole radius, and r is the distance from the borehole axis toward the far formation. P_p is the formation pressure, which can be affected by formation fluid seepage, α is Biot's coefficient, and θ is the circumferential angle measured counterclockwise around the borehole axis. Equation (6) yields the effective stress distribution tensor near the borehole wall; the stress components are then converted to the weak-plane coordinate system FCS using Equation (7) [5,12,34]. From the stress tensor σ_FCS, the normal and shear stresses in the near-wellbore region can be obtained from Equation (8).
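As a minimal numerical sketch of the frame conversion in Equation (5), a stress tensor is carried into a rotated frame with σ' = M σ Mᵀ. The stress magnitudes below are illustrative placeholders, not the field data of the Longmaxi formation:

```python
import numpy as np

def rotate_stress(sigma, M):
    """Transform a stress tensor into a new frame: sigma' = M sigma M^T."""
    return M @ sigma @ M.T

# In situ stress tensor in ICS (principal frame), values in MPa.
# sigma_H, sigma_h, sigma_v -- illustrative magnitudes only.
sigma_ics = np.diag([55.0, 48.0, 52.0])

# Example rotation: 30 degrees about the vertical axis
a = np.deg2rad(30.0)
M = np.array([[np.cos(a), np.sin(a), 0.0],
              [-np.sin(a), np.cos(a), 0.0],
              [0.0, 0.0, 1.0]])
sigma_bcs = rotate_stress(sigma_ics, M)
```

A useful check is that the rotation preserves the stress invariants: the rotated tensor stays symmetric, keeps the same trace, and has the same principal values as the original diagonal tensor.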
Equation (9) can be established to distinguish rock failure by a strength criterion. When Equation (9) equals 0 under a given drilling fluid density, the radius r at which damage occurs at any circumferential angle near the borehole wall can be determined, and the morphological characteristics of borehole caving under that drilling fluid density can then be obtained. Most previous models adopted the simplification r = r_w, substituting only the stress at the borehole wall into the strength criterion to judge whether the rock fails; such models cannot determine whether the surrounding rock beyond the wall is damaged by the stress concentration. In the present model, however, both the circumferential angle and the radial distance r from the borehole axis are unknowns, so the stress state of the rock around the borehole wall can be substituted directly into the strength criterion to locate failure, and the failure area and its shape can then be obtained.

Failure Criteria. Most existing analyses of borehole wall instability are based on the elastic theory of porous media, and rock stability in continuous media is mainly controlled by borehole stress and rock strength. In fractured rock mass, the existence of discontinuous fractures changes the mechanical properties of the rock to a large extent: it not only reduces the strength of the rock but also reduces the overall cohesion and internal friction angle [18]. It also changes the stress distribution around the borehole, making the surrounding rock more vulnerable to damage. Except in obviously loose or plastic formations, the weak-plane criterion is effective in predicting wellbore failure [37]. The weak-plane criterion can be expressed as Equation (10).
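A minimal sketch of the two failure checks behind Equations (9)-(11), using standard Mohr-circle relations on effective stresses: slip on the weak plane when the resolved shear exceeds the plane's Mohr-Coulomb strength, and matrix shear failure otherwise. The function names and the stress/pore-pressure values in the usage are illustrative assumptions; the strength values are those reported for the Longmaxi shale (bedding: 4.7 MPa, 26.9°; matrix: 7.77 MPa, 46.7°):

```python
import numpy as np

def plane_stresses(s1, s3, beta_rad):
    """Normal and shear stress on a plane whose normal makes angle
    beta with the sigma_1 direction (2-D Mohr circle relations)."""
    sn = 0.5 * (s1 + s3) + 0.5 * (s1 - s3) * np.cos(2.0 * beta_rad)
    tau = 0.5 * (s1 - s3) * np.sin(2.0 * beta_rad)
    return sn, abs(tau)

def weak_plane_slips(s1, s3, pp, beta_rad, c_w, phi_w_rad):
    """True if the bedding plane slides: effective-stress form of the
    Mohr-Coulomb condition resolved onto the weak plane."""
    sn, tau = plane_stresses(s1, s3, beta_rad)
    return tau > c_w + (sn - pp) * np.tan(phi_w_rad)

def matrix_fails(s1, s3, pp, c, phi_rad):
    """Shear failure of the intact matrix (Mohr-Coulomb in principal
    effective stresses)."""
    k = np.tan(np.pi / 4.0 + phi_rad / 2.0)
    return (s1 - pp) > (s3 - pp) * k**2 + 2.0 * c * k

# Example (illustrative stresses in MPa): a plane at beta = 60 deg
# slips while the matrix remains intact.
slips = weak_plane_slips(60.0, 20.0, 10.0, np.deg2rad(60.0),
                         4.7, np.deg2rad(26.9))
```

Scanning such a check over the borehole circumferential angle θ and the radius r, as the model above prescribes, traces out the failure region rather than a single wall-point verdict.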
where σ_θ and σ_r are the circumferential and radial stresses at the borehole wall (MPa), C_W and ϕ_W are the cohesion (MPa) and internal friction angle (°) of the weak plane, P_P is the formation pore pressure (MPa), β is the included angle between the normal of the fracture surface and the maximum principal stress (°), and c and ϕ are the cohesion (MPa) and internal friction angle (°) of the sample matrix. According to Equation (7), the maximum and minimum principal stresses at the borehole wall are substituted into the weak-plane strength criterion (Equation (10)). According to Equation (11), it can then be judged whether the shale undergoes shear failure in the rock matrix or sliding failure along the weak plane. The workflow for determining the wellbore failure area and failure pattern is shown in Figure 3.

The Influence of Underbalanced Drilling Seepage Effect

During UBD operation, the effective fluid column pressure is lower than the formation pore pressure, which means that fluid flows into the wellbore. Previous studies ignored the influence of fluid seepage. However, the fluid that seeps from the formation into the wellbore induces additional stress, which affects wellbore stability to a certain extent.

3.1. Isotropic Seepage Model. Mody [38] first proposed the method of using an equivalent pore pressure to evaluate the physical and chemical interaction between shale and drilling fluid. Treating the shale as an isotropic semipermeable membrane, the permeation of water through the shale results from the joint action of the pressure potential and the chemical potential. The seepage driven by the chemical potential difference can be treated as seepage under an equivalent pressure gradient, with the chemical potential expressed as an equivalent pore pressure (as shown in Equation (12)) [32].
Here, R is the universal gas constant, T is the absolute temperature (K), V is the partial molar volume of water (L/mol), α_shale is the water activity of the pore fluid, α_mud is the water activity of the drilling mud, P is the pore pressure (MPa), and P_0 is the original pore pressure (MPa).

Anisotropic Seepage Model. The permeability of stratified shale differs between the directions perpendicular to and along the stratification. Therefore, the effect of anisotropic seepage on pore pressure differs from that in an isotropic formation and, thus, affects the stress field near the borehole wall [39]. Building on the isotropic seepage analysis, the Darcy formula considering the seepage anisotropy of stratified strata can be expressed as Equation (13), where q_x, q_y, q_z are the seepage velocities (m/s), k_ii is the permeability in each direction (D), μ is the fluid viscosity (mPa·s), and p is the formation pressure (MPa). Therefore, the anisotropic permeability can be expressed as a tensor, Equation (14). The permeability tensor has an expression analogous to that of the stress tensor and is likewise symmetric; like the stress tensor, it can be written in terms of its principal values, as expressed in Equation (15). When the local strata contain obvious bedding, the principal directions of permeability can be taken as the directions along the bedding and perpendicular to the bedding. The relationship between the permeability and the bedding structure is shown in Figure 4. In particular, for stratified strata, the same flow characteristics are often found within the same bedding plane, so the pressure transfer around the well can be regarded as a plane problem. Therefore, Darcy's two-dimensional anisotropic formula can be expressed as Equation (16).
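The bedding-aligned permeability tensor of Equations (14)-(15) and the anisotropic Darcy law of Equation (13) can be sketched as follows. The rotation convention (dip about the y-axis) and the permeability magnitudes are illustrative assumptions:

```python
import numpy as np

def bedding_permeability(k_par, k_perp, dip_rad):
    """Permeability tensor for bedding dipping by `dip_rad` about the
    y-axis: principal values k_par (in-plane, appearing twice) and
    k_perp (normal to bedding), rotated into the global frame."""
    K_principal = np.diag([k_par, k_par, k_perp])
    c, s = np.cos(dip_rad), np.sin(dip_rad)
    Q = np.array([[c, 0.0, -s],
                  [0.0, 1.0, 0.0],
                  [s, 0.0, c]])
    return Q @ K_principal @ Q.T

def darcy_flux(K, grad_p, mu):
    """Darcy velocity vector q = -(K / mu) grad(p)."""
    return -(K @ grad_p) / mu

# Example: bedding-parallel permeability two orders of magnitude above
# the bedding-normal value (illustrative, in m^2), zero dip.
K0 = bedding_permeability(1e-15, 1e-17, 0.0)
q = darcy_flux(K0, np.array([1e6, 0.0, 0.0]), 1e-3)  # grad p in Pa/m, mu in Pa*s
```

With zero dip the tensor is diagonal and flux is antiparallel to the pressure gradient; at a nonzero dip the off-diagonal terms appear, so flow is deflected toward the bedding, which is the mechanism the text invokes for preferential invasion along the stratification.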
When the permeability principal axes coincide with the coordinate axes, the permeability tensor can be written as Equation (17). Based on measurements of the permeability perpendicular to and along the bedding, the additional stress induced in the formation under different borehole conditions can be obtained from Equation (18), where P_ij is the pore pressure for an arbitrary well trajectory (MPa), Φ is the porosity (%), r is the distance from the borehole center to the far field (m), μ is the viscosity of the pore fluid (mPa·s), C_t is the fluid compressibility, and t is time (s).

Borehole Stability and Caving under Underbalanced Condition

The in situ stress state of the Longmaxi formation was determined by on-site hydraulic fracturing and laboratory experiments, and the downhole cores were subjected to triaxial compression tests to obtain the mechanical parameters of the shale matrix. Direct shear tests were performed on the layered shale, and the cohesion and internal friction angle of the weak shale surface were obtained from the Mohr envelope [39]. On this basis, the minimum mud density and the unstable region under different bedding orientations, considering the seepage effect, are studied.

Geological Mechanics and Rock Mechanics Parameters of the Longmaxi Formation. Horizontal in situ stress values were calculated using hydraulic fracturing data. The hydraulic fracturing method calculates in situ stress from the stress state of the hole and the fracture mechanism. The hydraulic pressure overcomes the tensile strength of the formation, causing it to break and creating fractures; fracturing fluid then enters the formation and causes a sudden drop in pressure. When the fracture extends beyond the wellbore stress concentration area, the pump is stopped instantaneously, and the recorded pressure is the minimum horizontal in situ stress. The maximum horizontal in situ stress is calculated from the fracture reopening pressure when pumping is restarted.
In the laboratory, the maximum and minimum horizontal in situ stresses were measured via the Kaiser effect of the rock. Through field hydraulic fracturing tests in the Longmaxi formation combined with these laboratory experiments, the horizontal in situ stress in this area was obtained (as shown in Table 1). In order to obtain mechanical properties closer to those of the actual layered strata, triaxial rock mechanics experiments and direct shear tests were carried out on underground shale and surface outcrops of the Longmaxi formation. At the same time, the main parameters required by the model were obtained from field logging data (as shown in Table 2).

Underbalanced Drilling Collapse Pressure. Under underbalanced drilling conditions, the stress around the well changes. Although the permeability of the shale formation is low, it is not impermeable. Considering both the weak bedding planes and the underbalanced condition, the collapse pressure of the Longmaxi formation block was studied. Comparing the distribution of collapse pressure around the well for bedding inclination angles of 30° and 60° shows that the collapse pressure is generally low along the bedding dip direction or its opposite, while it generally reaches its maximum in the direction at 90° to the bedding dip. In addition, the distribution of collapse pressure differs for different beddings: in the bedding dip direction, the minimum is generally obtained at the well inclination corresponding to the bedding inclination angle. When drilling horizontal wells in other directions, the drilling fluid density usually needs to be increased by more than 0.5 g/cm³ to maintain borehole stability (as shown in Figures 5 and 6).
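The collapse-pressure densities reported in this section come from scanning the failure criterion over mud weight. Assuming failure is monotone in mud density (heavier mud supports the wall), the search can be sketched with a simple bisection; the `fails_at` callback is a hypothetical stand-in for the full weak-plane evaluation around the borehole:

```python
def min_stable_mud_density(fails_at, rho_lo=0.9, rho_hi=2.5, tol=1e-3):
    """Bisection for the collapse-pressure density: the smallest mud
    density (g/cm^3) at which the borehole wall no longer fails.
    `fails_at(rho)` returns True if the wall fails at density rho;
    monotonicity in rho is assumed."""
    if not fails_at(rho_lo):
        return rho_lo            # already stable at the lightest mud
    while rho_hi - rho_lo > tol:
        mid = 0.5 * (rho_lo + rho_hi)
        if fails_at(mid):
            rho_lo = mid         # still failing: need heavier mud
        else:
            rho_hi = mid         # stable: try lighter mud
    return rho_hi
```

Repeating this search over a grid of borehole azimuths and inclinations produces the collapse-pressure maps of Figures 5 and 6; for a toy criterion that fails below 1.3 g/cm³ the search converges to that threshold.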
The figures show that different bedding plane orientations have a significant influence on the collapse pressure density. When the inclination angle of the bedding plane is 30° and the dip direction is 0°, the collapse pressure of the borehole wall is largest at well inclination angles of 0-10°, so particular attention must be paid to collapse prevention when drilling vertical wells under this condition. As the well inclination angle increases, the collapse pressure behaves differently at different borehole azimuths. At borehole azimuths of 90° and 270°, collapse always occurs at high pressure, above 1.7 g/cm³. The collapse pressure decreases gradually with increasing well inclination angle in the azimuth ranges 330-30° and 150-210°; in these ranges, once the well inclination angle exceeds 20°, the collapse pressure decreases significantly, by 0.2-0.3 g/cm³ on average. Notably, the minimum collapse pressure of 1.2 g/cm³ occurs at a borehole azimuth of 15° and a well inclination angle of 30°. When the inclination angle of the bedding plane is 30° and the dip direction is 30°, the collapse pressure distribution changes significantly. As before, the maxima occur at borehole azimuths of 90° and 270°, but the region of reduced collapse pressure expands. Directional wells with a certain inclination can be drilled within the azimuth ranges 320-70° and 140-250°. In particular, at borehole azimuths of 30° and 210°, the collapse pressure first decreases, then increases, and then decreases again; the minimum collapse pressure of 1.1 g/cm³ occurs at a well inclination angle of about 30°.
When the inclination angle of the bedding plane is 30° and the dip direction is 60°, the collapse pressure is relatively low in the borehole azimuth ranges 30-75° and 210-255° for well inclination angles below 60°, while in other directions the collapse pressure is higher, averaging around 1.6 g/cm³. At a bedding inclination angle of 30° and a dip direction of 90°, the collapse pressure along borehole azimuths of 90° and 270° is small, but once the well inclination angle exceeds 60°, the collapse pressure increases significantly; at borehole azimuths of 0° and 180°, the maximum collapse pressure exceeds 1.7 g/cm³ (Figure 5: the inclination angle of the bedding plane is 30°). Compared with a bedding inclination angle of 30°, the distribution of collapse pressure changes significantly when the bedding dip angle is 60°. When the bedding dip direction is 0°, the collapse pressure density is relatively low in the azimuth ranges 120-180° and 300-360°, and directional holes with drilling inclination angles below 80° are relatively stable. At borehole azimuths of 45° and 225°, the collapse pressure is relatively large, generally above 1.6 g/cm³. When the bedding dip direction is 30°, at borehole azimuths of 30° and 210° the collapse pressure is above 1.6 g/cm³ for small well inclination angles; as the well inclination angle increases, the collapse pressure gradually decreases, dropping to 1.1 g/cm³ at a well inclination angle of 60°, a decrease of 0.5 g/cm³. When the bedding dip direction is 60°, the collapse pressure is lower at azimuths of 60° and 240°, while in other directions it is above 1.5 g/cm³.
When the bedding dip direction is 90°, the collapse pressure is lower along the azimuth directions 90° and 270°, and boreholes in these directions are generally stable once the well inclination angle exceeds 30°. Therefore, the bedding occurrence has a great influence on the collapse pressure distribution, and its distribution should be taken into account during drilling construction to determine the optimal drilling parameters.

Unbalanced Drilling Wellbore Failure Zone. The design of the collapse pressure density provides a favorable guarantee during drilling. However, when borehole wall collapse occurs, the details of the instability are often unclear. Therefore, based on the collapse pressure results above, the collapse characteristics of the bedded strata of the Longmaxi formation were studied (as shown in Figures 7 and 8). According to Figure 7, when the dip angle of the weak plane is 30°, the caving patterns of the borehole wall differ greatly between bedding azimuths. When the dip direction of the bedding plane is 30°, if wall collapse or brittle caving occurs, the hole diameter will enlarge along circumferential angles of 75°, 165°, 255°, and 345°, with roughly equal caving in the four directions. When the bedding azimuth is 45°, the caving points rotate clockwise, and the caving along the 0° and 180° directions exceeds that in the other two directions (Figure 6: the inclination angle of the bedding plane is 60°). When the bedding dip azimuth is 60°, the shape of the wall caving changes further: the caving decreases in the 15° and 195° directions and increases in the 90° and 180° directions.
When the bedding dip azimuth is 90°, the borehole wall collapses along the whole perimeter, in which case the hole diameter enlarges all around. It can be seen from Figure 8 that when the bedding inclination angle is 60° and the dip azimuth of the bedding plane is 30°, 45°, or 60°, the borehole wall collapses along four perpendicular directions, with roughly equal collapse scale. As the bedding dip azimuth increases, the collapse directions rotate clockwise. When the bedding dip direction is 90°, the borehole wall collapses along the 45°, 135°, 225°, and 315° directions, with essentially equal amounts of collapse. On the whole, the caving is more regular at a bedding dip angle of 60° than at 30°. It can be concluded that the anisotropy of the bedding has a great influence on the shape of borehole wall caving, and the caving shape should be analyzed according to the specific formation conditions.

Conclusion

In order to address the prominent problem of borehole instability in the shale formation of the Jiaoshiba area, a collapse pressure calculation model and a model of the wellbore instability region are proposed, taking into account underbalanced seepage conditions and the anisotropy of rock strength. Based on the depth of the unstable borehole sections, the in situ stress parameters of the Longmaxi formation were obtained by hydraulic fracturing and laboratory tests. In situ stress greater than 2 MPa/100 m is generally considered high; the results show that both horizontal in situ stresses exceed 2.2 MPa/100 m, so the shale strata of the Longmaxi formation are in a state of high original in situ stress.
Through downhole core mechanics experiments, the mechanical parameters of the shale matrix and of the bedding weak planes were obtained: the cohesion and internal friction angle of the bedding surface (4.7 MPa, 26.9°) are significantly lower than those of the rock matrix (7.77 MPa, 46.7°). Field observations show that bedding is well developed in the Longmaxi Formation and is prone to drilling fluid loss, while the permeability of the shale matrix is low; the drilling fluid is therefore believed to intrude mainly along the bedding planes. At the same time, because the mechanical parameters of the bedding surface are clearly lower than those of the rock matrix, the bedding further reduces wellbore stability. It is considered that underbalanced drilling can reduce the damage of drilling fluid to the near-wellbore formation and improve stability. For horizontal wells, the collapse pressure law for different bedding occurrences (inclination angle and tendency) was studied. In general, the minimum collapse pressure is obtained at the well inclination corresponding to the bedding inclination angle, that is, when the borehole axis is perpendicular to the bedding plane. In other directions, the drilling fluid density often has to be increased by 0.5 g/cm3 to maintain borehole stability. The bedding occurrence thus has a great influence on the collapse pressure distribution, and its distribution must be noted during drilling construction to determine the optimal drilling parameters. In this paper, the characteristics of wellbore instability in bedding strata were studied. When the bedding dip angle is low, as the bedding tendency rotates from the direction of maximum horizontal stress to the direction of minimum horizontal stress, the shaft wall first collapses in several specific directions and then collapses uniformly around the well.
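The strength contrast above can be made concrete with Jaeger's single-plane-of-weakness criterion, the standard way to combine a Mohr-Coulomb matrix with a weaker bedding plane. The sketch below uses the measured parameters quoted in this paper; the 10 MPa confining pressure is an assumed value for illustration:

```python
import math

def matrix_sigma1(sigma3, c, phi_deg):
    """Mohr-Coulomb peak axial stress of the intact shale matrix."""
    q = math.tan(math.radians(45.0 + phi_deg / 2.0)) ** 2
    return 2.0 * c * math.sqrt(q) + q * sigma3

def bedding_sigma1(sigma3, c_w, phi_w_deg, beta_deg):
    """Jaeger single-plane-of-weakness criterion: axial stress at which
    slip occurs on a bedding plane whose normal makes angle beta with
    the sigma1 axis. Returns inf where slip on the plane is impossible."""
    phi = math.radians(phi_w_deg)
    beta = math.radians(beta_deg)
    denom = (1.0 - math.tan(phi) / math.tan(beta)) * math.sin(2.0 * beta)
    if denom <= 0.0:
        return math.inf
    return sigma3 + 2.0 * (c_w + sigma3 * math.tan(phi)) / denom

# Measured parameters: bedding c_w = 4.7 MPa, phi_w = 26.9 deg;
# matrix c = 7.77 MPa, phi = 46.7 deg. Confining pressure assumed 10 MPa.
sigma3 = 10.0
s_matrix = matrix_sigma1(sigma3, 7.77, 46.7)
s_bedding = min(bedding_sigma1(sigma3, 4.7, 26.9, b) for b in range(1, 90))
# Slip on bedding occurs far below matrix failure, which is why the
# collapse pressure depends so strongly on bedding occurrence.
```

With these numbers, slip on the bedding plane initiates at roughly 40% of the matrix failure stress for unfavorable plane orientations, consistent with the observation that the minimum collapse pressure occurs where the borehole axis is perpendicular to bedding and no weak-plane orientation is critically loaded.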
With the increase of the bedding dip angle, the diameter expansion rate also increases gradually. Therefore, when the wellbore collapses in bedded formations, it is necessary to understand the collapse state more clearly, so as to provide a basis for safe production and accident treatment.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.